CISSP All-in-One Exam Guide, Sixth Edition
- Cover Page
- Title Page
- Copyright Page
- Contents
- Foreword
- Acknowledgments
- Chapter 1 Becoming a CISSP
- Chapter 2 Information Security Governance and Risk Management
- Fundamental Principles of Security
- Security Definitions
- Control Types
- Security Frameworks
- Security Management
- Risk Management
- Risk Assessment and Analysis
- Policies, Standards, Baselines, Guidelines, and Procedures
- Information Classification
- Layers of Responsibility
- Security Steering Committee
- Audit Committee
- Data Owner
- Data Custodian
- System Owner
- Security Administrator
- Security Analyst
- Application Owner
- Supervisor
- Change Control Analyst
- Data Analyst
- Process Owner
- Solution Provider
- User
- Product Line Manager
- Auditor
- Why So Many Roles?
- Personnel Security
- Hiring Practices
- Termination
- Security-Awareness Training
- Degree or Certification?
- Security Governance
- Summary
- Quick Tips
- Chapter 3 Access Control
- Access Controls Overview
- Security Principles
- Identification, Authentication, Authorization, and Accountability
- Access Control Models
- Access Control Techniques and Technologies
- Access Control Administration
- Access Control Methods
- Accountability
- Access Control Practices
- Access Control Monitoring
- Threats to Access Control
- Summary
- Quick Tips
- Chapter 4 Security Architecture and Design
- Computer Security
- System Architecture
- Computer Architecture
- Operating System Architectures
- System Security Architecture
- Security Models
- Security Modes of Operation
- Systems Evaluation Methods
- The Orange Book and the Rainbow Series
- Information Technology Security Evaluation Criteria
- Common Criteria
- Certification vs. Accreditation
- Open vs. Closed Systems
- A Few Threats to Review
- Summary
- Quick Tips
- Chapter 5 Physical and Environmental Security
- Chapter 6 Telecommunications and Network Security
- Chapter 7 Cryptography
- The History of Cryptography
- Cryptography Definitions and Concepts
- Types of Ciphers
- Methods of Encryption
- Types of Symmetric Systems
- Types of Asymmetric Systems
- Message Integrity
- Public Key Infrastructure
- Key Management
- Trusted Platform Module
- Link Encryption vs. End-to-End Encryption
- E-mail Standards
- Internet Security
- Attacks
- Summary
- Quick Tips
- Chapter 8 Business Continuity and Disaster Recovery Planning
- Chapter 9 Legal, Regulations, Investigations, and Compliance
- The Many Facets of Cyberlaw
- The Crux of Computer Crime Laws
- Complexities in Cybercrime
- Intellectual Property Laws
- Privacy
- Liability and Its Ramifications
- Compliance
- Investigations
- Incident Management
- Incident Response Procedures
- Computer Forensics and Proper Collection of Evidence
- International Organization on Computer Evidence
- Motive, Opportunity, and Means
- Computer Criminal Behavior
- Incident Investigators
- The Forensics Investigation Process
- What Is Admissible in Court?
- Surveillance, Search, and Seizure
- Interviewing and Interrogating
- A Few Different Attack Types
- Cybersquatting
- Ethics
- Summary
- Quick Tips
- Chapter 10 Software Development Security
- Software’s Importance
- Where Do We Place Security?
- System Development Life Cycle
- Software Development Life Cycle
- Secure Software Development Best Practices
- Software Development Models
- Capability Maturity Model Integration
- Change Control
- Programming Languages and Concepts
- Distributed Computing
- Mobile Code
- Web Security
- Database Management
- Expert Systems/Knowledge-Based Systems
- Artificial Neural Networks
- Malicious Software (Malware)
- Summary
- Quick Tips
- Chapter 11 Security Operations
- Appendix A: Comprehensive Questions
- Appendix B: About the Download
- Glossary
- Index


ALL IN ONE
CISSP®
EXAM GUIDE
Sixth Edition
Shon Harris
New York • Chicago • San Francisco • Lisbon
London • Madrid • Mexico City • Milan • New Delhi
San Juan • Seoul • Singapore • Sydney • Toronto
McGraw-Hill is an independent entity from (ISC)²® and is not affiliated with (ISC)² in any manner. This study/training guide and/or
material is not sponsored by, endorsed by, or affiliated with (ISC)² in any manner. This publication and digital content may be used
in assisting students to prepare for the CISSP exam. Neither (ISC)² nor McGraw-Hill warrant that use of this publication and digital
content will ensure passing any exam. (ISC)²®, CISSP®, CAP®, ISSAP®, ISSEP®, ISSMP®, SSCP®, and CBK® are trademarks or registered
trademarks of (ISC)² in the United States and certain other countries. All other trademarks are trademarks of their respective owners.

Copyright © 2013 by The McGraw-Hill Companies. All rights reserved. Except as permitted under the United States Copyright Act of
1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval
system, without the prior written permission of the publisher, with the exception that the program listings may be entered, stored,
and executed in a computer system, but they may not be reproduced for publication.
ISBN: 978-0-07-178173-2
MHID: 0-07-178173-0
The material in this eBook also appears in the print version of this title: ISBN: 978-0-07-178174-9,
MHID: 0-07-178174-9.
All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a
trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of
infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.
McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate
training programs. To contact a representative please e-mail us at bulksales@mcgraw-hill.com.
Information has been obtained by McGraw-Hill from sources believed to be reliable. However, because of the possibility of
human or mechanical error by our sources, McGraw-Hill, or others, McGraw-Hill does not guarantee the accuracy, adequacy, or
completeness of any information and is not responsible for any errors or omissions or the results obtained from the use of such
information.
TERMS OF USE
This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to
the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and
retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works
based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill’s prior
consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly prohibited. Your
right to use the work may be terminated if you fail to comply with these terms.
THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR
WARRANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM
USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPER-
LINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your requirements
or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else
for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has
no responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or
its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or
inability to use the work, even if any of them has been advised of the possibility of such damages. This limitation of liability shall
apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or otherwise.
I dedicate this book to some of the most wonderful people
I have lost over the last several years.
My grandfather (George Fairbairn), who taught me about
integrity, unconditional love, and humility.
My grandmother (Marge Fairbairn), who taught me about the importance
of living life to the fullest, having “fun fun,” and of course, black jack.
My dad (Tom Conlon), who taught me how to be strong and face adversity.
My father-in-law (Maynard Harris), who taught me
a deep meaning of the importance of family that I never knew before.
Each person was a true role model to me. I learned a lot from them,
I appreciate all that they have done for me, and I miss them terribly.

ABOUT THE AUTHOR
Shon Harris is the founder and CEO of Shon Harris Security LLC and Logical Security
LLC, a security consultant, a former engineer in the Air Force’s Information Warfare
unit, an instructor, and an author. Shon has owned and run her own training and consulting
companies since 2001. She consults with Fortune 100 corporations and government
agencies on extensive security issues. She has authored three best-selling CISSP
books, was a contributing author to Gray Hat Hacking: The Ethical Hacker’s Handbook
and Security Information and Event Management (SIEM) Implementation, and a technical
editor for Information Security Magazine. Shon has also developed many digital security
products for Pearson Publishing.
About the Technical Editor
Polisetty Veera Subrahmanya Kumar, CISSP, CISA, PMP, PMI-RMP, MCPM, ITIL, has
more than two decades of experience in the field of Information Technology. His areas
of specialization include information security, business continuity, project management,
and risk management. In the recent past he served his term as Chairperson for
Project Management Institute's PMI-RMP (PMI - Risk Management Professional)
Credentialing Committee and was a member of ISACA's India Growth Task Force team. In
the past he worked as content development team leader on a variety of PMI standards
development projects. He was a lead instructor for the PMI PMBOK review seminars.

CONTENTS AT A GLANCE
Chapter 1 Becoming a CISSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 2 Information Security Governance and Risk Management . . . . . . . . 21
Chapter 3 Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Chapter 4 Security Architecture and Design . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Chapter 5 Physical and Environmental Security . . . . . . . . . . . . . . . . . . . . . . . . 427
Chapter 6 Telecommunications and Network Security . . . . . . . . . . . . . . . . . . 515
Chapter 7 Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
Chapter 8 Business Continuity and Disaster Recovery . . . . . . . . . . . . . . . . . . 885
Chapter 9 Legal, Regulations, Compliance, and Investigations . . . . . . . . . . . . . 979
Chapter 10 Software Development Security . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
Chapter 11 Security Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
Appendix A Comprehensive Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
Appendix B About the Download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
Glossary  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G-1
Index  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385

CONTENTS
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Chapter 1 Becoming a CISSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Why Become a CISSP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
The CISSP Exam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
CISSP: A Brief History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
How Do You Sign Up for the Exam? . . . . . . . . . . . . . . . . . . . . . . . . 7
What Does This Book Cover? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Tips for Taking the CISSP Exam . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
How to Use This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Chapter 2 Information Security Governance and Risk Management . . . . . . . . 21
Fundamental Principles of Security . . . . . . . . . . . . . . . . . . . . . . . . . 22
Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Confidentiality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Balanced Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Security Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Control Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Security Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
ISO/IEC 27000 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Enterprise Architecture Development . . . . . . . . . . . . . . . . . . . 41
Security Controls Development . . . . . . . . . . . . . . . . . . . . . . . 55
COSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Process Management Development . . . . . . . . . . . . . . . . . . . . 60
Functionality vs. Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Security Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Who Really Understands Risk Management? . . . . . . . . . . . . . 71
Information Risk Management Policy . . . . . . . . . . . . . . . . . . 72
The Risk Management Team . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Risk Assessment and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Risk Analysis Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
The Value of Information and Assets . . . . . . . . . . . . . . . . . . . 76
Costs That Make Up the Value . . . . . . . . . . . . . . . . . . . . . . . . 76
Identifying Vulnerabilities and Threats . . . . . . . . . . . . . . . . . 77
Methodologies for Risk Assessment . . . . . . . . . . . . . . . . . . . . 78
Risk Analysis Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Qualitative Risk Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Protection Mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Putting It Together . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Total Risk vs. Residual Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Handling Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Outsourcing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Policies, Standards, Baselines, Guidelines, and Procedures  . . . . . . . . . . . . 101
Security Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Baselines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Information Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Classification Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Classification Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Layers of Responsibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Board of Directors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Executive Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Chief Information Officer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Chief Privacy Officer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Chief Security Officer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Security Steering Committee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Audit Committee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Data Owner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Data Custodian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
System Owner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Security Administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Security Analyst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Application Owner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Supervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Change Control Analyst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Data Analyst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Process Owner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Solution Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Product Line Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Auditor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Why So Many Roles? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Personnel Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Hiring Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Security-Awareness Training . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Degree or Certification? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Security Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Chapter 3 Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Access Controls Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Security Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Confidentiality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Identification, Authentication, Authorization, and Accountability  . . . . . . . 160
Identification and Authentication . . . . . . . . . . . . . . . . . . . . . 162
Password Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Access Control Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Discretionary Access Control . . . . . . . . . . . . . . . . . . . . . . . . . 220
Mandatory Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Role-Based Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Access Control Techniques and Technologies . . . . . . . . . . . . . . . . . 227
Rule-Based Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Constrained User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Access Control Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Content-Dependent Access Control . . . . . . . . . . . . . . . . . . . . 231
Context-Dependent Access Control . . . . . . . . . . . . . . . . . . . . 231
Access Control Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Centralized Access Control Administration . . . . . . . . . . . . . . 233
Decentralized Access Control Administration . . . . . . . . . . . . 240
Access Control Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Access Control Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Administrative Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Physical Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Technical Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Accountability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Review of Audit Information . . . . . . . . . . . . . . . . . . . . . . . . . 250
Protecting Audit Data and Log Information . . . . . . . . . . . . . . 251
Keystroke Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Access Control Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Unauthorized Disclosure of Information . . . . . . . . . . . . . . . . 253
Access Control Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Intrusion Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Intrusion Prevention Systems . . . . . . . . . . . . . . . . . . . . . . . . . 265
Threats to Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Dictionary Attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Brute Force Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Spoofing at Logon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

Phishing and Pharming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Threat Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Chapter 4 Security Architecture and Design . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Computer Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Computer Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
The Central Processing Unit . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Multiprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Operating System Components . . . . . . . . . . . . . . . . . . . . . . . 312
Memory Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Virtual Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Input/Output Device Management . . . . . . . . . . . . . . . . . . . . . 340
CPU Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Operating System Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
System Security Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Security Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Security Architecture Requirements . . . . . . . . . . . . . . . . . . . . 359
Security Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
State Machine Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Bell-LaPadula Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Biba Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Clark-Wilson Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Information Flow Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Noninterference Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Lattice Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Brewer and Nash Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Graham-Denning Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
Harrison-Ruzzo-Ullman Model . . . . . . . . . . . . . . . . . . . . . . . 385
Security Modes of Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Dedicated Security Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
System High-Security Mode . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Compartmented Security Mode . . . . . . . . . . . . . . . . . . . . . . . 387
Multilevel Security Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Trust and Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Systems Evaluation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Why Put a Product Through Evaluation? . . . . . . . . . . . . . . . . 391
The Orange Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
The Orange Book and the Rainbow Series . . . . . . . . . . . . . . . . . . . . 397
The Red Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Information Technology Security Evaluation Criteria  . . . . . . . . . . . . . . . . 399

Common Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
Certification vs. Accreditation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Certification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Accreditation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Open vs. Closed Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Open Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Closed Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
A Few Threats to Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Maintenance Hooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Time-of-Check/Time-of-Use Attacks . . . . . . . . . . . . . . . . . . . . 410
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Chapter 5 Physical and Environmental Security . . . . . . . . . . . . . . . . . . . . . . . . 427
Introduction to Physical Security . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
The Planning Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Crime Prevention Through Environmental Design . . . . . . . . 435
Designing a Physical Security Program . . . . . . . . . . . . . . . . . 442
Protecting Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Internal Support Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Electric Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Environmental Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Ventilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Fire Prevention, Detection, and Suppression . . . . . . . . . . . . . 467
Perimeter Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Facility Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
Personnel Access Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
External Boundary Protection Mechanisms . . . . . . . . . . . . . . 484
Intrusion Detection Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 493
Patrol Force and Guards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Dogs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Auditing Physical Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
Testing and Drills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Chapter 6 Telecommunications and Network Security . . . . . . . . . . . . . . . . . . 515
Telecommunications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Open Systems Interconnection Reference Model . . . . . . . . . . . . . . . 517
Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
Application Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Presentation Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
Session Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523

CISSP All-in-One Exam Guide
xii
Transport Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Network Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Data Link Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
Physical Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Functions and Protocols in the OSI Model . . . . . . . . . . . . . . 530
Tying the Layers Together . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
TCP/IP Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
IP Addressing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Layer 2 Security Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
Types of Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Analog and Digital . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Asynchronous and Synchronous . . . . . . . . . . . . . . . . . . . . . . 552
Broadband and Baseband . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
Cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
Coaxial Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Twisted-Pair Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Fiber-Optic Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
Cabling Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
Networking Foundations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
Media Access Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Network Protocols and Services . . . . . . . . . . . . . . . . . . . . . . . 580
Domain Name Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
E-mail Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Network Address Translation . . . . . . . . . . . . . . . . . . . . . . . . . 604
Routing Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
Networking Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
Repeaters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
Bridges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
Routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
Gateways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
PBXs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
Firewalls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Proxy Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
Honeypot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Unified Threat Management . . . . . . . . . . . . . . . . . . . . . . . . . . 656
Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Intranets and Extranets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
Metropolitan Area Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
Wide Area Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
Telecommunications Evolution . . . . . . . . . . . . . . . . . . . . . . . 666
Dedicated Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
WAN Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673

Contents
xiii
Remote Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
Dial-up Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
ISDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
DSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
Cable Modems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
Authentication Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
Wireless Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
Wireless Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
WLAN Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
Wireless Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
War Driving for WLANs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
Satellites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
Mobile Wireless Communication . . . . . . . . . . . . . . . . . . . . . . 730
Mobile Phone Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
Chapter 7 Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
The History of Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
Cryptography Definitions and Concepts . . . . . . . . . . . . . . . . . . . . . 765
Kerckhoffs’ Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
The Strength of the Cryptosystem . . . . . . . . . . . . . . . . . . . . . . 768
Services of Cryptosystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
One-Time Pad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
Running and Concealment Ciphers . . . . . . . . . . . . . . . . . . . . 773
Steganography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
Types of Ciphers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
Substitution Ciphers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
Transposition Ciphers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
Methods of Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
Symmetric vs. Asymmetric Algorithms . . . . . . . . . . . . . . . . . . 782
Symmetric Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
Block and Stream Ciphers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
Hybrid Encryption Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 792
Types of Symmetric Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
Data Encryption Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
Triple-DES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
The Advanced Encryption Standard . . . . . . . . . . . . . . . . . . . . 809
International Data Encryption Algorithm . . . . . . . . . . . . . . . 809
Blowfish . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
RC4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
RC5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
RC6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810

Types of Asymmetric Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
The Diffie-Hellman Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 812
RSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
El Gamal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
Elliptic Curve Cryptosystems . . . . . . . . . . . . . . . . . . . . . . . . . 818
Knapsack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
Zero Knowledge Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
Message Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
The One-Way Hash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
Various Hashing Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 826
MD2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
MD4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
MD5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
Attacks Against One-Way Hash Functions . . . . . . . . . . . . . . . 827
Digital Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
Digital Signature Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
Public Key Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
Certificate Authorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
The Registration Authority . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
PKI Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
Key Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
Key Management Principles . . . . . . . . . . . . . . . . . . . . . . . . . . 841
Rules for Keys and Key Management . . . . . . . . . . . . . . . . . . . 842
Trusted Platform Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
TPM Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
Link Encryption vs. End-to-End Encryption . . . . . . . . . . . . . . . . . . . 845
E-mail Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
Multipurpose Internet Mail Extension . . . . . . . . . . . . . . . . . . 849
Pretty Good Privacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
Internet Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
Start with the Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
Ciphertext-Only Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
Known-Plaintext Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
Chosen-Plaintext Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
Chosen-Ciphertext Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
Differential Cryptanalysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
Linear Cryptanalysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
Side-Channel Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
Replay Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
Algebraic Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
Analytic Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
Statistical Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
Social Engineering Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
Meet-in-the-Middle Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . 869

Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
Chapter 8 Business Continuity and Disaster Recovery Planning . . . . . . . . . . . 885
Business Continuity and Disaster Recovery . . . . . . . . . . . . . . . . . . . 887
Standards and Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . 890
Making BCM Part of the Enterprise Security Program . . . . . . 893
BCP Project Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
Scope of the Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
BCP Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
Project Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
Business Continuity Planning Requirements . . . . . . . . . . . . . 904
Business Impact Analysis (BIA) . . . . . . . . . . . . . . . . . . . . . . . 905
Interdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
Preventive Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913
Recovery Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
Business Process Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
Facility Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
Supply and Technology Recovery . . . . . . . . . . . . . . . . . . . . . . 926
Choosing a Software Backup Facility . . . . . . . . . . . . . . . . . . . 930
End-User Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
Data Backup Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 934
Electronic Backup Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 938
High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
Insurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
Recovery and Restoration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
Developing Goals for the Plans . . . . . . . . . . . . . . . . . . . . . . . 949
Implementing Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
Testing and Revising the Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
Checklist Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Structured Walk-Through Test . . . . . . . . . . . . . . . . . . . . . . . . 955
Simulation Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Parallel Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Full-Interruption Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
Other Types of Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
Emergency Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
Maintaining the Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
Chapter 9 Legal, Regulations, Investigations, and Compliance . . . . . . . . . . . . . 979
The Many Facets of Cyberlaw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
The Crux of Computer Crime Laws . . . . . . . . . . . . . . . . . . . . . . . . . 981

Complexities in Cybercrime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
Electronic Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
The Evolution of Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
International Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
Types of Legal Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
Intellectual Property Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 998
Trade Secret . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
Copyright . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
Trademark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
Patent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
Internal Protection of Intellectual Property . . . . . . . . . . . . . . 1003
Software Piracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
Privacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
The Increasing Need for Privacy Laws . . . . . . . . . . . . . . . . . . . 1008
Laws, Directives, and Regulations . . . . . . . . . . . . . . . . . . . . . . 1009
Liability and Its Ramifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
Personal Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
Hacker Intrusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
Third-Party Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
Contractual Agreements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
Procurement and Vendor Processes . . . . . . . . . . . . . . . . . . . . 1029
Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
Investigations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
Incident Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1033
Incident Response Procedures . . . . . . . . . . . . . . . . . . . . . . . . 1037
Computer Forensics and Proper Collection of Evidence . . . . 1042
International Organization on Computer Evidence . . . . . . . . 1043
Motive, Opportunity, and Means . . . . . . . . . . . . . . . . . . . . . . 1044
Computer Criminal Behavior . . . . . . . . . . . . . . . . . . . . . . . . . 1044
Incident Investigators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
The Forensics Investigation Process . . . . . . . . . . . . . . . . . . . . 1046
What Is Admissible in Court? . . . . . . . . . . . . . . . . . . . . . . . . . 1053
Surveillance, Search, and Seizure . . . . . . . . . . . . . . . . . . . . . . 1057
Interviewing and Interrogating . . . . . . . . . . . . . . . . . . . . . . . . 1058
A Few Different Attack Types . . . . . . . . . . . . . . . . . . . . . . . . . 1058
Cybersquatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
The Computer Ethics Institute . . . . . . . . . . . . . . . . . . . . . . . . 1062
The Internet Architecture Board . . . . . . . . . . . . . . . . . . . . . . . 1063
Corporate Ethics Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076

Chapter 10 Software Development Security . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
Software’s Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
Where Do We Place Security? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
Different Environments Demand Different Security . . . . . . . 1083
Environment versus Application . . . . . . . . . . . . . . . . . . . . . . . 1084
Functionality versus Security . . . . . . . . . . . . . . . . . . . . . . . . . 1085
Implementation and Default Issues . . . . . . . . . . . . . . . . . . . . 1086
System Development Life Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087
Initiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1089
Acquisition/Development . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
Operations/Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
Disposal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
Software Development Life Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
Project Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
Requirements Gathering Phase . . . . . . . . . . . . . . . . . . . . . . . . 1096
Design Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
Development Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
Testing/Validation Phase . . . . . . . . . . . . . . . . . . . . . . . . . . 1104
Release/Maintenance Phase . . . . . . . . . . . . . . . . . . . . . . . . 1106
Secure Software Development Best Practices . . . . . . . . . . . . . . . . . . 1108
Software Development Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111
Build and Fix Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111
Waterfall Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1112
V-Shaped Model (V-Model) . . . . . . . . . . . . . . . . . . . . . . . . . . 1112
Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1113
Incremental Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114
Spiral Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
Rapid Application Development . . . . . . . . . . . . . . . . . . . . . . . 1116
Agile Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1118
Capability Maturity Model Integration . . . . . . . . . . . . . . . . . . . . . . 1120
Change Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1122
Software Configuration Management . . . . . . . . . . . . . . . . . . . 1124
Programming Languages and Concepts . . . . . . . . . . . . . . . . . . . . . . 1125
Assemblers, Compilers, Interpreters . . . . . . . . . . . . . . . . . . . . 1128
Object-Oriented Concepts . . . . . . . . . . . . . . . . . . . . . . . . . 1130
Distributed Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
Distributed Computing Environment . . . . . . . . . . . . . . . . . . 1142
CORBA and ORBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
COM and DCOM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
Java Platform, Enterprise Edition . . . . . . . . . . . . . . . . . . . . . . 1148
Service-Oriented Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 1148
Mobile Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
Java Applets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154
ActiveX Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156

Web Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
Specific Threats for Web Environments . . . . . . . . . . . . . . . . . 1158
Web Application Security Principles . . . . . . . . . . . . . . . . . . . . 1167
Database Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
Database Management Software . . . . . . . . . . . . . . . . . . . . . . . 1170
Database Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1170
Database Programming Interfaces . . . . . . . . . . . . . . . . . . . . . 1176
Relational Database Components . . . . . . . . . . . . . . . . . . . . . 1177
Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180
Database Security Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
Data Warehousing and Data Mining . . . . . . . . . . . . . . . . . . . 1188
Expert Systems/Knowledge-Based Systems . . . . . . . . . . . . . . . . . . . 1192
Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
Malicious Software (Malware) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
Viruses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
Worms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
Rootkit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
Spyware and Adware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
Botnets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
Logic Bombs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
Trojan Horses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
Antivirus Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
Spam Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
Antimalware Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
Chapter 11 Security Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
The Role of the Operations Department . . . . . . . . . . . . . . . . . . . . . 1234
Administrative Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
Security and Network Personnel . . . . . . . . . . . . . . . . . . . . . . . 1237
Accountability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239
Clipping Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239
Assurance Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
Operational Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
Unusual or Unexplained Occurrences . . . . . . . . . . . . . . . . . . 1241
Deviations from Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
Unscheduled Initial Program Loads (aka Rebooting) . . . . . . 1242
Asset Identification and Management . . . . . . . . . . . . . . . . . . 1242
System Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
Trusted Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
Input and Output Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
System Hardening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
Remote Access Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1250

Configuration Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1251
Change Control Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
Change Control Documentation . . . . . . . . . . . . . . . . . . . . . . 1253
Media Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1254
Data Leakage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
Network and Resource Availability . . . . . . . . . . . . . . . . . . . . . . . . . . 1263
Mean Time Between Failures . . . . . . . . . . . . . . . . . . . . . . . . . 1264
Mean Time to Repair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
Single Points of Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265
Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
Contingency Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
Mainframes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
E-mail Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
How E-mail Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
Facsimile Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
Hack and Attack Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
Vulnerability Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
Penetration Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1298
Wardialing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
Other Vulnerability Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
Postmortem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1305
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
Quick Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1309
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1315
Appendix A Comprehensive Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
Appendix B About the Download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
Downloading the Total Tester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
Total Tester System Requirements . . . . . . . . . . . . . . . . . . . . . 1379
Installing and Running Total Tester . . . . . . . . . . . . . . . . . . . . 1379
About Total Tester CISSP Practice Exam Software . . . . . . . . . 1380
Media Center Download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
Cryptography Video Sample . . . . . . . . . . . . . . . . . . . . . . . . . 1380
Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G-1
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385

FOREWORDS
This year marks my 39th year in the security business, with 26 of those concentrating
on information security. What changes we have seen over that period: from computers
the size of rooms that had to be cooled with water to phones that, from a computing
standpoint, are orders of magnitude more powerful than the computers NASA used to
land a man on the moon.
While we have watched in awe, the technology has evolved and our quality of life
has improved. As examples, we can make payments from our phones, our cars are
operated by computers that optimize performance and fuel economy, and we can
check in and get our boarding passes from our hotel room.
Unfortunately, there are those who see these advances as opportunities for themselves
or their organizations to gain politically or financially by finding vulnerabilities
within these technologies and exploiting them to their advantage.
To combat these adversaries, the information security specialty evolved. One of the
first organizations to formally recognize this specialty was (ISC)2, which was founded
in 1988. In 1994 (ISC)2 created the Certified Information Systems Security Profes-
sional (CISSP) credential and conducted the first examination. This credential gave
assurance to managers and employers (and potential employers) that the holder pos-
sessed a baseline understanding of the ten domains that comprise the Common Body
of Knowledge (CBK).
The CISSP All-in-One Exam Guide by Shon Harris is one of many publications de-
signed to prepare a potential CISSP exam taker for the exam. I admittedly have prob-
ably not seen every one of these guides, but I would argue that I have seen most. The
difference between Shon’s book and the others is that her book doesn’t try to teach
the exam; instead, it teaches the material one needs to pass the exam. This
means that when the exam is over and the credential has been awarded, Shon’s book
will still be on your bookshelf (or tablet) as a valuable reference guide.
I have known Shon for close to 15 years and continue to be struck by her dedica-
tion, ethics, and honor. She is the most dedicated advocate of our profession I have
met. She works constantly to improve her materials so learners at all levels can better
understand the topics. She has developed a learning model that helps make sure
everyone in an organization, from the bottom to the C-suite, knows what they need to
know to make informed choices, resulting in due diligence with the data entrusted to
them.
It is a privilege to have the opportunity to help introduce this work to the prospective
CISSP. I’m a huge Shon fan and I think that after learning from this book, you will
be too. Enjoy your experience and know that your work in achieving this credential
will be worth the effort.
Tom Madden, MPA, CISSP, CISM
Chief Information Security Officer for the Centers for Disease Control

Today’s cybersecurity landscape is an ever-changing environment where the greatest
vector of attack is our normal activities: e-mail, web surfing, and so on. The adversary can and will
continue to use mobile apps, e-mail (phishing, spam), the Web (redirects), and other
tools of our day-to-day activities to get to intellectual property, personal information, and
other sensitive data. Hacktivists, criminals, and terrorists use this information to gain
unauthorized access to systems and sensitive data. Hacktivists commonly use the
information gathered during attacks to convey messages that support their
agenda. According to Symantec, there were 163 new vulnerabilities associated with
mobile computing, 286 million unique malware variants, a 93 percent increase in web
attacks, and 260,000 identities exposed in 2010. McAfee reported that during the first
two quarters of 2011 there were 16,200 new websites with malicious reputations, and
an average of 2,600 new phishing sites appeared each day.
We have also seen the proliferation of new technologies into our day-to-day activi-
ties. New smart phones and tablets have become as common as the home computer.
New tools mean new vulnerabilities that cybersecurity experts must deal with. Informa-
tion technology implementers tend to design and implement quickly. One of the most
underdiscussed aspects of new technology is the risk it may pose to the overall system
or organization. Tech departments, in conjunction with the CSO/CISO, need to find
the best way to utilize the new technology while making security (physical and cyber)
a very strong part of the deployment strategy.
Training and awareness are important to help end users, cybersecurity practitioners,
and CxOs understand what effect their actions have within the organization. Even what
technology users do at home when they telework and connect to the business network
or carry home data could have consequences. Security policy and its enforcement will
also aid in establishing a stronger security baseline. Organizations have realized this
and many of those organizations have begun writing and implementing acceptable-use
policies for their networks. These policies, if properly enforced, will help minimize
adverse events on their networks.
No single solution will ever solve all of the cybersecurity challenges. Defense-in-depth,
best business practices, training and awareness, and timely information
sharing are going to be the techniques used to minimize the impact of cyber adversar-
ies. Cyber experts need to keep their leadership informed of the latest threats and meth-
ods to minimize effects from those threats. Many of the day-to-day actions and decisions
a CFO, CEO, COO, etc., make will have an impact on the mission; security decisions
must be part of that process as well.
The CISSP All-in-One Exam Guide has remained a staple of my reference library long
after I obtained my certification. Shon Harris has written and continues to improve
this comprehensive study guide. Individuals at all levels of an organization, especially those
with a role in cybersecurity, should strive to get the CISSP. This study guide is a key part
to successfully gaining this certification. Prepping for and obtaining the CISSP can be
an important tool in the cyber expert’s toolkit. The ten domains associated with the
CISSP assist individuals at all levels to look at information security from many angles.
One tends to focus on one aspect of the information security field; the CISSP opens our

eyes to multiple facets of these issues. It provides the security professional with an
overall perspective of cybersecurity. From policy-related domains to technical domains, the CISSP
and this study guide tie many aspects of cybersecurity together. Even if that is not part
of your immediate training plan, this book will help any person become a better secu-
rity practitioner and improve the security posture of their organization.
Randy Vickers
Alexa Strategies, Inc.
Cybersecurity Strategist and Consultant
Former Director of the US CERT and former Chief of the DoD CERT (JTF-GNO)

ACKNOWLEDGMENTS
I would like to thank all the people who work in the information security industry who
are driven by their passion, dedication, and a true sense of doing right. The best secu-
rity people are the ones who are driven toward an ethical outcome.
I would like to thank so many people in the industry who have taken the time to
reach out to me and let me know how my work directly affects their lives. I appreciate
the time and effort people put in just to send me these kind words.
For my sixth edition, I would also like to thank the following individuals:
• David Miller, whose work ethic, loyalty, and friendship have inspired me. I am
truly grateful to have David as part of my life. I never would have known the
secret world of tequila without him.
• Clement Dupuis, who, with his deep passion for sharing and helping others,
has proven a wonderful and irreplaceable friend.
• My company team members: Susan Young and Teresa Griffin. I can never express
my deep appreciation for each one of you enough.
• Greg Andelora, the graphic artist who created several of the graphics for this
book and my other projects. We tragically lost him at a young age and miss
him.
• Tom Madden and Randy Vickers, for their wonderful forewords to this book.
• My editor, Tim Green, and acquisitions coordinator, Stephanie Evans, who
sometimes have to practice the patience of Job when dealing with me on these
projects.
• The military personnel who have fought so hard over the last ten years in two
wars and have given such sacrifice.
• My best friend and mother, Kathy Conlon, who is always there for me through
thick and thin.
Most especially, I would like to thank my husband, David Harris, for his continual
support and love. Without his steadfast confidence in me, I would not have been able
to accomplish half the things I have taken on in my life.

To obtain material from the disk that accompanies the printed version of this e-Book,
please follow the instructions in Appendix B, About the Download.

CHAPTER 1
Becoming a CISSP
This chapter presents the following:
• Description of the CISSP certification
• Reasons to become a CISSP
• What the CISSP exam entails
• The Common Body of Knowledge and what it contains
• The history of (ISC)2 and the CISSP exam
• An assessment test to gauge your current knowledge of security
This book is intended not only to provide you with the necessary information to help
you gain a CISSP certification, but also to welcome you into the exciting and challeng-
ing world of security.
The Certified Information Systems Security Professional (CISSP) exam covers ten
different subject areas, more commonly referred to as domains. The subject matter of
each domain can easily be seen as its own area of study, and in many cases individuals
work exclusively in these fields as experts. For many of these subjects, you can consult
and reference extensive resources to become an expert in that area. Because of this, a
common misconception is that the only way to succeed at the CISSP exam is to im-
merse yourself in a massive stack of texts and study materials. Fortunately, an easier
approach exists. By using this sixth edition of the CISSP All-in-One Exam Guide, you can
successfully complete and pass the CISSP exam and achieve your CISSP certification.
The goal of this book is to combine into a single resource all the information you need
to pass the CISSP exam and help you understand how the domains interact with each
other so that you can develop a comprehensive approach to security practices. This
book should also serve as a useful reference tool long after you’ve achieved your CISSP
certification.
Why Become a CISSP?
As our world changes, the need for improvements in security and technology continues
to grow. Security was once a hot issue only in the field of technology, but now it is be-
coming more and more a part of our everyday lives. Security is a concern of every orga-
nization, government agency, corporation, and military unit. Ten years ago computer
and information security was an obscure field that only concerned a few people. Because
the risks were essentially low, few were interested in security expertise.

Things have changed, however, and today corporations and other organizations are
desperate to recruit talented and experienced security professionals to help protect the
resources they depend on to run their businesses and to remain competitive. With a
CISSP certification, you will be seen as a security professional of proven ability who has
successfully met a predefined standard of knowledge and experience that is well under-
stood and respected throughout the industry. By keeping this certification current, you
will demonstrate your dedication to staying abreast of security developments.
Consider the reasons for attaining a CISSP certification:
• To meet the growing demand and to thrive in an ever-expanding field
• To broaden your current knowledge of security concepts and practices
• To bring security expertise to your current occupation
• To become more marketable in a competitive workforce
• To show a dedication to the security discipline
• To increase your salary and be eligible for more employment opportunities
The CISSP certification helps companies identify which individuals have the ability,
knowledge, and experience necessary to implement solid security practices; perform
risk analysis; identify necessary countermeasures; and help the organization as a whole
protect its facility, network, systems, and information. The CISSP certification also
shows potential employers you have achieved a level of proficiency and expertise in
skill sets and knowledge required by the security industry. The increasing importance
placed on security in corporate success will only continue in the future, leading to even
greater demands for highly skilled security professionals. The CISSP certification shows
that a respected third-party organization has recognized an individual’s technical and
theoretical knowledge and expertise, and distinguishes that individual from those who
lack this level of knowledge.
Understanding and implementing security practices is an essential part of being a
good network administrator, programmer, or engineer. Job descriptions that do not
specifically target security professionals still often require that a potential candidate
have a good understanding of security concepts as well as how to implement them. Due
to staff size and budget constraints, many organizations can’t afford separate network
and security staffs. But they still believe security is vital to their organization. Thus, they
often try to combine knowledge of technology and security into a single role. With a
CISSP designation, you can put yourself head and shoulders above other individuals in
this regard.
The CISSP Exam
Because the CISSP exam covers the ten domains making up the CISSP Common Body
of Knowledge (CBK), it is often described as being “an inch deep and a mile wide,” a
reference to the fact that many questions on the exam are not very detailed and do not
require you to be an expert in every subject. However, the questions do require you to
be familiar with many different security subjects.

The CISSP exam comprises 250 multiple-choice questions, and you have up to six
hours to complete it. The questions are pulled from a much larger question bank to
ensure the exam is as unique as possible for each entrant. In addition, the test bank
constantly changes and evolves to more accurately reflect the real world of security. The
exam questions are continually rotated and replaced in the bank as necessary. Each
question has four answer choices, only one of which is correct. Only 225 questions are
graded, while 25 are used for research purposes. The 25 research questions are inte-
grated into the exam, so you won’t know which go toward your final grade. To pass the
exam, you need a minimum raw score of 700 points out of 1,000. Questions are
weighted based on their difficulty; not all questions are worth the same number of
points. The exam is not product- or vendor-oriented, meaning no questions will be
specific to certain products or vendors (for instance, Windows, Unix, or Cisco). Instead,
you will be tested on the security models and methodologies used by these types of
systems.
(ISC)2, which stands for International Information Systems Security Certification
Consortium, has also added scenario-based questions to the CISSP exam. These ques-
tions present a short scenario to the test taker rather than asking the test taker to iden-
tify terms and/or concepts. The goal of the scenario-based questions is to ensure that
test takers not only know and understand the concepts within the CBK, but also can
apply this knowledge to real-life situations. This is more practical because in the real
world, you won’t be challenged by having someone asking you “What is the definition
of collusion?” You need to know how to detect and prevent collusion from taking place,
in addition to knowing the definition of the term.
After passing the exam, you will be asked to supply documentation, supported by a
sponsor, proving that you indeed have the type of experience required to obtain this
certification. The sponsor must sign a document vouching for the security experience
you are submitting. So, make sure you have this sponsor lined up prior to registering
for the exam and providing payment. You don’t want to pay for and pass the exam, only
to discover that you can’t find a sponsor for the final step needed to achieve your certification.
The reason behind the sponsorship requirement is to ensure that those who achieve
the certification have real-world experience to offer organizations. Book knowledge is
extremely important for understanding theory, concepts, standards, and regulations,
but it can never replace hands-on experience. Proving your practical experience sup-
ports the relevance of the certification.
A small sample group of individuals selected at random will be audited after pass-
ing the exam. The audit consists mainly of individuals from (ISC)2 calling on the can-
didates’ sponsors and contacts to verify the test taker’s related experience.
What makes this exam challenging is that most candidates, although they work in
the security field, are not necessarily familiar with all ten CBK domains. If a security
professional is considered an expert in vulnerability testing or application security, for
example, she may not be familiar with physical security, cryptography, or forensics.
Thus, studying for this exam will broaden your knowledge of the security field.
The exam questions address the ten CBK security domains, which are described in
Table 1-1.

Access Control: This domain examines mechanisms and methods used to enable
administrators and managers to control what subjects can access, the extent of their
capabilities after authorization and authentication, and the auditing and monitoring
of these activities. Some of the topics covered include
• Access control threats
• Identification and authentication technologies and techniques
• Access control administration
• Single sign-on technologies
• Attack methods

Telecommunications and Network Security: This domain examines internal, external,
public, and private communication systems; networking structures; devices; protocols;
and remote access and administration. Some of the topics covered include
• OSI model and layers
• Local area network (LAN), metropolitan area network (MAN), and wide area
network (WAN) technologies
• Internet, intranet, and extranet issues
• Virtual private networks (VPNs), firewalls, routers, switches, and repeaters
• Network topologies and cabling
• Attack methods

Information Security Governance and Risk Management: This domain examines the
identification of company assets, the proper way to determine the necessary level of
protection required, and what type of budget to develop for security implementations,
with the goal of reducing threats and monetary loss. Some of the topics covered include
• Data classification
• Policies, procedures, standards, and guidelines
• Risk assessment and management
• Personnel security, training, and awareness

Software Development Security: This domain examines secure software development
approaches, application security, and software flaws. Some of the topics covered
include
• Data warehousing and data mining
• Various development practices and their risks
• Software components and vulnerabilities
• Malicious code

Cryptography: This domain examines cryptography techniques, approaches, and
technologies. Some of the topics covered include
• Symmetric versus asymmetric algorithms and uses
• Public key infrastructure (PKI) and hashing functions
• Encryption protocols and implementation
• Attack methods

Table 1-1 Security Domains That Make Up the CISSP CBK

Security Architecture and Design: This domain examines ways that software should be
designed securely. It also covers international security measurement standards and
their meaning for different types of platforms. Some of the topics covered include
• Operating states, kernel functions, and memory mapping
• Security models, architectures, and evaluations
• Evaluation criteria: Trusted Computer System Evaluation Criteria (TCSEC),
Information Technology Security Evaluation Criteria (ITSEC), and Common Criteria
• Common flaws in applications and systems
• Certification and accreditation

Security Operations: This domain examines controls over personnel, hardware,
systems, and auditing and monitoring techniques. It also covers possible abuse
channels and how to recognize and address them. Some of the topics covered include
• Administrative responsibilities pertaining to personnel and job functions
• Maintenance concepts of antivirus, training, auditing, and resource protection
activities
• Preventive, detective, corrective, and recovery controls
• Security and fault-tolerance technologies

Business Continuity and Disaster Recovery Planning: This domain examines the
preservation of business activities when faced with disruptions or disasters. It involves
the identification of real risks, proper risk assessment, and countermeasure
implementation. Some of the topics covered include
• Business resource identification and value assignment
• Business impact analysis and prediction of possible losses
• Unit priorities and crisis management
• Plan development, implementation, and maintenance

Legal, Regulations, Investigations, and Compliance: This domain examines computer
crimes, laws, and regulations. It includes techniques for investigating a crime,
gathering evidence, and handling procedures. It also covers how to develop and
implement an incident-handling program. Some of the topics covered include
• Types of laws, regulations, and crimes
• Licensing and software piracy
• Export and import laws and issues
• Evidence types and admissibility into court
• Incident handling
• Forensics

Table 1-1 Security Domains That Make Up the CISSP CBK (continued)

(ISC)2 attempts to keep up with changes in technology and methodologies in the
security field by adding numerous new questions to the test question bank each year.
These questions are based on current technologies, practices, approaches, and stan-
dards. For example, the CISSP exam given in 1998 did not have questions pertaining to
wireless security, cross-site scripting attacks, or IPv6.
Other examples of material not on past exams include security governance, instant
messaging, phishing, botnets, VoIP, and spam. Though these subjects weren’t issues in
the past, they are now.
The test is based on internationally accepted information security standards and
practices. If you look at the (ISC)2 website for test dates and locations, you may find, for
example, that the same test is offered this Tuesday in California and next Wednesday in
Saudi Arabia.
If you do not pass the exam, you have the option of retaking it as soon as you like.
(ISC)2 used to subject individuals to a waiting period before they could retake the exam,
but this rule has been removed. (ISC)2 keeps track of which exam version you were
given on your first attempt and ensures you receive a different version for any retakes.
(ISC)2 also provides a report to a CISSP candidate who did not pass the exam, detailing
the areas where the candidate was weakest. Though you could retake the exam soon
afterward, it’s wise to devote additional time to these weak areas to improve your score
on the retest.
CISSP: A Brief History
Historically, the field of computer and information security has not been a structured
and disciplined profession; rather, the field has lacked many well-defined professional
objectives and thus has often been misperceived.
In the mid-1980s, members of the computer security profession recognized that
they needed a certification program that would give their profession structure and pro-
vide ways for security professionals to demonstrate competence and to present evi-
dence of their qualifications. Establishing such a program would help the credibility of
the security profession as a whole and the individuals who comprise it.
Physical (Environmental) Security: This domain examines threats, risks, and
countermeasures to protect facilities, hardware, data, media, and personnel. This
involves facility selection, authorized entry methods, and environmental and safety
procedures. Some of the topics covered include
• Restricted areas, authorization methods, and controls
• Motion detectors, sensors, and alarms
• Intrusion detection
• Fire detection, prevention, and suppression
• Fencing, security guards, and security badge types

Table 1-1 Security Domains That Make Up the CISSP CBK (continued)

In November 1988, the Special Interest Group for Computer Security (SIG-CS) of
the Data Processing Management Association (DPMA) brought together several
organizations interested in forming a security certification program. They included
the Information Systems Security Association (ISSA), the Canadian Information Processing
Society (CIPS), the Computer Security Institute (CSI), Idaho State University, and sev-
eral U.S. and Canadian government agencies. As a voluntary joint effort, these organi-
zations developed the necessary components to offer a full-fledged security certification
for interested professionals. (ISC)2 was formed in mid-1989 as a nonprofit corporation
to develop a security certification program for information systems security practitio-
ners. The certification was designed to measure professional competence and to help
companies in their selection of security professionals and personnel. (ISC)2 was estab-
lished in North America, but quickly gained international acceptance and now offers
testing capabilities all over the world.
Because security is such a broad and diversified field in the technology and business
world, the original consortium decided on an information systems security CBK com-
posed of ten domains that pertain to every part of computer, network, business, and
information security. In addition, because technology continues to rapidly evolve, stay-
ing up-to-date on security trends, technology, and business developments is required to
maintain the CISSP certification. The group also developed a Code of Ethics, test speci-
fications, a draft study guide, and the exam itself.
How Do You Sign Up for the Exam?
To become a CISSP, start at www.isc2.org, where you will find an exam registration
form you must fill out and send to (ISC)2. You will be asked to provide your security
work history, as well as documents for the necessary educational requirements. You
will also be asked to read the (ISC)2 Code of Ethics and to sign a form indicating that
you understand these requirements and promise to abide by them. You then provide
payment along with the completed registration form, where you indicate your prefer-
ence as to the exam location. The numerous testing sites and dates can be found at
www.isc2.org.
What Does This Book Cover?
This book covers everything you need to know to become an (ISC)2-certified CISSP. It
teaches you the hows and whys behind organizations’ development and implementation
of policies, procedures, guidelines, and standards. It covers network, application,
and system vulnerabilities; what exploits them; and how to counter these threats. The
book explains physical security, operational security, and why systems implement the
security mechanisms they do. It also reviews the U.S. and international security criteria
and evaluations performed on systems for assurance ratings, what these criteria
mean, and why they are used. This book also explains the legal and liability issues that
surround computer systems and the data they hold, including such subjects as com-
puter crimes, forensics, and what should be done to properly prepare computer evi-
dence associated with these topics for court.
While this book is mainly intended to be used as a study guide for the CISSP exam,
it is also a handy reference guide for use after your certification.

Tips for Taking the CISSP Exam
If you are taking the Scantron test, the exam is monitored by CISSP proctors. They
will require that any food or beverage you bring with you be kept on a back table and
not at your desk. Proctors may inspect the contents of any and all articles entering the
test room. Restroom breaks are usually limited to allowing only one person to leave at
a time, so drinking 15 cups of coffee right before the exam might not be the best idea.
NOTE (ISC)2 still uses the physical Scantron tests that require you to color
in bubbles on a test paper. The organization is moving toward providing the
exam in a digital format through Pearson VUE centers.
You will not be allowed to keep your smartphones and mobile devices with you
during the testing process. You may have to leave them at the front of the room and
retrieve them when you are finished with the exam. In the past too many people have
used these items to cheat on the exam, so precautions are now in place.
Many people feel as though the exam questions are tricky. Make sure to read the
question and its answers thoroughly instead of reading a few words and immediately
assuming you know what the question is asking. Some of the answer choices may have
only subtle differences, so be patient and devote time to reading through the question
more than once.
As with most tests, it is best to go through the questions and answer those you know
immediately; then go back to the ones causing you difficulty. The CISSP exam is not
computerized (although (ISC)2 is moving to a computerized model), so you will re-
ceive a piece of paper with bubbles to fill in and one of several colored exam booklets
containing the questions. If you scribble outside the lines on the answer sheet, the
machine that reads your answers may count a correct answer as wrong. I suggest you go
through each question and mark the right answer in the booklet with the questions.
Repeat this process until you have completed your selections. Then go through the
questions again and fill in the bubbles. This approach leads to less erasing and fewer
potential problems with the scoring machine. You are allowed to write and scribble on
your question exam booklet any way you choose. You will turn it in at the end of your
exam with your answer sheet, but only answers on the answer sheet will be counted, so
make sure you transfer all your answers to the answer sheet.
Other (ISC)2 certification exams may be taking place simultaneously in the same
room, such as those for the SSCP (Systems Security Certified Practitioner), ISSAP
or ISSMP (Architecture and Management concentrations, respectively), or ISSEP
(Engineering concentration).
These other exams vary in length and duration, so don’t feel rushed if you see others
leaving the room early; they may be taking a shorter exam.
When finished, don’t immediately turn in your exam. You have six hours, so don’t
squander it just because you might be tired or anxious. Use the time wisely. Take an
extra couple of minutes to make sure you answered every question, and that you did
not accidentally fill in two bubbles for the same question.

Unfortunately, exam results take some time to be returned. (ISC)2 states it can take
up to six weeks to get your results to you, but on average it takes around two weeks to
receive your results through e-mail and/or the mail.
If you passed the exam, the results sent to you will not contain your score; you will only know that you passed. Candidates who do not pass the test are always provided with a score, however, so they know exactly which areas to focus more attention on for the next exam. The domains are listed on this notification, ranked from weakest to strongest. If you do not pass the exam, remember that many smart and talented security professionals didn't pass on their first try either, chiefly because the test covers such a broad range of topics.
One of the most commonly heard complaints is about the exam itself. The questions are not long-winded, like many Microsoft tests, but at times it is difficult to distinguish between two answers that seem to say the same thing. Although (ISC)2 has been removing the use of negatives, such as "not," "except for," and so on, they do still appear on the exam. The scenario-based questions may expect you to understand concepts in more than one domain to properly answer the question.
Another complaint heard about the test is that some questions seem a bit subjective. For example, whereas it might be easy to answer a technical question that asks for the exact mechanism used in Secure Sockets Layer (SSL) that protects against man-in-the-middle attacks, it's not quite as easy to answer a question that asks whether an eight-foot perimeter fence provides low, medium, or high security. Many questions ask the test taker to choose the "best" approach, which some people find confusing and subjective. These complaints are mentioned here not to criticize (ISC)2 and the test writers, but to help you better prepare for the test. This book covers all the necessary material for the test and contains many questions and self-practice tests. Most of the questions are formatted in such a way as to better prepare you for what you will encounter on the actual test. So, make sure to read all the material in the book, and pay close attention to the questions and their formats. Even if you know the subject well, you may still get some answers wrong; it is just part of learning how to take tests.
Familiarize yourself with industry standards and expand your technical knowledge
and methodologies outside the boundaries of what you use today. I cannot stress enough
that just because you are the top dog in your particular field, it doesn’t mean you are
properly prepared for every domain the exam covers. Take the assessment test in this
chapter to gauge where you stand, and be ready to read a lot of material new to you.
How to Use This Book
Much effort has gone into putting all the necessary information into this book. Now it's up to you to study and understand the material and its various concepts. To best benefit from this book, you might want to use the following study method:
1. Study each chapter carefully and make sure you understand each concept
presented. Many concepts must be fully understood, and glossing over a couple
here and there could be detrimental to you. The CISSP CBK contains thousands
of individual topics, so take the time needed to understand them all.

CISSP All-in-One Exam Guide
10
2. Make sure to study and answer all of the questions at the end of the chapter,
as well as those in the digital content included with this book and in Appendix A,
Comprehensive Questions. If any questions confuse you, go back and study
those sections again. Remember, some of the questions on the actual exam are
a bit confusing because they do not seem straightforward. I have attempted to
draft several questions in the same manner to prepare you for the exam. So do
not ignore the confusing questions, thinking they’re not well worded. Instead,
pay even closer attention to them because they are there for a reason.
3. If you are not familiar with specific topics, such as firewalls, laws, physical
security, or protocol functionality, use other sources of information (books,
articles, and so on) to attain a more in-depth understanding of those subjects.
Don’t just rely on what you think you need to know to pass the CISSP exam.
4. After reading this book, study the questions and answers, and take the practice
tests. Then review the (ISC)2 study guide and make sure you are comfortable
with each bullet item presented. If you are not comfortable with some items,
revisit those chapters.
5. If you have taken other certification exams—such as Cisco, Novell, or Microsoft—
you might be used to having to memorize details and configuration parameters.
But remember, the CISSP test is “an inch deep and a mile wide,” so make sure
you understand the concepts of each subject before trying to memorize the small,
specific details.
6. Remember that the exam is looking for the "best" answer. On some questions, test takers may not agree with any of the answers. You are being asked to choose the best answer out of the four being offered to you.
Questions
To get a better feel for your level of expertise and your current level of readiness for the
CISSP exam, run through the following questions:
1. Which of the following provides an incorrect characteristic of a memory leak?
A. Common programming error
B. Common when languages that have no built-in automatic garbage
collection are used
C. Common in applications written in Java
D. Common in applications written in C++
2. Which of the following is the best description pertaining to the “Trusted
Computing Base”?
A. The term originated from the Orange Book and pertains to firmware.
B. The term originated from the Orange Book and addresses the security
mechanisms that are only implemented by the operating system.
C. The term originated from the Orange Book and contains the protection
mechanisms within a system.

D. The term originated from the Rainbow Series and addressed the level of
significance each mechanism of a system portrays in a secure environment.
3. Which of the following is the best description of the security kernel and the
reference monitor?
A. The reference monitor is a piece of software that runs on top of the security
kernel. The reference monitor is accessed by every security call of the security
kernel. The security kernel is too large to test and verify.
B. The reference monitor concept is a small program that is not related to
the security kernel. It will enforce access rules upon subjects who attempt
to access specific objects. This program is regularly used with modern
operating systems.
C. The reference monitor concept is used strictly for database access control
and is one of the key components in maintaining referential integrity within
the system. It is impossible for the user to circumvent the reference monitor.
D. The reference monitor and security kernel are core components of modern
operating systems. They work together to mediate all access between subjects
and objects. They should not be able to be circumvented and must be called
upon for every access attempt.
4. Which of the following models incorporates the idea of separation of duties and
requires that all modifications to data and objects be done through programs?
A. State machine model
B. Bell-LaPadula model
C. Clark-Wilson model
D. Biba model
5. Which of the following best describes the hierarchical levels of privilege
within the architecture of a computer system?
A. Computer system ring structure
B. Microcode abstraction levels of security
C. Operating system user mode
D. Operating system kernel mode
6. Which of the following is an untrue statement?
i. Virtual machines can be used to provide secure, isolated sandboxes for
running untrusted applications.
ii. Virtual machines can be used to create execution environments with
resource limits and, given the right schedulers, resource guarantees.
iii. Virtualization can be used to simulate networks of independent
computers.
iv. Virtual machines can be used to run multiple operating systems
simultaneously: different versions, or even entirely different systems,
which can be on hot standby.

A. All of them
B. None of them
C. i, ii
D. ii, iii
7. Which of the following is the best means of transferring information
when parties do not have a shared secret and large quantities of sensitive
information must be transmitted?
A. Use of public key encryption to secure a secret key, and message
encryption using the secret key
B. Use of the recipient’s public key for encryption, and decryption based on
the recipient’s private key
C. Use of software encryption assisted by a hardware encryption accelerator
D. Use of elliptic curve encryption
8. Which algorithm did NIST choose to become the Advanced Encryption
Standard (AES) replacing the Data Encryption Standard (DES)?
A. DEA
B. Rijndael
C. Twofish
D. IDEA
Use the following scenario to answer questions 9–11. John is the security administrator for
company X. He has been asked to oversee the installation of a fire suppression sprinkler
system, as recent unusually dry weather has increased the likelihood of fire. Fire could
potentially cause a great amount of damage to the organization’s assets. The sprinkler
system is designed to reduce the impact of fire on the company.
9. In this scenario, fire is considered which of the following?
A. Vulnerability
B. Threat
C. Risk
D. Countermeasure
10. In this scenario, the sprinkler system is considered which of the following?
A. Vulnerability
B. Threat
C. Risk
D. Countermeasure
11. In this scenario, the likelihood and damage potential of a fire is considered
which of the following?
A. Vulnerability
B. Threat

C. Risk
D. Countermeasure
Use the following scenario to answer questions 12–14. A small remote facility for a company
is valued at $800,000. It is estimated, based on historical data and other predictors, that
a fire is likely to occur once every ten years at a facility in this area. It is estimated that
such a fire would destroy 60 percent of the facility under the current circumstances and
with the current detective and preventative controls in place.
12. What is the single loss expectancy (SLE) for the facility suffering from a fire?
A. $80,000
B. $480,000
C. $320,000
D. 60 percent
13. What is the annualized rate of occurrence (ARO)?
A. 1
B. 10
C. .1
D. .01
14. What is the annualized loss expectancy (ALE)?
A. $480,000
B. $32,000
C. $48,000
D. .6
15. Which of the following is not a characteristic of Protected Extensible
Authentication Protocol?
A. Authentication protocol used in wireless networks and point-to-point
connections
B. Designed to provide improved secure authentication for 802.11 WLANs
C. Designed to support 802.1x port access control and Transport Layer Security
D. Designed to support password-protected connections
16. Which of the following best describes the Temporal Key Integrity Protocol’s
(TKIP) role in the 802.11i standard?
A. It provides 802.1x and EAP to increase the authentication strength.
B. It requires the access point and the wireless device to authenticate to each
other.
C. It sends the SSID and MAC value in ciphertext.
D. It adds more keying material for the RC4 algorithm.

17. Vendors have implemented various solutions to overcome the vulnerabilities of Wired Equivalent Privacy (WEP). Which of the following provides an incorrect mapping between these solutions and their characteristics?
A. LEAP requires a PKI.
B. PEAP only requires the server to authenticate using a digital certificate.
C. EAP-TLS requires both the wireless device and server to authenticate using
digital certificates.
D. PEAP allows the user to provide a password.
18. Encapsulating Security Payload (ESP), which is one protocol within the IPSec
protocol suite, is primarily designed to provide which of the following?
A. Confidentiality
B. Cryptography
C. Digital signatures
D. Access control
19. Which of the following redundant array of independent disks
implementations uses interleave parity?
A. Level 1
B. Level 2
C. Level 4
D. Level 5
20. Which of the following is not one of the stages of the Dynamic Host Configuration Protocol (DHCP) lease process?
i. Discover
ii. Offer
iii. Request
iv. Acknowledgment
A. All of them
B. None of them
C. i
D. ii
21. Which of the following has been deemed by the Internet Architecture Board
as unethical behavior for Internet users?
A. Creating computer viruses
B. Monitoring data traffic
C. Wasting computer resources
D. Concealing unauthorized accesses

22. Most computer-related documents are categorized as which of the following
types of evidence?
A. Hearsay evidence
B. Direct evidence
C. Corroborative evidence
D. Circumstantial evidence
23. During the examination and analysis process of a forensics investigation, it is
critical that the investigator works from an image that contains all of the data
from the original disk. The image must have all but which of the following
characteristics?
A. Byte-level copy
B. Captured slack spaces
C. Captured deleted files
D. Captured unallocated clusters
24. __________ is a process of interactively producing more detailed versions
of objects by populating variables with different values. It is often used to
prevent inference attacks.
A. Polyinstantiation
B. Polymorphism
C. Polyabsorbtion
D. Polyobject
25. Tim is a software developer for a financial institution. He develops
middleware software code that carries out his company’s business logic
functions. One of the applications he works with is written in the C
programming language and seems to be taking up too much memory as it
runs over a period of time. Which of the following best describes what Tim
needs to look at implementing to rid this software of this type of problem?
A. Bounds checking
B. Garbage collection
C. Parameter checking
D. Compiling
26. __________ is a software testing technique that provides invalid, unexpected,
or random data to the inputs of a program.
A. Agile testing
B. Structured testing
C. Fuzzing
D. EICAR

27. Which type of malware can change its own code, making it harder to detect
with antivirus software?
A. Stealth virus
B. Polymorphic virus
C. Trojan horse
D. Logic bomb
28. What is derived from a passphrase?
A. A personal password
B. A virtual password
C. A user ID
D. A valid password
29. Which access control model is user-directed?
A. Nondiscretionary
B. Mandatory
C. Identity-based
D. Discretionary
30. Which item is not part of a Kerberos authentication implementation?
A. A message authentication code
B. A ticket-granting ticket
C. Authentication service
D. Users, programs, and services
31. If a company has a high turnover rate, which access control structure is best?
A. Role-based
B. Decentralized
C. Rule-based
D. Discretionary
32. In discretionary access control, who/what has delegation authority to grant
access to data?
A. A user
B. A security officer
C. A security policy
D. An owner
33. Remote access security using a token one-time password generation is an
example of which of the following?
A. Something you have
B. Something you know

C. Something you are
D. Two-factor authentication
34. What is a crossover error rate (CER)?
A. A rating used as a performance metric for a biometric system
B. The number of Type I errors
C. The number of Type II errors
D. The number reached when Type I errors exceed the number of Type II errors
35. What does a retina scan biometric system do?
A. Examines the pattern, color, and shading of the area around the cornea
B. Examines the patterns and records the similarities between an
individual’s eyes
C. Examines the pattern of blood vessels at the back of the eye
D. Examines the geometry of the eyeball
36. If you are using a synchronous token device, what does this mean?
A. The device synchronizes with the authentication service by using internal
time or events.
B. The device synchronizes with the user’s workstation to ensure the
credentials it sends to the authentication service are correct.
C. The device synchronizes with the token to ensure the timestamp is valid
and correct.
D. The device synchronizes by using a challenge-response method with the
authentication service.
37. What is a clipping level?
A. The threshold for an activity
B. The size of a control zone
C. Explicit rules of authorization
D. A physical security mechanism
38. Which intrusion detection system would monitor user and network behavior?
A. Statistical/anomaly-based
B. Signature-based
C. Static
D. Host-based
39. When should a Class C fire extinguisher be used instead of a Class A?
A. When electrical equipment is on fire
B. When wood and paper are on fire
C. When a combustible liquid is on fire
D. When the fire is in an open area

40. How does halon suppress fires?
A. It reduces the fire’s fuel intake.
B. It reduces the temperature of the area.
C. It disrupts the chemical reactions of a fire.
D. It reduces the oxygen in the area.
41. What is the problem with high humidity in a data processing environment?
A. Corrosion
B. Fault tolerance
C. Static electricity
D. Contaminants
42. What is the definition of a power fault?
A. Prolonged loss of power
B. Momentary low voltage
C. Prolonged high voltage
D. Momentary power outage
43. Who has the primary responsibility of determining the classification level for
information?
A. The functional manager
B. Middle management
C. The owner
D. The user
44. Which best describes the purpose of the ALE calculation?
A. It quantifies the security level of the environment.
B. It estimates the loss potential from a threat.
C. It quantifies the cost/benefit result.
D. It estimates the loss potential from a threat in a one-year time span.
45. How do you calculate residual risk?
A. Threats × risks × asset value
B. (Threats × asset value × vulnerability) × risks
C. SLE × frequency
D. (Threats × vulnerability × asset value) × control gap
46. What is the Delphi method?
A. A way of calculating the cost/benefit ratio for safeguards
B. A way of allowing individuals to express their opinions anonymously

C. A way of allowing groups to discuss and collaborate on the best security
approaches
D. A way of performing a quantitative risk analysis
47. What are the necessary components of a smurf attack?
A. Web server, attacker, and fragment offset
B. Fragment offset, amplifying network, and victim
C. Victim, amplifying network, and attacker
D. DNS server, attacker, and web server
48. What do the reference monitor and security kernel do in an operating system?
A. Intercept and mediate a subject attempting to access objects
B. Point virtual memory addresses to real memory addresses
C. House and protect the security kernel
D. Monitor privileged memory usage by applications
Answers
1. C
2. C
3. D
4. C
5. A
6. B
7. A
8. B
9. B
10. D
11. C
12. B
13. C
14. C
15. D
16. D
17. A
18. A

19. D
20. B
21. C
22. A
23. A
24. A
25. B
26. C
27. B
28. B
29. D
30. A
31. A
32. D
33. A
34. A
35. C
36. A
37. A
38. A
39. A
40. C
41. A
42. D
43. C
44. D
45. D
46. B
47. C
48. A

CHAPTER 2
Information Security Governance and Risk Management
This chapter presents the following:
• Security terminology and principles
• Protection control types
• Security frameworks, models, standards, and best practices
• Security enterprise architecture
• Risk management
• Security documentation
• Information classification and protection
• Security awareness training
• Security governance
In reality, organizations have many other things to do than practice security. Businesses exist to make money. Most nonprofit organizations exist to offer some type of service, as in charities, educational centers, and religious entities. None of them exist specifically to deploy and maintain firewalls, intrusion detection systems, identity management technologies, and encryption devices. No business really wants to develop hundreds of security policies, deploy antimalware products, maintain vulnerability management systems, constantly update its incident response capabilities, and have to comply with the alphabet soup of security regulations (SOX, GLBA, PCI-DSS, HIPAA) and federal and state laws. Business owners would like to be able to make their widgets, sell their widgets, and go home. But these simpler days are long gone. Now organizations are faced with attackers who want to steal businesses' customer data to carry out identity theft and banking fraud. Company secrets are commonly stolen by internal and external entities for economic espionage purposes. Systems are hijacked and used within botnets to attack other organizations or to spread spam. Company funds are secretly siphoned off through complex and hard-to-identify digital methods, commonly by organized criminal rings in different countries. And organizations that find themselves in the crosshairs of attackers may come under constant attack that brings their systems and websites offline for hours or days. Companies are required to practice a wide range of security disciplines today to keep their market share, protect their customers and bottom line, stay out of jail, and still sell their widgets.

In this chapter we will cover many of the disciplines that are necessary for organizations to practice security in a holistic manner. Each organization must develop an enterprise-wide security program that consists of the technologies, procedures, and processes covered throughout this book. As you go along in your security career, you will find that most organizations have some pieces of the puzzle of an "enterprise-wide security program" in place, but not all of them. And almost every organization struggles with how to assess the risks the company faces and how to allocate funds and resources properly to mitigate those risks. Many of the security programs in place today can be thought of as lopsided or lumpy. The security programs excel within the disciplines the team is most familiar with, while the other disciplines are found lacking. It is your responsibility to become as well-rounded in security as possible, so that you can identify these deficiencies in security programs and help improve upon them. This is why the CISSP exam covers a wide variety of technologies, methodologies, and processes: you must know and understand them holistically if you are going to help an organization carry out security holistically.
We will begin with the foundational pieces of security and build upon them through this chapter and then throughout the book. Building your knowledge base is similar to building a house: without a solid foundation, it will be weak, unpredictable, and fail in the most critical of moments. Our goal is to make sure you have solid and deep roots of understanding so that you can not only protect yourself against many of the threats we face today, but also protect the commercial and government organizations that depend upon you and your skill set.
Fundamental Principles of Security
We need to understand the core goals of security, which are to provide availability, integrity, and confidentiality (AIC triad) protection for critical assets. Each asset will require different levels of these types of protection, as we will see in the following sections. All security controls, mechanisms, and safeguards are implemented to provide one or more of these protection types, and all risks, threats, and vulnerabilities are measured for their potential capability to compromise one or all of the AIC principles.

NOTE In some documentation, the "triad" is presented as CIA: confidentiality, integrity, and availability.
Availability
Emergency! I can’t get to my data!
Response: Turn the computer on!
Availability protection ensures reliability and timely access to data and resources to authorized individuals. Network devices, computers, and applications should provide adequate functionality to perform in a predictable manner with an acceptable level of performance. They should be able to recover from disruptions in a secure and quick fashion so productivity is not negatively affected. Necessary protection mechanisms must be in place to protect against inside and outside threats that could affect the availability and productivity of all business-processing components.
Like many things in life, ensuring the availability of the necessary resources within an organization sounds easier to accomplish than it really is. Networks have so many pieces that must stay up and running (routers, switches, DNS servers, DHCP servers, proxies, firewalls). Software has many components that must be executing in a healthy manner (operating system, applications, antimalware software). There are environmental aspects that can negatively affect an organization's operations (fire, flood, HVAC issues, electrical problems), potential natural disasters, and physical theft or attacks. An organization must fully understand its operational environment and its availability weaknesses so that the proper countermeasures can be put into place.
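The recover-quickly requirement can be sketched as a simple failover: serve from the primary resource, and fall back to a standby when the primary is disrupted. This is an illustrative sketch only; both service functions below are hypothetical.

```python
def fetch(primary, standby):
    """Serve from the primary resource, failing over to the standby
    so availability is preserved when the primary is disrupted."""
    try:
        return primary()
    except ConnectionError:
        return standby()

# Hypothetical services: the primary is down, the standby answers.
def primary_service():
    raise ConnectionError("primary unreachable")

def standby_service():
    return "served by standby"

print(fetch(primary_service, standby_service))  # prints "served by standby"
```

Real availability controls (clustering, load balancing, redundant lines) apply the same principle at the infrastructure level rather than in application code.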
Integrity
Integrity is upheld when the assurance of the accuracy and reliability of information and systems is provided and any unauthorized modification is prevented. Hardware, software, and communication mechanisms must work in concert to maintain and process data correctly and to move data to intended destinations without unexpected alteration. The systems and network should be protected from outside interference and contamination.
Environments that enforce and provide this attribute of security ensure that attackers, or mistakes by users, do not compromise the integrity of systems or data. When an attacker inserts a virus, logic bomb, or back door into a system, the system's integrity is compromised. This can, in turn, harm the integrity of information held on the system by way of corruption, malicious modification, or the replacement of data with incorrect data. Strict access controls, intrusion detection, and hashing can combat these threats.
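A minimal sketch of the hashing control: compute a digest of the data and compare it later, so any unauthorized modification is detectable. The record contents below are hypothetical.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return a SHA-256 digest; any change to the data changes the digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical record whose integrity we want to verify later.
original = b"transfer $300 to account 1001"
baseline = digest(original)

# An attacker (or a typo) alters the record in transit or at rest.
tampered = b"transfer $3000 to account 1001"

print(digest(tampered) == baseline)  # prints False: modification detected
```

Hashing alone detects modification but does not identify who made it; combined with access controls and logging, it supports the integrity goal.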
Users usually affect a system or its data's integrity by mistake (although internal users may also commit malicious deeds). For example, users with a full hard drive may unwittingly delete configuration files under the mistaken assumption that deleting a boot.ini file must be okay because they don't remember ever using it. Or, for example, a user may insert incorrect values into a data processing application that ends up charging a customer $3,000 instead of $300. Incorrectly modifying data kept in databases is another common way users may accidentally corrupt data, a mistake that can have lasting effects.

Security should streamline users’ capabilities and give them only certain choices
and functionality, so errors become less common and less devastating. System-critical
files should be restricted from viewing and access by users. Applications should provide
mechanisms that check for valid and reasonable input values. Databases should let
only authorized individuals modify data, and data in transit should be protected by
encryption or other mechanisms.
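The valid-and-reasonable-input check described above can be sketched as a simple bounds check; the charge limit here is a hypothetical example value.

```python
def validate_charge(amount, max_charge=1_000.00):
    """Reject charge values that are non-numeric or outside a reasonable
    range, so a typo such as $3,000 instead of $300 is caught early."""
    if isinstance(amount, bool) or not isinstance(amount, (int, float)):
        raise TypeError("charge must be numeric")
    if amount <= 0 or amount > max_charge:
        raise ValueError(f"charge {amount} is outside (0, {max_charge}]")
    return float(amount)

validate_charge(300.00)     # accepted
# validate_charge(3000.00)  # raises ValueError before the customer is billed
```

Constraining inputs this way narrows users' choices exactly as the text suggests: errors become less common and less devastating.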
Confidentiality
I protect my most secret secrets.
Response: No one cares.
Confidentiality ensures that the necessary level of secrecy is enforced at each junction of data processing and prevents unauthorized disclosure. This level of confidentiality should prevail while data resides on systems and devices within the network, as it is transmitted, and once it reaches its destination.
Attackers can thwart confidentiality mechanisms by network monitoring, shoulder surfing, stealing password files, breaking encryption schemes, and social engineering. These topics will be addressed in more depth in later chapters, but briefly, shoulder surfing is when a person looks over another person's shoulder and watches their keystrokes or views data as it appears on a computer screen. Social engineering is when one person tricks another person into sharing confidential information, for example, by posing as someone authorized to have access to that information. Social engineering can take many forms; any one-to-one communication medium can be used to perform social engineering attacks.
Users can intentionally or accidentally disclose sensitive information by not encrypting it before sending it to another person, by falling prey to a social engineering attack, by sharing a company's trade secrets, or by not using extra care to protect confidential information when processing it.
Confidentiality can be provided by encrypting data as it is stored and transmitted,
enforcing strict access control and data classification, and by training personnel on the
proper data protection procedures.
Availability, integrity, and confidentiality are critical principles of security. You
should understand their meaning, how they are provided by different mechanisms, and
how their absence can negatively affect an organization.
Balanced Security
In reality, when information security is dealt with, it is commonly only through the lens of keeping secrets secret (confidentiality). Integrity and availability threats can be overlooked and dealt with only after they have actually been compromised. Some assets have a critical confidentiality requirement (company trade secrets), some have critical integrity requirements (financial transaction values), and some have critical availability requirements (e-commerce web servers). Many people understand the concepts of the AIC triad, but may not fully appreciate the complexity of implementing the necessary controls to provide all the protection these concepts cover. The following provides a short list of some of these controls and how they map to the components of the AIC triad:

• Availability
• Redundant array of inexpensive disks (RAID)
• Clustering
• Load balancing
• Redundant data and power lines
• Software and data backups
• Disk shadowing
• Co-location and off-site facilities
• Roll-back functions
• Fail-over configurations
• Integrity
• Hashing (data integrity)
• Configuration management (system integrity)
• Change control (process integrity)
• Access control (physical and technical)
• Software digital signing
• Transmission CRC functions
• Confidentiality
• Encryption for data at rest (whole disk, database encryption)
• Encryption for data in transit (IPSec, SSL, PPTP, SSH)
• Access control (physical and technical)
All of these control types will be covered in this book. What is important to realize
at this point is that while the concept of the AIC triad may seem simplistic, meeting its
requirements is commonly more challenging.
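As one small illustration of the data-in-transit controls listed above, an application typically wraps its network socket in TLS (the successor to SSL). Below is a minimal client-side sketch using Python's standard library; the host name is hypothetical and no connection is actually made here.

```python
import socket
import ssl

# A default context verifies the server's certificate chain and host name,
# providing confidentiality and integrity for data in transit.
context = ssl.create_default_context()

def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Return a socket whose traffic is encrypted in transit via TLS."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Usage (hypothetical host, not executed here):
# with open_secure_channel("gateway.example.internal") as channel:
#     channel.sendall(b"sensitive payload")
```

Note that this addresses only the transit leg; the data-at-rest controls in the list (whole-disk and database encryption) must be applied separately.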
Key Terms
• Availability  Reliable and timely access to data and resources is provided to authorized individuals.
• Integrity  Accuracy and reliability of the information and systems are provided and any unauthorized modification is prevented.
• Confidentiality  Necessary level of secrecy is enforced and unauthorized disclosure is prevented.
• Shoulder surfing  Viewing information in an unauthorized manner by looking over the shoulder of someone else.
• Social engineering  Gaining unauthorized access by tricking someone into divulging sensitive information.

Security Definitions
I am vulnerable and see you as a threat.
Response: Good.
The words “vulnerability,” “threat,” “risk,” and “exposure” are often interchanged,
even though they have different meanings. It is important to understand each word’s
definition and the relationships between the concepts they represent.
A vulnerability is a lack of a countermeasure or a weakness in a countermeasure that is in place. It can be a software, hardware, procedural, or human weakness that can be exploited. A vulnerability may be a service running on a server, an unpatched application or operating system, an unrestricted wireless access point, an open port on a firewall, lax physical security that allows anyone to enter a server room, or unenforced password management on servers and workstations.
A threat is any potential danger that is associated with the exploitation of a vulnerability. The threat is that someone, or something, will identify a specific vulnerability and use it against the company or individual. The entity that takes advantage of a vulnerability is referred to as a threat agent. A threat agent could be an intruder accessing the network through a port on the firewall, a process accessing data in a way that violates the security policy, a tornado wiping out a facility, or an employee making an unintentional mistake that could expose confidential information.
A risk is the likelihood of a threat agent exploiting a vulnerability and the corresponding business impact. If a firewall has several ports open, there is a higher likelihood that an intruder will use one to access the network in an unauthorized manner. If users are not educated on processes and procedures, there is a higher likelihood that an employee will make an unintentional mistake that may destroy data. If an intrusion detection system (IDS) is not implemented on a network, there is a higher likelihood an attack will go unnoticed until it is too late. Risk ties the vulnerability, threat, and likelihood of exploitation to the resulting business impact.
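The relationship just described is often sketched as likelihood multiplied by impact. A toy illustration (the numeric ratings are invented assumptions, not from any standard):

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Risk ties the likelihood of exploitation (0.0-1.0) to business impact (e.g., dollars)."""
    return likelihood * impact

# Open firewall ports raise the likelihood of unauthorized access;
# the potential business impact stays the same, so the risk rises.
few_ports_open = risk_score(likelihood=0.1, impact=200_000)
many_ports_open = risk_score(likelihood=0.4, impact=200_000)
print(few_ports_open, many_ports_open)
```

The point of the sketch is only the structure: reducing either the likelihood (with countermeasures) or the impact (with recovery controls) reduces the risk.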
An exposure is an instance of being exposed to losses. A vulnerability exposes an
organization to possible damages. If password management is lax and password rules
are not enforced, the company is exposed to the possibility of having users’ passwords
captured and used in an unauthorized manner. If a company does not have its wiring
inspected and does not put proactive fire prevention steps into place, it exposes itself to
potentially devastating fires.
A control, or countermeasure, is put into place to mitigate (reduce) the potential risk. A countermeasure may be a software configuration, a hardware device, or a procedure that eliminates a vulnerability or reduces the likelihood a threat agent will be able to exploit a vulnerability. Examples of countermeasures include strong password management, firewalls, a security guard, access control mechanisms, encryption, and security-awareness training.
NOTE The terms “control,” “countermeasure,” and “safeguard” are
interchangeable terms. They are mechanisms put into place to reduce risk.

If a company has antimalware software but does not keep the signatures up-to-date, this is a vulnerability. The company is vulnerable to malware attacks. The threat is that a virus will show up in the environment and disrupt productivity. The likelihood of a virus showing up in the environment and causing damage, and the resulting potential damage, is the risk. If a virus infiltrates the company’s environment, then a vulnerability has been exploited and the company is exposed to loss. The countermeasures in this situation are to update the signatures and install the antimalware software on all computers. The relationships among risks, vulnerabilities, threats, and countermeasures are shown in Figure 2-1.
Applying the right countermeasure can eliminate the vulnerability and exposure,
and thus reduce the risk. The company cannot eliminate the threat agent, but it can
protect itself and prevent this threat agent from exploiting vulnerabilities within the
environment.
Many people gloss over these basic terms with the idea that they are not as important as the sexier things in information security. But you will find that unless a security team has an agreed-upon language in place, confusion will quickly take over. These terms embrace the core concepts of security, and if they are confused, then the activities rolled out to enforce security are commonly confused as well.
Figure 2-1 The relationships among the different security concepts

Control Types
We have this ladder, some rubber bands, and this Band-Aid.
Response: Okay, we are covered.
Up to this point we have covered the goals of security (availability, integrity, confidentiality) and the terminology used in the security industry (vulnerability, threat, risk, control). These are foundational components that must be understood if security is going to take place in an organized manner. The next foundational issue we are going to tackle is the control types that can be implemented and their associated functionality.
Controls are put into place to reduce the risk an organization faces, and they come in three main flavors: administrative, technical, and physical. Administrative controls are commonly referred to as “soft controls” because they are more management-oriented. Examples of administrative controls are security documentation, risk management, personnel security, and training. Technical controls (also called logical controls) are software or hardware components, such as firewalls, IDSs, encryption, and identification and authentication mechanisms. Physical controls are items put into place to protect facilities, personnel, and resources. Examples of physical controls are security guards, locks, fencing, and lighting.
These control types need to be put into place to provide defense-in-depth, which is the coordinated use of multiple security controls in a layered approach, as shown in Figure 2-2. A multilayered defense system minimizes the probability of successful penetration and compromise because an attacker would have to get through several different types of protection mechanisms before she gained access to the critical assets. For example, Company A can have the following physical controls in place that work in a layered model:
• Fence
• Locked external doors
• Closed-circuit TV
• Security guard
• Locked internal doors
• Locked server room
• Physically secured computers (cable locks)

Key Terms
• Vulnerability: Weakness or a lack of a countermeasure.
• Threat agent: Entity that can exploit a vulnerability.
• Threat: The danger of a threat agent exploiting a vulnerability.
• Risk: The probability of a threat agent exploiting a vulnerability and the associated impact.
• Control: Safeguard that is put in place to reduce a risk; also called a countermeasure.
• Exposure: Presence of a vulnerability, which exposes the organization to a threat.

Technical controls that are commonly put into place to provide this type of layered
approach are
• Firewalls
• Intrusion detection system
• Intrusion prevention systems
• Antimalware
• Access control
• Encryption
The types of controls that are actually implemented must map to the threats the company faces, and the number of layers that are put into place must map to the sensitivity of the asset. The rule of thumb is: the more sensitive the asset, the more layers of protection that must be put into place.
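The layering rule of thumb can be illustrated with a toy probability model (the per-layer bypass chances are invented for illustration and assume independent layers):

```python
from functools import reduce

def breach_probability(bypass_chances):
    """Probability an attacker independently bypasses every layer in the stack."""
    return reduce(lambda acc, p: acc * p, bypass_chances, 1.0)

# Each control alone is imperfect, but stacking controls multiplies the attacker's work:
# one 50/50 layer leaves a coin-flip, three such layers leave a 1-in-8 chance.
one_layer = breach_probability([0.5])
three_layers = breach_probability([0.5, 0.5, 0.5])
print(one_layer, three_layers)   # 0.5 vs 0.125
```

Real layers are rarely independent, so treat this only as intuition for why more sensitive assets warrant more layers.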
Figure 2-2 Defense-in-depth. Layered protections between a potential threat and the asset include policies and procedures, virtual private networks (VPN), firewalls, demilitarized zones (DMZ), secure architecture, account management, rule-based access control, patch management, virus scanners, and physical security.

So the different categories of controls that can be used are administrative, technical,
and physical. But what do these controls actually do for us? We need to understand the
different functionality that each control type can provide us in our quest to secure our
environments.
The different functionalities of security controls are preventive, detective, corrective,
deterrent, recovery, and compensating. By having a better understanding of the different
control functionalities, you will be able to make more informed decisions about what
controls will be best used in specific situations. The six different control functionalities
are as follows:
• Deterrent: Intended to discourage a potential attacker
• Preventive: Intended to avoid an incident from occurring
• Corrective: Fixes components or systems after an incident has occurred
• Recovery: Intended to bring the environment back to regular operations
• Detective: Helps identify an incident’s activities and potentially an intruder
• Compensating: Provides an alternative measure of control
Once you understand fully what the different controls do, you can use them in
the right locations for specific risks—or you can just put them where they would look the
prettiest.
When looking at a security structure of an environment, it is most productive to use
a preventive model and then use detective, recovery, and corrective mechanisms to help
support this model. Basically, you want to stop any trouble before it starts, but you
must be able to quickly react and combat trouble if it does find you. It is not feasible to
prevent everything; therefore, what you cannot prevent, you should be able to quickly
detect. That’s why preventive and detective controls should always be implemented
together and should complement each other. To take this concept further: what you
can’t prevent, you should be able to detect, and if you detect something, it means you
weren’t able to prevent it, and therefore you should take corrective action to make sure
it is indeed prevented the next time around. Therefore, all three types work together:
preventive, detective, and corrective.
The control types described next (administrative, physical, and technical) are pre-
ventive in nature. These are important to understand when developing an enterprise-
wide security program.
• Preventive: Administrative
• Policies and procedures
• Effective hiring practices
• Pre-employment background checks
• Controlled termination processes
• Data classification and labeling
• Security awareness
• Preventive: Physical

• Badges, swipe cards
• Guards, dogs
• Fences, locks, mantraps
• Preventive: Technical
• Passwords, biometrics, smart cards
• Encryption, secure protocols, call-back systems, database views, constrained
user interfaces
• Antimalware software, access control lists, firewalls, intrusion prevention
system
Table 2-1 shows how these categories of control mechanisms perform different security functions. Many students get themselves wrapped around the axle when trying to get their mind around which control provides which functionality. This is how this train of thought usually takes place: “A firewall is a preventive control, but if an attacker knew that it was in place it could be a deterrent.” Let’s stop right here. Do not make this any harder than it has to be. When trying to map the functionality requirement to a control, think of the main reason that control would be put into place. A firewall tries to prevent something bad from taking place, so it is a preventive control. Auditing logs is done after an event took place, so it is detective. A data backup system is developed so that data can be recovered; thus, it is a recovery control. Computer images are created so that if software gets corrupted, they can be reloaded; thus, this is a corrective control.
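The “main reason the control is put into place” heuristic can be captured as a simple lookup; the mapping below mirrors the examples in the paragraph above:

```python
# Each control is classified by the main reason it exists, as the text advises,
# rather than by every secondary role it could conceivably play.
PRIMARY_FUNCTION = {
    "firewall": "preventive",       # tries to stop something bad from happening
    "audit logs": "detective",      # reviewed after an event takes place
    "data backup": "recovery",      # restores data after a loss
    "server images": "corrective",  # reloaded after software gets corrupted
}

def classify(control: str) -> str:
    """Return a control's primary functionality, or 'unknown' if unmapped."""
    return PRIMARY_FUNCTION.get(control, "unknown")

print(classify("firewall"))   # preventive
```

On the exam, the same discipline applies: pick the primary purpose, not a secondary effect.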
One control type that some people struggle with is a compensating control. Let’s look at some examples of compensating controls to best explain their function. If your company needed to implement strong physical security, you might suggest to management that security guards be employed. But after calculating all the costs of security guards, your company might decide to use a compensating (alternative) control that provides similar protection but is more affordable, such as a fence. In another example, let’s say you are a security administrator and you are in charge of maintaining the company’s firewalls. Management tells you that a certain protocol that you know is vulnerable to exploitation has to be allowed through the firewall for business reasons. The network needs to be protected by a compensating (alternative) control pertaining to this protocol, which may be setting up a proxy server for that specific traffic type to ensure that it is properly inspected and controlled. So a compensating control is just an alternative control that provides similar protection as the original control, but has to be used because it is more affordable or allows specifically required business functionality.
Several types of security controls exist, and they all need to work together. The complexity of the controls and of the environment they are in can cause the controls to contradict each other or leave gaps in security. This can introduce unforeseen holes in the company’s protection that are not fully understood by the implementers. A company may have very strict technical access controls in place and all the necessary administrative controls up to snuff, but if any person is allowed to physically access any system in the facility, then clear security dangers are present within the environment. Together, these controls should work in harmony to provide a healthy, safe, and productive environment.

Table 2-1 Service That Security Controls Provide
Functionalities: Preventive (avoid undesirable events from occurring); Detective (identify undesirable events that have occurred); Corrective (correct undesirable events that have occurred); Deterrent (discourage security violations); Recovery (restore resources and capabilities).
Physical controls:
• Fences: Preventive
• Locks: Preventive
• Badge system: Preventive
• Security guard: Preventive
• Biometric system: Preventive
• Mantrap doors: Preventive
• Lighting: Preventive
• Motion detectors: Detective
• Closed-circuit TVs: Detective
• Offsite facility: Recovery
Administrative controls:
• Security policy: Preventive
• Monitoring and supervising: Detective
• Separation of duties: Preventive

Table 2-1 Service That Security Controls Provide (continued)
Administrative controls (continued):
• Job rotation: Detective
• Information classification: Preventive
• Personnel procedures: Preventive
• Investigations: Detective
• Testing: Detective
• Security-awareness training: Preventive
Technical controls:
• ACLs: Preventive
• Routers: Preventive
• Encryption: Preventive
• Audit logs: Detective
• IDS: Detective
• Antivirus software: Preventive
• Server images: Corrective
• Smart cards: Preventive
• Dial-up call-back systems: Preventive
• Data backup: Recovery

Security Frameworks
With each section we are getting closer to some of the overarching topics of this chapter. Up to this point we know what we need to accomplish (availability, integrity, confidentiality), we know the tools we can use (administrative, technical, and physical controls), and we know how to talk about this issue (vulnerability, threat, risk, control). Before we move into how to develop an organization-wide security program, let’s first explore what not to do, which is referred to as security through obscurity. The concept of security through obscurity is assuming that your enemies are not as smart as you are and that they cannot figure out something that you feel is very tricky. A nontechnical example of security through obscurity is the old practice of putting a spare key under a doormat in case you are locked out of the house. You assume that no one knows about the spare key, and as long as they don’t, it can be considered secure. The vulnerability here is that anyone could gain easy access to the house if they have access to that hidden spare key, and the experienced attacker (in this example, a burglar) knows that these kinds of vulnerabilities exist and takes the appropriate steps to seek them out.
In the technical realm, some vendors work on the premise that compiling their product’s code provides more protection than open-source products offer because no one can view their original programming instructions. But attackers have a wide range of reverse-engineering tools available to them to reconstruct the product’s original code, and there are other ways to figure out how to exploit software without reverse-engineering it, such as fuzzing and input validation testing. (These software security topics will be covered in depth in Chapter 10.) The proper approach to security is to ensure the original software does not contain flaws, not to assume that putting the code into a compiled format provides the necessary level of protection.
Control Types and Functionalities
• Control types: Administrative, technical, and physical
• Control functionalities:
• Deterrent: Discourage a potential attacker
• Preventive: Stop an incident from occurring
• Corrective: Fix items after an incident has occurred
• Recovery: Restore necessary components to return to normal operations
• Detective: Identify an incident’s activities after it took place
• Compensating: Alternative control that provides similar protection as the original control
• Defense-in-depth: Implementation of multiple controls so that successful penetration and compromise is more difficult to attain

Another common example of practicing security through obscurity is to develop cryptographic algorithms in-house instead of using algorithms that are commonly used within the industry. Some organizations assume that if attackers are not familiar with the logic functions and mathematics of their homegrown algorithms, this lack of understanding on the attacker’s part will serve as a necessary level of security. But attackers are smart, clever, and motivated. If there are flaws within these algorithms, they will most likely be identified and exploited. The better approach is to use industry-recognized algorithms that have proven themselves to be strong, rather than to assume that an algorithm is secure because it is proprietary.
Some network administrators will remap protocols on their firewalls so that HTTP comes into the environment not over the well-known port 80, but over port 8080 instead. The administrator assumes that an attacker will not figure out this remapping, but in reality a basic port scanner and protocol analyzer will easily detect it. So don’t try to outsmart the bad guy with trickery; instead, practice security in a mature, solid approach. Don’t try to hide the flaws that can be exploited; get rid of those flaws altogether by following proven security practices.
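To see why the remapping buys so little, consider how a scanner fingerprints a service by its response rather than its port number (a minimal sketch; real tools such as Nmap do far more):

```python
def looks_like_http(banner: bytes) -> bool:
    """A service betrays its protocol by how it answers, not by the port it listens on."""
    return banner.startswith(b"HTTP/")

# An HTTP server moved from port 80 to 8080 still answers like an HTTP server,
# so the first response line gives the game away (sample banner is illustrative).
response_on_8080 = b"HTTP/1.1 200 OK\r\nServer: Apache\r\n\r\n"
print(looks_like_http(response_on_8080))   # True: the remapping hid nothing
```

A scanner simply connects to each open port, sends a probe, and matches the reply against signatures like this one, which is why obscuring the port number is not a real control.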
Reliance on confusion to provide security is obviously dangerous. Though everyone
wants to believe in the innate goodness of their fellow man, no security professional
would have a job if this were actually true. In security, a good practice is illustrated by
the old saying, “There are only two people in the world I trust: you and me—and I’m
not so sure about you.” This is a better attitude to take, because security really can be
compromised by anyone, at any time.
So we do not want our organization’s security program to be built upon smoke and mirrors, and we understand that we most likely cannot out-trick our enemies. What do we do? Build a fortress, aka a security program. Hundreds of years ago your enemies would not be attacking you with packets through a network; they would be attacking you with big sticks while they rode horses. When one faction of people needed to protect themselves from another, they did not just stack some rocks on top of each other in a haphazard manner and call that protection. (Well, maybe some groups did, but they died right away and do not really count.) Groups of people built castles based upon architectures that could withstand attacks. The walls and ceilings were made of solid material that was hard to penetrate. The structure of the buildings provided layers of protection. The buildings were outfitted with both defensive and offensive tools, and some were surrounded by moats. That is our goal, minus the moat.
A security program is a framework made up of many entities: logical, administrative, and physical protection mechanisms; procedures; business processes; and people that all work together to provide a protection level for an environment. Each has an important place in the framework, and if one is missing or incomplete, the whole framework may be affected. The program should work in layers: one layer provides support for the layer above it and protection for the layer below it. Because a security program is a framework, organizations are free to plug in different types of technologies, methods, and procedures to accomplish the necessary protection level for their environment.

A security program based upon a flexible framework sounds great, but how do we build one? Before a fortress was built, the structure was laid out in blueprints by an architect. We need a detailed plan to follow to properly build our security program. Thank goodness industry standards were developed just for this purpose.
ISO/IEC 27000 Series
The British seem to know what they are doing. Let’s follow them.
British Standard 7799 (BS7799) was developed in 1995 by the United Kingdom government’s Department of Trade and Industry and published by the British Standards Institution. The standard outlines how an information security management system (ISMS), aka a security program, should be built and maintained. The goal was to provide guidance to organizations on how to design, implement, and maintain policies, processes, and technologies to manage risks to their sensitive information assets.
The reason that this type of standard was even needed was to try and centrally manage the various security controls deployed throughout an organization. Without a security management system, the controls are implemented and managed in an ad hoc manner: the IT department would take care of technology security solutions, personnel security would sit within the human resources department, physical security in the facilities department, and business continuity in the operations department. We needed a way to oversee all of these items and knit them together in a holistic manner. This British standard met that need.
The British Standard actually had two parts: BS7799 Part 1, which outlined control
objectives and a range of controls that can be used to meet those objectives, and BS7799
Part 2, which outlined how a security program (ISMS) can be set up and maintained.
BS7799 Part 2 also served as a baseline that organizations could be certified against.
BS7799 was considered to be a de facto standard, which means that no specific standards body was demanding that everyone follow it; the standard simply seemed to be a really good idea and fit an industry need, so everyone decided to follow it. When organizations around the world needed to develop an internal security program, there were no guidelines or direction to follow except BS7799. This standard laid out how security should cover a wide range of topics, some of which are listed here:
• Information security policy for the organization: Map of business objectives to security, management’s support, security goals, and responsibilities.
• Creation of information security infrastructure: Create and maintain an organizational security structure through the use of a security forum, a security officer, defining security responsibilities, authorization processes, outsourcing, and independent reviews.
• Asset classification and control: Develop a security infrastructure to protect organizational assets through accountability and inventory, classification, and handling procedures.

• Personnel security: Reduce risks that are inherent in human interaction by screening employees, defining roles and responsibilities, training employees properly, and documenting the ramifications of not meeting expectations.
• Physical and environmental security: Protect the organization’s assets by properly choosing a facility location, erecting and maintaining a security perimeter, implementing access control, and protecting equipment.
• Communications and operations management: Carry out operations security through operational procedures, proper change control, incident handling, separation of duties, capacity planning, network management, and media handling.
• Access control: Control access to assets based on business requirements, user management, authentication methods, and monitoring.
• System development and maintenance: Implement security in all phases of a system’s lifetime through development of security requirements, cryptography, integrity protection, and software development procedures.
• Business continuity management: Counter disruptions of normal operations by using continuity planning and testing.
• Compliance: Comply with regulatory, contractual, and statutory requirements by using technical controls, system audits, and legal awareness.
The need to expand and globally standardize BS7799 was identified, and this task was taken on by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). ISO is the world’s largest developer and publisher of international standards. The standards this group works on range from meteorology, food technology, and agriculture to space vehicle engineering, mining, and information technology. ISO is a network of the national standards institutes of 159 countries. So these are the really smart people who come up with really good ways of doing stuff, one being how to set up information security programs within organizations. The IEC develops and publishes international standards for all electrical, electronic, and related technologies. These two organizations worked together to build on top of what was provided by BS7799 and launch the new version as a global standard, known as the ISO/IEC 27000 series.
As BS7799 was being updated, it went through a confusing range of titles and version numbers, so you could see it referenced as BS7799, BS7799v1, BS7799v2, ISO 17799, BS7799-3:2005, and so on. The industry has moved from the ambiguous BS7799 standard to a whole list of ISO/IEC standards that attempt to compartmentalize and modularize the necessary components of an ISMS, as shown here:
• ISO/IEC 27000: Overview and vocabulary
• ISO/IEC 27001: ISMS requirements
• ISO/IEC 27002: Code of practice for information security management

• ISO/IEC 27003: Guideline for ISMS implementation
• ISO/IEC 27004: Guideline for information security management measurement and metrics framework
• ISO/IEC 27005: Guideline for information security risk management
• ISO/IEC 27006: Guidelines for bodies providing audit and certification of information security management systems
• ISO/IEC 27011: Information security management guidelines for telecommunications organizations
• ISO/IEC 27031: Guideline for information and communications technology readiness for business continuity
• ISO/IEC 27033-1: Guideline for network security
• ISO 27799: Guideline for information security management in health organizations
The following ISO/IEC standards are in development as of this writing:
• ISO/IEC 27007: Guideline for information security management systems auditing
• ISO/IEC 27013: Guideline on the integrated implementation of ISO/IEC 20000-1 and ISO/IEC 27001
• ISO/IEC 27014: Guideline for information security governance
• ISO/IEC 27015: Information security management guidelines for the finance and insurance sectors
• ISO/IEC 27032: Guideline for cybersecurity
• ISO/IEC 27033: Guideline for IT network security, a multipart standard based on ISO/IEC 18028:2006
• ISO/IEC 27034: Guideline for application security
• ISO/IEC 27035: Guideline for security incident management
• ISO/IEC 27036: Guideline for security of outsourcing
• ISO/IEC 27037: Guideline for identification, collection, and/or acquisition and preservation of digital evidence
This group of standards is known as the ISO/IEC 27000 series and serves as industry best practice for the management of security controls in a holistic manner within organizations around the world. ISO follows the Plan-Do-Check-Act (PDCA) cycle, which is an iterative process commonly used in business process quality control programs. The Plan component pertains to establishing objectives and making plans, the Do component deals with the implementation of the plans, the Check component pertains to measuring results to understand whether objectives are met, and the last piece, Act, provides direction on how to correct and improve plans to better achieve success. Figure 2-3 depicts this cycle and how it correlates to the development and maintenance of an ISMS as laid out in the ISO/IEC 27000 series.
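The iterative nature of the cycle can be sketched as a simple loop over the four phases (the one-line activity summaries are condensed from the description above):

```python
# The four PDCA phases, in order, each with a condensed activity summary.
PDCA = {
    "Plan": "establish objectives and make plans",
    "Do": "implement the plans",
    "Check": "measure results against the objectives",
    "Act": "correct and improve the plans",
}

def pdca_iterations(count: int):
    """Yield each phase in order, repeatedly: in practice the cycle never ends."""
    for _ in range(count):
        for phase, activity in PDCA.items():
            yield phase, activity

phases = [phase for phase, _ in pdca_iterations(2)]
print(phases)   # two full Plan-Do-Check-Act passes
```

The key property the sketch captures is that Act feeds back into the next Plan, so the ISMS is continuously refined rather than built once.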

It is common for organizations to seek an ISO/IEC 27001 certification by an accredited third party. The third party assesses the organization against the ISMS requirements laid out in ISO/IEC 27001 and attests to the organization’s compliance level. Just as (ISC)2 attests to a person’s security knowledge once he passes the CISSP exam, the third party attests to the security practices within the boundaries of the company it evaluates.
Figure 2-3 PDCA cycle used in the ISO/IEC 27000 series (Used with permission from www.gammassl.co.uk)
• Plan: Define the scope of the ISMS; define ISMS policy; define the approach to risk assessment; identify the risks; analyze and evaluate the risks; identify and evaluate options for the treatment of risk; management approves residual risks; management authorizes the ISMS; select control objectives and controls; prepare a Statement of Applicability (SOA).
• Do: Formulate the risk treatment plan; implement the risk treatment plan; implement controls; implement training and awareness programs; manage operations; manage resources; implement procedures to detect and respond to security incidents.
• Check: Execute monitoring procedures; undertake regular reviews of ISMS effectiveness; measure effectiveness of controls; review the level of residual and acceptable risk; conduct internal ISMS audits; hold regular management reviews; update security plans; record actions and events.
• Act: Implement identified improvements; take corrective/preventive action; apply lessons learned (including from other organizations); communicate results to interested parties; ensure improvements achieve their objectives.

Many Standards, Best Practices, and Frameworks
As you will see in the following sections, various profit and nonprofit organizations have developed their own approaches to security management, security control objectives, process management, and enterprise development. We will examine their similarities and differences and illustrate where each is used within the industry.
The following is a basic breakdown:
• Security Program Development
  • ISO/IEC 27000 series  International standards on how to develop and maintain an ISMS, developed by ISO and IEC
• Enterprise Architecture Development
  • Zachman framework  Model for the development of enterprise architectures, developed by John Zachman
  • TOGAF  Model and methodology for the development of enterprise architectures, developed by The Open Group
  • DoDAF  U.S. Department of Defense architecture framework that ensures interoperability of systems to meet military mission goals
  • MODAF  Architecture framework used mainly in military support missions, developed by the British Ministry of Defence
• Security Enterprise Architecture Development
  • SABSA model  Model and methodology for the development of information security enterprise architectures
• Security Controls Development
  • CobiT  Set of control objectives for IT management, developed by the Information Systems Audit and Control Association (ISACA) and the IT Governance Institute (ITGI)
  • SP 800-53  Set of controls to protect U.S. federal systems, developed by the National Institute of Standards and Technology (NIST)
• Corporate Governance
  • COSO  Set of internal corporate controls to help reduce the risk of financial fraud, developed by the Committee of Sponsoring Organizations (COSO) of the Treadway Commission
• Process Management
  • ITIL  Processes to allow for IT service management, developed by the United Kingdom's Office of Government Commerce
  • Six Sigma  Business management strategy that can be used to carry out process improvement
  • Capability Maturity Model Integration (CMMI)  Organizational development for process improvement, developed by Carnegie Mellon
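To keep the taxonomy straight while studying, the breakdown above can be sketched as a simple lookup table. This is an illustrative study aid only; the category and framework names come from the list, while the code structure is our own:

```python
# Illustrative study aid: the breakdown above expressed as a lookup table.
# Category and framework names are taken from the text; the dict layout is ours.
FRAMEWORKS = {
    "Security Program Development": ["ISO/IEC 27000 series"],
    "Enterprise Architecture Development": ["Zachman framework", "TOGAF", "DoDAF", "MODAF"],
    "Security Enterprise Architecture Development": ["SABSA model"],
    "Security Controls Development": ["CobiT", "SP 800-53"],
    "Corporate Governance": ["COSO"],
    "Process Management": ["ITIL", "Six Sigma", "CMMI"],
}

def category_of(framework: str) -> str:
    """Return the category a given framework falls under in the breakdown above."""
    for category, names in FRAMEWORKS.items():
        if framework in names:
            return category
    raise KeyError(framework)

print(category_of("TOGAF"))  # Enterprise Architecture Development
```

Exam questions often hinge on exactly this mapping (e.g., that CobiT is a control-objective framework, not an enterprise architecture framework), so a quick-lookup structure like this is a useful memorization check.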

Chapter 2: Information Security Governance and Risk Management
41
NOTE The CISSP Common Body of Knowledge places all architectures
(enterprise and system) within the domain Security Architecture and Design.
Enterprise architectures are covered in this chapter of this book because they
directly relate to the organizational security program components covered
throughout the chapter. The Security Architecture and Design chapter deals
specifically with system architectures that are used in software engineering
and design.
Enterprise Architecture Development
Should we map and integrate all of our security efforts with our business efforts?
Response: No, we are more comfortable with chaos and wasting money.
Organizations have a choice when attempting to secure their environment as a
whole. They can just toss in products here and there, which are referred to as point solu-
tions or stovepipe solutions, and hope the ad hoc approach magically works in a man-
ner that secures the environment evenly and covers all of the organization’s
vulnerabilities. Or the organization can take the time to understand the environment,
understand the security requirements of the business and environment, and lay out an
overarching framework and strategy that maps the two together. Most organizations
choose option one, which is the “constantly putting out fires” approach. This is a love-
ly way to keep stress levels elevated and security requirements unmet, and to let confu-
sion and chaos be the norm.
The second approach would be to define an enterprise security architecture, allow
it to be the guide when implementing solutions to ensure business needs are met, pro-
vide standard protection across the environment, and reduce the amount of security
surprises the organization will run into. Although implementing an enterprise security
architecture will not necessarily promise pure utopia, it does tame the chaos and gets
the security staff, and organization, into a more proactive and mature mindset when
dealing with security as a whole.
Developing an architecture from scratch is not an easy task. Sure, it is easy to draw a big
box with smaller boxes inside of it, but what do the boxes represent? What are the relation-
ships between the boxes? How does information flow between the boxes? Who needs to
view these boxes and what aspects of the boxes do they need for decision making? An ar-
chitecture is a conceptual construct. It is a tool to help individuals understand a complex
item (i.e., an enterprise) in digestible chunks. The OSI networking model, for example, is an abstract model used to illustrate the architecture of a networking stack. A
networking stack within a computer is very complex because it has so many protocols,
interfaces, services, and hardware specifications. But when we think about it in a modular
framework (seven layers), we can better understand the network stack as a whole and the
relationships between the individual components that make it up.
NOTE The OSI network stack will be covered extensively in Chapter 6.

An enterprise architecture encompasses the essential and unifying components of
an organization. It expresses the enterprise structure (form) and behavior (function). It
embodies the enterprise’s components, their relationships to each other, and to the
environment.
In this section we will be covering several different enterprise architecture frame-
works. Each framework has its own specific focus, but they all provide guidance on how
to build individual architectures so that they are useful tools to a diverse set of indi-
viduals. Notice the difference between an architecture framework and an actual architec-
ture. You use the framework as a guideline on how to build an architecture that best fits
your company’s needs. Each company’s architecture will be different because they have
different business drivers, security and regulatory requirements, cultures, and organiza-
tional structures—but if each starts with the same architectural framework then their
architectures will have similar structures and goals. It is similar to three people starting
with a two-story ranch-style house blueprint. One person chooses to have four bed-
rooms built because they have three children, one person chooses to have a larger living
room and three bedrooms, and the other person chooses two bedrooms and two living
rooms. Each person started with the same blueprint (framework) and modified it to
meet their needs (architecture).
When developing an architecture, first the stakeholders need to be identified, which
is who will be looking at and using the architecture. Next, the views need to be devel-
oped, which is how the information that is most important to the different stakehold-
ers will be illustrated in the most useful manner. For example, as you see in Figure 2-4,
companies have several different viewpoints. Executives need to understand the com-
pany from a business point of view, business process developers need to understand
what type of information needs to be collected to support business activities, applica-
tion developers need to understand system requirements that maintain and process the
information, data modelers need to know how to structure data elements, and the
technology group needs to understand the network components required to support
the layers above it. They are all looking at an architecture of the same company, it is just
being presented in views that they understand and that directly relate to their responsi-
bilities within the organization.
An enterprise architecture allows you to not only understand the company from
several different views, but also understand how a change that takes place at one level
will affect items at other levels. For example, if there is a new business requirement, how
is it going to be supported at each level of the enterprise? What type of new information
must be collected and processed? Do new applications need to be purchased or current
ones modified? Are new data elements required? Will new networking devices be re-
quired? An architecture allows you to understand all the things that will need to change
just to support one new business function. The architecture can be used in the opposite
direction also. If a company is looking to do a technology refresh, will the new systems
still support all of the necessary functions in the layers above the technology level? An
architecture allows you to understand an organization as one complete organism and
illustrate how changes to one internal component can directly affect another one.

Why Do We Need Enterprise Architecture Frameworks?
As you have probably experienced, business people and technology people sometimes
seem like totally different species. Business people use terms like “net profits,” “risk
universes,” “portfolio strategy,” “hedging,” “commodities,” etc. Technology people use
terms like “deep packet inspection,” “layer three devices,” “cross-site scripting,” “load
balancing,” etc. Think about the acronyms techies like us throw around—TCP, APT,
ICMP, RAID, UDP, L2TP, PPTP, IPSec, AES, and DES. We can have complete conversa-
tions between ourselves without using any real words. And even though business peo-
ple and technology people use some of the same words, they have totally different
meanings to the individual groups. To business people, a protocol is a set of approved
processes that must be followed to accomplish a task. To technical people, a protocol is
Figure 2-4  NIST Enterprise Architecture Framework. (Figure shows layered architectures: the business architecture drives the information architecture, which prescribes the information systems architecture, which identifies the data architecture, which is supported by the delivery systems architecture (hardware, software, communications); feedback flows back up the layers. The model is shaped by enterprise discretionary and non-discretionary standards/regulations and by external discretionary and non-discretionary standards/requirements.)

a standardized manner of communication between computers or applications. Busi-
ness and technical people use the term “risk,” but each group is focusing on very differ-
ent risks a company can face—market share versus security breaches. And even though
each group uses the term “data” the same, business people look at data only from a
functional point of view and security people look at data from a risk point of view.
This divide between business perspectives and technology perspectives can not only
cause confusion and frustration—it commonly costs money. If the business side of the
house wants to offer customers a new service, as in paying bills online, there may have
to be extensive changes to the current network infrastructure, applications, web servers,
software logic, cryptographic functions, authentication methods, database structures,
etc. What seems to be a small change in a business offering can cost a lot of money
when it comes to adding up the new technology that needs to be purchased and imple-
mented, programming that needs to be carried out, re-architecting of networks, etc. It
is common for business people to feel as though the IT department is more of an im-
pediment when it comes to business evolution and growth, and in turn the IT depart-
ment feels as though the business people are constantly coming up with outlandish
and unrealistic demands with no supporting budgets.
Because of this type of confusion between business and technology people, organizations around the world have implemented incorrect solutions because the mapping of business functionality to technical specifications was not understood. This results in having to purchase new solutions, carry out rework, and waste an amazing amount of time. Not only does this cost the organization more money than it should have spent in the first place, but business opportunities may be lost, which can reduce market share. This type of
waste has happened so much that the U.S. Congress passed the Clinger-Cohen Act,
which requires federal agencies to improve their IT expenditures. So we need a tool that
both business people and technology people can use to reduce confusion, optimize
business functionality, and not waste time and money. This is where business enterprise architectures come into play. They allow both groups (business and technology) to view the same organization in ways that make sense to them.
When you go to the doctor’s office, there is a poster of the skeletal system on one wall, a poster of the circulatory system on another wall, and another poster of the organs
that make up a human body. These are all different views of the same thing, the human
body. This is the same functionality that enterprise architecture frameworks provide:
different views of the same thing. In the medical field we have specialists (podiatrists,
brain surgeons, dermatologists, oncologists, ophthalmologists, etc.). Each organization
is also made up of its own specialists (HR, marketing, accounting, IT, R&D, manage-
ment). But there also has to be an understanding of the entity (whether it is a human
body or company) holistically, which is what an enterprise architecture attempts to
accomplish.
Zachman Architecture Framework
Does anyone really understand how all the components within our company work together?
Response: How long have you worked here?
One of the first enterprise architecture frameworks created was the Zachman framework, developed by John Zachman and depicted in Table 2-2. (The full framework can be viewed at www.zafa.com.)

| Order | Layer | What (Data) | How (Function) | Where (Network) | Who (People) | When (Time) | Why (Motivation) |
|---|---|---|---|---|---|---|---|
| 1 | Scope (context boundary) • Planner | List of things important to the business | List of processes the business performs | List of locations in which the business operates | List of organizations important to the business | List of events significant to the business | List of business goals/strategies |
| 2 | Business model (concepts) • Owner | e.g., semantic or entity-relationship model | e.g., business process model | e.g., business logistics model | e.g., workflow model | e.g., master schedule | e.g., business plan |
| 3 | System model (logic) • Designer | e.g., logical data model | e.g., application architecture | e.g., distributed system architecture | e.g., human interface architecture | e.g., processing structure | e.g., business rule model |
| 4 | Technology model (physics) • Builder | e.g., physical data model | e.g., system design | e.g., technology architecture | e.g., presentation architecture | e.g., control structure | e.g., rule design |
| 5 | Component (configuration) • Implementer | e.g., data definition | e.g., program | e.g., network architecture | e.g., security architecture | e.g., timing definition | e.g., rule specification |
| 6 | Functioning enterprise (instances) • Worker | e.g., data | e.g., function | e.g., network | e.g., organization | e.g., schedule | e.g., strategy |

Table 2-2  Zachman Framework for Enterprise Architecture

The Zachman framework is a two-dimensional model that uses six basic communi-
cation interrogatives (What, How, Where, Who, When, and Why) intersecting with dif-
ferent viewpoints (Planner, Owner, Designer, Builder, Implementer, and Worker) to
give a holistic understanding of the enterprise. This framework was developed in the
1980s and is based on the principles of classical business architecture that contain rules
that govern an ordered set of relationships.
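The two-dimensional structure described above can be sketched as a lookup keyed on (viewpoint, interrogative). This is a study aid of our own, not part of the framework, with only a few of Table 2-2's cells filled in:

```python
# A minimal sketch (ours, not part of the framework) of the Zachman matrix:
# six communication interrogatives crossed with six viewpoints.
# Cell strings are abbreviated from Table 2-2; most cells are left out here.
INTERROGATIVES = ("What", "How", "Where", "Who", "When", "Why")
VIEWPOINTS = ("Planner", "Owner", "Designer", "Builder", "Implementer", "Worker")

CELLS = {
    ("Planner", "What"): "list of things important to the business",
    ("Owner", "How"): "business process model",
    ("Designer", "What"): "logical data model",
    ("Builder", "Where"): "technology architecture",
    ("Worker", "Why"): "strategy",
}

def artifact(viewpoint: str, interrogative: str) -> str:
    """Look up which artifact lives at a row/column intersection of the matrix."""
    if viewpoint not in VIEWPOINTS or interrogative not in INTERROGATIVES:
        raise ValueError("not a Zachman viewpoint or interrogative")
    return CELLS.get((viewpoint, interrogative), "(see Table 2-2)")

print(artifact("Designer", "What"))  # logical data model
```

The point of the sketch is the shape: every cell answers one interrogative from one viewpoint, and the full matrix (all 36 cells) is what gives the holistic view of the enterprise.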
The goal of this model is to be able to look at the same organization from different
views (Planner, Owner, Designer, Builder, etc.). Different groups within a company
need the same information, but presented in ways that directly relate to their responsi-
bilities. A CEO needs financial statements, scorecards, and balance sheets. A network
administrator needs network schematics, a systems engineer needs interface require-
ments, and the operations department needs configuration requirements. If you have
ever carried out a network-based vulnerability test, you know that you cannot tell the
CEO that some systems are vulnerable to SYN-based attacks, or that the company soft-
ware allows for client-side browser injections, or that some Windows-based applica-
tions are vulnerable to alternate data stream attacks. The CEO needs to know this
information, but in a language he can understand. People at each level of the organiza-
tion need information in a language and format that is most useful to them.
A business enterprise architecture is used to optimize often fragmented processes
(both manual and automated) into an integrated environment that is responsive to
change and supportive of the business strategy. The Zachman framework has been
around for many years and has been used by many organizations to build or better
define their business environment. This framework is not security-oriented, but it is a
good template to work with because it offers direction on how to understand an actual
enterprise in a modular fashion.
The Open Group Architecture Framework
Our business processes, data flows, software programs, and network devices are strung together
like spaghetti.
Response: Maybe we need to implement some structure here.
Another enterprise architecture framework is The Open Group Architecture Frame-
work (TOGAF), which has its origins in the U.S. Department of Defense. It provides an
approach to design, implement, and govern an enterprise information architecture.
TOGAF is a framework that can be used to develop the following architecture types:
• Business Architecture
• Data Architecture
• Applications Architecture
• Technology Architecture

So this architecture framework can be used to create individual architectures
through the use of its Architecture Development Method (ADM). This method is an it-
erative and cyclic process that allows requirements to be continuously reviewed and the
individual architectures updated as needed. These different architectures can allow a
technology architect to understand the enterprise from four different views (business,
data, application, and technology) so she can ensure her team develops the necessary
technology to work within the environment and all the components that make up that
environment and meet business requirements. The technology may need to span many different network types, interconnect with various software components, and work within different business units. As an analogy, when a new city is being constructed, people do not just start building houses here and there. Civil engineers lay out roads, bridges, waterways, and commercial and residential zoned areas. A large organization that has a distributed and heterogeneous environment that supports many different business functions can be as complex as a city. So before a programmer starts developing code, the architecture of the software needs to be developed in the context of the organization it will work within.
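The ADM's iterative, cyclic character can be sketched roughly as follows. This is a simplification of our own: the real ADM defines formal phases (Preliminary, A through H, and Requirements Management) that are not modeled here, and the function and field names are hypothetical.

```python
# A hedged sketch of the ADM's iterative idea: each pass reviews outstanding
# requirements and updates whichever of TOGAF's four architecture types they
# touch. Field names ("affects", "change") are our own invention.
ARCHITECTURE_TYPES = ("Business", "Data", "Applications", "Technology")

def adm_pass(architectures: dict, requirements: list) -> dict:
    """One iteration: fold each requirement into the architecture it affects."""
    for req in requirements:
        arch = req["affects"]
        if arch not in ARCHITECTURE_TYPES:
            raise ValueError(f"unknown architecture type: {arch}")
        architectures.setdefault(arch, []).append(req["change"])
    return architectures

# One pass for the running example of adding online bill payment:
state: dict = {}
state = adm_pass(state, [
    {"affects": "Data", "change": "add customer billing entities"},
    {"affects": "Technology", "change": "add web tier for online payments"},
])
print(state["Data"])  # ['add customer billing entities']
```

Because requirements are re-read on every pass, new business needs can keep flowing in and the four architectures are continuously revised rather than frozen after a single design effort, which is the essence of the ADM's cycle.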
NOTE Many technical people have a negative visceral reaction to models like
this. They feel it’s too much work, that it’s a lot of fluff, is not directly relevant,
and so on. If you handed the same group of people a network schematic with
firewalls, IDSs, and VPNs, they would say, “Now we’re talking about security!”
Security technology works within the construct of an organization, so the
organization must be understood also.
(Figure: the four TOGAF architecture types as a cascade. Business: processes and activities use... Data: that must be collected, organized, safeguarded, and distributed using... Applications: such as custom or off-the-shelf software tools that run on... Technology: such as computer systems and telephone networks.)

Military-Oriented Architecture Frameworks
Our reconnaissance mission gathered important intelligence on our enemy, but a software
glitch resulted in us bombing the wrong country.
Response: Let’s blame it on NATO.
It is hard enough to construct enterprise-wide solutions and technologies for one
organization—think about an architecture that has to span many different complex
government agencies to allow for interoperability and proper hierarchical communica-
tion channels. This is where the Department of Defense Architecture Framework (DoDAF)
comes into play. When the U.S. DoD purchases technology products and weapon sys-
tems, enterprise architecture documents must be created based upon DoDAF standards
to illustrate how they will properly integrate into the current infrastructures. The focus
of the architecture framework is on command, control, communications, computers,
intelligence, surveillance, and reconnaissance systems and processes. It is not only im-
portant that these different devices communicate using the same protocol types and
interoperable software components, but also that they use the same data elements. If
an image is captured from a spy satellite, downloaded to a centralized data repository,
and then loaded into a piece of software to direct an unmanned drone, the military
personnel cannot have their operations interrupted because one piece of software cannot read another software’s data output. The DoDAF helps ensure that all systems, processes, and personnel work in a concerted effort to accomplish their missions.
The British Ministry of Defence Architecture Framework (MODAF) is another recog-
nized enterprise architecture framework based upon the DoDAF. The crux of the frame-
work is to be able to get data in the right format to the right people as soon as possible.
Modern warfare is complex, and activities happen fast, which requires personnel and
systems to be more adaptable than ever before. Data needs to be captured and properly
presented so that decision makers understand complex issues quickly, which allows for
fast and hopefully accurate decisions.
NOTE While both DoDAF and MODAF were developed to support mainly
military missions, they have been expanded upon and morphed for use in
business enterprise environments.
When attempting to figure out which architecture framework is best for your orga-
nization, you need to find out who the stakeholders are and what information they
need from the architecture. The architecture needs to represent the company in the
most useful manner to the people who need to understand it the best. If your company
has people (stakeholders) who need to understand the company from a business pro-
cess perspective, your architecture needs to provide that type of view. If there are people
who need to understand the company from an application perspective, your architec-
ture needs a view that illustrates that information. If people need to understand the
enterprise from a security point of view, that needs to be illustrated in a specific view.
So one main difference between the various enterprise architecture frameworks is what type of information they provide and how they provide it.

Enterprise Security Architecture
An enterprise security architecture is a subset of an enterprise architecture and defines
the information security strategy that consists of layers of solutions, processes, and pro-
cedures and the way they are linked across an enterprise strategically, tactically, and
operationally. It is a comprehensive and rigorous method for describing the structure
and behavior of all the components that make up a holistic information security man-
agement system (ISMS). The main reason to develop an enterprise security architecture
is to ensure that security efforts align with business practices in a standardized and cost-
effective manner. The architecture works at an abstraction level and provides a frame of
reference. Besides security, this type of architecture allows organizations to better
achieve interoperability, integration, ease-of-use, standardization, and governance.
How do you know if an organization does not have an enterprise security architec-
ture in place? If the answer is “yes” to most of the following questions, this type of ar-
chitecture is not in place:
• Does security take place in silos throughout the organization?
• Is there a continual disconnect between senior management and the security
staff?
• Are redundant products purchased for different departments for overlapping
security needs?
• Is the security program made up of mainly policies without actual
implementation and enforcement?
• When user access requirements increase because of business needs, does
the network administrator just modify the access controls without the user
manager’s documented approval?
• When a new product is being rolled out, do unexpected interoperability issues
pop up that require more time and money to fix?
• Do many “one-off” efforts take place instead of following standardized
procedures when security issues arise?
• Are the business unit managers unaware of their security responsibilities and
how their responsibilities map to legal and regulatory requirements?
• Is “sensitive data” defined in a policy, but the necessary controls are not fully
implemented and monitored?
• Are stovepipe (point) solutions implemented instead of enterprise-wide
solutions?
• Are the same expensive mistakes continuing to take place?
• Is security governance currently unavailable because the enterprise is not
viewed or monitored in a standardized and holistic manner?
• Are business decisions being made without taking security into account?
• Are security personnel usually putting out fires with no real time to look at
and develop strategic approaches?

• Are security efforts taking place in business units that other business units
know nothing about?
• Are more and more security personnel seeking out shrinks and going on
antidepressant or anti-anxiety medication?
If many of these answers are “yes,” no useful architecture is in place. Now, the fol-
lowing is something very interesting the author has seen over several years. Most orga-
nizations have the problems listed earlier and yet they focus on each item as if they
were unconnected. What the CSO, CISO, and/or security administrator does not always
understand is that these are just symptoms of a treatable disease. The “treatment” is to
put one person in charge of a team that develops a phased-approach enterprise security
architecture rollout plan. The goals are to integrate technology-oriented and business-
centric security processes; link administrative, technical, and physical controls to prop-
erly manage risk; and integrate these processes into the IT infrastructure, business
processes, and the organization’s culture.
The main reason organizations do not develop and roll out an enterprise security
architecture is that they do not fully understand what one is and it seems like an over-
whelming task. Fighting fires is more understandable and straightforward, so many
companies stay with this familiar approach.
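The informal "if most answers are yes" test above can be sketched as a checklist score. The question wording is abbreviated from the list, and reading "many" as "more than half" is our own assumption:

```python
# The symptom checklist from the text, abbreviated, with a simple majority
# threshold. Treating "many yes answers" as "more than half" is our reading,
# not a formal rule from the book.
SYMPTOM_QUESTIONS = (
    "Does security take place in silos?",
    "Is there a disconnect between senior management and security staff?",
    "Are redundant products purchased for overlapping security needs?",
    "Is the program mainly policies without implementation and enforcement?",
    "Are access controls modified without documented approval?",
    "Do product rollouts hit unexpected interoperability issues?",
    "Do one-off efforts replace standardized procedures?",
    "Are managers unaware of their security responsibilities?",
    "Is sensitive data defined in policy but not actually controlled?",
    "Are stovepipe (point) solutions used instead of enterprise-wide ones?",
    "Do the same expensive mistakes keep happening?",
    "Is security governance unavailable?",
    "Are business decisions made without security input?",
    "Is security staff always firefighting instead of working strategically?",
)

def architecture_missing(yes_answers: set) -> bool:
    """True when most symptom questions were answered 'yes'."""
    yes = sum(1 for q in SYMPTOM_QUESTIONS if q in yes_answers)
    return yes > len(SYMPTOM_QUESTIONS) / 2

print(architecture_missing(set(SYMPTOM_QUESTIONS[:10])))  # True
```

The value of scoring the symptoms together, rather than treating each as an isolated problem, mirrors the point above: they are symptoms of one treatable disease, not fourteen separate ones.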
A group developed the Sherwood Applied Business Security Architecture (SABSA), as
shown in Table 2-3, which is similar to the Zachman framework. It is a layered model,
| | Assets (What) | Motivation (Why) | Process (How) | People (Who) | Location (Where) | Time (When) |
|---|---|---|---|---|---|---|
| Contextual | The business | Business risk model | Business process model | Business organization and relationships | Business geography | Business time dependencies |
| Conceptual | Business attributes profile | Control objectives | Security strategies and architectural layering | Security entity model and trust framework | Security domain model | Security-related lifetimes and deadlines |
| Logical | Business information model | Security policies | Security services | Entity schema and privilege profiles | Security domain definitions and associations | Security processing cycle |
| Physical | Business data model | Security rules, practices, and procedures | Security mechanisms | Users, applications, and user interface | Platform and network infrastructure | Control structure execution |
| Component | Detailed data structures | Security standards | Security products and tools | Identities, functions, actions, and ACLs | Processes, nodes, addresses, and protocols | Security step timing and sequencing |
| Operational | Assurance of operation continuity | Operation risk management | Security service management and support | Application and user management and support | Security of sites, networks, and platforms | Security operations schedule |

Table 2-3  SABSA Architectural Framework

with its first layer defining business requirements from a security perspective. Each lay-
er of the model decreases in abstraction and increases in detail so it builds upon the
others and moves from policy to practical implementation of technology and solu-
tions. The idea is to provide a chain of traceability through the strategic, conceptual,
design, implementation, and metric and auditing levels.
The following outlines the questions that are to be asked and answered at each
level of the framework:
• What are you trying to do at this layer? The assets to be protected by your security architecture.
• Why are you doing it? The motivation for wanting to apply security, expressed in the terms of this layer.
• How are you trying to do it? The functions needed to achieve security at this layer.
• Who is involved? The people and organizational aspects of security at this layer.
• Where are you doing it? The locations where you apply your security, relevant to this layer.
• When are you doing it? The time-related aspects of security relevant to this layer.
SABSA is a framework and methodology for enterprise security architecture and service management. As a framework, it provides a structure from which individual architectures can be built. As a methodology, it provides the processes to follow to build and maintain that architecture. SABSA also provides a life-cycle model so that the architecture can be constantly monitored and improved upon over time.
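SABSA's chain of traceability can be sketched as an ordered walk down the layers of Table 2-3. The layer names follow the table; the one-line layer summaries are our own abbreviations:

```python
# SABSA's layers as an ordered sequence, from most abstract to most concrete.
# Names follow Table 2-3; the summary strings are our abbreviations of each
# layer's deliverables, not official SABSA wording.
SABSA_LAYERS = (
    ("Contextual", "business requirements and risks"),
    ("Conceptual", "control objectives and security strategy"),
    ("Logical", "security policies and services"),
    ("Physical", "security rules, practices, and mechanisms"),
    ("Component", "security products, tools, and standards"),
    ("Operational", "security service management and support"),
)

def trace(from_layer: str, to_layer: str) -> list:
    """Walk the traceability chain from a more abstract layer down to a more concrete one."""
    names = [name for name, _ in SABSA_LAYERS]
    i, j = names.index(from_layer), names.index(to_layer)
    if i > j:
        raise ValueError("traceability is walked from abstract toward concrete")
    return names[i : j + 1]

print(trace("Conceptual", "Physical"))  # ['Conceptual', 'Logical', 'Physical']
```

The ordered walk captures the point made above: every concrete mechanism at a lower layer should be traceable up through policy and strategy to a business requirement at the contextual layer.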
For an enterprise security architecture to be successful in its development and im-
plementation, the following items must be understood and followed: strategic align-
ment, process enhancement, business enablement, and security effectiveness.
Strategic alignment means the business drivers and the regulatory and legal require-
ments are being met by the security enterprise architecture. Security efforts must pro-
vide and support an environment that allows a company to not only survive, but thrive.
The security industry has grown up from the technical and engineering world, not the
business world. In many organizations, while the IT security personnel and business
personnel might be located physically close to each other, they are commonly worlds
apart in how they see the same organization they work in. Technology is only a tool
that supports a business; it is not the business itself. The IT environment is analogous
to the circulatory system within a human body; it is there to support the body—the
body does not exist to support the circulatory system. And security is analogous to the
immune system of the body—it is there to protect the overall environment. If these
critical systems (business, IT, security) do not work together in a concerted effort, there
will be deficiencies and imbalances. While deficiencies and imbalances lead to disease
in the body, deficiencies and imbalances within an organization can lead to risk and
security compromises.

ISMS vs. Security Enterprise Architecture
We need to develop stuff and stick that stuff into an organized container.
What is the difference between an ISMS and an enterprise security architecture? An ISMS outlines the controls that need to be put into place (risk management, vulnerability management, BCP, data protection, auditing, configuration management, physical security, etc.) and provides direction on how those controls should be managed throughout their life cycle. The ISMS specifies the pieces and parts that need to be put into place to provide a holistic security program for the organization overall and how to properly take care of those pieces and parts. The enterprise security architecture illustrates how these components are to be integrated into the different layers of the current business environment. The security components of the ISMS have to be interwoven throughout the business environment and not siloed within individual company departments.
For example, the ISMS will dictate that risk management needs to be put in place, and the enterprise architecture will chop up the risk management components and illustrate how risk management needs to take place at the strategic, tactical, and operational levels. As another example, the ISMS could dictate that data protection needs to be put into place. The architecture can show how this happens at the infrastructure, application, component, and business level. At the infrastructure level we can implement data loss prevention technology to detect how sensitive data is traversing the network. Applications that maintain sensitive data must have the necessary access controls and cryptographic functionality. The components within the applications can implement the specific cryptographic functions. And protecting sensitive company information can be tied to business drivers, which is illustrated at the business level of the architecture.
The ISO/IEC 27000 series (which outlines the ISMS) is very policy-oriented and outlines the necessary components of a security program. This means that the ISO standards are general in nature, which is not a defect—they were created that way so that they could be applied to various types of businesses, companies, and organizations. But since these standards are general, it can be difficult to know how to implement them and map them to your company’s infrastructure and business needs. This is where the enterprise security architecture comes into play. The architecture is a tool used to ensure that what is outlined in the security standards is implemented throughout the different layers of an organization.

When looking at the business enablement requirement of the security enterprise architecture, we need to remind ourselves that companies are in business to make money. Companies and organizations do not exist for the sole purpose of being secure. Security cannot stand in the way of business processes, but should be implemented to better enable them.
Business enablement means the core business processes are integrated into the security operating model—they are standards-based and follow risk tolerance criteria. What does this mean in the real world? Let’s say a company’s accountants have figured out that if they allow the customer service and support staff to work from home, the company would save a lot of money on office rent, utilities, and overhead—plus, their

Chapter 2: Information Security Governance and Risk Management
53
insurance is cheaper. The company could move into this new model with the use of
VPNs, firewalls, content filtering, and so on. Security enables the company to move to
this different working model by providing the necessary protection mechanisms. If a
financial institution wants to allow their customers the ability to view bank account
information and carry out money transfers, it can offer this service if the correct secu-
rity mechanisms are put in place (access control, authentication, secure connections,
etc.). Security should help the organization thrive by providing the mechanisms to do
new things safely.
The process enhancement piece can be quite beneficial to an organization if it takes
advantage of this capability when it is presented. When an organization is seri-
ous about securing its environment, it will have to take a close look at
many of the business processes that take place on an ongoing basis. Many times
these processes are viewed through the eyeglasses of security, because that’s the reason
for the activity, but this is a perfect chance to enhance and improve upon the same
processes to increase productivity. When you look at many business processes taking
place in all types of organizations, you commonly find a duplication of efforts, manual
steps that can be easily automated, or ways to streamline and reduce time and effort
that are involved in certain tasks. This is commonly referred to as process reengineering.
When an organization is developing its security enterprise components, those com-
ponents must be integrated into the business processes to be effective. This can allow
for process management to be refined and calibrated. This allows for security to be in-
tegrated in system life cycles and day-to-day operations. So while business enablement
means “we can do new stuff,” process enhancement means “we can do stuff better.”
Security effectiveness deals with metrics, meeting service level agreement (SLA) re-
quirements, achieving return on investment (ROI), meeting set baselines, and provid-
ing management with a dashboard or balanced scorecard system. These are ways to
determine how well the current security solutions and the architecture as a whole are
performing.
Many organizations are just getting to the security effectiveness point of their archi-
tecture, because there is a need to ensure that the controls in place are providing the
necessary level of protection and that finite funds are being used properly. Once base-
lines are set, then metrics can be developed to verify baseline compliancy. These metrics
are then rolled up to management in a format they can understand that shows them the
health of the organization’s security posture and compliance levels. This also allows
management to make informed business decisions. Security affects almost everything
today in business, so this information should be readily available to senior manage-
ment in a form they can actually use.
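As a minimal illustration of this baseline-to-metrics flow, the rollup might look like the following sketch. The control names, baseline values, and measurements are all hypothetical, not drawn from any standard.

```python
# Hypothetical sketch: verifying baseline compliance per control and rolling
# the results up into a single dashboard-style posture figure for management.

def compliance_rollup(measurements, baselines):
    """Return per-control pass/fail results and an overall posture percentage."""
    results = {}
    for control, baseline in baselines.items():
        # A control is compliant when its measured value meets or exceeds baseline
        results[control] = measurements.get(control, 0) >= baseline
    overall = 100.0 * sum(results.values()) / len(baselines)
    return results, overall

# Hypothetical baselines and the values the operational teams reported
baselines = {"patch_coverage_pct": 95, "mfa_coverage_pct": 99, "backup_success_pct": 98}
measurements = {"patch_coverage_pct": 97, "mfa_coverage_pct": 99, "backup_success_pct": 92}

per_control, posture = compliance_rollup(measurements, baselines)
```

A dashboard or balanced scorecard would then present a figure like `posture` alongside trend data so management can make informed business decisions.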
Enterprise vs. System Architectures
Our operating systems follow strict and hierarchical structures, but our company is a mess.
There is a difference between enterprise architectures and system architectures, al-
though they do overlap. An enterprise architecture addresses the structure of an organi-
zation. A system architecture addresses the structure of software and computing
components. While these different architecture types have different focuses (organiza-
tion versus system), they have a direct relationship because the systems have to be able
to support the organization and its security needs. A software architect cannot design

an application that will be used within a company without understanding what the
company needs the application to do. So the software architect needs to understand the
business and technical aspects of the company to ensure that the software is properly
developed for the needs of the organization.
It is important to realize that the rules outlined in an organizational security policy
have to be supported all the way down to application code, the security kernel of an
operating system, and hardware security provided by a computer’s CPU. Security has to
be integrated at every organizational and technical level if it is going to be successful.
This is why some architecture frameworks cover company functionality from the busi-
ness process level all the way down to how components within an application work. All
of this detailed interaction and interdependencies must be understood. Otherwise, the
wrong software is developed, the wrong product is purchased, interoperability issues
arise, and business functions are only partially supported.
As an analogy, an enterprise and system architecture relationship is similar to the
relationship between a solar system and individual planets. A solar system is made up
of planets, just like an enterprise is made up of systems. It is very difficult to under-
stand the solar system as a whole while focusing on the specific characteristics of a
planet (soil composition, atmosphere, etc.). It is also difficult to understand the
complexities of the individual planets when looking at the solar system as a whole.
Each viewpoint (solar system versus planet) has its focus and use. The same is true
when viewing an enterprise versus a system architecture. The enterprise view is looking
at the whole enchilada, while the system view is looking at the individual pieces that
make up that enchilada.
NOTE While we covered security architecture mainly from an enterprise
view in this chapter, we will cover more system-specific architecture
frameworks, such as ISO/IEC 42010:2007, in Chapter 4.
Enterprise Architectures: Scary Beasts
If these enterprise architecture models are new to you and a bit confusing, do not
worry; you are not alone. While enterprise architecture frameworks are great tools
to understand and help control all the complex pieces within an organization, the
security industry is still maturing in its use of these types of architectures. Most
companies develop policies and then focus on the technologies to enforce those
policies, which skips the whole step of security enterprise development. This is
mainly because the information security field is still learning how to grow up and
out of the IT department and into established corporate environments. As security
and business truly become more intertwined, these enterprise frameworks won’t
seem as abstract and foreign, but useful tools that are properly leveraged.

Security Controls Development
We have our architecture. Now what do we put inside it?
Response: Marshmallows.
Up to now we have our ISO/IEC 27000 series, which outlines the necessary compo-
nents of an organizational security program. We also have our security enterprise archi-
tecture, which helps us integrate the requirements outlined in our security program
into our existing business structure. Now we are going to get more focused and look at
the objectives of the controls we are going to put into place to accomplish the goals
outlined in our security program and enterprise architecture.
CobiT
The Control Objectives for Information and related Technology (CobiT) is a framework
and set of control objectives developed by the Information Systems Audit and Control
Association (ISACA) and the IT Governance Institute (ITGI). It defines goals for the
controls that should be used to properly manage IT and to ensure that IT maps to busi-
ness needs. CobiT is broken down into four domains: Plan and Organize, Acquire and
Implement, Deliver and Support, and Monitor and Evaluate. Each category drills down
into subcategories. For example, the Acquire and Implement category contains the fol-
lowing subcategories:
• Acquire and Maintain Application Software
• Acquire and Maintain Technology Infrastructure
• Develop and Maintain Procedures
• Install and Accredit Systems
• Manage Changes
So this CobiT domain provides goals and guidance to companies that they can fol-
low when they purchase, install, test, certify, and accredit IT products. This is very pow-
erful because most companies use an ad hoc and informal approach when making
purchases and carrying out procedures. CobiT provides a “checklist” approach to IT
governance by providing a list of things that must be thought through and accom-
plished when carrying out different IT functions.
CobiT lays out executive summaries, management guidelines, frameworks, con-
trol objectives, an implementation toolset, performance indicators, success factors,
maturity models, and audit guidelines. It lays out a complete roadmap that can be
followed to accomplish each of the 34 control objectives this model deals with. Fig-
ure 2-5 illustrates how the framework connects business requirements, IT resources,
and IT processes.

Figure 2-5 CobiT framework. In the CobiT 4.0 framework, governance drivers and
business goals feed four domains of IT processes, which draw on IT resources
(applications, information, infrastructure, people) and are measured against seven
information criteria (effectiveness, efficiency, confidentiality, integrity, availability,
compliance, reliability). The processes within each domain are:

Plan and Organize
• PO1 Define a strategic IT plan
• PO2 Define the information architecture
• PO3 Determine the technological direction
• PO4 Define the IT processes, organization, and relationships
• PO5 Manage the IT investment
• PO6 Communicate management aims and directions
• PO7 Manage IT human resources
• PO8 Manage quality
• PO9 Assess and manage risks
• PO10 Manage projects

Acquire and Implement
• AI1 Identify automated solutions
• AI2 Acquire and maintain application software
• AI3 Acquire and maintain technology infrastructure
• AI4 Enable operation and use
• AI5 Procure IT resources
• AI6 Manage changes
• AI7 Install and accredit solutions and changes

Deliver and Support
• DS1 Define service levels
• DS2 Manage third-party services
• DS3 Manage performance and capacity
• DS4 Ensure continuous service
• DS5 Ensure systems security
• DS6 Identify and attribute costs
• DS7 Educate and train users
• DS8 Manage service desk and incidents
• DS9 Manage the configuration
• DS10 Manage problems
• DS11 Manage data
• DS12 Manage the physical environment
• DS13 Manage operations

Monitor and Evaluate
• ME1 Monitor and evaluate IT performance
• ME2 Monitor and evaluate internal control
• ME3 Ensure regulatory compliance
• ME4 Provide IT governance

So how does CobiT fit into the big picture? When you develop your security policies
that are aligned with the ISO/IEC 27000 series, these are high-level documents that
have statements like, “Unauthorized access should not be permitted.” But who is au-
thorized? How do we authorize individuals? How are we implementing access control
to ensure that unauthorized access is not taking place? How do we know our access
control components are working properly? This is really where the rubber hits the
road, where words within a document (policy) come to life in real-world practical im-
plementations. CobiT provides the objectives that the real-world implementations (con-
trols) you choose to put into place need to meet. For example, CobiT outlines the
following control practices for user account management:
• Using unique user IDs to enable users to be linked to and held accountable
for their actions
• Checking that the user has authorization from the system owner for the
use of the information system or service, and the level of access granted is
appropriate to the business purpose and consistent with the organizational
security policy
• A procedure to require users to understand and acknowledge their access
rights and the conditions of such access
• Ensuring that internal and external service providers do not provide access
until authorization procedures have been completed
• Maintaining a formal record, including access levels, of all persons registered
to use the service
• A timely and regular review of user IDs and access rights
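To make these practices concrete, here is a minimal sketch of how account records might be checked programmatically against them. The record fields, the 90-day review window, and the helper itself are illustrative assumptions, not part of CobiT.

```python
# Illustrative sketch: checking user-account records against the account
# management practices above. Field names and thresholds are hypothetical.

def account_findings(account, all_user_ids):
    """Return a list of control gaps found in one account record."""
    findings = []
    if all_user_ids.count(account["user_id"]) > 1:       # unique user IDs
        findings.append("user ID is not unique")
    if not account.get("owner_authorized", False):       # system-owner sign-off
        findings.append("no authorization from system owner")
    if not account.get("rights_acknowledged", False):    # user acknowledged access rights
        findings.append("access rights not acknowledged")
    if account.get("days_since_review", 9999) > 90:      # timely, regular review
        findings.append("access-rights review overdue")
    return findings

accounts = [
    {"user_id": "jsmith", "owner_authorized": True,
     "rights_acknowledged": True, "days_since_review": 30},
    {"user_id": "temp01", "owner_authorized": False,
     "rights_acknowledged": True, "days_since_review": 200},
]
all_ids = [a["user_id"] for a in accounts]
report = {a["user_id"]: account_findings(a, all_ids) for a in accounts}
```

An auditor's checklist works the same way: each practice becomes a yes/no question asked of every registered account.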
An organization should make sure that it is meeting at least these goals when it
comes to user account management, and in turn this is what an auditor is going to go
by to ensure the organization is practicing security properly. A majority of the security
compliance auditing practices used today in the industry are based on CobiT. So if
you want to make your auditors happy and pass your compliancy evaluations, you
should learn, practice, and implement the control objectives outlined in CobiT, which
are considered industry best practices.
NOTE Many people in the security industry mistakenly assume that CobiT is
purely security focused, when in reality it deals with all aspects of information
technology, security only being one component. CobiT is a set of practices
that can be followed to carry out IT governance, which requires proper
security practices.
NIST 800-53
Are there standard approaches to locking down government systems?
CobiT contains control objectives used within the private sector; the U.S. govern-
ment has its own set of requirements when it comes to controls for federal information
systems and organizations.

The National Institute of Standards and Technology (NIST) is a nonregulatory body
of the U.S. Department of Commerce, and its mission is to
“promote U.S. innovation and industrial competitiveness by advancing measurement science,
standards, and technology in ways that enhance economic security and improve quality of life.”
One of the standards that NIST has been responsible for developing is called Spe-
cial Publication 800-53, which outlines controls that agencies need to put into place to
be compliant with the Federal Information Security Management Act of 2002. Table 2-4
outlines the control categories that are addressed in this publication.
The control categories (families) are the management, operational, and technical
controls prescribed for an information system to protect the confidentiality, integrity,
and availability of the system and its information.
Just as IS auditors in the commercial sector follow CobiT for their “checklist” ap-
proach to evaluating an organization’s compliancy with business-oriented regulations,
government auditors use SP 800-53 as their “checklist” approach for ensuring that gov-
ernment agencies are compliant with government-oriented regulations. While these
control objective checklists are different (CobiT versus SP 800-53), there is extensive
overlap because systems and networks need to be protected in similar ways no matter
what type of organization they reside in.
Identifier Family Class
AC Access Control Technical
AT Awareness and Training Operational
AU Audit and Accountability Technical
CA Security Assessment and Authorization Management
CM Configuration Management Operational
CP Contingency Planning Operational
IA Identification and Authentication Technical
IR Incident Response Operational
MA Maintenance Operational
MP Media Protection Operational
PE Physical and Environmental Protection Operational
PL Planning Management
PM Program Management Management
PS Personnel Security Operational
RA Risk Assessment Management
SA System and Services Acquisition Management
SC System and Communications Protection Technical
SI System and Information Integrity Operational
Table 2-4 NIST 800-53 Control Categories
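The family-to-class mapping in Table 2-4 lends itself to a simple lookup. The table data comes straight from SP 800-53's control families; the helper function is just an illustrative convenience.

```python
# The Table 2-4 mapping expressed as a lookup structure, with a small
# illustrative helper mapping a control identifier to its family and class.

NIST_800_53_FAMILIES = {
    "AC": ("Access Control", "Technical"),
    "AT": ("Awareness and Training", "Operational"),
    "AU": ("Audit and Accountability", "Technical"),
    "CA": ("Security Assessment and Authorization", "Management"),
    "CM": ("Configuration Management", "Operational"),
    "CP": ("Contingency Planning", "Operational"),
    "IA": ("Identification and Authentication", "Technical"),
    "IR": ("Incident Response", "Operational"),
    "MA": ("Maintenance", "Operational"),
    "MP": ("Media Protection", "Operational"),
    "PE": ("Physical and Environmental Protection", "Operational"),
    "PL": ("Planning", "Management"),
    "PM": ("Program Management", "Management"),
    "PS": ("Personnel Security", "Operational"),
    "RA": ("Risk Assessment", "Management"),
    "SA": ("System and Services Acquisition", "Management"),
    "SC": ("System and Communications Protection", "Technical"),
    "SI": ("System and Information Integrity", "Operational"),
}

def control_class(control_id):
    """Map a control such as 'AC-2' to its family name and class."""
    family, cls = NIST_800_53_FAMILIES[control_id.split("-")[0]]
    return family, cls
```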

CAUTION On the CISSP exam you may see control categories broken down
into administrative, technical, and physical categories, as well as the categories
outlined by NIST, which are management, technical, and operational. Be familiar
with both ways of categorizing control types.
COSO
I put our expenses in the profit column so it looks like we have more money.
Response: Yeah, no one will figure that out.
CobiT was derived from the COSO framework, developed by the Committee of
Sponsoring Organizations (COSO) of the Treadway Commission in 1985 to deal with
fraudulent financial activities and reporting. The COSO framework is made up of the
following components:
• Control environment
  • Management’s philosophy and operating style
  • Company culture as it pertains to ethics and fraud
• Risk assessment
  • Establishment of risk objectives
  • Ability to manage internal and external change
• Control activities
  • Policies, procedures, and practices put in place to mitigate risk
• Information and communication
  • Structure that ensures that the right people get the right information at the
right time
• Monitoring
  • Detecting and responding to control deficiencies
COSO is a model for corporate governance, and CobiT is a model for IT governance.
COSO deals more at the strategic level, while CobiT focuses more at the operational
level. You can think of CobiT as a way to meet many of the COSO objectives, but only
from the IT perspective. COSO deals with non-IT items also, as in company culture, fi-
nancial accounting principles, board of director responsibility, and internal communi-
cation structures. COSO was formed to provide sponsorship for the National
Commission on Fraudulent Financial Reporting, an organization that studies deceptive
financial reports and what elements lead to them.
There have been laws in place since the 1970s that basically state that it was illegal
for a corporation to cook its books (manipulate its revenue and earnings reports), but
it took the Sarbanes–Oxley Act (SOX) of 2002 to really put teeth into those existing
laws. SOX is a U.S. federal law that, among other things, could send executives to jail if
it was discovered that their company was submitting fraudulent accounting findings to
the Securities and Exchange Commission (SEC). SOX is based upon the COSO model, so for

a corporation to be compliant with SOX, it has to follow the COSO model. Companies
commonly implement ISO/IEC 27000 standards and CobiT to help construct and
maintain their internal COSO structure.
NOTE The CISSP exam does not cover specific laws, as in the Federal
Information Security Management Act and Sarbanes–Oxley Act, but it does cover
the security control model frameworks, as in ISO standards, CobiT, and COSO.
Process Management Development
Along with ensuring that we have the proper controls in place, we also want to have ways
to construct and improve our business, IT, and security processes in a structured and
controlled manner. The security controls can be considered the “things,” and processes
are how we use these things. We want to use them properly, effectively, and efficiently.
ITIL
How do I make sure our IT supports our business units?
Response: We have a whole library for that.
The Information Technology Infrastructure Library (ITIL) is the de facto standard of
best practices for IT service management. ITIL was created because of the increased de-
pendence on information technology to meet business needs. Unfortunately, a natural
divide exists between business people and IT people in most organizations because
they use different terminology and have different focuses within the organization. The
lack of a common language and understanding of each other’s domain (business versus
IT) has caused many companies to ineffectively blend their business objectives and IT
functions. This improper blending usually generates confusion, miscommunication,
missed deadlines, missed opportunities, increased cost in time and labor, and frustra-
tion on both the business and technical sides of the house. ITIL is a customizable
framework that is provided in a set of books or in an online format. It provides the
goals, the general activities necessary to achieve these goals, and the input and output
values for each process required to meet these determined goals. Although ITIL has a
component that deals with security, its focus is more toward internal service level agree-
ments between the IT department and the “customers” it serves. The customers are usu-
ally internal departments. The main components that make up ITIL are illustrated in
Figure 2-6.
Six Sigma
I have a black belt in business improvement.
Response: Why is it tied around your head?
Six Sigma is a process improvement methodology. It is the “new and improved”
Total Quality Management (TQM) that hit the business sector in the 1980s. Its goal is
to improve process quality by using statistical methods of measuring operational
efficiency and reducing variation, defects, and waste. Six Sigma is being used in the secu-
rity assurance industry in some instances to measure the success factors of different
controls and procedures. Six Sigma was developed by Motorola with the goal of identi-
fying and removing defects in its manufacturing processes. The maturity of a process is
described by a sigma rating, which indicates the percentage of defects that the process
contains. While it started in manufacturing, Six Sigma has been applied to many types
of business functions, including information security and assurance.
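A sigma rating is commonly derived from defects per million opportunities (DPMO). The sketch below uses the conventional 1.5-sigma long-term shift; the sample defect counts are hypothetical.

```python
# Sketch: converting an observed defect count into a sigma rating via DPMO.
# Uses the conventional 1.5-sigma shift; the inputs are hypothetical.
from statistics import NormalDist

def sigma_level(defects, units, opportunities_per_unit):
    """Sigma rating for a process given its observed defect rate."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    # Invert the standard normal CDF on the yield, then add the 1.5 shift
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# 3.4 defects per million opportunities is the classic "six sigma" benchmark
benchmark = sigma_level(defects=34, units=10_000_000, opportunities_per_unit=1)
```

Applied to security assurance, the "defects" might be failed control checks out of all control checks performed, giving a quantitative success factor for a control.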
Figure 2-6 ITIL. The main ITIL components surround continual process improvement
and service design, and include change management, knowledge management, service
testing and validation, configuration management system, release and deployment
management, incident management, event management, problem management,
supplier management, service level management, service catalog management, and
availability management.

Capability Maturity Model Integration
I only want to get better, and better, and better.
Response: I only want you to go away.
Capability Maturity Model Integration (CMMI) came from the software engineering
world. We will cover it in more depth from that point of view in Chapter 10, but this
model is also used within organizations to help lay out a pathway of how incremental
improvement can take place.
While we know that we constantly need to make our security program better, it is
not always easy to accomplish because “better” is a vague and nonquantifiable concept.
The only way we can really improve is to know where we are starting from, where we
need to go, and the steps we need to take in between. Every security program has a ma-
turity level, which is illustrated in Figure 2-7. Each maturity level within this CMMI
model represents an evolutionary stage. Some security programs are chaotic, ad hoc,
unpredictable, and usually insecure. Some security programs have documentation cre-
ated, but the actual processes are not taking place. Some security programs are quite
evolved, streamlined, efficient, and effective.
Figure 2-7 Capability Maturity Model for a security program. The maturity levels run
from Level 0, nonexistent management (no process, no assessment), through Level 1,
unpredictable processes (ad hoc, disorganized, reactive, immature and developing);
Level 2, repeatable processes (security assigned to IT, defined procedures); Level 3,
defined processes (documented and communicated); Level 4, managed processes
(monitored and measured); to Level 5, optimized processes (automated practices,
security and business objectives mapped, structured and enterprise-wide).

The crux of CMMI is to develop structured steps that can be followed so an organi-
zation can evolve from one level to the next and constantly improve its processes and
security posture. A security program contains a lot of elements, and it is not fair to ex-
pect them all to be properly implemented within the first year of its existence. And
some components, as in forensics capabilities, really cannot be put into place until
some rudimentary pieces are established, as in incident management. So if we really
want our baby to be able to run, we have to lay out ways that it can first learn to walk.
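The maturity ladder in Figure 2-7 can be written down as a simple structure, with an illustrative helper that names the next stage to plan toward. The level names come from the figure; the helper itself is a hypothetical convenience.

```python
# The CMMI-style maturity ladder for a security program (per Figure 2-7),
# with a hypothetical helper naming the next evolutionary stage to plan for.

MATURITY_LEVELS = {
    0: "Nonexistent management",   # no process, no assessment
    1: "Unpredictable processes",  # ad hoc, disorganized, reactive
    2: "Repeatable processes",     # security assigned to IT, defined procedures
    3: "Defined processes",        # documented and communicated
    4: "Managed processes",        # monitored and measured
    5: "Optimized processes",      # automated, structured, enterprise-wide
}

def next_stage(current_level):
    """Name the next maturity stage to plan toward, or None if at the top."""
    return MATURITY_LEVELS.get(current_level + 1)
```

The point of the structure is the one-step-at-a-time constraint: the improvement plan always targets the next rung, not a leap from chaos to optimization.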
Security Program Development
No organization is going to put all the previously listed items (ISO/IEC 27000,
COSO, Zachman, SABSA, CobiT, NIST 800-53, ITIL, Six Sigma, CMMI) in place.
But it is a good toolbox of things you can pull from, and you will find some fit
the organization you work in better than others. You will also find that as your
organization’s security program matures, you will see more clearly where these
various standards, frameworks, and management components come into play.
While these items are separate and distinct, there are basic things that need to be
built in for any security program and its corresponding controls. This is because
the basic tenets of security are universal, whether they are being deployed in a
corporation, government agency, business, school, or nonprofit organization.
Each entity is made up of people, processes, data, and technology and each of
these things needs to be protected.
Top-down Approach
The janitor said we should wrap our computers in tin foil to meet our information secu-
rity needs.
Response: Maybe we should ask management first.
A security program should use a top-down approach, meaning that the initia-
tion, support, and direction come from top management; work their way through
middle management; and then reach staff members. In contrast, a bottom-up ap-
proach refers to a situation in which staff members (usually IT) try to develop a
security program without getting proper management support and direction. A
bottom-up approach is commonly less effective, not broad enough to address all
security risks, and doomed to fail. A top-down approach makes sure the people
actually responsible for protecting the company’s assets (senior management) are
driving the program. Senior management are not only ultimately responsible for
the protection of the organization, but also hold the purse strings for the neces-
sary funding, have the authority to assign needed resources, and are the only ones
who can ensure true enforcement of the stated security rules and policies. Man-
agement’s support is one of the most important pieces of a security program. A
simple nod and a wink will not provide the amount of support required.

While the cores of these various security standards and frameworks are similar, it is
important to understand that a security program has a life cycle that is always continu-
ing, because it should be constantly evaluated and improved upon. The life cycle of any
process can be described in different ways. We will use the following steps:
1. Plan and organize
2. Implement
3. Operate and maintain
4. Monitor and evaluate
Without setting up a life-cycle approach to a security program and the security man-
agement that maintains the program, an organization is doomed to treat security as
merely another project. Anything treated as a project has a start and stop date, and at
the stop date everyone disperses to other projects. Many organizations have had good
intentions in their security program kickoffs, but do not implement the proper struc-
ture to ensure that security management is an ongoing and continually improving pro-
cess. The result is a lot of starts and stops over the years and repetitive work that costs
more than it should, with diminishing results.
The main components of each phase are provided in the following:
• Plan and Organize
• Establish management commitment.
• Establish oversight steering committee.
• Assess business drivers.
• Develop a threat profile on the organization.
• Carry out a risk assessment.
• Develop security architectures at business, data, application, and
infrastructure levels.
• Identify solutions per architecture level.
• Obtain management approval to move forward.
• Implement
• Assign roles and responsibilities.
• Develop and implement security policies, procedures, standards, baselines,
and guidelines.
• Identify sensitive data at rest and in transit.
• Implement the following blueprints:
• Asset identification and management
• Risk management

• Vulnerability management
• Compliance
• Identity management and access control
• Change control
• Software development life cycle
• Business continuity planning
• Awareness and training
• Physical security
• Incident response
• Implement solutions (administrative, technical, physical) per blueprint.
• Develop auditing and monitoring solutions per blueprint.
• Establish goals, service level agreements (SLAs), and metrics per blueprint.
• Operate and Maintain
• Follow procedures to ensure all baselines are met in each implemented
blueprint.
• Carry out internal and external audits.
• Carry out tasks outlined per blueprint.
• Manage SLAs per blueprint.
• Monitor and Evaluate
• Review logs, audit results, collected metric values, and SLAs per blueprint.
• Assess goal accomplishments per blueprint.
• Carry out quarterly meetings with steering committees.
• Develop improvement steps and integrate into the Plan and Organize
phase.
Many of the items mentioned in the previous list are covered throughout this book.
This list was provided to show how all of these items can be rolled out in a sequential
and controllable manner.
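The four-phase life cycle above can be sketched as a repeating loop rather than a one-shot project. The handler hook below is a placeholder for the real phase activities; everything here is illustrative.

```python
# Sketch: the security program life cycle as a continuing loop, not a
# project with a stop date. The phase handler is a placeholder hook.

PHASES = ("Plan and organize", "Implement",
          "Operate and maintain", "Monitor and evaluate")

def run_program(passes, handler):
    """Walk all four phases repeatedly; evaluation output feeds the next plan."""
    history = []
    for cycle in range(passes):
        for phase in PHASES:
            handler(cycle, phase)       # carry out this phase's activities
            history.append((cycle, phase))
    return history

history = run_program(2, lambda cycle, phase: None)  # two full improvement cycles
```

The loop structure is the point: the improvement steps produced in Monitor and Evaluate become input to the next Plan and Organize pass, so the program never "ends."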
Although the previously covered standards and frameworks are very helpful, they
are also very high level. For example, if a standard simply states that an organization
must secure its data, a great amount of work will be called for. This is where the secu-
rity professional really rolls up her sleeves, by developing security blueprints. Blueprints
are important tools to identify, develop, and design security requirements for specific
business needs. These blueprints must be customized to fulfill the organization’s secu-
rity requirements, which are based on its regulatory obligations, business drivers, and

legal obligations. For example, let’s say Company Y has a data protection policy, and
their security team has developed standards and procedures pertaining to the data pro-
tection strategy the company should follow. The blueprint will then get more granular
and lay out the processes and components necessary to meet requirements outlined in
the policy, standards, and requirements. This would include at least a diagram of the
company network that illustrates:
• Where the sensitive data resides within the network
• The network segments that the sensitive data traverses
• The different security solutions in place (VPN, SSL, PGP) that protect the
sensitive data
• Third-party connections where sensitive data is shared
• Security measures in place for third-party connections
• And more…
The blueprints to be developed and followed depend upon the organization’s busi-
ness needs. If Company Y uses identity management, there must be a blueprint outlin-
ing roles, registration management, authoritative source, identity repositories, single
sign-on solutions, and so on. If Company Y does not use identity management, there is
no need to build a blueprint for this.
So the blueprint will lay out the security solutions, processes, and components the
organization uses to match its security and business needs. These blueprints must be
applied to the different business units within the organization. For example, the iden-
tity management practiced in each of the different departments should follow the craft-
ed blueprint. Following these blueprints throughout the organization allows for
standardization, easier metric gathering, and governance. Figure 2-8 illustrates where
these blueprints come into play when developing a security program.
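The standardization idea above can be sketched as a simple conformance check. This is a hypothetical illustration, not anything prescribed by the text: the component names loosely echo the identity management example, and the department data is invented.

```python
# Hypothetical conformance check of a department against a crafted
# blueprint. Component names echo the identity management example in
# the text; the department data below is invented for illustration.

IDENTITY_BLUEPRINT = {
    "roles",
    "registration_management",
    "authoritative_source",
    "identity_repositories",
    "single_sign_on",
}

def missing_components(department_config):
    """Return the blueprint components a department has not yet implemented."""
    return sorted(IDENTITY_BLUEPRINT - set(department_config))

# The accounting department implements only part of the blueprint.
accounting = ["roles", "single_sign_on", "identity_repositories"]
print(missing_components(accounting))  # ['authoritative_source', 'registration_management']
```

Running the same check against every business unit is one way to get the standardized metric gathering the text describes.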
To tie these pieces together, you can think of the ISO/IEC 27000 series, which works
mainly at the policy level, as a description of the type of house you want to build (two-story,
ranch-style, five bedroom, three bath). The security enterprise framework is the architec-
ture layout of the house (foundation, walls, ceilings). The blueprints are the detailed
descriptions of specific components of the house (window types, security system, elec-
trical system, plumbing). And the control objectives are the building specifications and
codes that need to be met for safety (electrical grounding and wiring, construction
material, insulation and fire protection). A building inspector will use his checklists
(building codes) to ensure that you are building your house safely, just as an auditor
will use his checklists (CobiT or SP 800-53) to ensure that you are building and
maintaining your security program securely.
Once your house is built and your family moves in, you set up schedules and pro-
cesses for everyday life to happen in a predictable and efficient manner (dad picks up
kids from school, mom cooks dinner, teenager does laundry, dad pays the bills, every-

Chapter 2: Information Security Governance and Risk Management
67
one does yard work). This is analogous to ITIL—process management and improve-
ment. If the family is made up of anal overachievers with the goal of optimizing these
daily activities to be as efficient as possible, they could integrate a Six Sigma approach
where continual process improvement is a focus.
Figure 2-8 Blueprints must map the security and business requirements.

Key Terms
• Security through obscurity  Relying upon the secrecy or complexity of an item as its security, instead of practicing solid security practices.
• ISO/IEC 27000 series  Industry-recognized best practices for the development and management of an information security management system.
• Zachman framework  Enterprise architecture framework used to define and understand a business environment, developed by John Zachman.
• TOGAF  Enterprise architecture framework used to define and understand a business environment, developed by The Open Group.
• SABSA framework  Risk-driven enterprise security architecture that maps to business initiatives, similar to the Zachman framework.
• DoDAF  U.S. Department of Defense architecture framework that ensures interoperability of systems to meet military mission goals.
• MODAF  Architecture framework used mainly in military support missions, developed by the British Ministry of Defence.
• CobiT  Set of control objectives used as a framework for IT governance, developed by the Information Systems Audit and Control Association (ISACA) and the IT Governance Institute (ITGI).
• SP 800-53  Set of controls used to secure U.S. federal systems, developed by NIST.
• COSO  Internal control model used for corporate governance to help prevent fraud, developed by the Committee of Sponsoring Organizations (COSO) of the Treadway Commission.
• ITIL  Best practices for information technology services management processes, developed by the United Kingdom’s Office of Government Commerce.
• Six Sigma  Business management strategy developed by Motorola with the goal of improving business processes.
• Capability Maturity Model Integration (CMMI)  Process improvement model developed by Carnegie Mellon.

Functionality vs. Security
Yes, we are secure, but we can’t do anything.
Anyone who has been involved with a security initiative understands it involves a
balancing act between securing an environment and still allowing the necessary level of
functionality so that productivity is not affected. A common scenario that occurs at the
start of many security projects is that the individuals in charge of the project know the
end result they want to achieve and have lofty ideas of how quick and efficient their
security rollout will be, but they fail to consult the users regarding what restrictions will
be placed upon them. The users, upon hearing of the restrictions, then inform the proj-
ect managers that they will not be able to fulfill certain parts of their job if the security
rollout actually takes place as planned. This usually causes the project to screech to a
halt. The project managers then must initiate the proper assessments, evaluations,
and planning to see how the environment can be slowly secured and how to ease users
and tasks delicately into new restrictions or ways of doing business. Failing to consult
users or to fully understand business processes during the planning phase causes many
headaches and wastes time and money. Individuals who are responsible for security
management activities must realize they need to understand the environment and plan
properly before kicking off the implementation phase of a security program.
Security Management
Now that we built this thing, how do we manage it?
Response: Try kicking it.
We hear about viruses causing millions of dollars in damages, hackers from around
the world capturing credit card information from financial institutions, web sites of
large corporations and government systems being attacked for political reasons, and
hackers being caught and sent to jail. These are the more exciting aspects of informa-
tion security, but realistically these activities are not what the average corporation or
security professional must usually deal with when it comes to daily or monthly secu-
rity tasks. Although viruses and hacking get all the headlines, security management is
the core of a company’s business and information security structure.
Security management has changed over the years because networked environments,
computers, and the applications that hold information have changed. Information
used to be held primarily in mainframes, which worked in a more centralized network
structure. The mainframe and management consoles used to access and configure the
mainframe were placed in a centralized area instead of having the distributed networks
we see today. Only certain people were allowed access, and only a small set of people
knew how the mainframe worked, which drastically reduced security risks. Users were
able to access information on the mainframe through “dumb” terminals (they were
called this because they had little or no logic built into them). There was not much
need for strict security controls to be put into place because this closed environment
provided cocoon-like protection. However, the computing society did
not stay in this type of architecture. Today, most networks are filled with personal com-
puters that have advanced logic and processing power; users know enough about the
systems to be dangerous; and the information is not centralized within one “glass
house.” Instead, the information lives on servers, workstations, laptops, wireless de-
vices, mobile devices, databases, and other networks. Information passes over wires
and airwaves at a rate not even conceived of 10 to 15 years ago.

The Internet, extranets (business partner networks), and intranets not only make
security much more complex, but they also make security even more critical. The core
network architecture has changed from being a localized, stand-alone computing
environment to a distributed computing environment whose complexity has increased
exponentially. Although connecting a network to the Internet adds more functionality
and services for the users and expands the organization’s visibility to the Internet
world, it opens the floodgates to potential security risks.
Today, a majority of organizations could not function if they were to lose their com-
puters and processing capabilities. Computers have been integrated into the business
and individual daily fabric, and their sudden unavailability would cause great pain and
disruption. Most organizations have realized that their data is as much an asset to be
protected as their physical buildings, factory equipment, and other physical assets. In
most situations, the organization’s sensitive data is even more important than these
physical assets and is considered the organization’s crown jewels. As networks and en-
vironments have changed, so has the need for security. Security is more than just the
technical controls we put in place to protect the organization’s assets; these controls
must be managed, and a big part of security is managing the actions of users and the
procedures they follow. Security management practices focus on the continuous protec-
tion of an organization’s assets and resources.
Security management encompasses all the activities that are needed to keep a secu-
rity program up and running and evolving. It includes risk management, documenta-
tion, security control implementation and management, processes and procedures,
personnel security, auditing, and continual security awareness training. A risk analysis
identifies the critical assets, discovers the threats that put them at risk, and is used to
estimate the possible damage and potential loss an organization could endure if any of
these threats were to become real. The risk analysis helps management construct a bud-
get with the necessary funds to protect the recognized assets from their identified threats
and develop applicable security policies that provide direction for security activities.
Protection controls are identified, implemented, and maintained to keep the organiza-
tion’s security risks at an acceptable level. Security education and awareness take this
information to each and every employee within the company so everyone is properly
informed and can more easily work toward the same security goals.
The following sections will cover some of the most important components of man-
aging a security program once it is up and running.
Risk Management
Life is full of risk.
Risk in the context of security is the possibility of damage happening and the rami-
fications of such damage should it occur. Information risk management (IRM) is the
process of identifying and assessing risk, reducing it to an acceptable level, and imple-
menting the right mechanisms to maintain that level. There is no such thing as a 100-per-
cent secure environment. Every environment has vulnerabilities and threats. The skill is
in identifying these threats, assessing the probability of them actually occurring and the
damage they could cause, and then taking the right steps to reduce the overall level of
risk in the environment to what the organization identifies as acceptable.

Risks to an organization come in different forms, and they are not all computer
related. When a company purchases another company, it takes on a lot of risk in the
hope that this move will increase its market base, productivity, and profitability. If a
company increases its product line, this can add overhead, increase the need for person-
nel and storage facilities, require more funding for different materials, and maybe in-
crease insurance premiums and the expense of marketing campaigns. The risk is that
this added overhead might not be matched in sales; thus, profitability will be reduced
or not accomplished.
When we look at information security, note that an organization needs to be aware
of several types of risk and address them properly. The following items touch on the
major categories:
• Physical damage  Fire, water, vandalism, power loss, and natural disasters
• Human interaction  Accidental or intentional action or inaction that can disrupt productivity
• Equipment malfunction  Failure of systems and peripheral devices
• Inside and outside attacks  Hacking, cracking, and attacking
• Misuse of data  Sharing trade secrets, fraud, espionage, and theft
• Loss of data  Intentional or unintentional loss of information to unauthorized receivers
• Application error  Computation errors, input errors, and buffer overflows
Threats must be identified, classified, and evaluated to calculate their damage
potential to the organization. Real risk is hard to measure, but prioritizing the
potential risks in order of which ones must be addressed first is attainable.
Who Really Understands Risk Management?
Unfortunately, the answer to this question is that not enough people inside or outside
of the security profession really understand risk management. Even though informa-
tion security is big business today, the focus is more on applications, devices, viruses,
and hacking. Although these items all must be considered and weighed in risk manage-
ment processes, they should be considered small pieces of the overall security puzzle,
not the main focus of risk management.
Security is a business issue, but businesses operate to make money, not just to be
secure. A business is concerned with security only if potential risks threaten its bottom
line, which they can in many ways, such as through the loss of reputation and their
customer base after a database of credit card numbers is compromised; through the loss
of thousands of dollars in operational expenses from a new computer worm; through
the loss of proprietary information as a result of successful company espionage at-
tempts; through the loss of confidential information from a successful social engineer-
ing attack; and so on. It is critical that security professionals understand these
individual threats, but it is more important that they understand how to calculate the
risk of these threats and map them to business drivers.

Knowing the difference between the definitions of “vulnerability,” “threat,” and
“risk” may seem trivial to you, but it is more critical than most people truly understand.
A vulnerability scanner can identify dangerous services that are running, unnecessary
accounts, and unpatched systems. That is the easy part. But if you have a security budget
of only $120,000 and you have a long list of vulnerabilities that need attention, do you
have the proper skill to know which ones should be dealt with first? Since you have a
finite amount of money and an almost infinite number of vulnerabilities, how do you
properly rank the most critical vulnerabilities to ensure that your company is address-
ing the most critical issues and providing the most return on investment of funds? This
is what risk management is all about.
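The budgeting question above can be sketched as a simple ranking exercise. Only the $120,000 budget figure comes from the text; the vulnerability names, risk-reduction values, and remediation costs below are invented for illustration, and real prioritization would weigh far more factors.

```python
# Hypothetical sketch: ranking vulnerabilities under a fixed budget.
# Only the $120,000 budget comes from the text; the vulnerability
# names, risk-reduction values, and costs are invented.

def prioritize(vulnerabilities, budget):
    """Greedily select fixes with the best risk reduction per dollar spent."""
    ranked = sorted(vulnerabilities,
                    key=lambda v: v["risk_reduction"] / v["cost"],
                    reverse=True)
    selected, remaining = [], budget
    for v in ranked:
        if v["cost"] <= remaining:
            selected.append(v["name"])
            remaining -= v["cost"]
    return selected

vulns = [
    {"name": "unpatched servers",    "risk_reduction": 90_000, "cost": 30_000},
    {"name": "dangerous services",   "risk_reduction": 40_000, "cost": 10_000},
    {"name": "unnecessary accounts", "risk_reduction": 15_000, "cost": 5_000},
    {"name": "legacy VPN",           "risk_reduction": 60_000, "cost": 100_000},
]
# The three cheaper fixes are selected; the $100,000 legacy VPN fix no
# longer fits in what remains of the budget.
print(prioritize(vulns, 120_000))
```

The greedy ratio is only one possible ranking rule, but it captures the "most return on investment of funds" idea in the paragraph above.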
Carrying out risk management properly means that you have a holistic understand-
ing of your organization, the threats it faces, the countermeasures that can be put into
place to deal with those threats, and continuous monitoring to ensure the acceptable
risk level is being met on an ongoing basis.
Information Risk Management Policy
How do I put all of these risk management pieces together?
Response: Let’s check out the policy.
Proper risk management requires a strong commitment from senior management,
a documented process that supports the organization’s mission, an information risk
management (IRM) policy, and a delegated IRM team.
The IRM policy should be a subset of the organization’s overall risk management
policy (risks to a company include more than just information security issues) and should be
mapped to the organizational security policies. The IRM policy should address the fol-
lowing items:
• The objectives of the IRM team
• The level of risk the organization will accept and what is considered an
acceptable level of risk
• Formal processes of risk identification
• The connection between the IRM policy and the organization’s strategic
planning processes
• Responsibilities that fall under IRM and the roles to fulfill them
• The mapping of risk to internal controls
• The approach toward changing staff behaviors and resource allocation in
response to risk analysis
• The mapping of risks to performance targets and budgets
• Key indicators to monitor the effectiveness of controls
The IRM policy provides the foundation and direction for the organization’s security
risk management processes and procedures, and should address all issues of
information security. It should provide direction on how the IRM team communicates
information on company risks to senior management and how to properly execute
management’s decisions on risk mitigation tasks.
The Risk Management Team
Fred is always scared of stuff. He is going to head up our risk team.
Response: Fair enough.
Each organization is different in its size, security posture, threat profile, and security
budget. One organization may have one individual responsible for IRM or a team that
works in a coordinated manner. The overall goal of the team is to ensure the company
is protected in the most cost-effective manner. This goal can be accomplished only if
the following components are in place:
• An established risk acceptance level provided by senior management
• Documented risk assessment processes and procedures
• Procedures for identifying and mitigating risks
• Appropriate resource and fund allocation from senior management
• Security-awareness training for all staff members associated with
information assets
• The ability to establish improvement (or risk mitigation) teams in specific
areas when necessary
• The mapping of legal and regulatory compliance requirements to controls and
implementation requirements
• The development of metrics and performance indicators so as to measure and
manage various types of risks
• The ability to identify and assess new risks as the environment and
company change
• The integration of IRM and the organization’s change control process to
ensure that changes do not introduce new vulnerabilities
Obviously, this list involves a lot more than just buying a shiny new firewall and
calling the company safe.
The IRM team, in most cases, is not made up of employees with the dedicated task
of risk management. It consists of people who already have a full-time job in the com-
pany and are now tasked with something else. Thus, senior management support is
necessary so proper resource allocation can take place.
Of course, all teams need a leader, and IRM is no different. One individual should
be singled out to run this rodeo and, in larger organizations, this person should be
spending 50 to 70 percent of their time in this role. Management must dedicate funds
to making sure this person receives the necessary training and risk analysis tools to
ensure it is a successful endeavor.

Risk Assessment and Analysis
I have determined that our greatest risk is this paperclip.
Response: Nice work.
A risk assessment, which is really a tool for risk management, is a method of identi-
fying vulnerabilities and threats and assessing the possible impacts to determine where
to implement security controls. A risk assessment is carried out, and the results are ana-
lyzed. Risk analysis is used to ensure that security is cost-effective, relevant, timely, and
responsive to threats. Security can be quite complex, even for well-versed security pro-
fessionals, and it is easy to apply too much security, not enough security, or the wrong
security controls, and to spend too much money in the process without attaining the
necessary objectives. Risk analysis helps companies prioritize their risks and shows
management the amount of resources that should be applied to protecting against
those risks in a sensible manner.
A risk analysis has four main goals:
• Identify assets and their value to the organization.
• Identify vulnerabilities and threats.
• Quantify the probability and business impact of these potential threats.
• Provide an economic balance between the impact of the threat and the cost
of the countermeasure.
Risk analysis provides a cost/benefit comparison, which compares the annualized
cost of controls to the potential cost of loss. A control, in most cases, should not be
implemented unless the annualized cost of loss exceeds the annualized cost of the con-
trol itself. This means that if a facility is worth $100,000, it does not make sense to
spend $150,000 trying to protect it.
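A minimal sketch of this cost/benefit rule, using the standard annualized loss expectancy (ALE) calculation. Only the $100,000 facility and $150,000 control figures come from the text; the exposure factor and occurrence rate are assumed values for illustration.

```python
# Illustrative sketch of the cost/benefit rule above. Only the $100,000
# facility and $150,000 control figures come from the text; the exposure
# factor and annual rate of occurrence are assumptions.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = SLE x ARO, where SLE = asset value x exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

def control_is_justified(ale_before, ale_after, annual_control_cost):
    """A control makes sense only if the loss it prevents each year
    exceeds its own annualized cost."""
    return (ale_before - ale_after) > annual_control_cost

# Even if a $150,000/year control eliminated the risk entirely, it is
# not justified for a facility whose worst-case annual loss is $100,000.
ale = annualized_loss_expectancy(100_000, 1.0, 1.0)
print(control_is_justified(ale, 0, 150_000))  # False
```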
It is important to figure out what you are supposed to be doing before you dig right
in and start working. Anyone who has worked on a project without a properly defined
scope can attest to the truth of this statement. Before an assessment and analysis is
started, the team must carry out project sizing to understand what assets and threats
should be evaluated. Most assessments are focused on physical security, technology
security, or personnel security. Trying to assess all of them at the same time can be quite
an undertaking.
One of the risk analysis team’s tasks is to create a report that details the asset
valuations. Senior management should review and accept the list, and make it the scope
of the IRM project. If management determines at this early stage that some assets are
not important, the risk assessment team should not spend additional time or resources
evaluating those assets. During discussions with management, everyone involved must
have a firm understanding of the value of the security AIC triad—availability, integrity,
and confidentiality—and how it directly relates to business needs.
Management should outline the scope of the assessment, which most likely will be
dictated by organizational compliance requirements as well as budgetary constraints.
Many projects have run out of funds, and consequently stopped, because proper project
sizing was not conducted at the onset of the project. Don’t let this happen to you.

A risk analysis helps integrate the security program objectives with the company’s
business objectives and requirements. The more the business and security objectives are
in alignment, the more successful the two will be. The analysis also helps the company
draft a proper budget for a security program and its constituent security components.
Once a company knows how much its assets are worth and the possible threats they are
exposed to, it can make intelligent decisions about how much money to spend protect-
ing those assets.
A risk analysis must be supported and directed by senior management if it is to be
successful. Management must define the purpose and scope of the analysis, appoint a
team to carry out the assessment, and allocate the necessary time and funds to conduct
the analysis. It is essential for senior management to review the outcome of the risk as-
sessment and analysis and to act on its findings. After all, what good is it to go through
all the trouble of a risk assessment and not react to its findings? Unfortunately, this does
happen all too often.
Risk Analysis Team
Each organization has different departments, and each department has its own func-
tionality, resources, tasks, and quirks. For the most effective risk analysis, an organiza-
tion must build a risk analysis team that includes individuals from many or all depart-
ments to ensure that all of the threats are identified and addressed. The team members
may be part of management, application programmers, IT staff, systems integrators,
and operational managers—indeed, any key personnel from key areas of the organiza-
tion. This mix is necessary because if the risk analysis team comprises only individuals
from the IT department, it may not understand, for example, the types of threats the
accounting department faces with data integrity issues, or how the company as a whole
would be affected if the accounting department’s data files were wiped out by an acci-
dental or intentional act. Or, as another example, the IT staff may not understand all
the risks the employees in the warehouse would face if a natural disaster were to hit, or
what it would mean to their productivity and how it would affect the organization
overall. If the risk analysis team is unable to include members from various depart-
ments, it should, at the very least, make sure to interview people in each department so
it fully understands and can quantify all threats.
The risk analysis team must also include people who understand the processes that
are part of their individual departments, meaning individuals who are at the right levels
of each department. This is a difficult task, since managers tend to delegate any sort of
risk analysis task to lower levels within the department. However, the people who work
at these lower levels may not have adequate knowledge and understanding of the pro-
cesses that the risk analysis team may need to deal with.
When looking at risk, it’s good to keep several questions in mind. Raising these
questions helps ensure that the risk analysis team and senior management know what
is important. Team members must ask the following: What event could occur (threat
event)? What could be the potential impact (risk)? How often could it happen (fre-
quency)? What level of confidence do we have in the answers to the first three questions
(certainty)? A lot of this information is gathered through internal surveys, interviews, or
workshops.

Viewing threats with these questions in mind helps the team focus on the tasks at
hand and assists in making the decisions more accurate and relevant.
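The four questions above can be captured as one record per threat event, with the certainty answer discounting low-confidence estimates. Everything below is a hypothetical sketch; the events, impacts, frequencies, and certainty weights are invented.

```python
# Sketch of the four questions above as one record per threat event.
# All field values are invented; the "certainty" answer is used to
# discount low-confidence estimates when ranking.

def ranked_threats(threats):
    """Rank threats by confidence-weighted expected annual impact."""
    def score(t):
        return t["impact"] * t["frequency_per_year"] * t["certainty"]
    return sorted(threats, key=score, reverse=True)

threats = [
    {"event": "data entry error", "impact": 2_000,   "frequency_per_year": 50,   "certainty": 0.9},
    {"event": "facility fire",    "impact": 500_000, "frequency_per_year": 0.05, "certainty": 0.5},
    {"event": "malware outbreak", "impact": 40_000,  "frequency_per_year": 2,    "certainty": 0.7},
]

# Frequent small errors can outrank a rare catastrophe once frequency
# and certainty are factored in.
for t in ranked_threats(threats):
    print(t["event"])
```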
The Value of Information and Assets
If information does not have any value, then who cares about protecting it?
The value placed on information is relative to the parties involved, what work was
required to develop it, how much it costs to maintain, what damage would result if it
were lost or destroyed, what enemies would pay for it, and what liability penalties
could be endured. If a company does not know the value of the information and the
other assets it is trying to protect, it does not know how much money and time it
should spend on protecting them. If you were in charge of making sure Russia does not
know the encryption algorithms used when transmitting information to and from U.S.
spy satellites, you would use more extreme (and expensive) security measures than you
would use to protect your peanut butter and banana sandwich recipe from your next-
door neighbor. The value of the information supports security measure decisions.
The previous examples refer to assessing the value of information and protecting it, but
this logic applies to an organization’s facilities, systems, and resources. The value of
the company’s facilities must be assessed, along with all printers, workstations, servers,
peripheral devices, supplies, and employees. You do not know how much is in danger of
being lost if you don’t know what you have and what it is worth in the first place.
Costs That Make Up the Value
An asset can have both quantitative and qualitative measurements assigned to it, but
these measurements need to be derived. The actual value of an asset is determined by
the importance it has to the organization as a whole. The value of an asset should re-
flect all identifiable costs that would arise if the asset were actually impaired. If a server
cost $4,000 to purchase, this value should not be input as the value of the asset in a risk
assessment. Rather, the cost of replacing or repairing it, the loss of productivity, and the
value of any data that may be corrupted or lost must be accounted for to properly cap-
ture the amount the organization would lose if the server were to fail for one reason or
another.
The following issues should be considered when assigning values to assets:
• Cost to acquire or develop the asset
• Cost to maintain and protect the asset
• Value of the asset to owners and users
• Value of the asset to adversaries
• Price others are willing to pay for the asset
• Cost to replace the asset if lost
• Operational and production activities affected if the asset is unavailable
• Liability issues if the asset is compromised
• Usefulness and role of the asset in the organization
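The cost components listed above roll up into a single asset value. A minimal sketch follows; the component names and dollar figures are hypothetical, echoing the $4,000 server example above.

```python
# Minimal sketch of rolling the cost components above into one asset
# value. Component names and dollar figures are hypothetical, echoing
# the $4,000 server example in the text.

def asset_value(costs):
    """Value = sum of all identifiable costs if the asset were impaired."""
    return sum(costs.values())

server = {
    "replacement_hardware": 4_000,   # the purchase price alone...
    "lost_productivity":    12_000,  # ...understates the real exposure
    "data_recovery":        20_000,
    "liability_exposure":   9_000,
}
print(asset_value(server))  # 45000 -- far above the $4,000 sticker price
```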

Understanding the value of an asset is the first step to understanding what security
mechanisms should be put in place and what funds should go toward protecting it. A
very important question is how much it could cost the company to not protect the asset.
Determining the value of assets may be useful to a company for a variety of reasons,
including the following:
• To perform effective cost/benefit analyses
• To select specific countermeasures and safeguards
• To determine the level of insurance coverage to purchase
• To understand what exactly is at risk
• To comply with legal and regulatory requirements
Assets may be tangible (computers, facilities, supplies) or intangible (reputation,
data, intellectual property). It is usually harder to quantify the values of intangible as-
sets, which may change over time. How do you put a monetary value on a company’s
reputation? This is not always an easy question to answer, but it is important to be able
to do so.
Identifying Vulnerabilities and Threats
Okay, what should we be afraid of?
Earlier, it was stated that the definition of a risk is the probability of a threat agent
exploiting a vulnerability to cause harm to an asset and the resulting business impact.
Many types of threat agents can take advantage of several types of vulnerabilities, result-
ing in a variety of specific threats, as outlined in Table 2-5, which represents only a
sampling of the risks many organizations should address in their risk management
programs.
Other types of threats can arise in an environment that are much harder to identify
than those listed in Table 2-5. These other threats have to do with application and user
errors. If an application uses several complex equations to produce results, the threat
can be difficult to discover and isolate if these equations are incorrect or if the applica-
tion is using inputted data incorrectly. This can result in illogical processing and cascading
errors as invalid results are passed on to another process. These types of problems can
lie within applications’ code and are very hard to identify.
User errors, intentional or accidental, are easier to identify by monitoring and au-
diting user activities. Audits and reviews must be conducted to discover if employees are
inputting values incorrectly into programs, misusing technology, or modifying data in
an inappropriate manner.
Once the vulnerabilities and associated threats are identified, the ramifications of
these vulnerabilities being exploited must be investigated. Risks have loss potential,
meaning what the company would lose if a threat agent actually exploited a vulnerabil-
ity. The loss may be corrupted data, destruction of systems and/or the facility, unau-
thorized disclosure of confidential information, a reduction in employee productivity,
and so on. When performing a risk analysis, the team also must look at delayed loss
when assessing the damages that can occur. Delayed loss is secondary in nature and

CISSP All-in-One Exam Guide
78
takes place well after a vulnerability is exploited. Delayed loss may include damage to
the company’s reputation, loss of market share, accrued late penalties, civil suits, the
delayed collection of funds from customers, and so forth.
For example, if a company’s web servers are attacked and taken offline, the immedi-
ate damage (loss potential) could be data corruption, the man-hours necessary to place
the servers back online, and the replacement of any code or components required. The
company could lose revenue if it usually accepts orders and payments via its web site.
If it takes a full day to get the web servers fixed and back online, the company could lose
a lot more sales and profits. If it takes a full week to get the web servers fixed and back
online, the company could lose enough sales and profits to not be able to pay other
bills and expenses. This would be a delayed loss. If the company’s customers lose con-
fidence in it because of this activity, it could lose business for months or years. This is a
more extreme case of delayed loss.
These types of issues make the process of properly quantifying losses that specific
threats could cause more complex, but they must be taken into consideration to ensure
reality is represented in this type of analysis.
Methodologies for Risk Assessment
Are there rules on how to do this risk stuff or do we just make it up as we go along?
The industry has different standardized methodologies when it comes to carrying
out risk assessments. Each of the individual methodologies has the same basic core
components (identify vulnerabilities, associate threats, calculate risk values), but each
Threat Agent | Can Exploit This Vulnerability | Resulting in This Threat
Malware | Lack of antivirus software | Virus infection
Hacker | Powerful services running on a server | Unauthorized access to confidential information
Users | Misconfigured parameter in the operating system | System malfunction
Fire | Lack of fire extinguishers | Facility and computer damage, and possibly loss of life
Employee | Lack of training or standards enforcement; lack of auditing | Sharing mission-critical information; altering data inputs and outputs from data processing applications
Contractor | Lax access control mechanisms | Stealing trade secrets
Attacker | Poorly written application; lack of stringent firewall settings | Conducting a buffer overflow; conducting a denial-of-service attack
Intruder | Lack of security guard | Breaking windows and stealing computers and devices
Table 2-5 Relationship of Threats and Vulnerabilities

Chapter 2: Information Security Governance and Risk Management
79
has a specific focus. As a security professional, it is your responsibility to know which approach is best for your organization and its needs.
NIST developed a risk methodology, which is published in its SP 800-30 document, Risk Management Guide for Information Technology Systems, and is considered a U.S. federal government standard. It is specific to IT threats and how they relate to information security risks. It lays out the following steps:
• System characterization
• Threat identification
• Vulnerability identification
• Control analysis
• Likelihood determination
• Impact analysis
• Risk determination
• Control recommendations
• Results documentation
The NIST risk management methodology is mainly focused on computer systems and IT security issues. It does not cover larger organizational threat types, such as natural disasters, succession planning, or environmental issues, or how security risks relate to business risks. It is a methodology that focuses on the operational components of an enterprise, not necessarily the higher strategic level. The methodology outlines specific risk assessment activities, as shown in Figure 2-9, with the associated input and output values.
A second type of risk assessment methodology is called FRAP, which stands for Fa-
cilitated Risk Analysis Process. The crux of this qualitative methodology is to focus only
on the systems that really need assessing to reduce costs and time obligations. It stress-
es prescreening activities so that the risk assessment steps are only carried out on the
item(s) that needs it the most. It is to be used to analyze one system, application, or
business process at a time. Data is gathered and threats to business operations are pri-
oritized based upon their criticality. The risk assessment team documents the controls
that need to be put into place to reduce the identified risks along with action plans for
control implementation efforts.
This methodology does not support the idea of calculating exploitation probability
numbers or annual loss expectancy values. The criticalities of the risks are determined
by the team members’ experience. The author of this methodology (Thomas Peltier)
believes that trying to use mathematical formulas for the calculation of risk is too con-
fusing and time consuming. The goal is to keep the scope of the assessment small and
the assessment processes simple to allow for efficiency and cost effectiveness.
Another methodology called OCTAVE (Operationally Critical Threat, Asset, and
Vulnerability Evaluation) was created by Carnegie Mellon University’s Software Engi-
neering Institute. It is a methodology that is intended to be used in situations where

[Figure 2-9, not reproduced here, maps the nine risk assessment steps to their inputs and outputs. Inputs include hardware, software, system interfaces, data and information, people, system mission, history of system attack, data from intelligence agencies, NIPC, OIG, FedCIRC, and mass media, reports from prior risk assessments, audit comments, security requirements, security test results, and current and planned controls. Outputs include the system boundary, system functions, system and data criticality and sensitivity, a threat statement, a list of potential vulnerabilities, a list of current and planned controls, likelihood and impact ratings, risks with associated risk levels, recommended controls, and the risk assessment report.]
Figure 2-9 Risk management steps in NIST SP 800-30

people manage and direct the risk evaluation for information security within their
company. This places the people that work inside the organization in the power posi-
tions as being able to make the decisions regarding what is the best approach for evalu-
ating the security of their organization. This relies on the idea that the people working
in these environments best understand what is needed and what kind of risks they are
facing. The individuals who make up the risk assessment team go through rounds of
facilitated workshops. The facilitator helps the team members understand the risk
methodology and how to apply it to the vulnerabilities and threats identified within
their specific business units. It stresses a self-directed team approach. The scope of an
OCTAVE assessment is usually very wide compared to the more focused approach of
FRAP. Where FRAP would be used to assess a system or application, OCTAVE would be
used to assess all systems, applications, and business processes within the organization.
While NIST, FRAP, and OCTAVE methodologies focus on IT security threats and
information security risks, AS/NZS 4360 takes a much broader approach to risk man-
agement. This Australian and New Zealand methodology can be used to understand a
company’s financial, capital, human safety, and business decisions risks. Although it
can be used to analyze security risks, it was not created specifically for this purpose. This
risk methodology is more focused on the health of a company from a business point of
view, not security.
If we need a risk methodology that is to be integrated into our security program, we
can use one that was previously mentioned within the ISO/IEC 27000 Series section.
ISO/IEC 27005 is an international standard for how risk management should be car-
ried out in the framework of an information security management system (ISMS). So
where the NIST risk methodology is mainly IT and operational focused, this methodol-
ogy deals with IT and the softer security issues (documentation, personnel security,
training, etc.) This methodology is to be integrated into an organizational security pro-
gram that addresses all of the security threats an organization could be faced with.
NOTE In-depth information on some of these various risk methodologies can be found in the article “Understanding Standards for Risk Management and Compliance” at http://www.logicalsecurity.com/resources/resources_articles.html.
Failure Modes and Effect Analysis (FMEA) is a method for determining functions,
identifying functional failures, and assessing the causes of failure and their failure ef-
fects through a structured process. It is commonly used in product development and
operational environments. The goal is to identify where something is most likely going
to break and either fix the flaws that could cause this issue or implement controls to
reduce the impact of the break. For example, you might choose to carry out an FMEA
on your organization’s network to identify single points of failure. These single points
of failure represent vulnerabilities that could directly affect the productivity of the net-
work as a whole. You would use this structured approach to identify these issues (vul-
nerabilities), assess their criticality (risk), and identify the necessary controls that
should be put into place (reduce risk).

The FMEA methodology uses failure modes (how something can break or fail) and
effects analysis (impact of that break or failure). The application of this process to a
chronic failure enables the determination of where exactly the failure is most likely to
occur. Think of it as being able to look into the future and locate areas that have the
potential for failure and then applying corrective measures to them before they do be-
come actual liabilities.
Following a specific order of steps yields the best results from an FMEA:
1. Start with a block diagram of a system or control.
2. Consider what happens if each block of the diagram fails.
3. Draw up a table in which failures are paired with their effects and an
evaluation of the effects.
4. Correct the design of the system, and adjust the table until the system is not
known to have unacceptable problems.
5. Have several engineers review the Failure Modes and Effect Analysis.
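As an illustration only, steps 1 through 4 above can be sketched as a tiny data structure and a check that flags unacceptable entries. Every block name, failure mode, and rating below is hypothetical, not taken from the text.

```python
# Hypothetical sketch of FMEA steps 1-4 (all names and ratings invented).
# Step 1: block diagram of the system -- each block and what it feeds or protects.
blocks = {
    "IPS content filter": "perimeter network",
    "central AV update engine": "all servers and workstations",
}

# Steps 2-3: consider how each block fails, and pair each failure with its
# effect and an evaluation of that effect (a simple 1-5 severity rating here).
fmea_table = [
    {"block": "IPS content filter", "failure_mode": "fails to close",
     "effect": "denial of service", "severity": 4},
    {"block": "central AV update engine", "failure_mode": "server down",
     "effect": "nodes miss signature updates", "severity": 3},
]

# Step 4: flag entries above the acceptable threshold so the design can be
# corrected and the table adjusted until no unacceptable problems remain.
ACCEPTABLE_SEVERITY = 3
unacceptable = [row for row in fmea_table
                if row["severity"] > ACCEPTABLE_SEVERITY]

for row in unacceptable:
    print(f'{row["block"]}: {row["failure_mode"]} -> {row["effect"]}')
# Step 5 (review by several engineers) is a human activity, not code.
```

The point of the sketch is only the shape of the work: a diagram, a failure/effect table, and an acceptance check repeated until nothing unacceptable remains.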
Table 2-6 is an example of how an FMEA can be carried out and documented. Al-
though most companies will not have the resources to do this level of detailed work for
every system and control, it can be carried out on critical functions and systems that can
drastically affect the company.
Prepared by: ____ Approved by: ____ Date: ____ Revision: ____

Item Identification | Function | Failure Mode | Failure Cause | Failure Effect on Component or Functional Assembly | Failure Effect on Next Higher Assembly | Failure Effect on System | Failure Detection Method
IPS application content filter | Inline perimeter protection | Fails to close | Traffic overload | Single point of failure; denial of service | IPS blocks ingress traffic stream | IPS is brought down | Health check status sent to console and e-mail to security administrator
Central antivirus signature update engine | Push updated signatures to all servers and workstations | Fails to provide adequate, timely protection against malware | Central server goes down | Individual node’s antivirus software is not updated | Network is infected with malware | Central server can be infected and/or infect other systems | Heartbeat status check sent to central console, and e-mail to network administrator
Fire suppression water pipes | Suppress fire in building 1 in 5 zones | Fails to close | Water in pipes freezes | None | Building 1 has no suppression agent available | Fire suppression system pipes break | Suppression sensors tied directly into fire system central console
Etc.
Table 2-6 How an FMEA Can Be Carried Out and Documented

FMEA was first developed for systems engineering. Its purpose is to examine the
potential failures in products and the processes involved with them. This approach
proved to be successful and has been more recently adapted for use in evaluating risk
management priorities and mitigating known threat vulnerabilities.
FMEA is used in assurance risk management because of the level of detail, variables,
and complexity that continues to rise as corporations understand risk at more granular
levels. This methodical way of identifying potential pitfalls is coming into play more as
the need for risk awareness—down to the tactical and operational levels—continues to
expand.
While FMEA is most useful as a survey method to identify major failure modes in a
given system, the method is not as useful in discovering complex failure modes that
may be involved in multiple systems or subsystems. A fault tree analysis usually proves
to be a more useful approach to identifying failures that can take place within more
complex environments and systems. Fault tree analysis follows this general process.
First, an undesired effect is taken as the root or top event of a tree of logic. Then, each
situation that has the potential to cause that effect is added to the tree as a series of
logic expressions. Fault trees are then labeled with actual numbers pertaining to failure
probabilities. This is typically done by using computer programs that can calculate the
failure probabilities from a fault tree.
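The probability roll-up just described can be sketched with the standard gate formulas: an AND gate fires only if all of its inputs fail, while an OR gate fires if any input fails (assuming independent events). The event names and probabilities below are hypothetical.

```python
# Minimal fault tree evaluation sketch (hypothetical events and probabilities).
# AND gate: all inputs must fail, so P = product of input probabilities.
# OR gate: any input failing suffices, so P = 1 - product of (1 - p).
from math import prod

def and_gate(probs):
    return prod(probs)

def or_gate(probs):
    return 1 - prod(1 - p for p in probs)

# Top event: "web service outage" occurs if the server fails OR
# both redundant network links fail at the same time.
p_server_failure = 0.02
p_link_failure = 0.10          # per link, assumed independent

p_both_links_down = and_gate([p_link_failure, p_link_failure])
p_outage = or_gate([p_server_failure, p_both_links_down])

print(round(p_both_links_down, 4))  # 0.01
print(round(p_outage, 4))           # 0.0298
```

Real fault tree tools automate exactly this arithmetic over much larger trees, which is why the text notes that computer programs are typically used.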
Figure 2-10 shows a simplistic fault tree and the different logic symbols used to
represent what must take place for a specific fault event to occur.
When setting up the tree, you must accurately list all the threats or faults that can occur within a system. The branches of the tree can be divided into general categories, such as physical threats, network threats, software threats, Internet threats, and component failure threats. Then, once all possible general categories are in place, you can trim them and effectively prune the branches from the tree that won’t apply to the system in question. For example, if a system is not connected to the Internet by any means, that general branch can be removed from the tree.
Figure 2-10 Fault tree and logic components

Some of the most common software failure events that can be explored through a
fault tree analysis are the following:
• False alarms
• Insufficient error handling
• Sequencing or order
• Incorrect timing outputs
• Valid but not expected outputs
Of course, because of the complexity of software and heterogeneous environments,
this is a very small list.
Just in case you do not have enough risk assessment methodologies to choose from,
you can also look at CRAMM (Central Computing and Telecommunications Agency
Risk Analysis and Management Method), which was created by the United Kingdom,
and its automated tools are sold by Siemens. It works in three distinct stages: define
objectives, assess risks, and identify countermeasures. It is really not fair to call it a
unique methodology, because it follows the basic structure of any risk methodology. It just packages everything (questionnaires, asset dependency modeling, assessment formulas, compliance reporting) in an automated tool format.
Similar to the “Security Frameworks” section that covered things such as ISO/IEC
27000, CobiT, COSO, Zachman, SABSA, ITIL, and Six Sigma, this section on risk meth-
odologies could seem like another list of confusing standards and guidelines. Remem-
ber that the methodologies have a lot of overlapping similarities because each one has
the specific goal of identifying things that could hurt the organization (vulnerabilities
and threats) so that those things can be addressed (risk reduced). What makes these
methodologies different from each other are their unique approaches and focuses. If
you need to deploy an organization-wide risk management program and integrate it
into your security program, you should follow the ISO/IEC 27005 or OCTAVE methods.
If you need to focus just on IT security risks during your assessment, you can follow
NIST 800-30. If you have a limited budget and need to carry out a focused assessment
on an individual system or process, the Facilitated Risk Analysis Process can be fol-
lowed. If you really want to dig into the details of how a security flaw within a specific
system could cause negative ramifications, you could use Failure Modes and Effect
Analysis or fault tree analysis. If you need to understand your company’s business risks,
then you can follow the AS/NZS 4360 approach.
So up to this point, we have accomplished the following items:
• Developed a risk management policy
• Developed a risk management team
• Identified company assets to be assessed
• Calculated the value of each asset
• Identified the vulnerabilities and threats that can affect the identified assets
• Chose a risk assessment methodology that best fits our needs

The next thing we need to figure out is if our risk analysis approach should be quan-
titative or qualitative in nature, which we will cover in the following section.
NOTE A risk assessment is used to gather data. A risk analysis examines the gathered data to produce results that can be acted upon.
Risk Analysis Approaches
One consultant said this threat could cost us $150,000, another consultant said it was red, and
the audit team assigned it a four. Should we be concerned or not?
The two approaches to risk analysis are quantitative and qualitative. A quantitative
risk analysis is used to assign monetary and numeric values to all elements of the risk
analysis process. Each element within the analysis (asset value, threat frequency, sever-
ity of vulnerability, impact damage, safeguard costs, safeguard effectiveness, uncertain-
ty, and probability items) is quantified and entered into equations to determine total
and residual risks. It is more of a scientific or mathematical approach to risk analysis
compared to qualitative. A qualitative risk analysis uses a “softer” approach to the data
elements of a risk analysis. It does not quantify that data, which means that it does not
assign numeric values to the data so that they can be used in equations. As an example,
Key Terms
• NIST 800-30, Risk Management Guide for Information Technology Systems A U.S. federal standard that is focused on IT risks.
• Facilitated Risk Analysis Process (FRAP) A focused, qualitative approach that carries out prescreening to save time and money.
• Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) A team-oriented approach that assesses organizational and IT risks through facilitated workshops.
• AS/NZS 4360 Australia and New Zealand business risk management assessment approach.
• ISO/IEC 27005 International standard for the implementation of a risk management program that integrates into an information security management system (ISMS).
• Failure Modes and Effect Analysis Approach that dissects a component into its basic functions to identify flaws and those flaws’ effects.
• Fault tree analysis Approach to map specific flaws to root causes in complex systems.
• CRAMM Central Computing and Telecommunications Agency Risk Analysis and Management Method.

the results of a quantitative risk analysis could be that the organization is at risk of los-
ing $100,000 if a buffer overflow was exploited on a web server, $25,000 if a database
was compromised, and $10,000 if a file server was compromised. A qualitative risk
analysis would not present these findings in monetary values, but would assign ratings
to the risks, as in Red, Yellow, and Green.
A quantitative analysis uses risk calculations that attempt to predict the level of
monetary losses and the probability for each type of threat. Qualitative analysis does
not use calculations. Instead, it is more opinion- and scenario-based and uses a rating
system to relay the risk criticality levels.
Quantitative and qualitative approaches have their own pros and cons, and each
applies more appropriately to some situations than others. Company management
and the risk analysis team, and the tools they decide to use, will determine which ap-
proach is best.
In the following sections we will dig into the depths of quantitative analysis and
then revisit the qualitative approach. We will then compare and contrast their attributes.
Automated Risk Analysis Methods
Collecting all the necessary data that needs to be plugged into risk analysis equations
and properly interpreting the results can be overwhelming if done manually. Several
automated risk analysis tools on the market can make this task much less painful and,
hopefully, more accurate. The gathered data can be reused, greatly reducing the time
required to perform subsequent analyses. The risk analysis team can also print reports
and comprehensive graphs to present to management.
NOTE Remember that vulnerability assessments are different from risk assessments. Vulnerability assessments just find the vulnerabilities (the holes). A risk assessment calculates the probability of the vulnerabilities being exploited and the associated business impact.
The objective of these tools is to reduce the manual effort of these tasks, perform
calculations quickly, estimate future expected losses, and determine the effectiveness
and benefits of the security countermeasures chosen. Most automatic risk analysis
products port information into a database and run several types of scenarios with dif-
ferent parameters to give a panoramic view of what the outcome will be if different
threats come to bear. For example, after such a tool has all the necessary information
inputted, it can be rerun several times with different parameters to compute the poten-
tial outcome if a large fire were to take place; the potential losses if a virus were to dam-
age 40 percent of the data on the main file server; how much the company would lose
if an attacker were to steal all the customer credit card information held in three data-
bases; and so on. Running through the different risk possibilities gives a company a
more detailed understanding of which risks are more critical than others, and thus
which ones to address first.

Steps of a Quantitative Risk Analysis
If we follow along with our previous sections in this chapter, we have already carried
out our risk assessment, which is the process of gathering data for a risk analysis. We
have identified the assets that are to be assessed, associated a value to each asset, and
identified the vulnerabilities and threats that could affect these assets. Now we need to
carry out the risk analysis portion, which means that we need to figure out how to in-
terpret all the data that was gathered during the assessment.
If we choose to carry out a quantitative analysis, then we are going to use mathematical equations for our data interpretation process. The equations most commonly used for this purpose are the single loss expectancy (SLE) and the annualized loss expectancy (ALE).
The SLE is a dollar amount that is assigned to a single event that represents the
company’s potential loss amount if a specific threat were to take place. The equation is
laid out as follows:
Asset Value × Exposure Factor (EF) = SLE
The exposure factor (EF) represents the percentage of loss a realized threat could
have on a certain asset. For example, if a data warehouse has the asset value of $150,000,
it can be estimated that if a fire were to occur, 25 percent of the warehouse would be
damaged, in which case the SLE would be $37,500:
Asset Value ($150,000) × Exposure Factor (25%) = $37,500
This tells us that the company can potentially lose $37,500 if a fire took place. But
we need to know what our annual potential loss is, since we develop and use our secu-
rity budgets on an annual basis. This is where the ALE equation comes into play. The
ALE equation is as follows:
SLE × Annualized Rate of Occurrence (ARO) = ALE
The annualized rate of occurrence (ARO) is the value that represents the estimated
frequency of a specific threat taking place within a 12-month timeframe. The range can
be from 0.0 (never) to 1.0 (once a year) to greater than 1 (several times a year) and
anywhere in between. For example, if the probability of a fire taking place and damag-
ing our data warehouse is once every ten years, the ARO value is 0.1.
So, if a fire taking place within a company’s data warehouse facility can cause $37,500
in damages, and the frequency (or ARO) of a fire taking place has an ARO value of 0.1
(indicating once in ten years), then the ALE value is $3,750 ($37,500 × 0.1 = $3,750).
The ALE value tells the company that if it wants to put in controls to protect the as-
set (warehouse) from this threat (fire), it can sensibly spend $3,750 or less per year to
provide the necessary level of protection. Knowing the real possibility of a threat and
how much damage, in monetary terms, the threat can cause is important in determin-
ing how much should be spent to try and protect against that threat in the first place. It
would not make good business sense for the company to spend more than $3,750 per
year to protect itself from this threat.
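The two equations above can be sketched in a few lines, using the chapter's own fire example as input:

```python
# Quantitative risk equations from the text:
#   SLE = asset value x exposure factor (EF)
#   ALE = SLE x annualized rate of occurrence (ARO)
def single_loss_expectancy(asset_value, exposure_factor):
    return asset_value * exposure_factor

def annual_loss_expectancy(sle, aro):
    return sle * aro

# Fire threat against a $150,000 data warehouse (the example in the text):
sle = single_loss_expectancy(150_000, 0.25)   # 25% of the asset damaged
ale = annual_loss_expectancy(sle, 0.1)        # one fire expected every 10 years

print(sle)  # 37500.0
print(ale)  # 3750.0
```

The ALE is the ceiling for sensible annual spending on controls against this threat: a countermeasure costing more than $3,750 per year would exceed the expected loss it prevents.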

Now that we have all these numbers, what do we do with them? Let’s look at the
example in Table 2-7, which shows the outcome of a quantitative risk analysis. With
this data, the company can make intelligent decisions on what threats must be ad-
dressed first because of the severity of the threat, the likelihood of it happening, and
how much could be lost if the threat were realized. The company now also knows how
much money it should spend to protect against each threat. This will result in good
business decisions, instead of just buying protection here and there without a clear
understanding of the big picture. Because the company has a risk of losing up to $6,500
if data is corrupted by virus infiltration, up to this amount of funds can be earmarked
toward providing antivirus software and methods to ensure that a virus attack will not
happen.
When carrying out a quantitative analysis, some people mistakenly think that the
process is purely objective and scientific because data is being presented in numeric
values. But a purely quantitative analysis is hard to achieve because there is still some
subjectivity when it comes to the data. How do we know that a fire will only take place
once every ten years? How do we know that the damage from a fire will be 25 percent
of the value of the asset? We don’t know these values exactly, but instead of just pulling
them out of thin air they should be based upon historical data and industry experience.
In quantitative risk analysis, we can do our best to provide all the correct information,
and by doing so we will come close to the risk values, but we cannot predict the future
and how much the future will cost us or the company.
Asset | Threat | Single Loss Expectancy (SLE) | Annualized Rate of Occurrence (ARO) | Annualized Loss Expectancy (ALE)
Facility | Fire | $230,000 | 0.1 | $23,000
Trade secret | Stolen | $40,000 | 0.01 | $400
File server | Failed | $11,500 | 0.1 | $1,150
Data | Virus | $6,500 | 1.0 | $6,500
Customer credit card info | Stolen | $300,000 | 3.0 | $900,000
Table 2-7 Breaking Down How SLE and ALE Values Are Used
Uncertainty
I just made all these numbers up.
Response: Well, they look impressive.
In risk analysis, uncertainty refers to the degree to which you lack confidence
in an estimate. This is expressed as a percentage, from 0 to 100 percent. If you
have a 30 percent confidence level in something, then it could be said you have a
70 percent uncertainty level. Capturing the degree of uncertainty when carrying
out a risk analysis is important, because it indicates the level of confidence the
team and management should have in the resulting figures.

Results of a Quantitative Risk Analysis
The risk analysis team should have clearly defined goals. The following is a short list of
what generally is expected from the results of a risk analysis:
• Monetary values assigned to assets
• Comprehensive list of all possible and significant threats
• Probability of the occurrence rate of each threat
• Loss potential the company can endure per threat in a 12-month time span
• Recommended controls
Although this list looks short, there is usually an incredible amount of detail under
each bullet item. This report will be presented to senior management, which will be con-
cerned with possible monetary losses and the necessary costs to mitigate these risks. Al-
though the reports should be as detailed as possible, there should be executive abstracts
so senior management can quickly understand the overall findings of the analysis.
Qualitative Risk Analysis
I have a feeling that we are secure.
Response: Great! Let’s all go home.
Another method of risk analysis is qualitative, which does not assign numbers and
monetary values to components and losses. Instead, qualitative methods walk through
different scenarios of risk possibilities and rank the seriousness of the threats and the
validity of the different possible countermeasures based on opinions. (A wide-sweeping analysis can include hundreds of scenarios.) Qualitative analysis techniques include
judgment, best practices, intuition, and experience. Examples of qualitative techniques
to gather data are Delphi, brainstorming, storyboarding, focus groups, surveys, ques-
tionnaires, checklists, one-on-one meetings, and interviews. The risk analysis team will
determine the best technique for the threats that need to be assessed, as well as the
culture of the company and individuals involved with the analysis.
The team that is performing the risk analysis gathers personnel who have experi-
ence and education on the threats being evaluated. When this group is presented with
a scenario that describes threats and loss potential, each member responds with their
gut feeling and experience on the likelihood of the threat and the extent of damage that
may result.
A scenario of each identified vulnerability and how it would be exploited is ex-
plored. The “expert” in the group, who is most familiar with this type of threat, should
review the scenario to ensure it reflects how an actual threat would be carried out. Safe-
guards that would diminish the damage of this threat are then evaluated, and the sce-
nario is played out for each safeguard. The exposure possibility and loss possibility can
be ranked as high, medium, or low, or on a scale of 1 to 5 or 1 to 10. A common qualitative
risk matrix is shown in Figure 2-11. Once the selected personnel rank the possibility of
a threat happening, the loss potential, and the advantages of each safeguard, this infor-
mation is compiled into a report and presented to management to help it make better
decisions on how best to implement safeguards into the environment. The benefits of

CISSP All-in-One Exam Guide
90
this type of analysis are that communication must happen among team members to
rank the risks, safeguard strengths, and identify weaknesses, and the people who know
these subjects the best provide their opinions to management.
Let’s look at a simple example of a qualitative risk analysis.
The risk analysis team presents a scenario explaining the threat of a hacker accessing
confidential information held on the five file servers within the company. The risk
analysis team then distributes the scenario in a written format to a team of five people
(the IT manager, database administrator, application programmer, system operator,
and operational manager), who are also given a sheet to rank the threat’s severity, loss
potential, and each safeguard’s effectiveness, with a rating of 1 to 5, 1 being the least
severe, effective, or probable. Table 2-8 shows the results.
Figure 2-11 Qualitative risk matrix. Likelihood versus consequences (impact).
Threat = Hacker Accessing Confidential Information
(Columns: severity of threat; probability of threat taking place; potential loss to the company; effectiveness of firewall; effectiveness of intrusion detection system; effectiveness of honeypot)

                         Severity  Probability  Loss  Firewall  IDS  Honeypot
IT manager                   4          2         4       4      3      2
Database administrator       4          4         4       3      4      1
Application programmer       2          3         3       4      2      1
System operator              3          4         3       4      2      1
Operational manager          5          4         4       4      4      2
Results                     3.6        3.4       3.6     3.8     3     1.4

Table 2-8 Example of a Qualitative Analysis

Chapter 2: Information Security Governance and Risk Management
This data is compiled and inserted into a report and presented to management. When management is presented with this information, it will see that its staff (or a chosen set of staff) feels that purchasing a firewall will protect the company from this threat better than purchasing an intrusion detection system or setting up a honeypot. These results reflect only one threat; management would review the severity, probability, and loss potential of every identified threat in the same way so it knows which threats pose the greatest risk and should be addressed first.
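The compilation step above is simple averaging. The following is an illustrative sketch only (not a tool the book prescribes); the rater names and 1-to-5 scores are taken from the Table 2-8 example, and the computed averages reproduce its Results row.

```python
# Averaging the five raters' 1-5 opinion scores, as in Table 2-8.
ratings = {
    "IT manager":             [4, 2, 4, 4, 3, 2],
    "Database administrator": [4, 4, 4, 3, 4, 1],
    "Application programmer": [2, 3, 3, 4, 2, 1],
    "System operator":        [3, 4, 3, 4, 2, 1],
    "Operational manager":    [5, 4, 4, 4, 4, 2],
}
columns = ["severity", "probability", "potential loss",
           "firewall", "IDS", "honeypot"]

# One average per column, rounded to one decimal place.
averages = [
    round(sum(scores[i] for scores in ratings.values()) / len(ratings), 1)
    for i in range(len(columns))
]
for name, avg in zip(columns, averages):
    print(f"{name}: {avg}")
```

The printed averages (3.6, 3.4, 3.6, 3.8, 3.0, 1.4) match the Results row of Table 2-8.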
Quantitative vs. Qualitative
So which method should we use?
Each method has its advantages and disadvantages, some of which are outlined in
Table 2-9 for purposes of comparison.
The risk analysis team, management, risk analysis tools, and culture of the company
will dictate which approach—quantitative or qualitative—should be used. The goal of
either method is to estimate a company’s real risk and to rank the severity of the threats
so the correct countermeasures can be put into place within a practical budget.
Table 2-9 lists some of the positive aspects of the qualitative and quantitative approaches. However, each approach also has drawbacks. When deciding between a qualitative and a quantitative approach, the following points should be considered.
Qualitative Cons
• The assessments and results are subjective and opinion-based.
• Eliminates the opportunity to create a dollar value for cost/benefit discussions.
• Hard to develop a security budget from the results because monetary values
are not used.
• Standards are not available. Each vendor has its own way of interpreting the
processes and their results.
The Delphi Technique
The oracle Delphi told me that everyone agrees with me.
Response: Okay, let’s do this again—anonymously.
The Delphi technique is a group decision method used to ensure that each member gives an honest opinion of what he or she thinks the result of a particular threat will be. This avoids a group of individuals feeling pressured to go along with others' thought processes and enables them to participate in an independent and anonymous way. Each member of the group provides his or her opinion of a certain threat and turns it in to the team that is performing the analysis. The results are compiled and distributed to the group members, who then write down their comments anonymously and return them to the analysis group. The comments are compiled and redistributed for more comments until a consensus is formed. This method is used to obtain an agreement on cost, loss values, and probabilities of occurrence without individuals having to agree verbally.
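The round-based convergence can be caricatured as a toy loop. This is purely illustrative: Delphi is a human process, and `delphi_rounds` is a hypothetical helper, not a standard algorithm. Here each member anonymously revises an estimate halfway toward the group median each round, until the spread falls within a tolerance.

```python
# Toy sketch of Delphi-style rounds: anonymous estimates are
# compiled, redistributed, and revised until consensus forms.
from statistics import median

def delphi_rounds(estimates, tolerance, max_rounds=20):
    """Return (consensus_value, rounds_taken) for a list of estimates."""
    for round_no in range(1, max_rounds + 1):
        center = median(estimates)
        if max(estimates) - min(estimates) <= tolerance:
            return center, round_no  # spread is small enough: consensus
        # Each member anonymously moves halfway toward the group view.
        estimates = [e + (center - e) / 2 for e in estimates]
    return median(estimates), max_rounds

# Four members' initial loss estimates, converging to within $500.
consensus, rounds = delphi_rounds([10_000, 4_000, 7_000, 12_000], tolerance=500)
print(consensus, rounds)  # prints: 8500.0 5
```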

Quantitative Cons
• Calculations can be complex. Can management understand how these values
were derived?
• Without automated tools, this process is extremely laborious.
• More preliminary work is needed to gather detailed information about the
environment.
• Standards are not available. Each vendor has its own way of interpreting the
processes and their results.
NOTE Since a purely quantitative assessment is close to impossible and a
purely qualitative process does not provide enough statistical data for financial
decisions, these two risk analysis approaches can be used in a hybrid approach.
Quantitative evaluation can be used for tangible assets (monetary values), and
a qualitative assessment can be used for intangible assets (priority values).
Protection Mechanisms
Okay, so we know we are at risk, and we know the probability of it happening. Now, what do
we do?
Response: Run.
The next step is to identify the current security mechanisms and evaluate their effectiveness.

Because a company faces such a wide range of threats (not just computer viruses and attackers), each threat type must be addressed and planned for individually. Access control mechanisms used as security safeguards are discussed in Chapter 3. Software applications and data malfunction considerations are covered in Chapters 4 and 10. Site location, fire protection, site construction, power loss, and equipment malfunctions are examined in detail in Chapter 5. Telecommunication and networking issues are analyzed and presented in Chapter 6. Business continuity and disaster recovery concepts are addressed in Chapter 8. All of these subjects have their own associated risks and planning requirements.

This section addresses identifying and choosing the right countermeasures for computer systems. It describes the best attributes to look for and the different cost scenarios to investigate when comparing types of countermeasures. The end product of this analysis should demonstrate why the selected control is the most advantageous to the company.

Table 2-9 Quantitative versus Qualitative Characteristics

Quantitative: requires more complex calculations; is easier to automate and evaluate; is used in risk management performance tracking; allows for cost/benefit analysis; uses independently verifiable and objective metrics; shows clear-cut losses that can be accrued within one year's time.

Qualitative: requires no calculations; involves a high degree of guesswork; provides general areas and indications of risk; provides the opinions of the individuals who know the processes best.
Control Selection
A security control must make good business sense, meaning it is cost-effective (its benefit outweighs its cost). This requires another type of analysis: a cost/benefit analysis. A commonly used cost/benefit calculation for a given safeguard (control) is:

(ALE before implementing safeguard) – (ALE after implementing safeguard) – (annual cost of safeguard) = value of safeguard to the company

For example, if the ALE of the threat of a hacker bringing down a web server is $12,000 prior to implementing the suggested safeguard, the ALE is $3,000 after implementing the safeguard, and the annual cost of maintenance and operation of the safeguard is $650, then the value of this safeguard to the company is $8,350 each year.
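The formula above is just three-term arithmetic. The helper below is a hypothetical sketch (not from the book), plugging in the example's figures:

```python
# Cost/benefit value of a safeguard:
# (ALE before) - (ALE after) - (annual cost of safeguard)
def safeguard_value(ale_before, ale_after, annual_cost):
    return ale_before - ale_after - annual_cost

# Figures from the web-server example above.
value = safeguard_value(ale_before=12_000, ale_after=3_000, annual_cost=650)
print(value)  # 8350 -- the safeguard's annual value to the company
```

A negative result would mean the safeguard costs more than the loss it prevents, which argues for accepting or otherwise handling the risk instead.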
The cost of a countermeasure is more than just the amount filled in on the purchase order. The following items should be considered and evaluated when deriving the full cost of a countermeasure:
• Product costs
• Design/planning costs
• Implementation costs
• Environment modifications
• Compatibility with other countermeasures
• Maintenance requirements
• Testing requirements
• Repair, replacement, or update costs
• Operating and support costs
• Effects on productivity
• Subscription costs
• Extra man-hours for monitoring and responding to alerts
• Beer for the headaches that this new tool will bring about

Many companies have gone through the pain of purchasing new security products without understanding that they will need staff to maintain those products. Although tools automate tasks, many companies were not even carrying out these tasks before, so the tools do not save on man-hours but in many cases require more of them. For example, Company A decides that to protect many of its resources, purchasing an IDS is warranted. So, the company pays $5,500 for an IDS. Is that the total cost? Nope. This software should be tested in an environment that is segmented from the production environment to uncover any unexpected activity. After this testing is complete and the security group feels it is safe to insert the IDS into its production environment, the security group must install the monitoring management software, install the sensors, and properly direct the communication paths from the sensors to the management console. The security group may also need to reconfigure the routers to redirect traffic flow, and it definitely needs to ensure that users cannot access the IDS management console. Finally, the security group should configure a database to hold all attack signatures and then run simulations.
Costs associated with an IDS alert response should most definitely be considered.
Now that Company A has an IDS in place, security administrators may need additional
alerting equipment such as smart phones. And then there are the time costs associated
with a response to an IDS event.
Anyone who has worked in an IT group knows that some adverse reaction almost always takes place in this type of scenario. Network performance can take an unacceptable hit after installing a product if it is an inline or proactive product. Users may no longer be able to access the Unix server for some mysterious reason. The IDS vendor may not have explained that two more service patches are necessary for the whole thing to work correctly. Staff time will need to be allocated for training and for responding to all of the true-positive and false-positive alerts the new IDS sends out.

So, for example, the cost of this countermeasure could be $5,500 for the product; $2,500 for training; $3,400 for the lab and testing time; $2,600 for the loss in user productivity once the product is introduced into production; and $4,000 in labor for router reconfiguration, product installation, troubleshooting, and installation of the two service patches. The real cost of this countermeasure is $18,000. If the total potential loss was calculated at $9,000, the company spent 100 percent more than the potential loss when applying this countermeasure for the identified risk. Some of these costs may be hard or impossible to identify before they are incurred, but an experienced risk analyst would account for many of these possibilities.
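Tallying the line items above makes the gap explicit. A minimal sketch, using only the figures given in the example:

```python
# Line-item costs for the hypothetical IDS deployment above.
ids_costs = {
    "product": 5_500,
    "training": 2_500,
    "lab and testing time": 3_400,
    "lost user productivity": 2_600,
    "labor (reconfig, install, patches)": 4_000,
}
total_cost = sum(ids_costs.values())
potential_loss = 9_000  # total potential loss for the identified risk

print(f"Real cost: ${total_cost:,}")                           # Real cost: $18,000
print(f"Over the loss by: {total_cost / potential_loss - 1:.0%}")  # Over the loss by: 100%
```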
Functionality and Effectiveness of Countermeasures
The countermeasure doesn’t work, but it has a fun interface.
Response: Good enough.
The risk analysis team must evaluate the safeguard's functionality and effectiveness. When selecting a safeguard, some attributes are more favorable than others. Table 2-10 lists and describes attributes that should be considered before purchasing and committing to a security protection mechanism.

Safeguards can provide deterrence if they are highly visible. This tells potential evildoers that adequate protection is in place and that they should move on to an easier target. Although the safeguard may be highly visible, attackers should not be able to discover the way it works, which would enable them to attempt to modify the safeguard or to get around the protection mechanism. If users know how to disable the antivirus program that is taking up CPU cycles or know how to bypass a proxy server to get to the Internet without restrictions, they will do so.
Modular: It can be installed or removed from an environment without adversely affecting other mechanisms.
Provides uniform protection: A security level is applied to all mechanisms it is designed to protect in a standardized method.
Provides override functionality: An administrator can override the restriction if necessary.
Defaults to least privilege: When installed, it defaults to a lack of permissions and rights instead of installing with everyone having full control.
Independent of safeguards and the asset it is protecting: The safeguard can be used to protect different assets, and different assets can be protected by different safeguards.
Flexibility and security: The more security the safeguard provides, the better. This functionality should come with flexibility, which enables you to choose different functions instead of all or none.
User interaction: Does not panic users.
Clear distinction between user and administrator: A user should have fewer permissions when it comes to configuring or disabling the protection mechanism.
Minimum human intervention: When humans have to configure or modify controls, this opens the door to errors. The safeguard should require the least possible amount of input from humans.
Asset protection: The asset is still protected even if the countermeasure needs to be reset.
Easily upgraded: Software continues to evolve, and updates should be able to happen painlessly.
Auditing functionality: There should be a mechanism that is part of the safeguard that provides minimum and/or verbose auditing.
Minimizes dependence on other components: The safeguard should be flexible and not have strict requirements about the environment into which it will be installed.
Easily usable, acceptable, and tolerated by personnel: If the safeguard provides barriers to productivity or adds extra steps to simple tasks, users will not tolerate it.
Must produce output in usable and understandable format: Important information should be presented in a format easy for humans to understand and use for trend analysis.
Must be able to reset safeguard: The mechanism should be able to be reset and returned to its original configurations and settings without affecting the system or asset it is protecting.
Testable: The safeguard should be able to be tested in different environments under different situations.

Table 2-10 Characteristics to Seek When Obtaining Safeguards

Putting It Together
To perform a risk analysis, a company first decides what assets must be protected and to what extent. It also indicates the amount of money that can go toward protecting specific assets. Next, it must evaluate the functionality of the available safeguards and determine which ones would be most beneficial for the environment. Finally, the company needs to appraise and compare the costs of the safeguards. These steps and the resulting information enable management to make the most intelligent and informed decisions about selecting and purchasing countermeasures.
Total Risk vs. Residual Risk
The reason a company implements countermeasures is to reduce its overall risk to an
acceptable level. As stated earlier, no system or environment is 100 percent secure,
which means there is always some risk left over to deal with. This is called residual risk.
Does not introduce other compromises: The safeguard should not provide any covert channels or back doors.
System and user performance: System and user performance should not be greatly affected.
Universal application: The safeguard can be implemented across the environment and does not require many, if any, exceptions.
Proper alerting: Thresholds should be able to be set as to when to alert personnel of a security breach, and this type of alert should be acceptable.
Does not affect assets: The assets in the environment should not be adversely affected by the safeguard.

Table 2-10 Characteristics to Seek When Obtaining Safeguards (continued)
We Are Never Done
Only by reassessing the risks on a periodic basis can a statement of safeguard performance be trusted. If the risk has not changed, and the safeguards implemented are functioning in good order, then it can be said that the risk is being properly mitigated. Regular information risk management (IRM) monitoring will support the information security risk ratings.
Vulnerability analysis and continued asset identification and valuation are
also important tasks of risk management monitoring and performance. The cycle
of continued risk analysis is a very important part of determining whether the
safeguard controls that have been put in place are appropriate and necessary
to safeguard the assets and environment.

Residual risk is different from total risk, which is the risk a company faces if it chooses not to implement any type of safeguard. A company may choose to take on total risk if the cost/benefit analysis results indicate this is the best course of action. For example, if there is a small likelihood that a company's web servers can be compromised, and the necessary safeguards to provide a higher level of protection cost more than the potential loss in the first place, the company will choose not to implement the safeguard, choosing to deal with the total risk.
There is an important difference between total risk and residual risk and which type
of risk a company is willing to accept. The following are conceptual formulas:
threats × vulnerability × asset value = total risk
(threats × vulnerability × asset value) × controls gap = residual risk
You may also see these concepts illustrated as the following:
total risk – countermeasures = residual risk
NOTE The previous formulas are not constructs you can actually plug
numbers into. They are instead used to illustrate the relation of the different
items that make up risk in a conceptual manner. This means no multiplication
or mathematical functions actually take place. It is a means of understanding
what items are involved when defining either total or residual risk.
During a risk assessment, the threats and vulnerabilities are identified. The possibility of a vulnerability being exploited is multiplied by the value of the assets being assessed, which results in the total risk. Once the controls gap (the protection the control cannot provide) is factored in, the result is the residual risk. Implementing countermeasures is a way of mitigating risks. Because no company can remove all threats, there will always be some residual risk. The question is what level of risk the company is willing to accept.
Handling Risk
Now that we know about the risk, what do we do with it?
Response: Hide it behind that plant.
Once a company knows the amount of total and residual risk it is faced with, it
must decide how to handle it. Risk can be dealt with in four basic ways: transfer it,
avoid it, reduce it, or accept it.
Many types of insurance are available to companies to protect their assets. If a company decides the total risk is too high to gamble with, it can purchase insurance, which would transfer the risk to the insurance company.
If a company decides to terminate the activity that is introducing the risk, this is known as risk avoidance. For example, if a company allows employees to use instant messaging (IM), there are many risks surrounding this technology. The company could decide not to allow any IM activity by its users because there is not a strong enough business need for its continued use. Discontinuing this service is an example of risk avoidance.

Another approach is risk mitigation, where the risk is reduced to a level considered acceptable enough to continue conducting business. The implementation of firewalls, training, and intrusion detection/prevention systems or other control types represents risk mitigation efforts.
The last approach is to accept the risk, which means the company understands the
level of risk it is faced with, as well as the potential cost of damage, and decides to just
live with it and not implement the countermeasure. Many companies will accept risk
when the cost/benefit ratio indicates that the cost of the countermeasure outweighs the
potential loss value.
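That accept-or-mitigate decision can be caricatured with the cost/benefit numbers alone. The following is a hypothetical sketch; real decisions also weigh the noncost factors discussed next.

```python
# Accept the risk when the countermeasure's annual cost exceeds
# the annual loss it would prevent; otherwise, mitigate.
def handle_risk(ale_before, ale_after, control_cost):
    prevented_loss = ale_before - ale_after
    return "mitigate" if prevented_loss > control_cost else "accept"

print(handle_risk(12_000, 3_000, 650))     # mitigate: $9,000 prevented for $650
print(handle_risk(12_000, 11_000, 4_000))  # accept: $1,000 prevented for $4,000
```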
A crucial issue with risk acceptance is understanding why this is the best approach for a specific situation. Unfortunately, many people in organizations today accept risk without fully understanding what they are accepting. This usually has to do with the relative newness of risk management in the security field and the lack of education and experience in the personnel who make risk decisions. When business managers are charged with the responsibility of dealing with risk in their department, most of the time they will accept whatever risk is put in front of them because their real goals pertain to getting a project finished and out the door. They don't want to be bogged down by this silly and irritating security stuff.
Risk acceptance should be based on several factors. For example, is the potential loss lower than the cost of the countermeasure? Can the organization deal with the "pain" that will come with accepting this risk? This second consideration is not purely a cost decision, but may entail noncost issues surrounding the decision. For example, if we accept this risk, we must add three more steps to our production process. Does that make sense for us? Or if we accept this risk, more security incidents may arise from it, and are we prepared to handle those?
The individual or group accepting risk must also understand the potential visibility of this decision. Let's say it has been determined that the company does not need to protect customers' first names, but it does have to protect other items such as Social Security numbers, account numbers, and so on. The company's current activities are in compliance with the regulations and laws, but what if your customers find out you are not protecting their names and, because of their lack of education on the matter, they associate this with identity fraud? The company may not be able to handle this potential reputation hit, even if it is doing everything it is supposed to be doing. Perceptions of a company's customer base are not always rooted in fact, but the possibility that customers will move their business to another company is real, and your company must take it into account.
Figure 2-12 shows how a risk management program can be set up, tying together all the concepts covered in this section.
Key Terms
• Quantitative risk analysis Assigning monetary and numeric values to all the data elements of a risk assessment.
• Qualitative risk analysis Opinion-based method of analyzing risk with the use of scenarios and ratings.
• Single loss expectancy One instance of an expected loss if a specific vulnerability is exploited and how it affects a single asset. Asset Value × Exposure Factor = SLE.
• Annualized loss expectancy Annual expected loss if a specific vulnerability is exploited and how it affects a single asset. SLE × ARO = ALE.
• Uncertainty analysis Assigning confidence level values to data elements.
• Delphi method Data collection method that happens in an anonymous fashion.
• Cost/benefit analysis Calculating the value of a control. (ALE before implementing a control) – (ALE after implementing a control) – (annual cost of control) = value of control.
• Functionality versus effectiveness of control Functionality is what a control does; effectiveness is how well the control does it.
• Total risk Full risk amount before a control is put into place. Threats × vulnerabilities × assets = total risk.
• Residual risk Risk that remains after implementing a control. Threats × vulnerabilities × assets × (control gap) = residual risk.
• Handling risk Accept, transfer, mitigate, avoid.

Figure 2-12 How a risk management program can be set up
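The expectancy terms above chain together. The numbers below are illustrative only (not from the book); the formulas are the ones given in the key terms:

```python
# SLE = Asset Value x Exposure Factor; ALE = SLE x ARO;
# control value = (ALE before) - (ALE after) - (annual control cost).
asset_value = 100_000     # AV: value of the asset at risk
exposure_factor = 0.25    # EF: fraction of the asset lost in one incident
aro = 0.5                 # ARO: expected incidents per year (one every two years)

sle = asset_value * exposure_factor   # single loss expectancy: 25,000
ale = sle * aro                       # annualized loss expectancy: 12,500

ale_after_control = 2_500
annual_control_cost = 4_000
control_value = ale - ale_after_control - annual_control_cost  # 6,000

print(sle, ale, control_value)
```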

Outsourcing
I am sure that company, based in another country that we have never visited or even heard of, will protect our most sensitive secrets just fine.
Response: Yeah, they seem real nice.
More organizations are outsourcing business functions so they can focus on their core competencies. Companies use hosting companies to maintain websites and e-mail servers, service providers for various telecommunication connections, disaster recovery companies for co-location capabilities, cloud computing providers for infrastructure or application services, developers for software creation, and security companies to carry out vulnerability management. It is important to realize that while you can outsource functionality, you cannot outsource risk. When your company uses third-party companies for these various services, your company can still be held ultimately responsible if something like a data breach takes place. We will go more in depth into these types of issues from a legal aspect (downstream liabilities) in Chapter 9, but let's look at some things an organization should do to reduce its risk when it comes to outsourcing:
• Review service provider’s security program
• Conduct onsite inspection and interviews
• Review contracts to ensure security and protection levels are agreed upon
• Ensure service level agreements are in place
• Review internal and external audit reports and third-party reviews
• Review references and communicate with former and existing customers
• Review Better Business Bureau reports
• Ensure they have a Business Continuity Plan (BCP) in place
• Implement a nondisclosure agreement (NDA)
• Understand provider’s legal and regulatory requirements
• Require a Statement on Auditing Standards (SAS) 70 audit report
NOTE SAS 70 is an internal controls audit carried out by a third-party
auditing organization.
Outsourcing is prevalent within organizations today, yet it is commonly forgotten when it comes to security and compliance requirements. It may be economical to outsource certain functionality, but if doing so allows security breaches to take place, it can turn out to be a very costly decision.

Policies, Standards, Baselines, Guidelines,
and Procedures
The risk assessment is done. Let’s call it a day.
Response: Nope, there’s more to do.
Computers and the information processed on them usually have a direct relationship with a company's critical missions and objectives. Because of this level of importance, senior management should make protecting these items a high priority and provide the necessary support, funds, time, and resources to ensure that systems, networks, and information are protected in the most logical and cost-effective manner possible. A comprehensive management approach must be developed to accomplish these goals successfully, because everyone within an organization brings a different set of personal values and experiences to the environment with regard to security. It is important to make sure everyone treats security at a level that meets the organization's needs, as determined by applicable laws, regulations, business goals, and the risk assessments of the organization's environment.
For a company’s security plan to be successful, it must start at the top level and be
useful and functional at every single level within the organization. Senior management
needs to define the scope of security and identify and decide what must be protected and
to what extent. Management must understand the regulations, laws, and liability issues
it is responsible for complying with regarding security and ensure that the company as a
whole fulfills its obligations. Senior management also must determine what is expected
from employees and what the consequences of noncompliance will be. These decisions
should be made by the individuals who will be held ultimately responsible if something
goes wrong. But it is a common practice to bring in the expertise of the security officers
to collaborate in ensuring that sufficient policies and controls are being implemented to
achieve the goals being set and determined by senior management.
A security program contains all the pieces necessary to provide overall protection to a corporation and lays out a long-term security strategy. A security program's documentation should be made up of security policies, procedures, standards, guidelines, and baselines. The human resources and legal departments must be involved in the development and enforcement of the rules and requirements laid out in these documents.

The language, level of detail, formality of the documents, and supporting mechanisms should be examined by the policy developers. Security policies, standards, guidelines, procedures, and baselines must be developed with a realistic view to be most effective. Highly structured organizations usually follow documentation in a more uniform way. Less structured organizations may need more explanation and emphasis to promote compliance. The more detailed the rules are, the easier it is to know when one has been violated. However, overly detailed documentation and rules can prove more burdensome than helpful. The business type, its culture, and its goals must be evaluated to make sure the proper language is used when writing security documentation.

There are many legal liability issues surrounding security documentation. If your organization has a policy outlining how it is supposed to be protecting sensitive information and it is found that your organization is not practicing what it preaches, criminal charges and civil suits could be filed and successfully prosecuted. It is important that an organization's security does not just look good on paper, but holds up in practice as well.
Security Policy
Oh look, this paper tells us what we need to do. I am going to put smiley-face stickers all over it.
A security policy is an overall general statement produced by senior management (or a selected policy board or committee) that dictates what role security plays within the organization. A security policy can be an organizational policy, an issue-specific policy, or a system-specific policy. In an organizational security policy, management establishes how a security program will be set up, lays out the program's goals, assigns responsibilities, shows the strategic and tactical value of security, and outlines how enforcement should be carried out. This policy must address relevant laws, regulations, and liability issues, and how they are to be satisfied. The organizational security policy provides scope and direction for all future security activities within the organization. It also describes the amount of risk senior management is willing to accept.
The organizational security policy has several important characteristics that must be
understood and implemented:
• Business objectives should drive the policy’s creation, implementation, and
enforcement. The policy should not dictate business objectives.
• It should be an easily understood document that is used as a reference point
for all employees and management.
• It should be developed and used to integrate security into all business
functions and processes.
• It should be derived from and support all legislation and regulations
applicable to the company.
• It should be reviewed and modified as a company changes, such as through
adoption of a new business model, a merger with another company, or change
of ownership.
• Each iteration of the policy should be dated and under version control.
• The units and individuals who are governed by the policy must have easy
access to it. Policies are commonly posted on portals on an intranet.
• It should be created with the intention of having the policies in place for
several years at a time. This will help ensure policies are forward-thinking
enough to deal with potential changes that may arise.
• The level of professionalism in the presentation of the policies reinforces their
importance as well as the need to adhere to them.
• It should not contain language that isn’t readily understood by everyone. Use
clear and declarative statements that are easy to understand and adopt.

Chapter 2: Information Security Governance and Risk Management
103
• It should be reviewed on a regular basis and adapted to correct incidents that
have occurred since the last review and revision of the policies.
A process for dealing with those who choose not to comply with the security poli-
cies must be developed and enforced so there is a structured method of response to
noncompliance. This establishes a process that others can understand and thus recog-
nize not only what is expected of them, but also what they can expect as a response to
their noncompliance.
Organizational policies are also referred to as master security policies. An organiza-
tion will have many policies, and they should be set up in a hierarchical manner. The
organizational (master) policy is at the highest level, and then there are policies under-
neath it that address security issues specifically. These are referred to as issue-specific
policies.
An issue-specific policy, also called a functional policy, addresses specific security is-
sues that management feels need more detailed explanation and attention to make sure
a comprehensive structure is built and all employees understand how they are to com-
ply with these security issues. For example, an organization may choose to have an e-
mail security policy that outlines what management can and cannot do with employees’
e-mail messages for monitoring purposes, that specifies which e-mail functionality em-
ployees can or cannot use, and that addresses specific privacy issues.
As a more specific example, an e-mail policy might state that management can read
any employee’s e-mail messages that reside on the mail server, but not when they reside
on the user’s workstation. The e-mail policy might also state that employees cannot use
e-mail to share confidential information or pass inappropriate material, and that they
may be subject to monitoring of these actions. Before they use their e-mail clients, em-
ployees should be asked to confirm that they have read and understand the e-mail
policy, either by signing a confirmation document or clicking Yes in a confirmation
dialog box. The policy provides direction and structure for the staff by indicating what
they can and cannot do. It informs the users of the expectations of their actions, and it
provides liability protection in case an employee cries “foul” for any reason dealing
with e-mail use.
NOTE A policy needs to be technology- and solution-independent. It must
outline the goals and missions, but not tie the organization to specific ways of
accomplishing them.
A common hierarchy of security policies is outlined here, which illustrates the rela-
tionship between the master policy and the issue-specific policies that support it:
• Organizational policy
• Acceptable use policy
• Risk management policy
• Vulnerability management policy
• Data protection policy
• Access control policy

• Business continuity policy
• Log aggregation and auditing policy
• Personnel security policy
• Physical security policy
• Secure application development policy
• Change control policy
• E-mail policy
• Incident response policy
A system-specific policy presents the management’s decisions that are specific to the
actual computers, networks, and applications. An organization may have a system-spe-
cific policy outlining how a database containing sensitive information should be pro-
tected, who can have access, and how auditing should take place. It may also have a
system-specific policy outlining how laptops should be locked down and managed.
This policy type is directed to one or a group of similar systems and outlines how they
should be protected.
Policies are written in broad terms to cover many subjects in a general fashion.
Much more granularity is needed to actually support the policy, and this happens with
the use of procedures, standards, guidelines, and baselines. The policy provides the
foundation. The procedures, standards, guidelines, and baselines provide the security
framework. And the necessary security controls (administrative, technical, and physi-
cal) are used to fill in the framework to provide a full security program.
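The documentation chain described above can be sketched as a simple hierarchy. This is an illustrative model only; the document names and contents are hypothetical, not prescribed by the book or any standard:

```python
# Illustrative sketch of the security documentation chain: the policy is the
# strategic foundation, and standards, guidelines, procedures, and baselines
# provide the tactical framework beneath it. All entries are hypothetical.

policy_framework = {
    "organizational_policy": "Confidential information must be properly protected.",
    "issue_specific_policies": {
        "data_protection": {
            "standards": ["Customer data in databases is encrypted while stored."],
            "guidelines": ["Prefer vendor-supported encryption modules where possible."],
            "procedures": ["1. Enable transparent encryption.", "2. Verify with a test query."],
            "baselines": ["Database servers pass the approved encrypted-storage checklist."],
        }
    },
}

def supporting_documents(framework, issue):
    """Count the tactical documents that support the strategic policy for an issue."""
    docs = framework["issue_specific_policies"][issue]
    return {kind: len(items) for kind, items in docs.items()}

print(supporting_documents(policy_framework, "data_protection"))
```

The point of the structure is that the broad policy statement never changes when a procedure does; only the lower layers are revised.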
Types of Policies
Policies generally fall into one of the following categories:
• Regulatory: This type of policy ensures that the organization is
following standards set by specific industry regulations (HIPAA, GLBA,
SOX, PCI-DSS, etc.). It is very detailed and specific to a type of industry.
It is used in financial institutions, healthcare facilities, public utilities,
and other government-regulated industries.
• Advisory: This type of policy strongly advises employees as to which
types of behaviors and activities should and should not take place
within the organization. It also outlines possible ramifications if
employees do not comply with the established behaviors and activities.
This policy type can be used, for example, to describe how to handle
medical or financial information.
• Informative: This type of policy informs employees of certain topics.
It is not an enforceable policy, but rather one that teaches individuals
about specific issues relevant to the company. It could explain how the
company interacts with partners, the company’s goals and mission, and
a general reporting structure in different situations.

Standards
Some things you just gotta do.
Standards refer to mandatory activities, actions, or rules. Standards can give a policy
its support and reinforcement in direction. Organizational security standards may spec-
ify how hardware and software products are to be used. They can also be used to indi-
cate expected user behavior. They provide a means to ensure that specific technologies,
applications, parameters, and procedures are implemented in a uniform (standard-
ized) manner across the organization. An organizational standard may require that all
employees wear their company identification badges at all times, that they challenge
unknown individuals about their identity and purpose for being in a specific area, or
that they encrypt confidential information. These rules are compulsory within a com-
pany, and if they are going to be effective, they must be enforced.
An organization may have an issue-specific data classification policy that states “All
confidential data must be properly protected.” It would need a supporting data protec-
tion standard outlining how this protection should be implemented and followed, as
in “Confidential information must be protected with AES256 at rest and in transit.”
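As a hypothetical illustration of how a standard like this can be checked mechanically (the asset records and field names here are invented for the example, not taken from the book):

```python
# Hypothetical sketch: checking assets against the standard
# "Confidential information must be protected with AES256 at rest and in transit."
# Asset records and field names are invented for illustration.

REQUIRED_CIPHER = "AES256"

def violates_standard(asset):
    """Return True if a confidential asset is missing the required protection."""
    if asset["classification"] != "confidential":
        return False  # this standard only binds confidential data
    return (asset["at_rest_cipher"] != REQUIRED_CIPHER
            or asset["in_transit_cipher"] != REQUIRED_CIPHER)

assets = [
    {"name": "customer_db", "classification": "confidential",
     "at_rest_cipher": "AES256", "in_transit_cipher": "AES256"},
    {"name": "hr_share", "classification": "confidential",
     "at_rest_cipher": "none", "in_transit_cipher": "AES256"},
    {"name": "lunch_menu", "classification": "public",
     "at_rest_cipher": "none", "in_transit_cipher": "none"},
]

noncompliant = [a["name"] for a in assets if violates_standard(a)]
print(noncompliant)  # only the asset missing encryption at rest is flagged
```

Notice that the standard, not the policy, supplies the concrete value being checked; the policy merely said the data "must be properly protected."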
As stated in an earlier section, tactical and strategic goals are different. A strategic
goal can be viewed as the ultimate endpoint, while tactical goals are the steps necessary
to achieve it. As shown in Figure 2-13, standards, guidelines, and procedures are the
tactical tools used to achieve and support the directives in the security policy, which is
considered the strategic goal.
CAUTION The term standard has more than one meaning in our industry.
Internal documentation that lays out rules that must be followed is a standard.
But sometimes, best practices, as in the ISO/IEC 27000 series, are referred to
as standards because they were developed by a standards body. And as we will
see in Chapter 6, we have specific technology standards, as in IEEE 802.11. You
need to understand the context of how this term is used. The CISSP exam
will not try to trick you on this word; just know that the industry uses it in
several different ways.
Figure 2-13 Policy establishes the strategic plans, and the lower elements provide the tactical support.

Baselines
The term baseline refers to a point in time that is used as a comparison for future
changes. Once risks have been mitigated and security put in place, a baseline is for-
mally reviewed and agreed upon, after which all further comparisons and development
are measured against it. A baseline results in a consistent reference point.
Let’s say that your doctor has told you that you weigh 400 pounds due to your diet
of donuts, pizza, and soda. (This is very frustrating to you because the TV commercial
said you could eat whatever you wanted and just take their very expensive pills every
day and lose weight.) The doctor tells you that you need to exercise each day and ele-
vate your heart rate to double its normal rate for 30 minutes twice a day. How do you
know when you are at double your heart rate? You find out your baseline (regular heart
rate) by using one of those arm thingies with a little ball attached. So you start at your
baseline and continue to exercise until you have doubled your heart rate or die, which-
ever comes first.
Baselines are also used to define the minimum level of protection required. In se-
curity, specific baselines can be defined per system type, which indicates the necessary
settings and the level of protection being provided. For example, a company may stipu-
late that all accounting systems must meet an Evaluation Assurance Level (EAL) 4 base-
line. This means that only systems that have gone through the Common Criteria process
and achieved this rating can be used in this department. Once the systems are properly
configured, this is the necessary baseline. When new software is installed, when patch-
es or upgrades are applied to existing software, or when other changes to the system
take place, there is a good chance the system may no longer be providing its necessary
minimum level of protection (its baseline). Security personnel must assess the systems
as changes take place and ensure that the baseline level of security is always being met.
If a technician installs a patch on a system and does not ensure the baseline is still being
met, there could be new vulnerabilities introduced into the system that will allow at-
tackers easy access to the network.
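The assessment described above, comparing a system’s current state to its approved baseline after each change, can be sketched as a simple configuration comparison. The setting names and values here are hypothetical:

```python
# Hypothetical sketch: detecting drift from an approved security baseline
# after a patch or configuration change. Setting names are invented.

approved_baseline = {
    "telnet_enabled": False,
    "min_password_length": 12,
    "disk_encryption": "on",
}

def baseline_drift(current_settings):
    """Return the settings that no longer match the approved baseline."""
    return {key: current_settings.get(key)
            for key, expected in approved_baseline.items()
            if current_settings.get(key) != expected}

# A patch quietly re-enabled telnet; the post-change assessment should flag it.
after_patch = {"telnet_enabled": True, "min_password_length": 12, "disk_encryption": "on"}
print(baseline_drift(after_patch))  # {'telnet_enabled': True}
```

An empty result means the system still provides its minimum required level of protection; any entry means the baseline is no longer being met.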
NOTE Baselines that are not technology-oriented should be created and
enforced within organizations as well. For example, a company can mandate
that while in the facility all employees must have a badge with a picture ID
in view at all times. It can also state that visitors must sign in at a front desk
and be escorted while in the facility. If these are followed, then this creates a
baseline of protection.
Guidelines
Guidelines are recommended actions and operational guides to users, IT staff, opera-
tions staff, and others when a specific standard does not apply. Guidelines can deal
with the methodologies of technology, personnel, or physical security. Life is full of

gray areas, and guidelines can be used as a reference during those times. Whereas stan-
dards are specific mandatory rules, guidelines are general approaches that provide the
necessary flexibility for unforeseen circumstances.
A policy might state that access to confidential data must be audited. A supporting
guideline could further explain that audits should contain sufficient information to al-
low for reconciliation with prior reviews. Supporting procedures would outline the
necessary steps to configure, implement, and maintain this type of auditing.
Procedures
Procedures are detailed step-by-step tasks that should be performed to achieve a certain
goal. The steps can apply to users, IT staff, operations staff, security members, and oth-
ers who may need to carry out specific tasks. Many organizations have written proce-
dures on how to install operating systems, configure security mechanisms, implement
access control lists, set up new user accounts, assign computer privileges, audit activi-
ties, destroy material, report incidents, and much more.
Procedures are considered the lowest level in the documentation chain because
they are closest to the computers and users (compared to policies) and provide detailed
steps for configuration and installation issues.
Procedures spell out how the policy, standards, and guidelines will actually be im-
plemented in an operating environment. If a policy states that all individuals who access
confidential information must be properly authenticated, the supporting procedures
will explain the steps for this to happen by defining the access criteria for authorization,
how access control mechanisms are implemented and configured, and how access ac-
tivities are audited. If a standard states that backups should be performed, then the
procedures will define the detailed steps necessary to perform the backup, the timelines
of backups, the storage of backup media, and so on. Procedures should be detailed
enough to be both understandable and useful to a diverse group of individuals.
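A backup procedure of the kind just described can be reduced to explicit, ordered steps. The sketch below is purely illustrative; the paths, naming scheme, and steps are invented, not taken from any real procedure document:

```python
# Hypothetical sketch: a backup procedure expressed as explicit, ordered steps,
# the way a written procedure document would spell them out.
import tarfile
import tempfile
import pathlib
import datetime

def run_backup(source_dir, backup_dir):
    """Step 1: verify the source. Step 2: create the archive. Step 3: confirm it."""
    source = pathlib.Path(source_dir)
    assert source.is_dir(), "Step 1 failed: source directory missing"
    stamp = datetime.date.today().isoformat()
    archive = pathlib.Path(backup_dir) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:   # Step 2: write compressed archive
        tar.add(str(source), arcname=source.name)
    assert archive.exists(), "Step 3 failed: archive not written"
    return archive

# Demonstration using temporary directories so the sketch is self-contained.
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    (pathlib.Path(src) / "report.txt").write_text("quarterly numbers")
    made = run_backup(src, dst)
    print(made.name.startswith("backup-"))  # True
```

A real procedure would also cover the timelines of backups and the storage of backup media, exactly as the text notes; the point here is only that each step is concrete enough for any staff member to follow.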
To tie these items together, let’s walk through an example. A corporation’s security
policy indicates that confidential information should be properly protected. It states the
issue in very broad and general terms. A supporting standard mandates that all cus-
tomer information held in databases must be encrypted with the Advanced Encryption
Standard (AES) algorithm while it is stored and that it cannot be transmitted over the
Internet unless IPSec encryption technology is used. The standard indicates what type
of protection is required and provides another level of granularity and explanation. The
supporting procedures explain exactly how to implement the AES and IPSec technolo-
gies, and the guidelines cover how to handle cases when data is accidentally corrupted
or compromised during transmission. Once the software and devices are configured as
outlined in the procedures, this is considered the baseline that must always be main-
tained. All of these work together to provide a company with a security structure.

Implementation
Our policies are very informative and look very professional.
Response: Doesn’t matter. Nobody cares.
Unfortunately, security policies, standards, procedures, baselines, and guidelines
often are written because an auditor instructed a company to document these items,
but then they are placed on a file server and are not shared, explained, or used. To be
useful, they must be put into action. No one is going to follow the rules if people don’t
know the rules exist. Security policies and the items that support them not only must
be developed, but must also be implemented and enforced.
To be effective, employees need to know about security issues within these docu-
ments; therefore, the policies and their supporting counterparts need visibility. Aware-
ness training, manuals, presentations, newsletters, and legal banners can achieve this
visibility. It must be clear that the directives came from senior management and that
the full management staff supports these policies. Employees must understand what is
expected of them in their actions, behaviors, accountability, and performance.
Implementing security policies and the items that support them shows due care by
the company and its management staff. Informing employees of what is expected of
them and the consequences of noncompliance can come down to a liability issue. If a
company fires an employee because he was downloading pornographic material to the
company’s computer, the employee may take the company to court and win if the em-
ployee can prove he was not properly informed of what was considered acceptable and
unacceptable use of company property and what the consequences were. Security-
awareness training is covered in later sections, but understand that companies that do
not supply this to their employees are not practicing due care and can be held negligent
and liable in the eyes of the law.
Key Terms
• Policy: High-level document that outlines senior management’s
security directives.
• Policy types: Organizational (master), issue-specific, system-specific.
• Policy functionality types: Regulatory, advisory, informative.
• Standard: Compulsory rules that support the security policies.
• Guideline: Suggestions and best practices.
• Procedures: Step-by-step implementation instructions.

Information Classification
My love letter to my dog is top secret.
Response: As it should be.
Earlier, this chapter touched upon the importance of recognizing what information
is critical to a company and assigning a value to it. The rationale behind assigning val-
ues to different types of data is that it enables a company to gauge the amount of funds
and resources that should go toward protecting each type of data, because not all data
has the same value to a company. After identifying all important information, it should
be properly classified. A company has a lot of information that is created and main-
tained. The reason to classify data is to organize it according to its sensitivity to loss,
disclosure, or unavailability. Once data is segmented according to its sensitivity level,
the company can decide what security controls are necessary to protect different types
of data. This ensures that information assets receive the appropriate level of protection,
and classifications indicate the priority of that security protection. The primary purpose
of data classification is to indicate the level of confidentiality, integrity, and availability
protection that is required for each type of data set. Many people mistakenly only con-
sider the confidentiality aspects of data protection, but we need to make sure our data
is not modified in an unauthorized manner and that it is available when needed.
Data classification helps ensure data is protected in the most cost-effective manner.
Protecting and maintaining data costs money, but it is important to spend this money
for the information that actually requires protection. Going back to our very sophisti-
cated example of U.S. spy satellites and the peanut butter and banana sandwich recipe,
a company in charge of encryption algorithms used to transmit data to and from U.S.
spy satellites would classify this data as top secret and apply complex and highly techni-
cal security controls and procedures to ensure it is not accessed in an unauthorized
method and disclosed. On the other hand, the sandwich recipe would have a lower
classification, and your only means of protecting it might be to not talk about it.
Each classification should have separate handling requirements and procedures
pertaining to how that data is accessed, used, and destroyed. For example, in a corpora-
tion, confidential information may be accessed only by senior management and a se-
lect few throughout the company. Accessing the information may require two or more
people to enter their access codes. Auditing could be very detailed and its results moni-
tored daily, and paper copies of the information may be kept in a vault. To properly
erase this data from the media, degaussing or zeroization procedures may be required.
Other information in this company may be classified as sensitive, allowing a slightly
larger group of people to view it. Access control on the information classified as sensi-
tive may require only one set of credentials. Auditing happens but is only reviewed
weekly, paper copies are kept in locked file cabinets, and the data can be deleted using
regular measures when it is time to do so. Then, the rest of the information is marked
public. All employees can access it, and no special auditing or destruction methods are
required.

Classification Levels
There are no hard and fast rules on the classification levels that an organization should
use. An organization could choose to use any of the classification levels presented in
Table 2-11. One organization may choose to use only two layers of classifications, while
another company may choose to use four. Table 2-11 explains the types of classifica-
tions available. Note that some classifications are more commonly used for commer-
cial businesses, whereas others are military classifications.
Table 2-11 Commercial Business and Military Data Classification

Public (used by commercial business)
• Definition: Disclosure is not welcome, but it would not cause an adverse impact to the company or personnel.
• Examples: how many people are working on a specific project; upcoming projects.

Sensitive (used by commercial business)
• Definition: Requires special precautions to ensure the integrity and confidentiality of the data by protecting it from unauthorized modification or deletion. Requires higher-than-normal assurance of accuracy and completeness.
• Examples: financial information; details of projects; profit earnings and forecasts.

Private (used by commercial business)
• Definition: Personal information for use within a company. Unauthorized disclosure could adversely affect personnel or the company.
• Examples: work history; human resources information; medical information.

Confidential (used by commercial business and the military)
• Definition: For use within the company only. Data exempt from disclosure under the Freedom of Information Act or other laws and regulations. Unauthorized disclosure could seriously affect a company.
• Examples: trade secrets; healthcare information; programming code; information that keeps the company competitive.

The following shows the common levels of sensitivity from the highest to the low-
est for commercial business:
• Confidential
• Private
• Sensitive
• Public
The following shows the levels of sensitivity from the highest to the lowest for mil-
itary purposes:
• Top secret
• Secret
• Confidential
• Sensitive but unclassified
• Unclassified
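Because each scheme is strictly ordered from lowest to highest sensitivity, the rankings above can be captured with an ordered enumeration. This is a hypothetical sketch of the commercial scheme, not an official implementation:

```python
# Hypothetical sketch: the commercial sensitivity ordering as a comparable enum,
# lowest to highest. The dominance rule shown is illustrative only.
from enum import IntEnum

class CommercialLevel(IntEnum):
    PUBLIC = 1
    SENSITIVE = 2
    PRIVATE = 3
    CONFIDENTIAL = 4   # most sensitive level in this commercial scheme

def may_store_on(data_level, system_max_level):
    """Data may only be stored on systems approved for its level or higher."""
    return system_max_level >= data_level

print(may_store_on(CommercialLevel.PRIVATE, CommercialLevel.CONFIDENTIAL))      # True
print(may_store_on(CommercialLevel.CONFIDENTIAL, CommercialLevel.SENSITIVE))    # False
```

The same pattern would apply to the military scheme; only the names and the number of levels change, which is exactly why each organization must first settle on its own scheme.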
The classifications listed in the table are commonly used in the industry, but there is
a lot of variance. An organization first must decide the number of data classifications
that best fit its security needs, then choose the classification naming scheme, and then
define what the names in those schemes represent. Company A might use the classifica-
tion level “confidential,” which represents its most sensitive information. Company B
might use “top secret,” “secret,” and “confidential,” where confidential represents its
least sensitive information. Each organization must develop an information classifica-
tion scheme that best fits its business and security needs.
Table 2-11 Commercial Business and Military Data Classification (continued)

Unclassified (used by the military)
• Definition: Data is not sensitive or classified.
• Examples: computer manual and warranty information; recruiting information.

Sensitive but unclassified (SBU) (used by the military)
• Definition: Minor secret. If disclosed, it may not cause serious damage.
• Examples: medical data; answers to test scores.

Secret (used by the military)
• Definition: If disclosed, it could cause serious damage to national security.
• Examples: deployment plans for troops; nuclear bomb placement.

Top secret (used by the military)
• Definition: If disclosed, it could cause grave damage to national security.
• Examples: blueprints of new wartime weapons; spy satellite information; espionage data.

It is important to not go overboard and come up with a long list of classifications,
which will only cause confusion and frustration for the individuals who will use the
system. The classifications should not be too restrictive or detail-oriented either,
because many types of data may need to be classified.
Each classification should be unique and separate from the others and not have any
overlapping effects. The classification process should also outline how information is
controlled and handled throughout its life cycle (from creation to termination).
Once the scheme is decided upon, the organization must develop the criteria it will
use to decide what information goes into which classification. The following list shows
some criteria parameters an organization may use to determine the sensitivity of data:
• The usefulness of data
• The value of data
• The age of data
• The level of damage that could be caused if the data were disclosed
• The level of damage that could be caused if the data were modified or
corrupted
• Legal, regulatory, or contractual responsibility to protect the data
• Effects the data has on security
• Who should be able to access the data
• Who should maintain the data
• Who should be able to reproduce the data
• Lost opportunity costs that could be incurred if the data were not available or
were corrupted
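Criteria like these are sometimes weighted into a rough sensitivity score that proposes a classification for human review. The scoring, weights, and thresholds below are entirely hypothetical; a real scheme would derive them from the organization’s own risk analysis:

```python
# Hypothetical sketch: turning a few of the classification criteria above into a
# proposed level. Weights and thresholds are invented for illustration only; the
# output is a starting point for the data owner, not a final decision.

def propose_classification(value, disclosure_damage, legal_duty):
    """Score data on value (0-3), disclosure damage (0-3), and legal duty to protect."""
    score = value + disclosure_damage + (3 if legal_duty else 0)
    if score >= 7:
        return "confidential"
    if score >= 4:
        return "private"
    if score >= 2:
        return "sensitive"
    return "public"

print(propose_classification(value=3, disclosure_damage=3, legal_duty=True))   # confidential
print(propose_classification(value=1, disclosure_damage=0, legal_duty=False))  # public
```

Whatever the mechanism, the result must still be assigned by the data owner; automated scoring only makes the criteria consistent.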
Data are not the only things that may need to be classified. Applications and some-
times whole systems may need to be classified. The applications that hold and process
classified information should be evaluated for the level of protection they provide. You
do not want a program filled with security vulnerabilities to process and “protect” your
most sensitive information. The application classifications should be based on the as-
surance (confidence level) the company has in the software and the type of informa-
tion it can store and process.
NOTE An organization must make sure that whoever is backing up classified
data—and whoever has access to backed-up data—has the necessary
clearance level. A large security risk can be introduced if low-end technicians
with no security clearance have access to this information during their tasks.
CAUTION The classification rules must apply to data no matter what format
it is in: digital, paper, video, fax, audio, and so on.

Now that we have chosen a sensitivity scheme, the next step is to specify how each
classification should be dealt with. We must specify provisions for access control, iden-
tification, and labeling, along with how data in specific classifications are stored, main-
tained, transmitted, and destroyed. We also must iron out auditing, monitoring, and
compliance issues. Each classification requires a different degree of security and, there-
fore, different requirements from each of the mentioned items.
Classification Controls
I marked our top secret stuff as “top secret.”
Response: Great, now everyone is going to want to see it. Good job.
As mentioned earlier, which types of controls are implemented per classification
depends upon the level of protection that management and the security team have de-
termined is needed. The numerous types of controls available are discussed throughout
this book. But some considerations pertaining to sensitive data and applications are
common across most organizations:
• Strict and granular access control for all levels of sensitive data and programs
(see Chapter 3 for coverage of access controls, along with file system
permissions that should be understood)
• Encryption of data while stored and while in transmission (see Chapter 7 for
coverage of all types of encryption technologies)
• Auditing and monitoring (determine what level of auditing is required and
how long logs are to be retained)
• Separation of duties (determine whether two or more people must be
involved in accessing sensitive information to protect against fraudulent
activities; if so, define and document procedures)
• Periodic reviews (review classification levels, and the data and programs that
adhere to them, to ensure they are still in alignment with business needs; data
or applications may also need to be reclassified or declassified, depending
upon the situation)
• Backup and recovery procedures (define and document)
• Change control procedures (define and document)
• Physical security protection (define and document)
• Information flow channels (where does the sensitive data reside and how does
it traverse the network)
• Proper disposal actions, such as shredding, degaussing, and so on (define and
document)
• Marking, labeling, and handling procedures

Layers of Responsibility
Okay, who is in charge so we have someone to blame?
Senior management and other levels of management understand the vision of the
company, the business goals, and the objectives. The next layer down is the functional
management, whose members understand how their individual departments work,
what roles individuals play within the company, and how security affects their depart-
ment directly. The next layers are operational managers and staff. These layers are closer
to the actual operations of the company. They know detailed information about the
technical and procedural requirements, the systems, and how the systems are used. The
employees at these layers understand how security mechanisms integrate into systems,
how to configure them, and how they affect daily productivity. Every layer offers differ-
ent insight into what type of role security plays within an organization, and each should
have input into the best security practices, procedures, and chosen controls to ensure
the agreed-upon security level provides the necessary amount of protection without
negatively affecting the company’s productivity.
Data Classification Procedures
The following outlines the necessary steps for a proper classification program:
1. Define classification levels.
2. Specify the criteria that will determine how data are classified.
3. Identify data owners who will be responsible for classifying data.
4. Identify the data custodian who will be responsible for maintaining
data and its security level.
5. Indicate the security controls, or protection mechanisms, required for
each classification level.
6. Document any exceptions to the previous classification issues.
7. Indicate the methods that can be used to transfer custody of the
information to a different data owner.
8. Create a procedure to periodically review the classification and
ownership. Communicate any changes to the data custodian.
9. Indicate procedures for declassifying the data.
10. Integrate these issues into the security-awareness program so all employees
understand how to handle data at different classification levels.

Although each layer is important to the overall security of an organization, some
specific roles must be clearly defined. Individuals who work in smaller environments
(where everyone must wear several hats) may get overwhelmed with the number of
roles presented next. Many commercial businesses do not have this level of structure in
their security teams, but many government agencies and military units do. What you
need to understand are the responsibilities that must be assigned, and whether they are
assigned to just a few people or to a large security team. These roles are the board of
directors, security officer, data owner, data custodian, system owner, security adminis-
trator, security analyst, application owner, supervisor (user manager), change control
analyst, data analyst, process owner, solution provider, user, product line manager, and
the guy who gets everyone coffee.
Board of Directors
Hey, Enron was successful for many years. What’s wrong with their approach?
The board of directors is a group of individuals who are elected by the shareholders
of a corporation to oversee the fulfillment of the corporation’s charter. The goal of the
board is to ensure the shareholders’ interests are being protected and that the corpora-
tion is being run properly. They are supposed to be unbiased and independent indi-
viduals who oversee the executive staff’s performance in running the company.
For many years, too many people who held these positions either looked the other
way regarding corporate fraud and mismanagement or depended too much on execu-
tive management’s feedback instead of finding out the truth about their company’s
health themselves. We know this because of all of the corporate scandals uncovered in
2002 (Enron, WorldCom, Global Crossing, and so on). The boards of directors of these
corporations were responsible for knowing about these types of fraudulent activities
and putting a stop to them to protect shareholders. Many things caused the directors
not to play the role they should have. Some were intentional, some not. These scandals
forced the U.S. government and the Securities and Exchange Commission (SEC) to
place more requirements, and potential penalties, on the boards of directors of pub-
licly traded companies. This is why many companies today are having a harder time
finding candidates to fulfill these roles—personal liability for a part-time job is a real
downer.
Independence is important if the board members are going to truly work for the
benefit of the shareholders. This means the board members should not have immedi-
ate family who are employees of the company, the board members should not receive
financial benefits from the company that could cloud their judgment or create conflicts
of interests, and no other activities should cause the board members to act other than
as champions of the company’s shareholders. This is especially true if the company
must comply with the Sarbanes-Oxley Act (SOX). Under this act, the board of directors
can be held personally responsible if the corporation does not properly maintain an
internal corporate governance framework, and/or if financials reported to the SEC are
incorrect.
NOTE Other regulations also call out requirements of boards of directors, as
in the Gramm-Leach-Bliley Act (GLBA). But SOX is a regulation that holds the
members of the board personally responsible; thus, they can each be fined or
go to jail.

CISSP All-in-One Exam Guide
CAUTION The CISSP exam does not cover anything about specific
regulations (SOX, HIPAA, GLBA, Basel II, SB 1386, and so on). So do not get
wrapped up in studying these for the exam. However, it is critical that the
security professional understand the regulations and laws of the country and
region she is working within.
Executive Management
I am very important, but I am missing a “C” in my title.
Response: Then you are not so important.
This motley crew is made up of individuals whose titles start with a C. The chief
executive officer (CEO) has the day-to-day management responsibilities of an organiza-
tion. This person is often the chairperson of the board of directors and is the highest-
ranking officer in the company. This role is for the person who oversees the company’s
finances, strategic planning, and operations from a high level. The CEO is usually seen
as the visionary for the company and is responsible for developing and modifying the
company’s business plan. He sets budgets, forms partnerships, decides on what markets
to enter, what product lines to develop, how the company will differentiate itself, and
so on. This role’s overall responsibility is to ensure that the company grows and thrives.
NOTE The CEO can delegate tasks, but not necessarily responsibility. More
and more regulations dealing with information security are holding this
role’s feet to the fire, which is why security departments across the land are
receiving more funding. Personal liability for the decision makers and purse-
string holders has loosened those purse strings, and companies are now able
to spend more money on security than before.
The chief financial officer (CFO) is responsible for the corporation’s accounting and
financial activities and the overall financial structure of the organization. This person is
responsible for determining what the company’s financial needs will be and how to
finance those needs. The CFO must create and maintain the company’s capital struc-
ture, which is the proper mix of equity, credit, cash, and debt financing. This person
oversees forecasting and budgeting and the processes of submitting quarterly and an-
nual financial statements to the SEC and stakeholders.
The CFO and CEO are responsible for informing stakeholders (creditors, analysts,
employees, management, investors) of the firm’s financial condition and health. After
the corporate debacles uncovered in 2002, the U.S. government and the SEC started
doling out stiff penalties to people who held these roles and abused them, as shown in
the following:
• January 2004 Enron ex-Chief Financial Officer Andrew Fastow was given a ten-year prison sentence for his accounting scandals, which was a reduced term because he cooperated with prosecutors.
• June 2005 John Rigas, the CEO of Adelphia Communications Corp., was sentenced to 15 years in prison for his role in the looting and debt-hiding scandal that pummeled the company into bankruptcy. His son, who also held an executive position, was sentenced to 20 years.
• July 2005 WorldCom ex-Chief Executive Officer Bernard Ebbers was sentenced to 25 years in prison for his role in orchestrating the biggest corporate fraud in the nation’s history.
• August 2005 Former WorldCom Chief Financial Officer Scott Sullivan was sentenced to five years in prison for his role in engineering the $11 billion accounting fraud that led to the bankruptcy of the telecommunications powerhouse.
• December 2005 The former Chief Executive Officer of HealthSouth Corp. was sentenced to five years in prison for his part in the $2.7 billion scandal.
These are only the big ones that made it into all the headlines. Other CEOs and CFOs
have also received punishments for “creative accounting” and fraudulent activities.
NOTE Although the preceding activities took place years ago, these were the
events that motivated the U.S. government to create new laws and regulations
to control different types of fraud.
Figure 2-14 shows how the board members are responsible for setting the organiza-
tion’s strategy and risk appetite (how much risk the company should take on).

Figure 2-14 Risk must be understood at different departments and levels.

The
board is also responsible for receiving information from executives, as well as for the
assurance (auditing committee). With these inputs, the board is supposed to ensure
that the company is running properly, thus protecting shareholders’ interests. Also no-
tice that the business unit owners are the risk owners, not the security department. Too
many companies are not extending the responsibility of risk out to the business units,
which is why the CISO position is commonly referred to as the sacrificial lamb.
Chief Information Officer
On a lower rung of the food chain is the chief information officer (CIO). This individu-
al can report to the CEO or CFO, depending upon the corporate structure, and is re-
sponsible for the strategic use and management of information systems and technology
within the organization. Over time, this position has become more strategic and less
operational in many organizations. CIOs oversee and are responsible for the day-in-
day-out technology operations of a company, but because organizations are so depen-
dent upon technology, CIOs are being asked to sit at the big boys’ corporate table more
and more.
CIO responsibilities have extended to working with the CEO (and other manage-
ment) on business-process management, revenue generation, and how business strat-
egy can be accomplished with the company’s underlying technology. This person
usually should have one foot in techno-land and one foot in business-land to be effec-
tive, because he is bridging two very different worlds.
The CIO sets the stage for the protection of company assets and is ultimately re-
sponsible for the success of the company security program. Direction should be com-
ing down from the CEO, and there should be clear lines of communication between
the board of directors, the C-level staff, and mid-management. SOX outlines specific
responsibilities for the CEO and CFO, along with penalties for which they can be
personally liable if those responsibilities are not carried out. The SEC wanted to make
sure these individuals could not simply let their companies absorb fines if they
misbehaved. Under this law they
can personally be fined millions of dollars and/or go to jail. Such things always make
them perk up during meetings.
Chief Privacy Officer
The chief privacy officer (CPO) is a newer position, created mainly because of the in-
creasing demands on organizations to protect a long laundry list of different types of
data. This role is responsible for ensuring that customer, company, and employee data
are kept safe, which keeps the company out of criminal and civil courts and hopefully
out of the headlines. This person is usually an attorney and is directly involved with
setting policies on how data are collected, protected, and given out to third parties. The
CPO often reports to the chief security officer.
It is important that the company understand the privacy, legal, and regulatory re-
quirements the organization must comply with. With this knowledge, you can then
develop the organization’s policies, standards, procedures, controls, and contract agree-
ments to see if privacy requirements are being properly met. Remember also that orga-
nizations are responsible for knowing how their suppliers, partners, and other third

parties are protecting this sensitive information. Many times, companies will need to
review these other parties (which have copies of data needing protection).
Some companies have carried out risk assessments without including the penalties
and ramifications they would be forced to deal with if they did not properly protect the
information they are responsible for. Without including these liabilities, risk cannot be
properly assessed.
The organization should document how privacy data are collected, used, disclosed,
archived, and destroyed. Employees should be held accountable for not following the
organization’s standards on how to handle this type of information.
NOTE Carrying out a risk assessment from the perspective of the protection
of sensitive data is called a privacy impact analysis. You can review “How
to Do a Privacy Assessment” at www.actcda.com/resource/multiapp.pdf to
understand the steps.
Chief Security Officer
Hey, we need a sacrificial lamb in case things go bad.
Response: We already have one. He’s called the chief security officer.
The chief security officer (CSO) is responsible for understanding the risks that the
company faces and for mitigating these risks to an acceptable level. This role is respon-
sible for understanding the organization’s business drivers and for creating and main-
taining a security program that facilitates these drivers, along with providing security,
compliance with a long list of regulations and laws, and any customer expectations or
contractual obligations.
The creation of this role is a mark in the “win” column for the security industry
because it means security is finally being seen as a business issue. Previously, security
was stuck in the IT department and was viewed solely as a technology issue. As organi-
zations saw the need to integrate security requirements and business needs, creating a
position for security on the executive management team became a necessity. The
CSO’s job is to ensure that business is not disrupted in any way due to
security issues. This extends beyond IT and reaches into business processes, legal issues,
operational issues, revenue generation, and reputation protection.
Privacy
Privacy is different from security. Privacy indicates the amount of control an indi-
vidual should be able to have and expect as it relates to the release of their own
sensitive information. Security is the mechanisms that can be put into place to
provide this level of control.
It is becoming more critical (and more difficult) to protect personally
identifiable information (PII) because of the increase in identity theft and
financial fraud threats. PII is a combination of identification elements (name, address,
phone number, account number, etc.). Organizations must have privacy policies
and controls in place to protect their employee and customer PII.

Security Steering Committee
Our steering committee just ran us into a wall.
A security steering committee is responsible for making decisions on tactical and
strategic security issues within the enterprise as a whole and should not be tied to one
or more business units. The group should be made up of people from all over the orga-
nization so they can view risks and the effects of security decisions on individual de-
partments and the organization as a whole. The CEO should head this committee, and
the CFO, CIO, department managers, and chief internal auditor should all be on it.
This committee should meet at least quarterly and have a well-defined agenda.
Some of the group’s responsibilities are as follows:
• Define the acceptable risk level for the organization.
• Develop security objectives and strategies.
• Determine priorities of security initiatives based on business needs.
• Review risk assessment and auditing reports.
• Monitor the business impact of security risks.
• Review major security breaches and incidents.
• Approve any major change to the security policy and program.
They should also have a clearly defined vision statement in place that is set up to work with and support the organizational intent of the business. The statement should be structured in a manner that provides support for the goals of confidentiality, integrity, and availability as they pertain to the business objectives of the organization. This in turn should be followed, or supported, by a mission statement that provides support and definition to the processes that will apply to the organization and allow it to reach its business goals.

CSO vs. CISO
The CSO and chief information security officer (CISO) may have similar or very different responsibilities. How is that for clarification? It is up to the individual organization to define the responsibilities of these two roles and whether they will use both, either, or neither. By and large, the CSO role usually has a farther-reaching list of responsibilities compared to the CISO role. The CISO is usually focused more on technology and has an IT background. The CSO usually is required to understand a wider range of business risks, including physical security, not just technological risks.
The CSO is usually more of a businessperson and typically is present in larger organizations. If a company has both roles, the CISO reports directly to the CSO.
The CSO is commonly responsible for convergence, which is the formal cooperation between previously disjointed security functions. This mainly pertains to physical and IT security working in a more concerted manner instead of working in silos within the organization. Issues such as loss prevention, fraud prevention, business continuity planning, legal/regulatory compliance, and insurance all have physical security and IT security aspects and requirements. So one individual (the CSO) overseeing and intertwining these different security disciplines allows for a more holistic and comprehensive security program.
Audit Committee
The audit committee should be appointed by the board of directors to help it review and
evaluate the company’s internal operations, internal audit system, and the transparency
and accuracy of financial reporting so the company’s investors, customers, and credi-
tors have continued confidence in the organization.
This committee is usually responsible for at least the following items:
• The integrity of the company’s financial statements and other financial
information provided to stockholders and others
• The company’s system of internal controls
• The engagement and performance of the independent auditors
• The performance of the internal audit function
• Compliance with legal requirements, regulations, and company policies
regarding ethical conduct
The goal of this committee is to provide independent and open communications
among the board of directors, the company’s management, the internal auditors, and
external auditors. Financial statement integrity and reliability are crucial to every organi-
zation, and many times pressure from shareholders, management, investors, and the
public can directly affect the objectivity and correctness of these financial documents. In
the wake of high-profile corporate scandals, the audit committee’s role has shifted from
just overseeing, monitoring, and advising company management to enforcing and en-
suring accountability on the part of all individuals involved. This committee must take
input from external and internal auditors and outside experts to help ensure the com-
pany’s internal control processes and financial reporting are taking place properly.
Data Owner
The data owner (information owner) is usually a member of management who is in
charge of a specific business unit, and who is ultimately responsible for the protection
and use of a specific subset of information. The data owner has due care responsibilities
and thus will be held responsible for any negligent act that results in the corruption or
disclosure of the data. The data owner decides upon the classification of the data she is
responsible for and alters that classification if the business need arises. This person is
also responsible for ensuring that the necessary security controls are in place, defining
security requirements per classification and backup requirements, approving any dis-
closure activities, ensuring that proper access rights are being used, and defining user
access criteria. The data owner approves access requests or may choose to delegate this

function to business unit managers. And the data owner will deal with security viola-
tions pertaining to the data she is responsible for protecting. The data owner, who obvi-
ously has enough on her plate, delegates responsibility of the day-to-day maintenance
of the data protection mechanisms to the data custodian.
Data Custodian
Hey, custodian, clean up my mess!
Response: I’m not that type of custodian.
The data custodian (information custodian) is responsible for maintaining and pro-
tecting the data. This role is usually filled by the IT or security department, and the
duties include implementing and maintaining security controls; performing regular
backups of the data; periodically validating the integrity of the data; restoring data from
backup media; retaining records of activity; and fulfilling the requirements specified in
the company’s security policy, standards, and guidelines that pertain to information
security and data protection.
System Owner
I am god over this system!
Response: You are responsible for a printer? Your mother must be proud.
The system owner is responsible for one or more systems, each of which may hold
and process data owned by different data owners. A system owner is responsible for
integrating security considerations into application and system purchasing decisions
and development projects. The system owner is responsible for ensuring that adequate
security is being provided by the necessary controls, password management, remote
access controls, operating system configurations, and so on. This role must ensure the
systems are properly assessed for vulnerabilities and must report any to the incident
response team and data owner.
Data Owner Issues
Each business unit should have a data owner who protects the unit’s most critical information. The company’s policies must give the data owners the necessary authority to carry out their tasks.
This is not a technical role, but rather a business role that must understand the relationship between the unit’s success and the protection of this critical asset. Not all businesspeople understand this role, so they should be given the necessary training.

Security Administrator
The security administrator is responsible for implementing and maintaining specific security network devices and software in the enterprise. These controls commonly include firewalls, IDS, IPS, antimalware, security proxies, data loss prevention, etc. It is
common for there to be delineation between the security administrator and the net-
work administrator. The security administrator has the main focus of keeping the net-
work secure, and the network administrator has the focus of keeping things up and
running.
A security administrator’s tasks commonly also include creating new system user
accounts, implementing new security software, testing security patches and compo-
nents, and issuing new passwords. The security administrator must make sure access
rights given to users support the policies and data owner directives.
Security Analyst
I have analyzed your security and you have it all wrong.
Response: What a surprise.
The security analyst role works at a higher, more strategic level than the previously
described roles and helps develop policies, standards, and guidelines, as well as set vari-
ous baselines. Whereas the previous roles are “in the weeds” and focus on pieces and
parts of the security program, a security analyst helps define the security program ele-
ments and follows through to ensure the elements are being carried out and practiced
properly. This person works more at a design level than at an implementation level.
Application Owner
Some applications are specific to individual business units—for example, the account-
ing department has accounting software, R&D has software for testing and develop-
ment, and quality assurance uses some type of automated system. The application own-
ers, usually the business unit managers, are responsible for dictating who can and can-
not access their applications (subject to staying in compliance with the company’s se-
curity policies, of course).
Since each unit claims ownership of its specific applications, the application owner
for each unit is responsible for the security of the unit’s applications. This includes test-
ing, patching, performing change control on the programs, and making sure the right
controls are in place to provide the necessary level of protection.
Supervisor
The supervisor role, also called user manager, is ultimately responsible for all user activ-
ity and any assets created and owned by these users. For example, suppose Kathy is the
supervisor of ten employees. Her responsibilities would include ensuring that these
employees understand their responsibilities with respect to security; making sure the
employees’ account information is up-to-date; and informing the security administra-
tor when an employee is fired, suspended, or transferred. Any change that pertains to
an employee’s role within the company usually affects what access rights they should
and should not have, so the user manager must inform the security administrator of
these changes immediately.

Change Control Analyst
I have analyzed your change request and it will destroy this company.
Response: I am okay with that.
Since the only thing that is constant is change, someone must make sure changes
happen securely. The change control analyst is responsible for approving or rejecting
requests to make changes to the network, systems, or software. This role must make
certain that the change will not introduce any vulnerabilities, that it has been properly
tested, and that it is properly rolled out. The change control analyst needs to under-
stand how various changes can affect security, interoperability, performance, and pro-
ductivity. Or, a company can choose to just roll out the change and see what happens….
Data Analyst
Having proper data structures, definitions, and organization is very important to a
company. The data analyst is responsible for ensuring that data is stored in a way that
makes the most sense to the company and the individuals who need to access and work
with it. For example, payroll information should not be mixed with inventory informa-
tion, the purchasing department needs to have a lot of its values in monetary terms,
and the inventory system must follow a standardized naming scheme. The data analyst
may be responsible for architecting a new system that will hold company information,
or advise in the purchase of a product that will do so.
The data analyst works with the data owners to help ensure that the structures set
up coincide with and support the company’s business objectives.
Process Owner
Ever heard the popular mantra, “Security is not a product, it’s a process”? The statement
is very true. Security should be considered and treated like any other business
process—not as its own island, nor like a redheaded stepchild with cooties. (The author is
a redheaded stepchild, but currently has no cooties.)
All organizations have many processes: how to take orders from customers; how
to make widgets to fulfill these orders; how to ship the widgets to the customers; how
to collect from customers when they don’t pay their bills; and so on. An organization
could not function properly without well-defined processes.
The process owner is responsible for properly defining, improving upon, and mon-
itoring these processes. A process owner is not necessarily tied to one business unit or
application. Complex processes involve many variables that can span different depart-
ments, technologies, and data types.
Solution Provider
I came up with the solution to world peace, but then I forgot it.
Response: Write it down on this napkin next time.
Every vendor you talk to will tell you they are the right solution provider for what-
ever ails you. In truth, several different types of solution providers exist because the

world is full of different problems. This role is called upon when a business has a prob-
lem or requires a process to be improved upon. For example, if Company A needs a
solution that supports digitally signed e-mails and an authentication framework for
employees, it would turn to a public key infrastructure (PKI) solution provider. A solu-
tion provider works with the business unit managers, data owners, and senior manage-
ment to develop and deploy a solution to reduce the company’s pain points.
User
I have concluded that our company would be much more secure without users.
Response: I’ll start on the pink slips.
The user is any individual who routinely uses the data for work-related tasks. The
user must have the necessary level of access to the data to perform the duties within
their position and is responsible for following operational security procedures to en-
sure the data’s confidentiality, integrity, and availability to others.
Product Line Manager
Who’s responsible for explaining business requirements to vendors and wading through
their rhetoric to see if the product is right for the company? Who is responsible for
ensuring compliance to license agreements? Who translates business requirements into
objectives and specifications for the developer of a product or solution? Who decides if
the company really needs to upgrade their operating system version every time Micro-
soft wants to make more money? That would be the product line manager.
This role must understand business drivers, business processes, and the technology
that is required to support them. The product line manager evaluates different products
in the market, works with vendors, understands different options a company can take,
and advises management and business units on the proper solutions needed to meet
their goals.
Auditor
The auditor is coming! Hurry up and do all that stuff he told us to do last year!
The function of the auditor is to come around periodically and make sure you are
doing what you are supposed to be doing. They ensure the correct controls are in place
and are being maintained securely. The goal of the auditor is to make sure the organiza-
tion complies with its own policies and the applicable laws and regulations. Organiza-
tions can have internal auditors and/or external auditors. The external auditors
commonly work on behalf of a regulatory body to make sure compliance is being met.
In an earlier section we covered CobiT, which is a model that most information secu-
rity auditors follow when evaluating a security program.
While many security professionals fear and dread auditors, they can be valuable
tools in ensuring the overall security of the organization. Their goal is to find the things
you have missed and help you understand how to fix the problem.

Why So Many Roles?
Most organizations will not have all the roles previously listed, but what is important
is to build an organizational structure that contains the necessary roles and map the
correct security responsibilities to them. This structure includes clear definitions of re-
sponsibilities, lines of authority and communication, and enforcement capabilities. A
clear-cut structure takes the mystery out of who does what and how things are handled
in different situations.
Personnel Security
Many facets of the responsibilities of personnel fall under management’s umbrella, and
several facets have a direct correlation to the overall security of the environment.
Although society has evolved to be extremely dependent upon technology in the
workplace, people are still the key ingredient to a successful company. But in security
circles, people are often the weakest link. Either accidentally through mistakes or lack
of training, or intentionally through fraud and malicious intent, personnel cause more
serious and hard-to-detect security issues than hacker attacks, outside espionage, or
equipment failure. Although the future actions of individuals cannot be predicted, it is
possible to minimize the risks by implementing preventive measures. These include
hiring the most qualified individuals, performing background checks, using detailed
job descriptions, providing necessary training, enforcing strict access controls, and ter-
minating individuals in a way that protects all parties involved.
Several items can be put into place to reduce the possibilities of fraud, sabotage,
misuse of information, theft, and other security compromises. Separation of duties
makes sure that one individual cannot complete a critical task by herself. In the movies,
when a submarine captain needs to launch a nuclear torpedo to blow up the enemy
and save civilization as we know it, the launch usually requires three codes to be en-
tered into the launching mechanism by three different senior crewmembers. This is an
example of separation of duties, and it ensures that the captain cannot complete such
an important and terrifying task all by himself.
Separation of duties is a preventive administrative control put into place to reduce
the potential of fraud. For example, an employee cannot complete a critical financial
transaction by herself. She will need to have her supervisor’s written approval before
the transaction can be completed.
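The approval rule in this example can be sketched in a few lines of code. This is a toy illustration of the principle — the function and role names are hypothetical, not part of any real system:

```python
from typing import Optional

def can_execute(initiator: str, approver: Optional[str]) -> bool:
    """A critical task runs only when a second, distinct person approves it."""
    return approver is not None and approver != initiator

# The employee alone cannot complete the transaction...
assert can_execute("employee", "employee") is False  # no self-approval
assert can_execute("employee", None) is False        # no approval at all
# ...but with her supervisor's approval, it proceeds.
assert can_execute("employee", "supervisor") is True
print("separation-of-duties checks passed")
```

The key design point is that the check compares identities, not just counts approvals: two signatures from the same person would still fail.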
In an organization that practices separation of duties, collusion must take place for
fraud to be committed. Collusion means that at least two people are working together
to cause some type of destruction or fraud. In our example, the employee and her su-
pervisor must be participating in the fraudulent activity to make it happen.
Two variations of separation of duties are split knowledge and dual control. In both
cases, two or more individuals are authorized and required to perform a duty or task.
In the case of split knowledge, no one person knows or has all the details to perform a
task. For example, two managers might be required to open a bank vault, with each

Chapter 2: Information Security Governance and Risk Management
127
only knowing part of the combination. In the case of dual control, two individuals are
again authorized to perform a task, but both must be available and active in their par-
ticipation to complete the task or mission. For example, two officers must perform an
identical key-turn in a nuclear missile submarine, each out of reach of the other, to
launch a missile. The control here is that no one person has the capability of launching
a missile, because they cannot reach to turn both keys at the same time.
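The vault-combination example of split knowledge can be modeled with simple XOR secret splitting, where each manager holds one share and neither share alone reveals anything about the combination. This is a hypothetical sketch, not a mechanism the book prescribes:

```python
import secrets

# Split knowledge sketch: split a secret into two shares so that
# neither holder alone knows the full secret. share1 is random;
# share2 = secret XOR share1, so both shares are needed to recover it.

def split_secret(secret: bytes):
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine_shares(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

combo = b"36-24-36"            # the hypothetical vault combination
s1, s2 = split_secret(combo)
assert combine_shares(s1, s2) == combo   # both managers together recover it
```

Dual control adds a further requirement on top of this: both parties must not only hold their shares but be physically present and active at the same time, as in the two-key missile launch example.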
Rotation of duties (rotation of assignments) is an administrative detective control
that can be put into place to uncover fraudulent activities. No one person should stay
in one position for a long time because they may end up having too much control over
a segment of the business. Such total control could result in fraud or the misuse of
resources. Employees should be moved into different roles so that they may be able
to detect suspicious activity carried out by the previous person who held that
position. This type of control is commonly implemented in financial institutions.
Employees in sensitive areas should be forced to take their vacations, which is
known as a mandatory vacation. While they are on vacation, other individuals fill their
positions and thus can usually detect fraudulent activities or errors. Two of the
many ways to detect fraud or inappropriate activities would be the discovery of activity
on someone’s user account while they’re supposed to be away on vacation, or if a spe-
cific problem stopped while someone was away and not active on the network. These
anomalies are worthy of investigation. Employees who carry out fraudulent activities
commonly do not take vacations because they do not want anyone to figure out what
they are doing behind the scenes. This is why they must be forced to be away from the
organization for a period of time, usually two weeks.
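The anomaly described above, activity on an account while its owner is supposed to be away, is straightforward to check for in code. The following sketch is purely illustrative; the log format, names, and dates are invented:

```python
from datetime import date

# Mandatory vacation sketch: flag log entries on a user's account
# during that user's scheduled vacation window.

vacations = {"carol": (date(2024, 6, 3), date(2024, 6, 14))}

log_entries = [
    ("carol", date(2024, 6, 5)),   # activity during carol's vacation
    ("dave",  date(2024, 6, 5)),   # dave is not on vacation
]

def suspicious_entries(log, vacations):
    flagged = []
    for user, day in log:
        window = vacations.get(user)
        if window and window[0] <= day <= window[1]:
            flagged.append((user, day))
    return flagged

assert suspicious_entries(log_entries, vacations) == [("carol", date(2024, 6, 5))]
```

Any hit from a check like this is not proof of fraud, only an anomaly worthy of investigation, as the text notes.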
Key Terms
• Data owner  Individual responsible for the protection and classification of a specific data set.
• Data custodian  Individual responsible for implementing and maintaining security controls to meet the security requirements outlined by the data owner.
• Separation of duties  Preventive administrative control used to ensure one person cannot carry out a critical task alone.
• Collusion  Two or more people working together to carry out fraudulent activities.
• Rotation of duties  Detective administrative control used to uncover potential fraudulent activities.
• Mandatory vacation  Detective administrative control used to uncover potential fraudulent activities by requiring a person to be away from the organization for a period of time.

CISSP All-in-One Exam Guide
128
Hiring Practices
I like your hat. You’re hired!
Depending on the position to be filled, a level of screening should be done by hu-
man resources to ensure the company hires the right individual for the right job. Skills
should be tested and evaluated, and the caliber and character of the individual should
be examined. Joe might be the best programmer in the state, but if someone looks into
his past and finds out he served prison time because he continually flashed old ladies
in parks, the hiring manager might not be so eager to bring Joe into the organization.
Nondisclosure agreements must be developed and signed by new employees to pro-
tect the company and its sensitive information. Any conflicts of interest must be ad-
dressed, and there should be different agreements and precautions taken with temporary
and contract employees.
References should be checked, military records reviewed, education verified, and if
necessary, a drug test should be administered. Many times, important personal behav-
iors can be concealed, and that is why hiring practices now include scenario questions,
personality tests, and observations of the individual, instead of just looking at a per-
son’s work history. When a person is hired, he is bringing his skills and whatever other
baggage he carries. A company can reduce its heartache pertaining to personnel by first
conducting useful and careful hiring practices.
The goal is to hire the “right person” and not just hire a person for “right now.”
Employees represent an investment on the part of the organization, and by taking the
time and hiring the right people for the jobs, the organization will be able to maximize
their investment and achieve a better return.
A more detailed background check can reveal some interesting information. Things
like unexplained gaps in employment history, the validity and actual status of profes-
sional certifications, criminal records, driving records, job titles that have been misrep-
resented, credit histories, unfriendly terminations, appearances on suspected terrorist
watch lists, and even real reasons for having left previous jobs can all be determined
through the use of background checks. This has real benefit to the employer and the
organization because it serves as the first line of defense for the organization against
being attacked from within. Any negative information that can be found in these areas
could be indicators of potential problems that the potential employee could create for
the company at a later date. Take the credit report for instance. On the surface, this may
seem to be something the organization doesn’t need to know about, but if the report
indicates the potential employee has a poor credit standing and a history of financial
problems, it could mean you don’t want to place them in charge of the organization’s
accounting, or even the petty cash.
Ultimately, the goal here is to achieve several different things at the same time by
using a background check. You’re trying to mitigate risk, lower hiring costs, and also
lower the turnover rate for employees. All this is being done at the same time you are
trying to protect your existing customers and employees from someone gaining
employment in your organization who could potentially conduct malicious and dishonest
actions that could harm you, your employees, and your customers as well as the general
public. It is also harder to go back and conduct background checks after the individual
has been hired and is working, because a specific cause or reason is usually needed to
justify that kind of investigation. If an employee moves to a position of greater security
sensitivity or potential risk, however, a follow-up investigation should be considered.
Possible background check criteria could include
• A Social Security number trace
• A county/state criminal check
• A federal criminal check
• A sexual offender registry check
• Employment verification
• Education verification
• Professional reference verification
• An immigration check
• Professional license/certification verification
• Credit report
• Drug screening
Termination
I no longer like your hat. You are fired.
Because terminations can happen for a variety of reasons, and terminated people
have different reactions, companies should have a specific set of procedures to follow
with every termination. For example:
• The employee must leave the facility immediately under the supervision of a
manager or security guard.
• The employee must surrender any identification badges or keys, complete an
exit interview, and return company supplies.
• That user’s accounts and passwords should be disabled or changed immediately.
It seems harsh and cold when this actually takes place, but too many companies
have been hurt by vengeful employees who have lashed out at the company when their
positions were revoked for one reason or another. If an employee is disgruntled in any
way, or the termination is unfriendly, that employee’s accounts should be disabled right
away, and all passwords on all systems changed.
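The account-disabling step lends itself to a simple checklist-driven routine, so that no system is missed during a hurried termination. The sketch below is hypothetical; the system list and the `disable_fn` callback are invented for illustration:

```python
# Termination sketch: on departure, disable the employee's account
# on every system in the checklist and record each action for audit.

SYSTEMS = ["directory", "email", "vpn", "badge_system"]

def terminate_employee(username: str, disable_fn) -> list:
    """Disable the user's account on every system; return an audit trail."""
    audit = []
    for system in SYSTEMS:
        disable_fn(system, username)
        audit.append(f"disabled {username} on {system}")
    return audit

disabled = []
trail = terminate_employee("mallory", lambda sys, user: disabled.append((sys, user)))
assert len(disabled) == len(SYSTEMS)   # no system was skipped
```

Driving the process from a single list, rather than from memory, is the point: an unfriendly termination is exactly when ad hoc steps get forgotten.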

Security-Awareness Training
Our CEO said our organization is secure.
Response: He needs more awareness training than anyone.
For an organization to achieve the desired results of its security program, it must
communicate the what, how, and why of security to its employees. Security-awareness
training should be comprehensive, tailored for specific groups, and organization-wide.
It should repeat the most important messages in different formats; be kept up-to-date;
be entertaining, positive, and humorous; be simple to understand; and—most impor-
tant—be supported by senior management. Management must allocate the resources
for this activity and enforce its attendance within the organization.
The goal is for each employee to understand the importance of security to the com-
pany as a whole and to each individual. Expected responsibilities and acceptable be-
haviors must be clarified, and noncompliance repercussions, which could range from a
warning to dismissal, must be explained before being invoked. Security-awareness
training is performed to modify employees’ behavior and attitude toward security,
and this is best achieved through a formalized process.
Because security is a topic that can span many different aspects of an organization,
it can be difficult to communicate the correct information to the right individuals. By
using a formalized process for security-awareness training, you can establish a method
that will provide you with the best results for making sure security requirements are
presented to the right people in an organization. This way you can make sure everyone
understands what is outlined in the organization’s security program, why it is important,
and how it fits into the individual’s role in the organization. The higher levels of
training may be more general, dealing with broader concepts and goals; as training
moves down to specific jobs and tasks, it becomes more situation-specific, directly
applying to certain positions within the company.
A security-awareness program is typically created for at least three types of audi-
ences: management, staff, and technical employees. Each type of awareness training
must be geared toward the individual audience to ensure each group understands its
particular responsibilities, liabilities, and expectations. If technical security training
were given to senior management, their eyes would glaze over as soon as protocols and
firewalls were mentioned. On the flip side, if legal ramifications, company liability is-
sues pertaining to protecting data, and shareholders’ expectations were discussed with
the IT group, they would quickly turn to their smartphones and start tweeting, browsing
the Internet, or texting their friends.
Members of management would benefit the most from a short, focused security-
awareness orientation that discusses corporate assets and financial gains and losses
pertaining to security. They need to know how stock prices can be negatively affected by
compromises, understand possible threats and their outcomes, and know why security
must be integrated into the environment the same way as other business processes.
Because members of management must lead the rest of the company in support of se-
curity, they must gain the right mindset about its importance.
Mid-management would benefit from a more detailed explanation of the policies,
procedures, standards, and guidelines and how they map to the individual departments

for which they are responsible. Middle managers should be taught why their support for
their specific departments is critical and what their level of responsibility is for ensuring
that employees practice safe computing activities. They should also be shown how the
consequences of noncompliance by individuals who report to them can affect the com-
pany as a whole and how they, as managers, may have to answer for such indiscretions.
The technical departments must receive a different presentation that aligns more to
their daily tasks. They should receive a more in-depth training to discuss technical con-
figurations, incident handling, and recognizing different types of security compromises.
It is usually best to have each employee sign a document indicating they have heard
and understand all the security topics discussed, and that they also understand the
ramifications of noncompliance. This reinforces the policies’ importance to the em-
ployee and also provides evidence down the road if the employee claims they were
never told of these expectations. Awareness training should happen during the hiring
process and at least annually after that. Attendance of training should also be integrated
into employment performance reports.
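Since awareness training must recur at least annually, tracking who is overdue is a simple date comparison. A hypothetical sketch, with invented names and dates:

```python
from datetime import date, timedelta

# Awareness-training compliance sketch: flag anyone whose last
# completed session is more than a year old.

last_trained = {
    "erin":  date(2024, 1, 10),
    "frank": date(2022, 11, 2),   # overdue for the annual refresher
}

def overdue_for_training(records, today, max_age=timedelta(days=365)):
    return sorted(user for user, when in records.items() if today - when > max_age)

assert overdue_for_training(last_trained, date(2024, 6, 1)) == ["frank"]
```

Output like this can feed directly into the performance-report integration the text recommends.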
Various methods should be employed to reinforce the concepts of security aware-
ness. Things like banners, employee handbooks, and even posters can be used as ways
to remind employees about their duties and the necessities of good security practices.
Degree or Certification?
Some roles within the organization need hands-on experience and skill, meaning that
the hiring manager should be looking for specific industry certifications. Some posi-
tions require more of a holistic and foundational understanding of concepts or a busi-
ness background, and in those cases a degree may be required. Table 2-12 provides
more information on the differences between awareness, training, and education.
Awareness | Training | Education
• Attribute: “What” | “How” | “Why”
• Level: Information | Knowledge | Insight
• Learning objective: Recognition and retention | Skill | Understanding
• Example teaching method: Media (videos, newsletters, posters) | Practical instruction (lecture and/or demo, case study, hands-on practice) | Theoretical instruction (seminar and discussion, reading and study, research)
• Test measure: True/false, multiple choice (identify learning) | Problem solving, i.e., recognition and resolution (apply learning) | Essay (interpret learning)
• Impact timeframe: Short-term | Intermediate | Long-term
Table 2-12 Aspects of Awareness, Training, and Education

Security Governance
Are we doing all this stuff right?
An organization may be following many of the items laid out in this chapter: building
a security program, integrating it into their business architecture, developing a risk man-
agement program, documenting the different aspects of the security program, performing
data protection, and training their staff. But how do we know we are doing it all correctly
and on an ongoing basis? This is where security governance comes into play. Security
governance is a framework through which the security goals of an organization are set
and expressed by senior management, communicated throughout the different levels of
the organization, and supported by granting power to the entities needed to implement
and enforce security and by verifying the performance of these security activities. Not
only does senior management need to set the direction of security, it needs a way to be
able to view and understand how their directives are being met or not being met.
If a board of directors and CEO demand that security be integrated properly at all
levels of the organization, how do they know it is really happening? Oversight mecha-
nisms must be developed and integrated so that the people who are ultimately respon-
sible for an organization are constantly and consistently updated on the overall health
and security posture of the organization. This happens through properly defined com-
munication channels, standardized reporting methods, and performance-based metrics.
Let’s compare two companies. Company A has an effective security governance pro-
gram in place and Company B does not. Now, to the untrained eye it would seem as
though Companies A and B are equal in their security practices because they both have
security policies, procedures, standards, the same security technology controls (fire-
walls, IDSs, identity management, and so on), security roles are defined, and security
awareness is in place. You may think, “Man, these two companies are on the ball and
quite evolved in their security programs.” But if you look closer, you will see some
critical differences (listed in Table 2-13).
Does the organization you work for look like Company A or Company B? Most orga-
nizations today have many of the pieces and parts to a security program (policies, stan-
dards, firewalls, security team, IDS, and so on), but management may not be truly involved,
and security has not permeated throughout the organization. Some organizations rely just
on technology and isolate all security responsibilities within the IT group. If security were
just a technology issue, then this security team could properly install, configure, and main-
tain the products, and the company would get a gold star and pass the audit with flying
colors. But that is not how the world of information security works today. It is much more
than just technological solutions. Security must be utilized throughout the organization,
and having several points of responsibility and accountability is critical. Security gover-
nance is a coherent system of integrated processes that helps to ensure consistent over-
sight, accountability, and compliance. It is a structure that we should put in place to make
sure that our efforts are streamlined and effective and that nothing is being missed.

Company A | Company B
• Board members understand that information security is critical to the company and demand to be updated quarterly on security performance and breaches. | Board members do not understand that information security is in their realm of responsibility and focus solely on corporate governance and profits.
• CEO, CFO, CIO, and business unit managers participate in a risk management committee that meets each month, and information security is always one topic on the agenda to review. | CEO, CFO, and business unit managers feel as though information security is the responsibility of the CIO, CISO, and IT department and do not get involved.
• Executive management sets an acceptable risk level that is the basis for the company’s security policies and all security activities. | The CISO took some boilerplate security policies, inserted his company’s name, and had the CEO sign them.
• Executive management holds business unit managers responsible for carrying out risk management activities for their specific business units. | All security activity takes place within the security department; thus, security works within a silo and is not integrated throughout the organization.
• Critical business processes are documented along with the risks that are inherent at the different steps within the business processes. | Business processes are not documented and not analyzed for potential risks that can affect operations, productivity, and profitability.
• Employees are held accountable for any security breaches they participate in, either maliciously or accidentally. | Policies and standards are developed, but no enforcement or accountability practices have been envisioned or deployed.
• Security products, managed services, and consultants are purchased and deployed in an informed manner. They are also constantly reviewed to ensure they are cost-effective. | Security products, managed services, and consultants are purchased and deployed without any real research or performance metrics to determine the return on investment or effectiveness.
• The organization is continuing to review its processes, including security, with the goal of continued improvement. | The organization does not analyze its performance for improvement, but continually marches forward and makes similar mistakes over and over again.
Table 2-13 Comparison of Company A and Company B

Metrics
We really can’t just build a security program, call it good, and go home. We need a way
to assess the effectiveness of our work, identify deficiencies, and prioritize the things
that still need work. We need a way to facilitate decision making, performance
improvement, and accountability through collection, analysis, and reporting of the
necessary information. As the saying goes, “You can’t manage something you can’t
measure.” In security there are many items that need to be measured so that performance
is properly understood. We need to know how effective and efficient our security
controls are, not only to make sure that assets are properly protected, but also to ensure
that we are being financially responsible in our budgetary efforts.
There are different methodologies that can be followed when it comes to developing
security metrics, but no matter what model is followed, some things are critical across
the board. Strong management support is necessary, because while it might seem that
developing ways of counting things is not overly complex, the actual implementation
and use of a metric and measuring system can be quite an undertaking. The metrics
have to be developed, adopted, integrated into many different existing and new
processes, interpreted, and used in decision-making efforts. Management needs to be
on board if this effort is going to be successful.
Another requirement is that there must be established policies, procedures, and
standards to measure against. How can you measure policy compliance when there are
no policies in place? A full security program needs to be developed and matured before
attempting to measure its pieces and parts.
Measurement activities need to provide quantifiable, performance-based data that is
repeatable and reliable and that produces meaningful results. Measurement will need
to happen on a continuous basis, so the data collection methods must be repeatable.
The same type of data must be continuously gathered and compared so that improvement
or decline can be identified. The data collection may come from
parsing system logs, incident response reports, audit findings, surveys, or risk assess-
ments. The measurement results must also be meaningful for the intended audience.
An executive will want data portrayed in a method that allows him to understand the
health of the security program quickly and in terms he is used to. This can be a heat
map, graph, pie chart, or scorecard. A balanced scorecard, shown in Figure 2-15, is a
traditional strategic tool used for performance measurement in the business world. The
goal is to present the most relevant information quickly and easily. Measurements are
compared with set target values, so if performance deviates from expectations, the
deviations can be conveyed in a simple and straightforward manner.
Figure 2-15 Balanced scorecard. Four perspectives (financial, customer, internal business, and learning and growth), each with its own objectives, measures, targets, and initiatives, are arranged around the organization’s vision and strategy. Each perspective asks a guiding question, such as “To succeed financially, how should we appear to our shareholders?”, “To achieve our vision, how should we appear to our customers?”, “To satisfy our shareholders and customers, what business processes must we excel at?”, and “To achieve our vision, how will we sustain our ability to change and improve?”

If the audience for the measurement values are not executives, but instead security
administrators, then the results are presented in a manner that is easiest for them to
understand and use.
CAUTION This author has seen many scorecards, pie charts, graphics,
and dashboard results that do not map to what is really going on in the
environment. Unless real data is gathered and the correct data is gathered, the
resulting pie chart can illustrate a totally different story than what is really
taking place. Some people spend more time making the colors in the graph
look eye-pleasing instead of perfecting the raw data-gathering techniques. This
can lead to a false sense of security and, ultimately, to breaches.
There are industry best practices that can be used to guide the development of a
security metric and measurement system. The international standard is ISO/IEC
27004:2009, which is used to assess the effectiveness of an ISMS and the controls that
make up the security program as outlined in ISO/IEC 27001. So ISO/IEC 27001 tells
you how to build a security program and then ISO/IEC 27004 tells you how to measure
it. The NIST 800-55 publication also covers performance measuring for information
security, but has a U.S. government slant. The ISO standard and NIST approaches to
metric development are similar, but have some differences. The ISO standard breaks
individual metrics down into base measures, derived measures, and then indicator val-
ues. The NIST approach is illustrated in Figure 2-16, which breaks metrics down into
implementation, effectiveness/efficiency, and impact values.
Figure 2-16 Security measurement processes. The diagram shows stakeholders and their interests feeding goals and objectives, which drive measures development and selection at three levels: process implementation (implementation level of established security standards, policies, and procedures), effectiveness/efficiency (timeliness of security services delivered and operational results experienced by security program implementation), and business/mission impact (business value gained or lost and acceptable loss estimate). Program results feed back into policy updates and goal/objective redefinition.

If your organization has the goal of becoming certified against ISO/IEC 27001, then
you should follow ISO/IEC 27004:2009. If your organization is governmental or a govern-
ment contracting company, then following the NIST standard would make more sense.
What is important is consistency. For metrics to be used in a successful manner, they
have to be standardized and have a direct relationship to each other. For example, if an
organization used a rating system of 1–10 to measure incident response processes and
a rating system of High, Medium, and Low to measure malware infection protection
mechanisms, these metrics could not be integrated easily. An organization needs to
establish the metric value types it will use and implement them in a standardized meth-
od across the enterprise. Measurement processes need to be thought through at a de-
tailed level before attempting implementation. Table 2-14 illustrates a metric template
that can be used to track incident response performance levels.
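The formula in Table 2-14 (incidents reported on time divided by total incidents reported, times 100, computed per category and compared to the 85% target) can be implemented directly. The category counts below are invented for illustration:

```python
# Sketch of the Table 2-14 effectiveness measure:
# (incidents reported on time / total reported incidents) * 100 per category.

TARGET = 85.0

def on_time_percentage(on_time: int, total: int) -> float:
    return (on_time / total) * 100 if total else 0.0

# Hypothetical counts: (reported on time, total reported)
reported = {"Unauthorized Access": (9, 10), "Malicious Code": (4, 8)}

results = {cat: on_time_percentage(t, n) >= TARGET for cat, (t, n) in reported.items()}
assert results == {"Unauthorized Access": True, "Malicious Code": False}
```

Collected monthly and reported annually, as the template prescribes, values like these make the trend toward or away from the target visible in a standardized way.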
The types of metrics that are developed need to map to the maturity level of the
security program. In the beginning, simple items are measured (e.g., number of
completed policies), and as the program matures the metrics mature and can increase
in complexity (e.g., number of vulnerabilities mitigated).
The use of metrics allows an organization to truly understand the health of its
security program because each activity and initiative can be measured in a quantifiable
manner. The metrics are used in governing activities because this allows for the best
strategic decisions to be made. The use of metrics also allows the organization to imple-
ment and follow the capability maturity model described earlier. A maturity model is
used to carry out incremental improvements, and the metric results indicate what needs
to be improved and to what levels. Metrics can also be used in process improvement
models, such as Six Sigma, and in measuring service level targets in ITIL. We not
only need to know what to do (implement controls, build a security program); we need
to know how well we did it and how to continuously improve.
Field | Data
• Measure ID: Incident Response Measure 1
• Goal: Strategic goal: make accurate, timely information on the organization’s programs and services readily available. Information security goal: track, document, and report incidents to appropriate organizational officials and/or authorities.
• Measure: Percentage of incidents reported within the required timeframe per applicable incident category.
• Measure Type: Effectiveness
• Formula: For each incident category, (number of incidents reported on time / total number of reported incidents) * 100
• Target: 85%
Table 2-14 Incident Response Measurement Template

Field | Data (continued)
• Implementation Evidence:
  How many incidents were reported during the period of 12 months?
    Category 1. Unauthorized Access? ______
    Category 2. Denial of Service? ______
    Category 3. Malicious Code? ______
    Category 4. Improper Usage? ______
    Category 5. Access Attempted? ______
  How many incidents involved PII?
  Of the incidents reported, how many were reported within the prescribed timeframe for their category?
    Category 1. Unauthorized Access? ______
    Category 2. Denial of Service? ______
    Category 3. Malicious Code? ______
    Category 4. Improper Usage? ______
    Category 5. Access Attempted? ______
  Of the PII incidents reported, how many were reported within the prescribed timeframe for their category?
• Frequency: Collection frequency: monthly. Reporting frequency: annually.
• Responsible Parties: CIO, CISO
• Data Source: Incident logs, incident tracking database
• Reporting Format: Line chart that illustrates individual categories
Table 2-14 Incident Response Measurement Template (continued)

Summary
A security program should address issues from a strategic, tactical, and operational
view, as shown in Figure 2-17. The security program should be integrated at every level
of the enterprise’s architecture. Security management embodies the administrative and
procedural activities necessary to support and protect information and company assets
throughout the enterprise. It includes development and enforcement of security policies
and their supporting mechanisms: procedures, standards, baselines, and guidelines. It
encompasses enterprise security development, risk management, proper countermeasure
selection and implementation, governance, and performance measurement.
Security is a business issue and should be treated as such. It must be properly
integrated into the company’s overall business goals and objectives because security
issues can negatively affect the resources the company depends upon. More and more
corporations are finding out the price paid when security is not given the proper
attention, support, and funds. This is a wonderful world to live in, but bad things can
happen. The ones who realize this notion not only survive, but also thrive.

Quick Tips
• The objectives of security are to provide availability, integrity, and
confidentiality protection to data and resources.
• A vulnerability is the absence of or weakness in a control.
• A threat is the possibility that someone or something would exploit a
vulnerability, intentionally or accidentally, and cause harm to an asset.
• A risk is the probability of a threat agent exploiting a vulnerability and the loss
potential from that action.
Figure 2-17 A complete security program contains many items.

• A countermeasure, also called a safeguard or control, mitigates the risk.
• A control can be administrative, technical, or physical and can provide
deterrent, preventive, detective, corrective, or recovery protection.
• A compensating control is an alternate control that is put into place because
of financial or business functionality reasons.
• CobiT is a framework of control objectives and allows for IT governance.
• ISO/IEC 27001 is the standard for the establishment, implementation,
control, and improvement of the information security management system.
• The ISO/IEC 27000 series was derived from BS 7799 and comprises
international best practices on how to develop and maintain a security program.
• Enterprise architecture frameworks are used to develop architectures for
specific stakeholders and present information in views.
• An information security management system (ISMS) is a coherent set of
policies, processes, and systems to manage risks to information assets as
outlined in ISO/IEC 27001.
• Enterprise security architecture is a subset of business architecture and a way
to describe current and future security processes, systems, and subunits to
ensure strategic alignment.
• Blueprints are functional definitions for the integration of technology into
business processes.
• Enterprise architecture frameworks are used to build individual architectures
that best map to individual organizational needs and business drivers.
• Zachman is an enterprise architecture framework, and SABSA is a security
enterprise architecture framework.
• COSO is a governance model used to help prevent fraud within a corporate
environment.
• ITIL is a set of best practices for IT service management.
• Six Sigma is used to identify defects in processes so that the processes can be
improved upon.
• CMMI is a maturity model that allows processes to improve in an
incremental and standardized manner.
• Security enterprise architecture should tie in strategic alignment, business
enablement, process enhancement, and security effectiveness.
• NIST 800-53 uses the following control categories: technical, management,
and operational.
• OCTAVE is a team-oriented risk management methodology that employs
workshops and is commonly used in the commercial sector.
• Security management should work from the top down (from senior
management down to the staff).

• Risk can be transferred, avoided, reduced, or accepted.
• Threats × vulnerability × asset value = total risk.
• (Threats × vulnerability × asset value) × controls gap = residual risk.
• The main goals of risk analysis are the following: identify assets and assign
values to them, identify vulnerabilities and threats, quantify the impact of
potential threats, and provide an economic balance between the impact of the
risk and the cost of the safeguards.
• Failure Modes and Effect Analysis (FMEA) is a method for determining
functions, identifying functional failures, and assessing the causes of failure
and their failure effects through a structured process.
• A fault tree analysis is a useful approach to detect failures that can take place
within complex environments and systems.
• A quantitative risk analysis attempts to assign monetary values to components
within the analysis.
• A purely quantitative risk analysis is not possible because qualitative items
cannot be quantified with precision.
• Capturing the degree of uncertainty when carrying out a risk analysis
is important, because it indicates the level of confidence the team and
management should have in the resulting figures.
• Automated risk analysis tools reduce the amount of manual work involved in
the analysis. They can be used to estimate future expected losses and calculate
the benefits of different security measures.
• Single loss expectancy × annualized rate of occurrence = annualized loss
expectancy (SLE × ARO = ALE).
• Qualitative risk analysis uses judgment and intuition instead of numbers.
• Qualitative risk analysis involves people with the requisite experience and
education evaluating threat scenarios and rating the probability, potential
loss, and severity of each threat based on their personal experience.
• The Delphi technique is a group decision method where each group member
can communicate anonymously.
• When choosing the right safeguard to reduce a specific risk, the cost,
functionality, and effectiveness must be evaluated and a cost/benefit analysis
performed.
• A security policy is a statement by management dictating the role security
plays in the organization.
• Procedures are detailed step-by-step actions that should be followed to achieve
a certain task.

• Standards are documents that outline rules that are compulsory in nature and
support the organization’s security policies.
• A baseline is a minimum level of security.
• Guidelines are recommendations and general approaches that provide advice
and flexibility.
• Job rotation is a detective administrative control that can uncover fraud.
• Mandatory vacations are a detective administrative control type that can help
detect fraudulent activities.
• Separation of duties ensures no single person has total control over a critical
activity or task. It is a preventive administrative control.
• Split knowledge and dual control are two aspects of separation of duties.
• Data owners specify the classification of data, and data custodians implement
and maintain controls to enforce the set classification levels.
• Security has functional requirements, which define the expected behavior
from a product or system, and assurance requirements, which establish
confidence in the implemented products or systems overall.
• Management must define the scope and purpose of security management,
provide support, appoint a security team, delegate responsibility, and review
the team’s findings.
• The risk management team should include individuals from different
departments within the organization, not just technical personnel.
• Social engineering is a nontechnical attack carried out to manipulate a person
into providing sensitive data to an unauthorized individual.
• Personally identifiable information (PII) is a collection of identity-based data
that can be used in identity theft and financial fraud, and thus must be highly
protected.
• Security governance is a framework that provides oversight, accountability,
and compliance.
• ISO/IEC 27004:2009 is an international standard for information security
measurement management.
• NIST 800-55 is a standard for performance measurement for information
security.
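As a sanity check on the quantitative formulas above, here is a minimal Python sketch of the SLE, ALE, and value-of-safeguard calculations. The asset value, exposure factor, and ARO used below are illustrative assumptions, not figures from the text.

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = asset value x exposure factor (EF)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE x annualized rate of occurrence (ARO)."""
    return sle * aro

def safeguard_value(ale_before, ale_after, annual_cost):
    """Value of a control = (ALE before) - (ALE after) - (annual cost of control)."""
    return ale_before - ale_after - annual_cost

# Illustrative figures: a $200,000 asset, a 25% exposure factor, and one
# incident expected every four years (ARO = 0.25).
sle = single_loss_expectancy(200_000, 0.25)   # 50,000.0
ale = annualized_loss_expectancy(sle, 0.25)   # 12,500.0
print(sle, ale)
```

A positive `safeguard_value` suggests the control is worth its cost; a negative value means the control costs more per year than the loss it prevents.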
Questions
Please remember that these questions are formatted and asked in a certain way for a
reason. The CISSP exam asks questions at a conceptual level. Questions may not
always have a perfect answer; rather than hunting for one, the candidate should look
for the best answer in the list.

1. Who has the primary responsibility of determining the classification level for
information?
A. The functional manager
B. Senior management
C. The owner
D. The user
2. If different user groups with different security access levels need to access the
same information, which of the following actions should management take?
A. Decrease the security level on the information to ensure accessibility and
usability of the information.
B. Require specific written approval each time an individual needs to access
the information.
C. Increase the security controls on the information.
D. Decrease the classification label on the information.
3. What should management consider the most when classifying data?
A. The type of employees, contractors, and customers who will be accessing
the data
B. Availability, integrity, and confidentiality
C. Assessing the risk level and disabling countermeasures
D. The access controls that will be protecting the data
4. Who is ultimately responsible for making sure data is classified and protected?
A. Data owners
B. Users
C. Administrators
D. Management
5. Which factor is the most important item when it comes to ensuring security is
successful in an organization?
A. Senior management support
B. Effective controls and implementation methods
C. Updated and relevant security policies and procedures
D. Security awareness by all employees
6. When is it acceptable to not take action on an identified risk?
A. Never. Good security addresses and reduces all risks.
B. When political issues prevent this type of risk from being addressed.

C. When the necessary countermeasure is complex.
D. When the cost of the countermeasure outweighs the value of the asset and
potential loss.
7. Which is the most valuable technique when determining if a specific security
control should be implemented?
A. Risk analysis
B. Cost/benefit analysis
C. ALE results
D. Identifying the vulnerabilities and threats causing the risk
8. Which best describes the purpose of the ALE calculation?
A. Quantifies the security level of the environment
B. Estimates the loss possible for a countermeasure
C. Quantifies the cost/benefit result
D. Estimates the loss potential of a threat in a span of a year
9. The security functionality defines the expected activities of a security
mechanism, and assurance defines which of the following?
A. The controls the security mechanism will enforce
B. The data classification after the security mechanism has been implemented
C. The confidence of the security the mechanism is providing
D. The cost/benefit relationship
10. How do you calculate residual risk?
A. Threats × risks × asset value
B. (Threats × asset value × vulnerability) × risks
C. SLE × frequency = ALE
D. (Threats × vulnerability × asset value) × controls gap
11. Why should the team that will perform and review the risk analysis
information be made up of people in different departments?
A. To make sure the process is fair and that no one is left out.
B. It shouldn’t. It should be a small group brought in from outside the
organization because otherwise the analysis is biased and unusable.
C. Because people in different departments understand the risks of their
department. Thus, it ensures the data going into the analysis is as close to
reality as possible.
D. Because the people in the different departments are the ones causing the
risks, so they should be the ones held accountable.

12. Which best describes a quantitative risk analysis?
A. A scenario-based analysis to research different security threats
B. A method used to apply severity levels to potential loss, probability of loss,
and risks
C. A method that assigns monetary values to components in the risk
assessment
D. A method that is based on gut feelings and opinions
13. Why is a truly quantitative risk analysis not possible to achieve?
A. It is possible, which is why it is used.
B. It assigns severity levels. Thus, it is hard to translate into monetary values.
C. It is dealing with purely quantitative elements.
D. Quantitative measures must be applied to qualitative elements.
14. What is CobiT and where does it fit into the development of information
security systems and security programs?
A. Lists of standards, procedures, and policies for security program development
B. Current version of ISO 17799
C. A framework that was developed to deter organizational internal fraud
D. Open standards for control objectives
15. What are the four domains that make up CobiT?
A. Plan and Organize, Acquire and Implement, Deliver and Support, and
Monitor and Evaluate
B. Plan and Organize, Maintain and Implement, Deliver and Support, and
Monitor and Evaluate
C. Plan and Organize, Acquire and Implement, Support and Purchase, and
Monitor and Evaluate
D. Acquire and Implement, Deliver and Support, and Monitor and Evaluate
16. What is the ISO/IEC 27799 standard?
A. A standard on how to protect personal health information
B. The new version of BS 17799
C. Definitions for the new ISO 27000 series
D. The new version of NIST 800-60
17. CobiT was developed from the COSO framework. What are COSO’s main
objectives and purpose?
A. COSO is a risk management approach that pertains to control objectives
and IT business processes.
B. Prevention of a corporate environment that allows for and promotes
financial fraud.

C. COSO addresses corporate culture and policy development.
D. COSO is a risk management system used for the protection of federal
systems.
18. OCTAVE, NIST 800-30, and AS/NZS 4360 are different approaches to carrying
out risk management within companies and organizations. What are the
differences between these methods?
A. NIST 800-30 and OCTAVE are corporate based, while AS/NZS is
international.
B. NIST 800-30 is IT based, while OCTAVE and AS/NZS 4360 are corporate
based.
C. AS/NZS is IT based, and OCTAVE and NIST 800-30 are assurance based.
D. NIST 800-30 and AS/NZS are corporate based, while OCTAVE is
international.
Use the following scenario to answer Questions 19–21. A server that houses sensitive data
has been stored in an unlocked room for the last few years at Company A. The door to
the room has a sign on the door that reads “Room 1.” This sign was placed on the door
with the hope that people would not look for important servers in this room. Realizing
this is not optimum security, the company has decided to install a reinforced lock and
server cage for the server and remove the sign. They have also hardened the server’s
configuration and employed strict operating system access controls.
19. The fact that the server has been in an unlocked room marked “Room 1” for
the last few years means the company was practicing which of the following?
A. Logical security
B. Risk management
C. Risk transference
D. Security through obscurity
20. The new reinforced lock and cage serve as which of the following?
A. Logical controls
B. Physical controls
C. Administrative controls
D. Compensating controls
21. The operating system access controls comprise which of the following?
A. Logical controls
B. Physical controls
C. Administrative controls
D. Compensating controls

Use the following scenario to answer Questions 22–24. A company has an e-commerce
website that generates 60 percent of its annual revenue. Under the current
circumstances, the annualized loss expectancy for the website against the threat of attack is
$92,000. After implementing a new application-layer firewall, the new annualized loss
expectancy would be $30,000. The firewall costs $65,000 per year to implement and
maintain.
22. How much does the firewall save the company in loss expenses?
A. $62,000
B. $3,000
C. $65,000
D. $30,000
23. What is the value of the firewall to the company?
A. $62,000
B. $3,000
C. –$62,000
D. –$3,000
24. Which of the following describes the company’s approach to risk
management?
A. Risk transference
B. Risk avoidance
C. Risk acceptance
D. Risk mitigation
Use the following scenario to answer Questions 25–27. A small remote office for a company
is valued at $800,000. It is estimated, based on historical data, that a fire is likely to
occur once every ten years at a facility in this area. It is estimated that such a fire would
destroy 60 percent of the facility under the current circumstances and with the current
detective and preventative controls in place.
25. What is the Single Loss Expectancy (SLE) for the facility suffering from a fire?
A. $80,000
B. $480,000
C. $320,000
D. 60%
26. What is the Annualized Rate of Occurrence (ARO)?
A. 1
B. 10
C. .1
D. .01

27. What is the Annualized Loss Expectancy (ALE)?
A. $480,000
B. $32,000
C. $48,000
D. .6
28. The international standards bodies ISO and IEC developed a series of standards
that are used in organizations around the world to implement and maintain
information security management systems. The standards were derived from
the British Standard 7799, which was broken down into two main pieces.
Organizations can use this series of standards as guidelines, but can also be
certified against them by accredited third parties. Which of the following are
incorrect mappings pertaining to the individual standards that make up the
ISO/IEC 27000 series?
i. ISO/IEC 27001 outlines ISMS implementation guidelines, and ISO/IEC
27003 outlines the ISMS program’s requirements.
ii. ISO/IEC 27005 outlines the audit and certification guidance, and ISO/IEC
27002 outlines the metrics framework.
iii. ISO/IEC 27006 outlines the program implementation guidelines, and
ISO/IEC 27005 outlines risk management guidelines.
iv. ISO/IEC 27001 outlines the code of practice, and ISO/IEC 27004 outlines
the implementation framework.
A. i, iii
B. i, ii
C. ii, iii, iv
D. i, ii, iii, iv
29. The information security industry is made up of various best practices,
standards, models, and frameworks. Some were not developed first with security
in mind, but can be integrated into an organizational security program to help
in its effectiveness and efficiency. It is important to know all of these different
approaches so that an organization can choose the ones that best fit its business
needs and culture. Which of the following best describes the approach(es) that
should be put into place if an organization wants to integrate a way to improve
its security processes over a period of time?
i. Information Technology Infrastructure Library should be integrated
because it allows for the mapping of IT service process management,
business drivers, and security improvement.
ii. Six Sigma should be integrated because it allows for the defects of security
processes to be identified and improved upon.
iii. Capability Maturity Model should be integrated because it provides
distinct maturity levels.

iv. The Open Group Architecture Framework should be integrated because it
provides a structure for process improvement.
A. i, iii
B. ii, iii, iv
C. ii, iii
D. ii, iv
Use the following scenario to answer Questions 30–32. Todd is a new security manager and
has the responsibility of implementing personnel security controls within the financial
institution where he works. Todd knows that many employees do not fully understand
how their actions can put the institution at risk; thus, an awareness program needs to be
developed. He has determined that the bank tellers need to get a supervisory override
when customers have checks over $3,500 that need to be cashed. He has also uncovered
that some employees have stayed in their specific positions within the company for over
three years. Todd would like to be able to investigate some of the bank's personnel
activities to see if any fraudulent activities have taken place. Todd is already ensuring that
two people must use separate keys at the same time to open the bank vault.
30. Todd documents several fraud opportunities that the employees have at the
financial institution so that management understands these risks and allocates
the funds and resources for his suggested solutions. Which of the following
best describes the control Todd should put into place to be able to carry out
fraudulent investigation activity?
A. Separation of duties
B. Rotation of duties
C. Mandatory vacations
D. Split knowledge
31. If the financial institution wants to force collusion to take place for fraud to
happen successfully in this situation, what should Todd put into place?
A. Separation of duties
B. Rotation of duties
C. Social engineering
D. Split knowledge
32. Todd wants to be able to prevent fraud from taking place, but he knows that
some people may get around the types of controls he puts into place. In
those situations he wants to be able to identify when an employee is doing
something suspicious. Which of the following incorrectly describes what Todd
is implementing in this scenario and what those specific controls provide?
A. Separation of duties by ensuring that a supervisor must approve the
cashing of a check over $3,500. This is an administrative control that
provides preventative protection for Todd’s organization.

B. Rotation of duties by ensuring that one employee stays in a position for
only up to three months at a time. This is an administrative control that
provides detective capabilities.
C. Security awareness training, which is a preventive administrative control
that can also emphasize enforcement.
D. Dual control, which is an administrative detective control that can ensure
that two employees must carry out a task simultaneously.
Use the following scenario to answer Questions 33–35. Sam has just been hired as the new
security officer for a pharmaceutical company. The company has experienced many
data breaches and has charged Sam with ensuring that the company is better protected.
The company currently has the following classifications in place: public, confidential,
and secret. There is a data classification policy that outlines the classification scheme
and the definitions for each classification, but there is no supporting documentation
that the technical staff can follow to know how to meet these goals. The company has
no data loss prevention controls in place and only conducts basic security awareness
training once a year. Talking to the business unit managers, he finds out that only half
of them even know where the company’s policies are located and none of them know
their responsibilities pertaining to classifying data.
33. Which of the following best describes what Sam should address first in this
situation?
A. Integrate data protection roles and responsibilities within the security
awareness training and require everyone to attend it within the
next 15 days.
B. Review the current classification policies to ensure that they properly
address the company’s risks.
C. Meet with senior management and get permission to enforce data owner
tasks for each business unit manager.
D. Audit all of the current data protection controls in place to get a firm
understanding of what vulnerabilities reside in the environment.
34. Sam needs to get senior management to assign the responsibility of protecting
specific data sets to the individual business unit managers, thus making them
data owners. Which of the following would be the most important criterion
for the managers to follow when actually classifying data once this
responsibility has been assigned to them?
A. Usefulness of the data
B. Age of the data
C. Value of the data
D. Compliance requirements of the data

35. From this scenario, what has the company accomplished so far?
A. Implementation of administrative controls
B. Implementation of operational controls
C. Implementation of physical controls
D. Implementation of logical controls
Use the following scenario to answer Questions 36–38. Susan has been told by her boss that
she will be replacing the current security manager within her company. Her boss
explained to her that operational security measures have not been carried out in a
standard fashion, so some systems have proper security configurations and some do
not. Her boss needs to understand how dangerous it is to have some of the systems
misconfigured, along with what to do in this situation.
36. Which of the following best describes what Susan needs to ensure the
operations staff creates for proper configuration standardization?
A. Dual control
B. Redundancy
C. Training
D. Baselines
37. Which of the following is the best way to illustrate to her boss the dangers of
the current configuration issues?
A. Map the configurations to the compliance requirements.
B. Compromise a system to illustrate its vulnerability.
C. Audit the systems.
D. Carry out a risk assessment.
38. Which of the following is one of the most likely solutions that Susan will
come up with and present to her boss?
A. Development of standards
B. Development of training
C. Development of monitoring
D. Development of testing
Answers
1. C. A company can have one specific data owner or different data owners who
have been delegated the responsibility of protecting specific sets of data. One
of the responsibilities that goes into protecting this information is properly
classifying it.
2. C. If data is going to be available to a wide range of people, more granular
security should be implemented to ensure that only the necessary people access

the data and that the operations they carry out are controlled. The security
implemented can come in the form of authentication and authorization
technologies, encryption, and specific access control mechanisms.
3. B. The best answer to this question is B, because to properly classify data,
the data owner must evaluate the availability, integrity, and confidentiality
requirements of the data. Once this evaluation is done, it will dictate which
employees, contractors, and users can access the data, which is expressed in
answer A. This assessment will also help determine the controls that should
be put into place.
4. D. The key to this question is the use of the word “ultimately.” Though
management can delegate tasks to others, it is ultimately responsible for
everything that takes place within a company. Therefore, it must continually
ensure that data and resources are being properly protected.
5. A. Without senior management’s support, a security program will not receive
the necessary attention, funds, resources, and enforcement capabilities.
6. D. Companies may decide to live with specific risks they are faced with if the
cost of trying to protect themselves would be greater than the potential loss
if the threat were to become real. Countermeasures are usually complex to a
degree, and there are almost always political issues surrounding different risks,
but these are not reasons to not implement a countermeasure.
7. B. Although the other answers may seem correct, B is the best answer here.
This is because a risk analysis is performed to identify risks and come up with
suggested countermeasures. The ALE tells the company how much it could
lose if a specific threat became real. The ALE value will go into the cost/benefit
analysis, but the ALE does not address the cost of the countermeasure and the
benefit of a countermeasure. All the data captured in answers A, C, and D are
inserted into a cost/benefit analysis.
8. D. The ALE calculation estimates the potential loss that can affect one asset
from a specific threat within a one-year time span. This value is used to figure
out the amount of money that should be earmarked to protect this asset from
this threat.
9. C. The functionality describes how a mechanism will work and behave. This
may have nothing to do with the actual protection it provides. Assurance
is the level of confidence in the protection level a mechanism will provide.
When systems and mechanisms are evaluated, their functionality and
assurance should be examined and tested individually.
10. D. The equation is more conceptual than practical. It is hard to assign a
number to an individual vulnerability or threat. This equation enables you to
look at the potential loss of a specific asset, as well as the controls gap (what
the specific countermeasure cannot protect against). What remains is the
residual risk, which is what is left over after a countermeasure is implemented.

11. C. An analysis is only as good as the data that go into it. Data pertaining to
risks the company faces should be extracted from the people who understand
best the business functions and environment of the company. Each department
understands its own threats and resources, and may have possible solutions to
specific threats that affect its part of the company.
12. C. A quantitative risk analysis assigns monetary values and percentages to
the different components within the assessment. A qualitative analysis uses
opinions of individuals and a rating system to gauge the severity level of
different threats and the benefits of specific countermeasures.
13. D. During a risk analysis, the team is trying to properly predict the future and
all the risks that future may bring. It is somewhat of a subjective exercise and
requires educated guessing. It is very hard to properly predict that a flood will
take place once in ten years and cost a company up to $40,000 in damages,
but this is what a quantitative analysis tries to accomplish.
14. D. The Control Objectives for Information and related Technology (CobiT)
is a framework developed by the Information Systems Audit and Control
Association (ISACA) and the IT Governance Institute (ITGI). It defines goals
for the controls that should be used to properly manage IT and ensure IT
maps to business needs.
15. A. CobiT has four domains: Plan and Organize, Acquire and Implement,
Deliver and Support, and Monitor and Evaluate. Each category drills down
into subcategories. For example, Acquire and Implement contains the
following subcategories:
• Acquire and Maintain Application Software
• Acquire and Maintain Technology Infrastructure
• Develop and Maintain Procedures
• Install and Accredit Systems
• Manage Changes
16. A. ISO/IEC 27799 is referred to as the health informatics standard. Its
purpose is to provide guidance to health organizations and other holders
of personal health information on how to protect such information via
implementation of ISO/IEC 27002.
17. B. COSO deals more at the strategic level, while CobiT focuses more at the
operational level. CobiT is a way to meet many of the COSO objectives,
but only from the IT perspective. COSO deals with non-IT items also, as
in company culture, financial accounting principles, board of director
responsibility, and internal communication structures. Its main purpose
is to help ensure fraudulent financial reporting cannot take place in an
organization.

18. B. NIST 800-30 Risk Management Guide for Information Technology
Systems is a U.S. federal standard that is focused on IT risks. OCTAVE is a
methodology to set up a risk management program within an organizational
structure. AS/NZS 4360 takes a much broader approach to risk management.
This methodology can be used to understand a company’s financial, capital,
human safety, and business decisions risks. Although it can be used to analyze
security risks, it was not created specifically for this purpose.
19. D. Security through obscurity is not implementing true security controls,
but rather attempting to hide the fact that an asset is vulnerable in the hope
that an attacker will not notice. Security through obscurity is an approach to
try and fool a potential attacker, which is a poor way of practicing security.
Vulnerabilities should be identified and fixed, not hidden.
20. B. Physical controls are security mechanisms in the physical world, as in locks,
fences, doors, computer cages, etc. There are three main control types, which
are administrative, technical, and physical.
21. A. Logical (or technical) controls are security mechanisms, as in firewalls,
encryption, software permissions, and authentication devices. They are
commonly used in tandem with physical and administrative controls to
provide a defense-in-depth approach to security.
22. A. $62,000 is the correct answer. The firewall reduced the annualized loss
expectancy (ALE) from $92,000 to $30,000 for a savings of $62,000. The
formula for ALE is single loss expectancy × annualized rate of occurrence
= ALE. Subtracting the ALE value after the firewall is implemented from the
value before it was implemented results in the potential loss savings this type
of control provides.
23. D. –$3,000 is the correct answer. The firewall saves $62,000, but costs
$65,000 per year. 62,000 – 65,000 = –3,000. The firewall actually costs the
company more than the original expected loss, and thus the value to the
company is a negative number. The formula for this calculation is (ALE before
the control is implemented) – (ALE after the control is implemented) –
(annual cost of control) = value of control.
24. D. Risk mitigation involves employing controls in an attempt to reduce either
the likelihood or the damage associated with an incident, or both. The four
ways of dealing with risk are accept, avoid, transfer, and mitigate (reduce). A
firewall is a countermeasure installed to reduce the risk of a threat.
25. B. $480,000 is the correct answer. The formula for single loss expectancy (SLE)
is asset value × exposure factor (EF) = SLE. In this situation the formula would
work out as asset value ($800,000) × exposure factor (60%) = $480,000. This
means that the company has a potential loss value of $480,000 pertaining to
this one asset (facility) and this one threat type (fire).

CISSP All-in-One Exam Guide
154
26. C. The annualized rate of occurrence (ARO) is the expected frequency with
which a threat will occur within a 12-month period. It is a value used in the ALE
formula, which is SLE × ARO = ALE.
27. C. $48,000 is the correct answer. The annualized loss expectancy formula (SLE
× ARO = ALE) is used to calculate the loss potential for one asset experiencing
one threat in a 12-month period. The resulting ALE value helps to determine
the amount that can reasonably be spent on the protection of that asset. In
this situation, the company should not spend over $48,000 on protecting this
asset from the threat of fire. ALE values help organizations rank the severity
level of the risks they face so they know which ones to deal with first and how
much to spend on each.
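The quantitative formulas used in questions 22 through 27 can be sketched in a few lines of code. This is an illustrative calculation only; the ARO of 0.1 (a fire expected once per decade) is an assumption inferred from the $48,000 result in question 27.

```python
# Minimal sketch of the quantitative risk formulas used in questions 22-27.
# SLE = asset value x exposure factor
# ALE = SLE x annualized rate of occurrence (ARO)
# value of control = ALE(before) - ALE(after) - annual cost of control

def sle(asset_value, exposure_factor):
    """Single loss expectancy: potential loss from one incident."""
    return asset_value * exposure_factor

def ale(sle_value, aro):
    """Annualized loss expectancy: potential loss per 12-month period."""
    return sle_value * aro

def control_value(ale_before, ale_after, annual_cost):
    """Yearly value of a safeguard; negative means it is not cost-effective."""
    return ale_before - ale_after - annual_cost

# Question 25: $800,000 facility with a 60% exposure factor for fire
print(sle(800_000, 0.60))                      # 480000.0
# Question 27: ARO of 0.1 is assumed (inferred from the $48,000 answer)
print(ale(480_000, 0.1))                       # 48000.0
# Questions 22-23: firewall drops ALE from $92,000 to $30,000, costs $65,000/yr
print(control_value(92_000, 30_000, 65_000))   # -3000
```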
28. D. Unfortunately, you will run into questions on the CISSP exam that will be
this confusing, so you need to be ready for them. The proper mappings for the
ISO/IEC standards are as follows:
• ISO/IEC 27001 ISMS requirements
• ISO/IEC 27002 Code of practice for information security management
• ISO/IEC 27003 Guideline for ISMS implementation
• ISO/IEC 27004 Guideline for information security management
measurement and metrics framework
• ISO/IEC 27005 Guideline for information security risk management
• ISO/IEC 27006 Guidance for bodies providing audit and certification of
information security management systems
29. C. The best process improvement approaches provided in this list are Six
Sigma and the Capability Maturity Model. The following outlines the
definitions for all items in this question:
• TOGAF Model and methodology for the development of enterprise
architectures developed by The Open Group
• ITIL Processes to allow for IT service management developed by the
United Kingdom’s Office of Government Commerce
• Six Sigma Business management strategy that can be used to carry out
process improvement
• Capability Maturity Model Integration (CMMI) Organizational
development for process improvement developed by Carnegie Mellon
30. C. Mandatory vacation is an administrative detective control that allows an
organization to investigate an employee’s daily business activities to uncover
any potential fraud that may be taking place. The employee should be forced
to be away from the organization for a two-week period and another person
put into that role. The idea is that the person who was rotated into that
position may be able to detect suspicious activities.

Chapter 2: Information Security Governance and Risk Management
155
31. A. Separation of duties is an administrative control that is put into place to
ensure that one person cannot carry out a critical task by himself. If a person
were able to carry out a critical task alone, this could put the organization
at risk. Collusion is when two or more people come together to carry out
fraud. So if a task was split between two people, they would have to carry out
collusion (working together) to complete that one task and carry out fraud.
32. D. Dual control is an administrative preventative control. It ensures that
two people must carry out a task at the same time, as in two people having
separate keys when opening the vault. It is not a detective control. Notice
that the question asks what Todd is not doing. Remember that on the exam
you need to choose the best answer. In many situations you will not like
the question or the corresponding answers on the CISSP exam, so prepare
yourself. The questions can be tricky, which is one reason why the exam itself
is so difficult.
33. B. While each answer is a good thing for Sam to carry out, the first thing
that needs to be done is to ensure that the policies properly address data
classification and protection requirements for the company. Policies provide
direction, and all other documents (standards, procedures, guidelines) and
security controls are derived from the policies and support them.
34. C. Data is one of the most critical assets to any organization. The value of the
asset must be understood so that the organization knows which assets require
the most protection. There are many components that go into calculating the
value of an asset: cost of replacement, revenue generated from asset, amount
adversaries would pay for the asset, cost that went into the development of
the asset, productivity costs if asset was absent or destroyed, and liability costs
of not properly protecting the asset. So the data owners need to be able to
determine the value of the data to the organization for proper classification
purposes.
35. A. The company has developed a data classification policy, which is an
administrative control.
36. D. The operations staff needs to know what minimum level of security is
required per system within the network. This minimum level of security is
referred to as a baseline. Once a baseline is set per system, the staff has
something to compare each system against to detect whether improper changes
have taken place, which could make the system vulnerable.
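A baseline comparison of this kind can be sketched as a simple diff between a system's current settings and the approved minimums. The setting names and values below are hypothetical illustrations, not an actual standard.

```python
# Minimal sketch of checking a system's settings against a security baseline.
# Setting names and values are hypothetical illustrations.
baseline = {
    "min_password_length": 12,
    "telnet_enabled": False,
    "audit_logging": True,
}

def baseline_deviations(current_settings):
    """Return settings that differ from the approved baseline as
    {setting: (expected, actual)} pairs."""
    return {
        key: (expected, current_settings.get(key))
        for key, expected in baseline.items()
        if current_settings.get(key) != expected
    }

current = {"min_password_length": 8, "telnet_enabled": False, "audit_logging": True}
print(baseline_deviations(current))  # {'min_password_length': (12, 8)}
```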
37. D. Susan needs to illustrate these vulnerabilities (misconfigured systems) in
the context of risk to her boss. This means she needs to identify the specific
vulnerabilities, associate threats to those vulnerabilities, and calculate their
risks. This will allow her boss to understand how critical these issues are and
what type of action needs to take place.

38. A. Standards need to be developed that outline proper configuration
management processes and approved baseline configuration settings. Once
these standards are developed and put into place, then employees can be
trained on these issues and how to implement and maintain what is outlined
in the standards. Systems can be tested against what is laid out in the standards,
and systems can be monitored to detect if there are configurations that do not
meet the requirements outlined in the standards. You will find that some CISSP
questions seem subjective and their answers hard to pin down. Questions that
ask what is “best” or “more likely” are common.

CHAPTER 3
Access Control
This chapter presents the following:
• Identification methods and technologies
• Authentication methods, models, and technologies
• Discretionary, mandatory, and nondiscretionary models
• Accountability, monitoring, and auditing practices
• Emanation security and technologies
• Intrusion detection systems
• Threats to access control practices and technologies
A cornerstone in the foundation of information security is controlling how resources
are accessed so they can be protected from unauthorized modification or disclosure.
The controls that enforce access control can be technical, physical, or administrative in
nature. These control types need to be integrated into policy-based documentation,
software and technology, network design, and physical security components.
Access is one of the most exploited aspects of security, because it is the gateway that
leads to critical assets. Access controls need to be applied in a layered defense-in-depth
method, and an understanding of how these controls are exploited is extremely impor-
tant. In this chapter we will explore access control conceptually and then dig into the
technologies the industry puts in place to enforce these concepts. We will also look at
the common methods the bad guys use to attack these technologies.
Access Controls Overview
Access controls are security features that control how users and systems communicate and
interact with other systems and resources. They protect the systems and resources from
unauthorized access and can be components that participate in determining the level of
authorization after an authentication procedure has successfully completed. Although
we usually think of a user as the entity that requires access to a network resource or in-
formation, there are many other types of entities that require access to other network
entities and resources that are subject to access control. It is important to understand the
definition of a subject and an object when working in the context of access control.
Access is the flow of information between a subject and an object. A subject is an
active entity that requests access to an object or the data within an object. A subject can
be a user, program, or process that accesses an object to accomplish a task. When a
program accesses a file, the program is the subject and the file is the object. An object is
a passive entity that contains information or needed functionality. An object can be a
computer, database, file, computer program, directory, or field contained in a table
within a database. When you look up information in a database, you are the active
subject and the database is the passive object. Figure 3-1 illustrates subjects and objects.
Access control is a broad term that covers several different types of mechanisms that
enforce access control features on computer systems, networks, and information. Access
control is extremely important because it is one of the first lines of defense in battling
unauthorized access to systems and network resources. When a user is prompted for a
username and password to use a computer, this is access control. Once the user logs in
and later attempts to access a file, that file may have a list of users and groups that have
the right to access it. If the user is not on this list, the user is denied. This is another form
of access control. The users’ permissions and rights may be based on their identity, clear-
ance, and/or group membership. Access controls give organizations the ability to con-
trol, restrict, monitor, and protect resource availability, integrity, and confidentiality.
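The file-access check described above, where an object carries a list of users and groups allowed to access it and everyone else is denied, can be sketched as follows. The file, user, and group names are hypothetical.

```python
# Minimal sketch of an access control list (ACL) check: a file (object)
# lists the users and groups permitted to access it, and a subject is
# denied unless it appears on that list. All names are hypothetical.
file_acl = {
    "payroll.xlsx": {"users": {"alice"}, "groups": {"hr"}},
}

def is_authorized(username, user_groups, filename):
    """Return True if the user, or one of its groups, is on the file's ACL."""
    acl = file_acl.get(filename, {"users": set(), "groups": set()})
    return username in acl["users"] or bool(user_groups & acl["groups"])

print(is_authorized("alice", {"engineering"}, "payroll.xlsx"))  # True
print(is_authorized("bob", {"hr"}, "payroll.xlsx"))             # True (via group)
print(is_authorized("bob", {"engineering"}, "payroll.xlsx"))    # False - denied
```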
Security Principles
The three main security principles for any type of security control are
• Availability
• Integrity
• Confidentiality
Figure 3-1 Subjects are active entities that access objects, while objects are passive entities.

These principles, which were touched upon in Chapter 2, will be a running theme
throughout this book because each core subject of each chapter approaches these prin-
ciples in a unique way. In Chapter 2, you read that security management procedures
include identifying threats that can negatively affect the availability, integrity, and con-
fidentiality of the company’s assets and finding cost-effective countermeasures that will
protect them. This chapter looks at the ways the three principles can be affected and
protected through access control methodologies and technologies.
Every control that is used in computer and information security provides at least
one of these security principles. It is critical that security professionals understand all of
the possible ways these principles can be provided and circumvented.
Availability
Hey, I’m available.
Response: But no one wants you.
Information, systems, and resources must be available to users in a timely manner
so productivity will not be affected. Most information must be accessible and available
to users when requested so they can carry out tasks and fulfill their responsibilities. Ac-
cessing information does not seem that important until it is inaccessible. Administra-
tors experience this when a file server goes offline or a highly used database is out of
service for one reason or another. Fault tolerance and recovery mechanisms are put into
place to ensure the continuity of the availability of resources. User productivity can be
greatly affected if requested data are not readily available.
Information has various attributes, such as accuracy, relevance, timeliness, and pri-
vacy. It may be extremely important for a stockbroker to have information that is ac-
curate and timely, so he can buy and sell stocks at the right times at the right prices. The
stockbroker may not necessarily care about the privacy of this information, only that it
is readily available. A soft drink company that depends on its soda pop recipe would
care about the privacy of this trade secret, and the security mechanisms in place need to
ensure this secrecy.
Integrity
Information must be accurate, complete, and protected from unauthorized modifica-
tion. When a security mechanism provides integrity, it protects data, or a resource, from
being altered in an unauthorized fashion. If any type of illegitimate modification does
occur, the security mechanism must alert the user or administrator in some manner.
One example is when a user sends a request to her online bank account to pay her
$24.56 water utility bill. The bank needs to be sure the integrity of that transaction was
not altered during transmission, so the user does not end up paying the utility compa-
ny $240.56 instead. Integrity of data is very important. What if a confidential e-mail
was sent from the secretary of state to the president of the United States and was inter-
cepted and altered without a security mechanism in place that disallows this or alerts
the president that this message has been altered? Instead of receiving a message read-
ing, “We would love for you and your wife to stop by for drinks tonight,” the message
could be altered to say, “We have just bombed Libya.” Big difference.
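One common mechanism for detecting exactly this kind of alteration is a message authentication code. The sketch below uses Python's standard hmac module and assumes the sender and receiver already share a secret key (how that key is exchanged is outside the scope of this example).

```python
import hashlib
import hmac

# Minimal sketch of the bank-transfer integrity check described above.
# The key is a hypothetical pre-shared secret between user and bank.
key = b"shared-secret-key"

def tag(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

original = b"pay utility company $24.56"
mac = tag(original)

# The receiver recomputes the tag; any alteration in transit changes it.
tampered = b"pay utility company $240.56"
print(hmac.compare_digest(mac, tag(original)))  # True  - message accepted
print(hmac.compare_digest(mac, tag(tampered)))  # False - alteration detected
```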

Confidentiality
This is my secret and you can’t have it.
Response: I don’t want it.
Confidentiality is the assurance that information is not disclosed to unauthorized
individuals, programs, or processes. Some information is more sensitive than other
information and requires a higher level of confidentiality. Control mechanisms need to
be in place to dictate who can access data and what the subject can do with it once they
have accessed it. These activities need to be controlled, audited, and monitored. Ex-
amples of information that could be considered confidential are health records, finan-
cial account information, criminal records, source code, trade secrets, and military
tactical plans. Some security mechanisms that would provide confidentiality are en-
cryption, logical and physical access controls, transmission protocols, database views,
and controlled traffic flow.
It is important for a company to identify the data that must be classified so the
company can ensure that the top priority of security protects this information and
keeps it confidential. If this information is not singled out, too much time and money
can be spent on implementing the same level of security for critical and mundane in-
formation alike. It may be necessary to configure virtual private networks (VPNs) be-
tween organizations and use the IPSec encryption protocol to encrypt all messages
passed when communicating about trade secrets, sharing customer information, or
making financial transactions. This takes a certain amount of hardware, labor, funds,
and overhead. The same security precautions are not necessary when communicating
that today’s special in the cafeteria is liver and onions with a roll on the side. So, the
first step in protecting data’s confidentiality is to identify which information is sensitive
and to what degree, and then implement security mechanisms to protect it properly.
Different security mechanisms can supply different degrees of availability, integrity,
and confidentiality. The environment, the classification of the data that is to be pro-
tected, and the security goals must be evaluated to ensure the proper security mecha-
nisms are bought and put into place. Many corporations have wasted a lot of time and
money not following these steps and instead buying the new “gee whiz” product that
recently hit the market.
Identification, Authentication, Authorization,
and Accountability
I don’t really care who you are, but come right in.
For a user to be able to access a resource, he first must prove he is who he claims to
be, has the necessary credentials, and has been given the necessary rights or privileges
to perform the actions he is requesting. Once these steps are completed successfully, the
user can access and use network resources; however, it is necessary to track the user’s
activities and enforce accountability for his actions. Identification describes a method
of ensuring that a subject (user, program, or process) is the entity it claims to be. Iden-
tification can be provided with the use of a username or account number. To be prop-
erly authenticated, the subject is usually required to provide a second piece to the
credential set. This piece could be a password, passphrase, cryptographic key, personal
identification number (PIN), anatomical attribute, or token. These two credential items
are compared to information that has been previously stored for this subject. If these
credentials match the stored information, the subject is authenticated. But we are not
done yet.
Once the subject provides its credentials and is properly identified, the system it is
trying to access needs to determine if this subject has been given the necessary rights
and privileges to carry out the requested actions. The system will look at some type of
access control matrix or compare security labels to verify that this subject may indeed
access the requested resource and perform the actions it is attempting. If the system
determines that the subject may access the resource, it authorizes the subject.
Although identification, authentication, authorization, and accountability have close
and complementary definitions, each has distinct functions that fulfill a specific require-
ment in the process of access control. A user may be properly identified and authenti-
cated to the network, but he may not have the authorization to access the files on the file
server. On the other hand, a user may be authorized to access the files on the file server,
but until she is properly identified and authenticated, those resources are out of reach.
Figure 3-2 illustrates the four steps that must happen for a subject to access an object.
The subject needs to be held accountable for the actions taken within a system or
domain. The only way to ensure accountability is if the subject is uniquely identified
and the subject’s actions are recorded.
Logical access controls are technical tools used for identification, authentication, au-
thorization, and accountability. They are software components that enforce access con-
trol measures for systems, programs, processes, and information. The logical access
controls can be embedded within operating systems, applications, add-on security pack-
ages, or database and telecommunication management systems. It can be challenging to
synchronize all access controls and ensure all vulnerabilities are covered without pro-
ducing overlaps of functionality. However, if it were easy, security professionals would
not be getting paid the big bucks!
Race Condition
A race condition is when processes carry out their tasks on a shared resource in an
incorrect order. A race condition is possible when two or more processes use a
shared resource, as in data within a variable. It is important that the processes
carry out their functionality in the correct sequence. If process 2 carried out its
task on the data before process 1, the result will be much different than if process
1 carried out its tasks on the data before process 2.
In software, when the authentication and authorization steps are split into
two functions, there is a possibility an attacker could use a race condition to force
the authorization step to be completed before the authentication step. This would
be a flaw in the software that the attacker has figured out how to exploit. A race
condition occurs when two or more processes use the same resource and the se-
quences of steps within the software can be carried out in an improper order,
something that can drastically affect the output. So, an attacker can force the au-
thorization step to take place before the authentication step and gain unauthor-
ized access to a resource.
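The ordering problem at the heart of a race condition can be illustrated with two trivial operations on a shared value; the deposit and doubling operations below are hypothetical stand-ins for process 1 and process 2.

```python
# Minimal sketch of why ordering matters: the same two operations on a
# shared value give different results depending on which runs first.
def deposit(x):
    """Process 1: add 100 to the shared value."""
    return x + 100

def double(x):
    """Process 2: double the shared value."""
    return x * 2

balance = 1000
print(double(deposit(balance)))  # 2200 - process 1 ran first
print(deposit(double(balance)))  # 2100 - process 2 ran first
```

In the authentication/authorization flaw described above, the attacker's goal is the same trick: drive the two steps out of their intended order so the authorization result is computed before authentication has constrained it.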

NOTE The words “logical” and “technical” can be used interchangeably in
this context. It is conceivable that the CISSP exam would refer to logical and
technical controls interchangeably.
An individual’s identity must be verified during the authentication process. Authen-
tication usually involves a two-step process: entering public information (a username,
employee number, account number, or department ID), and then entering private in-
formation (a static password, smart token, cognitive password, one-time password, or
PIN). Entering public information is the identification step, while entering private in-
formation is the authentication step of the two-step process. Each technique used for
identification and authentication has its pros and cons. Each should be properly evalu-
ated to determine the right mechanism for the correct environment.
Identification and Authentication
Now, who are you again?
Once a person has been identified through the user ID or a similar value, she must
be authenticated, which means she must prove she is who she says she is. Three general
factors can be used for authentication: something a person knows, something a person
has, and something a person is. They are also commonly called authentication by knowl-
edge, authentication by ownership, and authentication by characteristic.
Figure 3-2 Four steps must happen for a subject to access an object: identification, authentication,
authorization, and accountability.

Something a person knows (authentication by knowledge) can be, for example, a
password, PIN, mother’s maiden name, or the combination to a lock. Authenticating a
person by something that she knows is usually the least expensive to implement. The
downside to this method is that another person may acquire this knowledge and gain
unauthorized access to a resource.
Something a person has (authentication by ownership) can be a key, swipe card,
access card, or badge. This method is common for accessing facilities, but could also be
used to access sensitive areas or to authenticate systems. A downside to this method is
that the item can be lost or stolen, which could result in unauthorized access.
Something specific to a person (authentication by characteristic) becomes a bit
more interesting. This is not based on whether the person is a Republican, a Martian,
or a moron—it is based on a physical attribute. Authenticating a person’s identity based
on a unique physical attribute is referred to as biometrics. (For more information, see
the upcoming section, “Biometrics.”)
Strong authentication contains two out of these three methods: something a person
knows, has, or is. Using a biometric system by itself does not provide strong authentica-
tion because it provides only one out of the three methods. Biometrics supplies what a
person is, not what a person knows or has. For a strong authentication process to be in
place, a biometric system needs to be coupled with a mechanism that checks for one of
the other two methods. For example, many times the person has to type a PIN
into a keypad before the biometric scan is performed. This satisfies the “what the per-
son knows” category. Conversely, the person could be required to swipe a magnetic
card through a reader prior to the biometric scan. This would satisfy the “what the
person has” category. Whatever identification system is used, for strong authentication
to be in the process, it must include two out of the three categories. This is also referred
to as two-factor authentication.
NOTE Strong authentication is also sometimes referred to as multifactor
authentication, which just means that more than one authentication
method is used. Three-factor authentication is possible, which includes
all three authentication approaches.
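The two-factor rule described above can be sketched as a simple category check; the category names are informal labels for the three factor types, and a real system would of course also verify the factors themselves.

```python
# Minimal sketch of the strong (two-factor) authentication rule: at least
# two DISTINCT factor categories must be satisfied, not two of one kind.
FACTOR_CATEGORIES = {"knowledge", "ownership", "characteristic"}

def is_strong_authentication(satisfied_factors):
    """satisfied_factors: set of factor-category names the subject passed."""
    categories = satisfied_factors & FACTOR_CATEGORIES
    return len(categories) >= 2

print(is_strong_authentication({"characteristic"}))               # False - biometric alone
print(is_strong_authentication({"knowledge", "characteristic"}))  # True  - PIN + biometric
```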
Verification 1:1 is the measurement of an identity against a single claimed iden-
tity. The conceptual question is, “Is this person who he claims to be?” So if Bob
provides his identity and credential set, this information is compared to the data
kept in an authentication database. If they match, we know that it is really Bob. If
the identification is 1:N (many), the measurement of a single identity is com-
pared against multiple identities. The conceptual question is, “Who is this per-
son?” An example is if fingerprints were found at a crime scene, the cops would
run them through their database to identify the suspect.
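The distinction in the box above can be sketched as two lookups against the same enrollment database; the enrolled "templates" are stand-in strings for real biometric templates, and the names are hypothetical.

```python
# Minimal sketch of 1:1 verification versus 1:N identification.
# The "template" strings stand in for stored biometric templates.
enrolled = {"bob": "template-b", "carol": "template-c"}

def verify_1_to_1(claimed_identity, sample):
    """Verification (1:1): 'Is this person who he claims to be?'"""
    return enrolled.get(claimed_identity) == sample

def identify_1_to_n(sample):
    """Identification (1:N): 'Who is this person?'"""
    return next((name for name, t in enrolled.items() if t == sample), None)

print(verify_1_to_1("bob", "template-b"))  # True - Bob is who he claims to be
print(identify_1_to_n("template-c"))       # carol - matched against everyone
```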

Identity is a complicated concept with many varied nuances, ranging from the phil-
osophical to the practical. A person can have multiple digital identities. For example, a
user can be JPublic in a Windows domain environment, JohnP on a Unix server,
JohnPublic on the mainframe, JJP in instant messaging, JohnCPublic in the certifica-
tion authority, and IWearPanties on Facebook. If a company wants to centralize
all of its access control, these various identity names for the same person may put the
security administrator into a mental health institution.
Creating or issuing secure identities should include three key aspects: uniqueness,
nondescriptiveness, and issuance. The first, uniqueness, refers to the identifiers that are
specific to an individual, meaning every user must have a unique ID for accountability.
Things like fingerprints and retina scans can be considered unique elements in deter-
mining identity. Nondescriptive means that neither piece of the credential set should
indicate the purpose of that account. For example, a user ID should not be “administra-
tor,” “backup_operator,” or “CEO.” The third key aspect in determining identity is issu-
ance. These elements are the ones that have been provided by another authority as a
means of proving identity. ID cards are a kind of security element that would be con-
sidered an issuance form of identification.
NOTE Mutual authentication is when the two communicating entities must
authenticate to each other before passing data. An authentication server may
be required to authenticate to a user’s system before allowing data to flow
back and forth.
While most of this chapter deals with user authentication, it is important to realize
that system-based authentication is also possible. Computers and devices can be identified,
authenticated, monitored, and controlled based upon their hardware addresses (media
access control) and/or Internet Protocol (IP) addresses. Networks may have network
access control (NAC) technology that authenticates systems before they are allowed ac-
cess to the network. Every network device has a hardware address that is integrated into
its network interface card and a software-based address (IP), which either is assigned by
a DHCP server or locally configured. We will cover DHCP, IP, and MAC addresses more
in Chapter 6.
Identification Component Requirements
When issuing identification values to users, the following should be in place:
• Each value should be unique, for user accountability.
• A standard naming scheme should be followed.
• The value should be nondescriptive of the user’s position or tasks.
• The value should not be shared between users.
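The requirements above can be sketched as a simple validation routine. The naming scheme (lowercase letters, 3 to 16 characters) and the list of descriptive names below are hypothetical examples of what a company standard might specify.

```python
import re

# Minimal sketch of validating a new user ID against the identification
# requirements above. The naming scheme and descriptive-name list are
# hypothetical stand-ins for a real company standard.
ISSUED_IDS = {"jpublic"}                                # values already in use
DESCRIPTIVE = {"administrator", "backup_operator", "ceo"}

def validate_user_id(user_id):
    if user_id in ISSUED_IDS:
        return "rejected: not unique"
    if user_id in DESCRIPTIVE:
        return "rejected: describes a role or position"
    if not re.fullmatch(r"[a-z][a-z]{2,15}", user_id):
        return "rejected: violates naming scheme"
    return "accepted"

print(validate_user_id("jpublic"))        # rejected: not unique
print(validate_user_id("administrator"))  # rejected: describes a role or position
print(validate_user_id("ssmith"))         # accepted
```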

CAUTION In technology there are overlapping acronyms. On the CISSP exam
you will run into at least three different MAC acronyms. Media access control =
you will run into at least three different MAC acronyms. Media access control =
data link layer functionality and address type within a network protocol stack.
Mandatory access control = access control model integrated in software used
to control subject-to-object access functions through the use of clearance,
classifications, and labels. Message authentication code = cryptographic function
that uses a hashing algorithm and symmetric key for data integrity and system
origin functions. The CISSP exam does not use acronyms by themselves, but
spells the terms out so this should not be a problem on the exam.
Identity Management
I configured our system to only allow access to tall people.
Response: Short people are pretty sneaky.
Identity management is a broad and loaded term that encompasses the use of differ-
ent products to identify, authenticate, and authorize users through automated means. To
many people, the term also includes user account management, access control, pass-
word management, single sign-on functionality, managing rights and permissions for
user accounts, and auditing and monitoring all of these items. The reason that individu-
als, and companies, have different definitions and perspectives of identity management
(IdM) is because it is so large and encompasses so many different technologies and
processes. Remember the story of the four blind men who are trying to describe an ele-
phant? One blind man feels the tail and announces, “It’s a tail.” Another blind man feels
the trunk and announces, “It’s a trunk.” Another announces it’s a leg, and another an-
nounces it’s an ear. This is because each man cannot see or comprehend the whole of the
large creature—just the piece he is familiar with and knows about. This analogy can be
applied to IdM because it is large and contains many components and many people may
not comprehend the whole—only the component they work with and understand.
It is important for security professionals to understand not only the whole of IdM,
but also the technologies that make up a full enterprise IdM solution. IdM re-
quires management of uniquely identified entities, their attributes, credentials, and en-
titlements. IdM allows organizations to create and manage digital identities’ life cycles
(create, maintain, terminate) in a timely and automated fashion. The enterprise IdM
must meet business needs and scale from internally facing systems to externally facing
systems. In this section, we will be covering many of these technologies and how they
work together.
Selling identity management products is now a flourishing market that focuses on
reducing administrative costs, increasing security, meeting regulatory compliance, and
improving upon service levels throughout enterprises. The continual increase in com-
plexity and diversity of networked environments only increases the complexity of keep-
ing track of who can access what and when. Organizations have different types of
applications, network operating systems, databases, enterprise resource management
(ERM) systems, customer relationship management (CRM) systems, directories, main-
frames—all used for different business purposes. Then the organizations have partners,
contractors, consultants, employees, and temporary employees. (Figure 3-3 actually
provides a simplistic view of most environments.) Users usually access several different
types of systems throughout their daily tasks, which makes controlling access and
providing the necessary level of protection on different data types difficult and full of
obstacles. This complexity usually results in unforeseen and unidentified holes in asset
protection, overlapping and contradictory controls, and policy and regulation non-
compliance. It is the goal of identity management technologies to simplify the admin-
istration of these tasks and bring order to chaos.
The following are many of the common questions enterprises deal with today in
controlling access to assets:
• What should each user have access to?
• Who approves and allows access?
• How do the access decisions map to policies?
• Do former employees still have access?
• How do we keep up with our dynamic and ever-changing environment?
• What is the process of revoking access?
• How is access controlled and monitored centrally?
• Why do employees have eight passwords to remember?
• We have five different operating platforms. How do we centralize access when
each platform (and application) requires its own type of credential set?
• How do we control access for our employees, customers, and partners?
• How do we make sure we are compliant with the necessary regulations?
• Where do I send in my resignation? I quit.

Access Control Review
The following is a review of the basic concepts in access control:
• Identification
  • Subjects supplying identification information
  • Username, user ID, account number
• Authentication
  • Verifying the identification information
  • Passphrase, PIN value, biometric, one-time password, password
• Authorization
  • Using criteria to make a determination of operations that subjects
can carry out on objects
  • “I know who you are, now what am I going to allow you to do?”
• Accountability
  • Audit logs and monitoring to track subject activities with objects
The traditional identity management process has been manual, using directory ser-
vices with permissions, access control lists (ACLs), and profiles. This approach has
proven incapable of keeping up with complex demands and thus has been replaced
with automated applications rich in functionality that work together to create an iden-
tity management infrastructure. The main goals of identity management (IdM) tech-
nologies are to streamline the management of identity, authentication, authorization,
and the auditing of subjects on multiple systems throughout the enterprise. The sheer
diversity of a heterogeneous enterprise makes proper implementation of IdM a huge
undertaking.
Figure 3-3 Most environments are chaotic in terms of access.
Many identity management solutions and products are available in the market-
place. For the CISSP exam, the following are the types of technologies you should be
aware of:
• Directories
• Web access management
• Password management
• Legacy single sign-on
• Account management
• Profile update
Directories We should have a standardized and automated way of naming stuff and
controlling access to it.
Most enterprises have some type of directory that contains information pertaining
to the company’s network resources and users. Most directories follow a hierarchical
database format, based on the X.500 standard, and a type of protocol, as in Lightweight
Directory Access Protocol (LDAP), that allows subjects and applications to interact with
the directory. Applications can request information about a particular user by making
an LDAP request to the directory, and users can request information about a specific
resource by using a similar request.
The objects within the directory are managed by a directory service. The directory
service allows an administrator to configure and manage how identification, authentica-
tion, authorization, and access control take place within the network and on individual
systems. The objects within the directory are labeled and identified with namespaces.
In a Windows environment, when you log in, you are logging in to a domain con-
troller (DC), which has a hierarchical directory in its database. The database is running
a directory service (Active Directory), which organizes the network resources and carries
out user access control functionality. So once you successfully authenticate to the DC,
certain network resources will be available to you (print service, file server, e-mail serv-
er, and so on) as dictated by the configuration of AD.
How does the directory service keep all of these entities organized? By using
namespaces. Each directory service has a way of identifying and naming the objects it
will manage. In databases based on the X.500 standard that are accessed by LDAP, the
directory service assigns distinguished names (DNs) to each object. Each DN represents
a collection of attributes about a specific object, and is stored in the directory as an
entry. In the following example, the DN is made up of a common name (cn) and do-
main components (dc). Since this is a hierarchical directory, .com is the top, LogicalSe-
curity is one step down from .com, and Shon is at the bottom (where she belongs).
dn: cn=Shon Harris,dc=LogicalSecurity,dc=com
cn: Shon Harris
This is a very simplistic example. Companies usually have
large trees (directories) containing many levels and objects to
represent different departments, roles, users, and resources.
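As a rough sketch of how a DN encodes this hierarchy, the following hypothetical Python helper (not part of any directory product) splits a DN into its attribute/value pairs; it assumes no escaped commas or equals signs appear in the values:

```python
def parse_dn(dn):
    """Split an LDAP distinguished name into (attribute, value) pairs.

    Simplified sketch: assumes no escaped ',' or '=' in the values.
    """
    pairs = []
    for component in dn.split(","):
        attr, _, value = component.partition("=")
        pairs.append((attr.strip(), value.strip()))
    return pairs

dn = "cn=Shon Harris,dc=LogicalSecurity,dc=com"
parsed = parse_dn(dn)
# The last component is highest in the tree: dc=com is the root,
# dc=LogicalSecurity sits beneath it, and cn=Shon Harris is the leaf entry.
print(parsed)  # → [('cn', 'Shon Harris'), ('dc', 'LogicalSecurity'), ('dc', 'com')]
```

Reading the pairs from right to left walks the tree from the root down to the entry itself.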
A directory service manages the entries and data in the directory and also enforces
the configured security policy by carrying out access control and identity management
functions. For example, when you log in to the DC, the directory service (AD) will de-
termine what resources you can and cannot access on the network.
NOTE We touch on directory services again in the “Single Sign-on” section of this chapter.
So are there any problems with using a directory product for identity management
and access control? Yes, there’s always something. Many legacy devices and applications
cannot be managed by the directory service because they were not built with the neces-
sary client software. The legacy entities must be managed through their inherited man-
agement software. This means that most networks have subjects, services, and resources
that can be listed in a directory and controlled centrally by an administrator through
the use of a directory service. Then there are legacy applications and devices that the
administrator must configure and manage individually.
Directories’ Role in Identity Management A directory used for IdM is spe-
cialized database software that has been optimized for reading and searching opera-
tions. It is the main component of an identity management solution. This is because all
resource information, users’ attributes, authorization profiles, roles, access control pol-
icies, and more are stored in this one location. When other IdM software applications
need to carry out their functions (authorization, access control, assigning permissions),
they now have a centralized location for all of the information they need.
As an analogy, let’s say I’m a store clerk and you enter my store to purchase alcohol.
Instead of me having to find a picture of you somewhere to validate your identity, go to
another place to find your birth certificate to obtain your true birth date, and find proof
of which state you are registered in, I can look in one place—your driver’s license. The
directory works in the same way. Some IdM applications may need to know a user’s
authorization rights, role, employee status, or clearance level, so instead of this applica-
tion having to make requests to several databases and other applications, it makes its
request to this one directory.
A lot of the information stored in an IdM directory is scattered throughout the en-
terprise. User attribute information (employee status, job description, department, and
so on) is usually stored in the HR database, authentication information could be in a
Kerberos server, role and group identification information might be in a SQL database,
and resource-oriented authentication information is stored in Active Directory on a
domain controller. These are commonly referred to as identity stores and are located in
different places on the network. Something nifty that many identity management prod-
ucts do is create meta-directories or virtual directories. A meta-directory gathers the nec-
essary information from multiple sources and stores it in one central directory. This
provides a unified view of all users’ digital identity information throughout the enter-
prise. The meta-directory synchronizes itself with all of the identity stores periodically
to ensure the most up-to-date information is being used by all applications and IdM
components within the enterprise.
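The synchronization idea can be sketched in a few lines of Python. The store names and attributes below are invented for illustration; a real meta-directory product would speak each store's native protocol rather than read dictionaries:

```python
# Hypothetical identity stores scattered across the enterprise.
hr_db = {"kconlon": {"department": "Engineering", "status": "active"}}
kerberos = {"kconlon": {"auth_type": "kerberos", "realm": "CORP.LOCAL"}}
role_db = {"kconlon": {"roles": ["developer", "vpn-user"]}}

def synchronize(identity_stores):
    """Meta-directory sync: physically copy attributes from every
    identity store into one central directory, keyed by user."""
    central = {}
    for store in identity_stores:
        for user, attrs in store.items():
            central.setdefault(user, {}).update(attrs)
    return central

idm_directory = synchronize([hr_db, kerberos, role_db])
# All of kconlon's attributes are now held in one place.
print(idm_directory["kconlon"]["roles"])  # → ['developer', 'vpn-user']
```

The sync would be re-run periodically so the central copy stays current with the identity stores.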
Organizing All of This Stuff
In a database directory based on the X.500 standard, the following rules are used
for object organization:
• The directory has a tree structure to organize the entries using a parent-
child configuration.
• Each entry has a unique name made up of attributes of a specific object.
• The attributes used in the directory are dictated by the defined schema.
• The unique identifiers are called distinguished names.
The schema describes the directory structure and what names can be used
within the directory, among other things. (Schema and database components are
covered more in depth in Chapter 10.)
The following diagram shows how an object (Kathy Conlon) can have the
attributes of ou=General, ou=NCTSW, ou=pentagon, ou=locations, ou=Navy,
ou=DoD, ou=U.S. Government, C=US.
Note that OU stands for organizational unit. They are used as containers of
other similar OUs, users, and resources. They provide the parent-child (some-
times called tree-leaf) organization structure.
A virtual directory plays the same role and can be used instead of a meta-directory.
The difference between the two is that the meta-directory physically has the identity
data in its directory, whereas a virtual directory does not and points to where the actual
data reside. When an IdM component makes a call to a virtual directory to gather iden-
tity information on a user, the virtual directory will point to where the information
actually lives.
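In contrast, a minimal sketch of the virtual-directory approach (class and store names hypothetical) resolves each attribute at request time instead of copying it in advance:

```python
class VirtualDirectory:
    """Virtual directory: holds no identity data itself, only pointers
    to the stores where each attribute actually lives."""

    def __init__(self):
        self.sources = {}  # attribute name -> callable that fetches it

    def register(self, attribute, fetch_fn):
        self.sources[attribute] = fetch_fn

    def lookup(self, user, attribute):
        # Fetched from the authoritative store at request time,
        # not copied ahead of time as a meta-directory would do.
        return self.sources[attribute](user)

hr_db = {"kconlon": "Engineering"}  # hypothetical HR store
vd = VirtualDirectory()
vd.register("department", lambda user: hr_db[user])
print(vd.lookup("kconlon", "department"))  # → Engineering
```

Because nothing is copied, a change in the HR store is visible on the very next lookup, with no synchronization interval.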
Figure 3-4 illustrates a central LDAP directory that is used by the IdM services: access
management, provisioning, and identity management. When one of these services ac-
cepts a request from a user or application, it pulls the necessary data from the directory
to be able to fulfill the request. Since the data needed to properly fulfill these requests
are stored in different locations, the meta-directory pulls the data from these other
sources and updates the LDAP directory.
Web Access Management Web access management (WAM) software controls
what users can access when using a web browser to interact with web-based enterprise
assets. This type of technology is continually becoming more robust and experiencing
increased deployment. This is because of the increased use of e-commerce, online
banking, content providing, web services, and more. The Internet only continues to
grow, and its importance to businesses and individuals increases as more and more
functionality is provided. We just can’t seem to get enough of it.
Figure 3-5 shows the basic components and activities in a web access control man-
agement process.
1. User sends in credentials to web server.
2. Web server validates user’s credentials.
3. User requests to access a resource (object).
Figure 3-4 Meta-directories pull data from other sources to populate the IdM directory.
4. Web server verifies with the security policy to determine if the user is allowed
to carry out this operation.
5. Web server allows access to the requested resource.
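Those five steps might be condensed into the following sketch; the credential table, policy entries, and resource names are all hypothetical:

```python
# Hypothetical credential store and security policy.
credentials_db = {"kathy": "s3cret!"}
policy = {("kathy", "transfer_funds"): True,
          ("kathy", "admin_console"): False}

def wam_request(user, password, resource):
    """Simplified web access management check:
    steps 1-2 validate credentials, steps 3-4 consult the policy,
    step 5 grants or denies the resource."""
    if credentials_db.get(user) != password:  # plaintext only for the sketch
        return "authentication failed"
    if not policy.get((user, resource), False):  # default deny
        return "access denied"
    return "access granted"

print(wam_request("kathy", "s3cret!", "transfer_funds"))  # → access granted
print(wam_request("kathy", "s3cret!", "admin_console"))   # → access denied
```

Note the default-deny behavior: a resource with no policy entry is refused, which is the safer posture for externally facing systems.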
This is a simple example. More complexity comes in with all the different ways a
user can authenticate (password, digital certificate, token, and others); the resources
and services that may be available to the user (transfer funds, purchase product, update
profile, and so forth); and the necessary infrastructure components. The infrastructure
is usually made up of a web server farm (many servers), a directory that contains the
users’ accounts and attributes, a database, a couple of firewalls, and some routers, all
laid out in a tiered architecture. But let’s keep it simple right now.
The WAM software is the main gateway between users and the corporate web-based
resources. It is commonly a plug-in for a web server, so it works as a front-end process.
When a user makes a request for access, the web server software will query a directory,
an authentication server, and potentially a back-end database before serving up the re-
source the user requested. The WAM console allows the administrator to configure ac-
cess levels, authentication requirements, and account setup workflow steps, and to
perform overall maintenance.
WAM tools usually also provide a single sign-on capability so that once a user is
authenticated at a web site, she can access different web-based applications and resourc-
es without having to log in multiple times. When a product provides a single sign-on
capability in a web environment, the product must keep track of the user’s authentica-
tion state and security context as the user moves from one resource to the next.
For example, if Kathy logs on to her online bank web site, the communication is
taking place over the HTTP protocol. This protocol itself is stateless, which means it will
allow a web server to pass the user a web page and the user is forgotten about. Many
web servers work in a stateless mode because they have so many requests to fulfill and
they are just providing users with web pages. Keeping a constant session with each and
Figure 3-5 A basic example of web access control
every user who is requesting to see a web page would exhaust the web server’s resources.
It is when a user has to log on to a web site that “keeping the user’s state” is required
and a continuous session is needed.
When Kathy first goes to her bank’s web site, she is viewing publicly available data
that do not require her to authenticate before viewing. A constant session is not being
kept by the web server, thus it is working in a stateless manner. Once she clicks Access
My Account, the web server sets up a secure connection (SSL) with her browser and
requests her credentials. After she is authenticated, the web server sends a cookie (small
text file) that indicates she has authenticated properly and the type of access she should
be allowed. When Kathy requests to move from her savings account to her checking
account, the web server will assess the cookie on Kathy’s web browser to see if she has
the rights to access this new resource. The web server continues to check this cookie
during Kathy’s session to ensure no one has hijacked the session and that the web
server is continually communicating with Kathy’s system and not someone else’s.
The web server continually asks Kathy’s web browser to prove she has been authen-
ticated, which the browser does by providing the cookie information. (The cookie in-
formation could include her password, account number, security level, browsing habits,
and/or personalization information.) As long as Kathy is authenticated, the web server
software will keep track of each of her requests, log her events, and make changes that
she requests that can take place in her security context. Security context is the authoriza-
tion level she is assigned based on her permissions, entitlements, and access rights.
Once Kathy ends the session, the cookie is usually erased from the web browser’s
memory and the web server no longer keeps this connection open or collects session
state information on this user.
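One common way to let a stateless web server trust a returning cookie is to sign it. The following sketch (a simplified scheme for illustration, not how any particular bank implements it) uses an HMAC so that a tampered cookie is rejected on the next request:

```python
import hashlib
import hmac

SERVER_KEY = b"server-side secret"  # kept on the server, never sent to the browser

def issue_cookie(username, access_level):
    """Issue a signed session cookie after authentication succeeds.
    The signature lets the server detect tampering on later requests."""
    payload = f"{username}|{access_level}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_cookie(cookie):
    """Re-check the cookie on every request during the session."""
    payload, _, sig = cookie.rpartition("|")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

cookie = issue_cookie("kathy", "account-holder")
assert verify_cookie(cookie)             # legitimate request passes
tampered = cookie.replace("account-holder", "administrator")
assert not verify_cookie(tampered)       # an altered cookie is rejected
```

The per-request verification mirrors the web server continually checking Kathy's cookie to make sure the session has not been hijacked.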
NOTE A cookie can be in the format of a text file stored on the user’s hard
drive (permanent) or it can be only held in memory (session). If the cookie
contains any type of sensitive information, then it should only be held in
memory and be erased once the session has completed.
As an analogy, let’s say I am following you in a mall as you are shopping. I am mark-
ing down what you purchase, where you go, and the requests you make. I know every-
thing about your actions; I document them in a log, and remember them as you
continue. (I am keeping state information on you and your activities.) You can have
access to all of these stores if every 15 minutes you show me a piece of paper that I gave
you. If you fail to show me the piece of paper at the necessary interval, I will push a
button and all stores will be locked—you no longer have access to the stores, I no lon-
ger collect information about you, and I leave and forget all about you. Since you are
no longer able to access any sensitive objects (store merchandise), I don’t need to keep
track of you and what you are doing.
As long as the web server sends the cookie to the web browser, Kathy does not have to
provide credentials as she asks for different resources. This is what single sign-on is. You
only have to provide your credentials once, and the continual validation that you have
the necessary cookie will allow you to go from one resource to another. If you end your
session with the web server and need to interact with it again, you must re-authenticate
and a new cookie will be sent to your browser and it starts all over again.
NOTE We will cover specific single sign-on technologies later in this chapter along with their security issues.
So the WAM product allows an administrator to configure and control access to
internal resources. This type of access control is commonly put in place to control ex-
ternal entities requesting access. The product may work on a single web server or a
server farm.
Password Management
How do we reduce the number of times users forget their passwords?
Response: If someone calls the help desk, fire them.
We cover password requirements, security issues, and best practices later in this
chapter. At this point, we need to understand how password management can work
within an IdM environment.
Help-desk workers and administrators commonly complain about the amount of
time they have to spend resetting passwords when users forget them. Another issue is
the number of different passwords the users are required to remember for the different
platforms within the network. When a password changes, an administrator must con-
nect directly to that management software of the specific platform and change the pass-
word value. This may not seem like much of a hassle, but if an organization has 4,000
users and seven different platforms, and 35 different applications, it could require a
full-time person to continually make these password modifications. And who would
really want that job?
Different types of password management technologies have been developed to get
these pesky users off the backs of IT and the help desk by providing a more secure and
automated password management system. The most common password management
approaches are listed next:
• Password Synchronization Reduces the complexity of keeping up with
different passwords for different systems.
• Self-Service Password Reset Reduces help-desk call volumes by allowing
users to reset their own passwords.
• Assisted Password Reset Reduces the resolution process for password
issues for the help desk. This may include authentication with other types of
authentication mechanisms (biometrics, tokens).
Password Synchronization If users have too many passwords they need to keep
track of, they will write the passwords down on a sticky note and cleverly hide this un-
der their keyboard or just stick it on the side of their monitor. This is certainly easier for
the user, but not so great for security.
Password synchronization technologies can allow a user to maintain just one pass-
word across multiple systems. The product will synchronize the password to other sys-
tems and applications, which happens transparently to the user.
The goal is to require the user to memorize only one password and have the ability
to enforce more robust and secure password requirements. If a user only needs to re-
member one password, he is more likely to not have a problem with longer, more
complex strings of values. This reduces help-desk call volume and allows the adminis-
trator to keep her sanity for just a little bit longer.
One criticism of this approach is that since only one password is used to access dif-
ferent resources, now the hacker only has to figure out one credential set to gain unau-
thorized access to all resources. But if the password requirements are more demanding
(12 characters, no dictionary words, three symbols, uppercase and lowercase letters,
and so on) and the password is changed out regularly, the balance between security and
usability can be acceptable.
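A bare-bones sketch of the synchronization step might look like this; the platform names are invented, and a real product would use each platform's own management interface rather than a Python dictionary:

```python
import hashlib
import hmac
import os

# Hypothetical per-platform credential stores.
platforms = {"mainframe": {}, "erp": {}, "email": {}}

def sync_password(user, new_password):
    """Push one password out to every platform, transparently to the
    user; each platform keeps its own salted hash, never the plaintext."""
    for store in platforms.values():
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", new_password.encode(), salt, 100_000)
        store[user] = (salt, digest)

def check_password(platform, user, password):
    salt, digest = platforms[platform][user]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

sync_password("tom", "Correct-Horse-42!")
# The same (strong) password now works everywhere.
assert all(check_password(p, "tom", "Correct-Horse-42!") for p in platforms)
```

Because only one password exists, the password policy enforced at sync time can afford to be much stricter than what users would tolerate across seven separate systems.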
Self-Service Password Reset Some products are implemented to allow users to
reset their own passwords. This does not mean that the users have any type of privileged
permissions on the systems to allow them to change their own credentials. Instead, dur-
ing the registration of a user account, the user can be asked to provide several personal
questions (school graduated from, favorite teacher, favorite color, and so on) in a ques-
tion-and-answer form. When the user forgets his password, he may be required to pro-
vide another authentication mechanism (smart card, token) and to answer these previ-
ously answered questions to prove his identity. If he does this properly, he is allowed to
change his password. If he does not do this properly, he is fired because he is an idiot.
Products are available that allow users to change their passwords through other
means. For example, if you forgot your password, you may be asked to answer some of
the questions answered during the registration process of your account. If you do this
correctly, an e-mail is sent to you with a link you must click. The password management
product has your identity tied to the answers you gave to the questions during your ac-
count registration process and to your e-mail address. If the user does everything cor-
rectly, he is given a screen that allows him to reset his password.
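The question-and-answer check plus e-mailed token could be sketched as follows; the enrolled questions and the token format are assumptions for illustration:

```python
import hashlib
import secrets

def _h(answer):
    # Store only a hash of each answer, normalized to lowercase.
    return hashlib.sha256(answer.strip().lower().encode()).hexdigest()

# Answers captured during account registration (hypothetical).
enrolled = {"tom": {"favorite teacher": _h("Mrs. Pine"),
                    "first car": _h("Civic")}}

def request_reset(user, answers):
    """Compare the caller's answers against the enrolled hashes; on
    success return a one-time token that would be e-mailed to the user."""
    stored = enrolled.get(user, {})
    if stored and all(_h(a) == stored.get(q) for q, a in answers.items()):
        return secrets.token_urlsafe(16)
    return None

token = request_reset("tom", {"favorite teacher": "mrs. pine", "first car": "Civic"})
assert token is not None          # correct answers yield a reset token
assert request_reset("tom", {"favorite teacher": "Mr. Oak",
                             "first car": "Civic"}) is None
```

Delivering the token by e-mail ties the reset to a second factor the legitimate user controls, rather than relying on the answers alone.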
CAUTION The product should not ask for information that is publicly
available, as in your mother’s maiden name, because anyone can find that
out and attempt to identify himself as you.
Assisted Password Reset Some products are created for help-desk employees
who need to work with individuals when they forget their password. The help-desk
employee should not know or ask the individual for her password. This would be a
security risk since only the owner of the password should know the value. The help-
desk employee also should not just change a password for someone calling in without
authenticating that person first. This can allow social engineering attacks where an at-
tacker calls the help desk and indicates she is someone who she is not. If this took
place, then an attacker would have a valid employee password and could gain unau-
thorized access to the company’s jewels.
The products that provide assisted password reset functionality allow the help-desk
individual to authenticate the caller before resetting the password. This authentication
process is commonly performed through the question-and-answer process described in
the previous section. The help-desk individual and the caller must be identified and
authenticated through the password management tool before the password can be
changed. Once the password is updated, the system that the user is authenticating to
should require the user to change her password again. This would ensure that only she
(and not she and the help-desk person) knows her password. The goal of an assisted
password reset product is to reduce the cost of support calls and ensure all calls are
processed in a uniform, consistent, and secure fashion.
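A minimal sketch of this flow (account records hypothetical) shows the two key controls: the reset only proceeds for a verified caller, and the account is flagged so the user must change the password at next login:

```python
# Hypothetical account records.
accounts = {"alice": {"password": "old-value", "must_change": False}}

def assisted_reset(caller_verified, user, temp_password):
    """Help-desk reset: proceeds only after the caller has been
    authenticated (e.g., via Q&A), and flags the account so the user
    must choose a new password at next login -- leaving the help-desk
    worker without knowledge of the final password."""
    if not caller_verified:
        return False  # social-engineering attempt blocked
    accounts[user]["password"] = temp_password
    accounts[user]["must_change"] = True
    return True

assert not assisted_reset(False, "alice", "Temp123!")  # unverified caller
assert assisted_reset(True, "alice", "Temp123!")
assert accounts["alice"]["must_change"] is True
```

The `must_change` flag is what ensures that, after the user's forced change, only she knows the working password.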
Various password management products on the market provide one or all of these
functionalities. Since IdM is about streamlining identification, authentication, and access
control, one of these products is typically integrated into the enterprise IdM solution.
Legacy Single Sign-On We will cover specific single sign-on (SSO) technologies
later in this chapter, but at this point we want to understand how SSO products are
commonly used as an IdM solution or as part of a larger IdM enterprise-wide solution.
An SSO technology allows a user to authenticate one time and then access resourc-
es in the environment without needing to re-authenticate. This may sound the same as
password synchronization, but it is not. With password synchronization, a product
takes the user’s password and updates each user account on each different system and
application with that one password. If Tom’s password is iwearpanties, then this is the
value he must type into each and every application and system he must access. In an
SSO situation, Tom would send his password to one authentication system. When Tom
requests to access a network application, the application will send over a request for
credentials, but the SSO software will respond to the application for Tom. So in SSO
environments, the SSO software intercepts the login prompts from network systems
and applications and fills in the necessary identification and authentication informa-
tion (that is, the username and password) for the user.
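The difference from password synchronization can be sketched as follows: the SSO agent holds a separate credential set per application and answers each login prompt itself (the class and credential values are invented for illustration):

```python
class SsoAgent:
    """Sketch of legacy SSO: the agent authenticates the user once,
    then answers each application's login prompt on the user's behalf."""

    def __init__(self, vault):
        self.vault = vault            # per-application credential sets
        self.authenticated = set()

    def login(self, user, master_password, master_db):
        if master_db.get(user) == master_password:
            self.authenticated.add(user)
            return True
        return False

    def answer_prompt(self, user, application):
        # Intercepts the app's login prompt; note each application may
        # hold a completely different credential, unlike password sync.
        if user not in self.authenticated:
            return None
        return self.vault[user][application]

master_db = {"tom": "one-strong-passphrase"}
vault = {"tom": {"erp": ("tom", "eRp!9"), "mainframe": ("TOM01", "MF#22")}}
sso = SsoAgent(vault)
assert sso.login("tom", "one-strong-passphrase", master_db)
print(sso.answer_prompt("tom", "mainframe"))  # → ('TOM01', 'MF#22')
```

The user types one passphrase; the agent supplies every downstream credential, which is why the back-end passwords never need to match.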
Even though password synchronization and single sign-on are different technolo-
gies, they still have the same vulnerability. If an attacker uncovers a user’s credential set,
she can have access to all the resources that the legitimate user may have access to.
An SSO solution may also provide a bottleneck or single point of failure. If the SSO
server goes down, users are unable to access network resources. This is why it’s a good
idea to have some type of redundancy or fail-over technology in place.
Most environments are not homogeneous in devices and applications, which makes
it more difficult to have a true enterprise SSO solution. Legacy systems often require
a different type of authentication process than the SSO software can provide. So
potentially 80 percent of the devices and applications may be able to interact with the
SSO software and the other 20 percent will require users to authenticate to them di-
rectly. In many of these situations, the IT department may come up with their own
homemade solutions, such as using login batch scripts for the legacy systems.
Are there any other downfalls with SSO we should be aware of? Well, it can be ex-
pensive to implement, especially in larger environments. Many times companies evalu-
ate purchasing this type of solution and find out it is too cost-prohibitive. The other
issue is that it would mean all of the users’ credentials for the company’s resources are
stored in one location. If an attacker were able to break into this storehouse, she could
access whatever she wanted, and do whatever she wanted, with the company’s assets.
As always, security, functionality, and cost must be properly weighed to determine
the best solution for the company.
Account Management Account management is often not performed efficiently
and effectively in companies today. Account management deals with creating user ac-
counts on all systems, modifying the account privileges when necessary, and decom-
missioning the accounts when they are no longer needed. Most environments have
their IT department create accounts manually on the different systems, users are given
excessive rights and permissions, and when an employee leaves the company, many or
all of the accounts stay active. This is because a centralized account management tech-
nology has not been put into place.
Account management products attempt to attack these issues by allowing an ad-
ministrator to manage user accounts across multiple systems. When there are multiple
directories containing user profiles or access information, the account management
software allows for replication between the directories to ensure each contains the same
up-to-date information.
Now let’s think about how accounts are set up. In many environments, when a new
user needs an account, a network administrator will set up the account(s) and provide
some type of privileges and permissions. But how would the network administrator
know what resources this new user should have access to and what permissions should
be assigned to the new account? In most situations, he doesn’t—he just wings it. This is
how users end up with too much access to too much stuff. What should take place in-
stead is implementing a workflow process that allows for a request for a new user ac-
count. This request is approved, usually, by the employee’s manager, and the accounts
are automatically set up on the systems, or a ticket is generated for the technical staff to
set up the account(s). If there is a request for a change to the permissions on the ac-
count or if an account needs to be decommissioned, it goes through the same process.
The request goes to a manager (or whoever is delegated with this approval task), the
manager approves it, and the changes to the various accounts take place.
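The request-approve-apply workflow with an audit trail might be sketched like this (function and change names are assumptions, not any vendor's API):

```python
audit_log = []
accounts = {}

def apply_change(user, change):
    if change == "create":
        accounts[user] = {"active": True}
    elif change == "decommission":
        accounts[user]["active"] = False

def request_account_change(requester, user, change, approver_decision):
    """Workflow sketch: the request is logged, routed to the manager,
    and applied only on approval -- every step leaving an audit trail."""
    audit_log.append(("requested", requester, user, change))
    if not approver_decision:
        audit_log.append(("denied", user, change))
        return "denied"
    audit_log.append(("approved", user, change))
    apply_change(user, change)
    return "applied"

assert request_account_change("hr-system", "bob", "create", True) == "applied"
assert accounts["bob"]["active"]
assert request_account_change("hr-system", "bob", "decommission", True) == "applied"
assert not accounts["bob"]["active"]
```

Because every request and approval lands in the log, an auditor can trace exactly who asked for, approved, and received each account change.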
The automated workflow component is common in account management products
that provide IdM solutions. Not only does this reduce the potential errors that can take
place in account management, each step (including account approval) is logged and
tracked. This allows for accountability and provides documentation for use in backtrack-
ing if something goes wrong. It also helps ensure that only the necessary amount of ac-
cess is provided to the account and that there are no “orphaned” accounts still active
when employees leave the company. In addition, these types of processes are the kind
your auditors will be looking for—and we always want to make the auditors happy!
NOTE These types of account management products are commonly used to
set up and maintain internal accounts. Web access control management is used
mainly for external users.
As with SSO products, enterprise account management products are usually expen-
sive and can take years to properly roll out across the enterprise. Regulatory require-
ments, however, are making more and more companies spend the money for these
types of solutions—which the vendors love!
Provisioning Let’s review what we know, and then build upon these concepts. Most
IdM solutions pull user information from the HR database, because the data are already
collected and held in one place and are constantly updated as employees’ or contractors’
statuses change. So user information will be copied from the HR database (referred to as
the authoritative source) into a directory, which we covered in an earlier section.
When a new employee is hired, the employee’s information, along with his man-
ager’s name, is pulled from the HR database into the directory. The employee’s man-
ager is automatically sent an e-mail asking for approval of this new account. After the
manager approves, the necessary accounts are set up on the required systems.
Over time, this new user will commonly have different identity attributes, which
will be used for authentication purposes, stored in different systems in the network.
When a user requests access to a resource, all of his identity data have already been
copied from other identity stores and the HR database and held in this centralized di-
rectory (sometimes called the identity repository). This may be a meta-directory or a
virtual directory. The access control component of the IdM system will compare the
user’s request to the IdM access control policy and ensure the user has the necessary
identification and authentication pieces in place before allowing access to the resource.
When this employee is fired, this new information goes from the HR database to
the directory. An e-mail is automatically generated and sent to the manager to allow
this account to be decommissioned. Once this is approved, the account management
software disables all of the accounts that had been set up for this user.
This example illustrates user account management and provisioning, which is the
life-cycle management of identity components.
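The hire-to-terminate life cycle driven by the authoritative HR source can be sketched as follows (the system names are hypothetical):

```python
accounts = {}

def hr_event(employee, status, manager_approves=True):
    """Provisioning sketch driven by the authoritative HR source:
    hire -> create accounts (after manager approval);
    terminate -> disable every account set up for the user."""
    if status == "hired" and manager_approves:
        accounts[employee] = {"email": True, "erp": True, "vpn": True}
    elif status == "terminated":
        for system in accounts.get(employee, {}):
            accounts[employee][system] = False

hr_event("dana", "hired")
assert all(accounts["dana"].values())   # all accounts active
hr_event("dana", "terminated")
assert not any(accounts["dana"].values())  # every account disabled
```

Driving both events from the single HR record is what prevents the "orphaned account" problem: termination in one place disables access everywhere.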
Why do we have to worry about all of this identification and authentication stuff?
Because users always want something—they are very selfish. Okay, users actually need
access to resources to carry out their jobs, but what do they need access to, and what
level of access? This question is actually a very difficult one in our distributed, hetero-
geneous, and somewhat chaotic environments today. Too much access to resources
opens the company up to potential fraud and other risks. Too little access means the
user cannot do his job. So we are required to get it just right.
Authoritative System of Record
The authoritative source is the “system of record,” or the location where identity
information originates and is maintained. It should have the most up-to-date
and reliable identity information. An “Authoritative System of Record” (ASOR) is
a hierarchical, tree-like structure that tracks subjects and their authorization
chains. Organizations need an automated and reliable way of detecting and
managing unusual or suspicious changes to user accounts and a method of col-
lecting this type of data through extensive auditing capabilities. The ASOR should
contain the subject’s name, associated accounts, authorization history per ac-
count, and provision details. This type of workflow and accounting is becoming
more in demand for regulatory compliance because it allows auditors to under-
stand how access is being centrally controlled within an environment.

Chapter 3: Access Control
179
User provisioning refers to the creation, maintenance, and deactivation of user ob-
jects and attributes as they exist in one or more systems, directories, or applications, in
response to business processes. User provisioning software may include one or more of
the following components: change propagation, self-service workflow, consolidated
user administration, delegated user administration, and federated change control. User
objects may represent employees, contractors, vendors, partners, customers, or other
recipients of a service. Services may include electronic mail, access to a database, access
to a file server, and so on.
Great. So we create, maintain, and deactivate accounts as required based on busi-
ness needs. What else does this mean? The creation of the account also is the creation
of the access rights to company assets. It is through provisioning that users either are
given access or access is taken away. Throughout the life cycle of a user identity, access
rights, permissions, and privileges should change as needed in a clearly understood,
automated, and audited process.
By now, you should be able to connect how these different technologies work to-
gether to provide an organization with streamlined IdM. Directories are built to contain
user and resource information. A metadata directory pulls identity information that re-
sides in different places within the network to allow IdM processes to only have to get
the needed data for their tasks from this one location. User management tools allow for
automated control of user identities through their lifetimes and can provide provision-
ing. A password management tool is in place so that productivity is not slowed down by
a forgotten password. A single sign-on technology requires internal users to only authen-
ticate once for enterprise access. Web access management tools provide a single sign-on
service to external users and control access to web-based resources. Figure 3-6 provides a
visual example of how many of these components work together.
Profile Update
I changed my dog’s name from Cornelis Vreeswijk Hollander to Spot.
Response: Good choice, but we don’t need this in your identity profile.
Most companies do not just contain the information “Bob Smith” for a user and
make all access decisions based on this data. There can be a plethora of information on
a user that is captured (e-mail address, home address, phone number, panty size, and
so on). When this collection of data is associated with the identity of a user, we call it a
profile.
The profile should be centrally located for easier management. IdM enterprise solu-
tions have profile update technology that allows an administrator to create, make
changes, or delete these profiles in an automated fashion when necessary. Many user
profiles contain nonsensitive data that the user can update himself (called self-service).
So if George moved to a new house, there should be a profile update tool that allows
him to go into his profile and change his address information. Now, his profile may
also contain sensitive data that should not be available to George—for example, his
access rights to resources or information that he is going to get laid off on Friday.
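A minimal sketch of the self-service idea, assuming a simple split between fields a user may edit himself and fields only an administrator may touch (the field names are hypothetical):

```python
# Sketch of a self-service profile update: users may change only
# nonsensitive fields; sensitive fields require an administrator.
# Field names are illustrative.

SELF_SERVICE_FIELDS = {"home_address", "phone_number", "email"}

def update_profile(profile, field, value, is_admin=False):
    """Apply an update, enforcing the self-service policy."""
    if field in SELF_SERVICE_FIELDS or is_admin:
        profile[field] = value
        return True
    return False    # e.g., users cannot change their own access rights

george = {"name": "George", "home_address": "12 Elm St",
          "access_rights": "read-only"}
update_profile(george, "home_address", "99 Oak Ave")   # allowed
update_profile(george, "access_rights", "admin")       # denied
```

So George can move house all he wants, but he cannot quietly promote himself.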
You have interacted with a profile update technology if you have requested to up-
date your personal information on a web site, as in Orbitz, Amazon, or Expedia. These
companies provide you with the capability to sign in and update the information they

allow you to access. This could be your contact information, home address, purchasing
preferences, or credit card data. This information is then used to update their customer
relationship management (CRM) system so they know where to send you their junk
mail advertisements or spam messages.
Federation
Beam me up, Scotty!
The world continually gets smaller as technology brings people and companies
closer together. Many times, when we are interacting with just one web site, we are actu-
ally interacting with several different companies—we just don’t know it. The reason we
don’t know it is because these companies are sharing our identity and authentication
information behind the scenes. This is not done for nefarious purposes necessarily, but
to make our lives easier and to allow merchants to sell their goods without much effort
on our part.
For example, a person wants to book an airline flight and a hotel room. If the airline
company and hotel company use a federated identity management system, this means
they have set up a trust relationship between the two companies and will share cus-
tomer identification and, potentially, authentication information. So when I book my
flight on Southwest, the web site asks me if I want to also book a hotel room. If I click
“Yes,” I could then be brought to the Hilton web site, which provides me with information
on the closest hotel to the airport I’m flying into. Now, to book a room I don’t have
to log in again. I logged in on the Southwest web site, and that web site sent my
information over to the Hilton web site, all of which happened transparently to me.

Figure 3-6 Enterprise identity management system components
A federated identity is a portable identity, and its associated entitlements, that can be
used across business boundaries. It allows a user to be authenticated across multiple IT
systems and enterprises. Identity federation is based upon linking a user’s otherwise dis-
tinct identities at two or more locations without the need to synchronize or consolidate
directory information. Federated identity offers businesses and consumers a more conve-
nient way of accessing distributed resources and is a key component of e-commerce.
Digital Identity
In the digital world, I am a very different person.
Response: Thank goodness.
An interesting little fact that not many people are aware of is that a digital
identity is made up of attributes, entitlements, and traits. Many of us just think of
identity as a user ID that is mapped to an individual. The truth is that it is usu-
ally more complicated than that.
A user’s identity can be a collection of her attributes (department, role in
company, shift time, clearance, and others); her entitlements (resources available
to her, authoritative rights in the company, and so on); and her traits (biometric
information, height, sex, and so forth).
So if a user requests access to a database that contains sensitive employee in-
formation, the IdM solution would need to pull together the necessary identity
information and her supplied credentials before she is authorized access. If the
user is a senior manager (attribute), with a Secret clearance (attribute), and has
access to the database (entitlement)—she is granted the permissions Read and
Write to certain records in the database Monday through Friday, 8 A.M. to 5 P.M.
(attribute).
Another example is if a soldier requests to be assigned an M-16 firearm. She
must be in the 34th division (attribute), have a top secret clearance (attribute), her
supervisor must have approved this (entitlement), and her physical features
(traits) must match the ID card she presents to the firearm depot clerk.
The directory (or meta-directory) of the IdM system has all of this identity
information centralized, which is why it is so important.
Many people think that just logging in to a domain controller or a network
access server is all that is involved in identity management. But if you peek under
the covers, you can find an array of complex processes and technologies working
together.
The CISSP exam is not currently getting into this level of detail (entitlement,
attribute, traits) pertaining to IdM, but in the real world there are many facets to
identification, authentication, authorization, and auditing that make it a com-
plex beast.

Web portal functions are parts of a web site that act as a point of access to information.
A portal presents information from diverse sources in a unified manner. It can
offer various services, such as e-mail, news updates, stock prices, data access, price
look-ups, access to databases, and entertainment. Portals provide a way for organizations to
present one consistent interface with one “look and feel” and various functionality
types. For example, your company might have a web portal that you can log into and it
provides access to many different systems and their functionalities, but it seems as
though you are only interacting with one system because the interface is “clean” and
organized. Common public web portals are iGoogle, Yahoo!, AOL, etc. They mash up,
or combine, web services (web-based functions) from several different entities and
present them in one central website.
A web portal is made up of portlets, which are pluggable user-interface software
components that present information from other systems. A portlet is an interactive
application that provides a specific type of web service functionality (e-mail, news feed,
weather updates, forums). A portal is made up of individual portlets to provide a pleth-
ora of services through one interface. It is a way of centrally providing a set of web ser-
vices. Users can configure their view to the portal by enabling or disabling these various
portlet functions.
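The enable/disable behavior can be sketched as follows; the Portal class and portlet names here are illustrative, not any real portal framework’s API.

```python
# Sketch of a portal composed of portlets the user can enable or
# disable; all names are illustrative.

class Portal:
    def __init__(self, available):
        self.available = set(available)
        self.enabled = set(available)   # everything on by default

    def disable(self, portlet):
        self.enabled.discard(portlet)

    def enable(self, portlet):
        # Only portlets this portal actually offers can be turned on.
        if portlet in self.available:
            self.enabled.add(portlet)

    def render(self):
        # Each enabled portlet contributes its content to one page.
        return sorted(self.enabled)

p = Portal(["e-mail", "news feed", "weather", "forums"])
p.disable("forums")
```

The user sees one page, but each portlet behind it may be served by a different entity, which is exactly why the trust and authentication handling matter.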
Since each of these portlets can be provided by different entities, how user authen-
tication information is handled must be tightly controlled, and there must be a high
level of trust between these different entities. If you worked for a college, for example,
there might be one web portal available to students, parents, faculty members, and the
public. The public should only be able to view and access a small subset of available
portlets and not have access to more powerful web services (e-mails, database access).

Students would be able to log in and gain access to their grades, assignments, and a
student forum. Faculty members could gain access to all of these web services, including
the school’s e-mail service and the central database, which contains all of the
students’ information. If there is a software flaw or misconfiguration, it is possible that
someone could gain access to something they are not supposed to reach.
The following sections explain the various types of authentication methods com-
monly used and integrated in many web-based federated identity management pro-
cesses and products today.
Access Control and Markup Languages
You can only do what I want you to do when interacting with my web portal.
If you can remember when HyperText Markup Language (HTML) was all we had to
make a static web page, you’re old. Being old in the technology world is different than
in the regular world; HTML came out in the early 1990s. HTML came from Standard
Generalized Markup Language (SGML), which came from the Generalized Markup
Language (GML). We still use HTML, so it is certainly not dead and gone; the industry
has just improved upon the markup languages available for use to meet today’s needs.
A markup language is a way to structure text and data sets, and it dictates how these
will be viewed and used. When you adjust margins and other formatting capabilities in
a word processor, you are marking up the text in the word processor’s markup language.
If you develop a web page, you are using some type of markup language. You can con-
trol how it looks and some of the actual functionality the page provides. The use of a
standard markup language also allows for interoperability. If you develop a web page
and follow basic markup language standards, the page will basically look and act the
same no matter what web server is serving up the web page or what browser the viewer
is using to interact with it.
As the Internet grew in size and the World Wide Web (WWW) expanded in func-
tionality, and as more users and organizations came to depend upon web sites and
web-based communication, the basic and elementary functions provided by HTML
were not enough. And instead of every web site having its own proprietary markup
language to meet its specific functionality requirements, the industry had to have a way
for functionality needs to be met and still provide interoperability for all web server
and web browser interaction. This is the reason that Extensible Markup Language (XML)
was developed. XML is a universal and foundational standard that provides a structure
for other independent markup languages to be built from and still allow for interoper-
ability. Markup languages with various functionalities were built from XML, and while
each language provides its own individual functionality, if they all follow the core rules
of XML then they are interoperable and can be used across different web-based applica-
tions and platforms.
As an analogy, let’s look at the English language. Sam is a biology scientist, Trudy is
an accountant, and Val is a network administrator. They all speak English, so they have
a common set of communication rules, which allow them to talk with each other, but
each also has his or her own “spin-off” language that builds upon and uses the English
language as its core. Sam uses terms like “mitochondrial amino acid genetic strains” and
“DNA polymerase.” Trudy uses terms such as “accrual accounting” and “acquisition
indigestion.” Val uses terms such as “multiprotocol label switching” and “subkey creation.” Each

profession has its own “language” to meet its own needs, but each is based off the same
core language—English. In the world of the WWW, various web sites need to provide
different types of functionality through the use of their own language types but still
need a way to communicate with each other and their users in a consistent manner,
which is why they are based upon the same core language structure (XML).
There are hundreds of markup languages based upon XML, but we are going to fo-
cus on the ones that are used for identity management and access control purposes.
The Service Provisioning Markup Language (SPML) allows for the exchange of pro-
visioning data between applications, which could reside in one organization or many.
SPML allows for the automation of user management (account creation, amendments,
revocation) and access entitlement configuration related to electronically published
services across multiple provisioning systems. This markup language allows for the in-
tegration and interoperation of service provisioning requests across various platforms.
When a new employee is hired at a company, that employee usually needs access to
a wide range of systems, servers, and applications. Setting up new accounts on each and
every system, properly configuring access rights, and then maintaining those accounts
throughout their lifetimes is time-consuming, laborious, and error-prone. What if the
company has 20,000 employees and thousands of network resources that each em-
ployee needs various access rights to? This opens the door for confusion, mistakes,
vulnerabilities, and a lack of standardization.
SPML allows for all these accounts to be set up and managed simultaneously across
the various systems and applications. SPML is made up of three main entities: the Re-
questing Authority (RA), which is the entity that is making the request to set up a new
account or make changes to an existing account; the Provisioning Service Provider
(PSP), which is the software that responds to the account requests; and the Provision-
ing Service Target (PST), which is the entity that carries out the provisioning activities
on the requested system.
So when a new employee is hired, there is a request to set up the necessary user ac-
counts and access privileges on several different systems and applications across the
enterprise. This request originates in a piece of software carrying out the functionality
of the RA. The RA creates SPML messages, which provide the requirements of the new
account, and sends them to a piece of software that is carrying out the functionality of
the PSP. This piece of software reviews the requests and compares them to the organiza-
tion’s approved account creation criteria. If these requests are allowed, the PSP sends
new SPML messages to the end systems (PST) that the user actually needs to access.
Software on the PST sets up the requested accounts and configures the necessary access
rights. If this same employee is fired three months later, the same process is followed
and all necessary user accounts are deleted. This allows for consistent account manage-
ment in complex environments. These steps are illustrated in Figure 3-7.
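Here is a rough sketch of the RA, PSP, and PST roles. Real SPML is an XML protocol standardized by OASIS; this Python stands in for the message flow only, and every name in it is illustrative.

```python
# Sketch of the SPML request flow: a Requesting Authority (RA)
# builds an add-user request, the Provisioning Service Provider (PSP)
# checks it against policy, and forwards it to each Provisioning
# Service Target (PST). The message fields are simplified.

def ra_build_request(username, systems):
    return {"operation": "addRequest", "user": username, "targets": systems}

def psp_process(request, approved_systems, pst_registry):
    results = {}
    for target in request["targets"]:
        # Policy check: only provision on systems this user may access.
        if target in approved_systems:
            results[target] = pst_registry[target](request["user"])
        else:
            results[target] = "denied"
    return results

def make_pst(system_name):
    accounts = set()
    def provision(user):
        accounts.add(user)          # account created on this end system
        return "created"
    return provision

psts = {"mail": make_pst("mail"), "crm": make_pst("crm"), "hr": make_pst("hr")}
req = ra_build_request("bsmith", ["mail", "crm", "hr"])
outcome = psp_process(req, approved_systems={"mail", "crm"}, pst_registry=psts)
```

The point is the division of labor: the RA only asks, the PSP decides, and the PSTs carry out the account work on the target systems.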
When there is a need to allow a user to log in one time and gain access to different
and separate web-based applications, the actual authentication data have to be shared
between the systems maintaining those web applications securely and in a standard-
ized manner. This is the role that the Security Assertion Markup Language (SAML) plays.
It is an XML standard that allows the exchange of authentication and authorization
data to be shared between security domains. When you purchase an airline flight on
www.southwest.com, you are prompted to also purchase a hotel room and a rental car.

Southwest Airlines does not provide all these services itself, but the company has rela-
tionships set up with the companies that do provide these services. The Southwest
Airlines portal acts as a customer entry point. Once you are authenticated through their
web site and you request to purchase a hotel room, your authorization data are sent
from the airline web server to the hotel company web server. This allows you to pur-
chase an airline flight and hotel room from two different companies through one cen-
tralized portal.
SAML provides the authentication pieces to federated identity management systems
to allow business-to-business (B2B) and business-to-consumer (B2C) transactions. In
our previous example, the user is considered the principal, Southwest Airlines would be
considered the identity provider, and the hotel company that receives the user’s authen-
tication information from the Southwest Airlines web server is considered the service
provider.
This is not the only way that the SAML language can be used. The digital world has
evolved to being able to provide extensive services and functionality to users through
web-based machine-to-machine communication standards. Web services is a collection
of technologies and standards that allow services (weather updates, stock tickers,
e-mail, customer relationship management, etc.) to be provided on distributed systems and
be “served up” in one place. For example, if you go to the iGoogle portal, you can con-
figure what services you want available to you each time you visit this page. All of these
services you choose (news feeds, videos, Gmail, calendar, etc.) are not provided by one
server in a Google data processing location. These services come from servers all over
the world; they are just provided in one central portal for your viewing pleasure.

Figure 3-7 SPML provisioning steps
Transmission of SAML data can take place over different protocol types, but a com-
mon one is Simple Object Access Protocol (SOAP). SOAP is a specification that outlines
how information pertaining to web services is exchanged in a structured manner. It
provides the basic messaging framework, which allows users to request a service and, in
exchange, the service is made available to that user. Let’s say you need to interact with
your company’s customer relationship management (CRM) system, which is hosted
and maintained by the vendor—for example, Salesforce.com. You would log in to your
company’s portal and double-click a link for Salesforce. Your company’s portal will take
this request and your authentication data and package it up in a SAML format and
encapsulate that data into a SOAP message. This message would be transmitted over an
HTTP connection to the Salesforce vendor site, and once you are authenticated, you are
provided with a screen that shows you the company’s customer database. The SAML,
SOAP, and HTTP relationship is illustrated in Figure 3-8.
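To make the encapsulation concrete, the following sketch builds a SOAP envelope carrying a bare-bones SAML-style assertion using Python’s standard library. The namespace URIs are the real SOAP 1.1 and SAML 2.0 ones, but the assertion is stripped down to a single NameID for illustration; production assertions carry signatures, conditions, and much more.

```python
# Sketch: a SAML assertion wrapped in a SOAP envelope. The resulting
# XML would travel in the body of an HTTP POST to the service provider.

import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def wrap_assertion(subject):
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    assertion = ET.SubElement(body, f"{{{SAML_NS}}}Assertion")
    subj = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    name_id = ET.SubElement(subj, f"{{{SAML_NS}}}NameID")
    name_id.text = subject          # who this assertion is about
    return ET.tostring(envelope, encoding="unicode")

message = wrap_assertion("bob@example.com")
```

The receiving service provider parses the envelope, pulls the assertion out of the SOAP body, and uses it to authenticate the subject, which is the layering Figure 3-8 depicts.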
The use of web services in this manner also allows organizations to provide
service-oriented architecture (SOA) environments. An SOA is a way to provide independent
services residing on different systems in different business domains in one consistent
manner. For example, if your company has a web portal that allows you to access the
company’s CRM, an employee directory, and a help-desk ticketing application, this is
most likely being provided through an SOA. The CRM system may be within the mar-
keting department, the employee directory may be within the HR department, and the
ticketing system may be within the IT department, but you can interact with all of them
through one interface. SAML is a way to send your authentication information to each
system, and SOAP allows this type of information to be presented and processed in a
unified manner.
Figure 3-8 SAML material embedded within an HTTP message

The last XML-based standard we will look at is Extensible Access Control Markup
Language (XACML). XACML is used to express security policies and access rights to
assets provided through web services and other enterprise applications. SAML is just a
way to send around your authentication information, as in a password, key, or digital
certificate, in a standard format. SAML does not tell the receiving system how to inter-
pret and use this authentication data. Two systems have to be configured to use the
same type of authentication data. If you log in to System A and provide a password and
try to access System B, which only uses digital certificates for authentication purposes,
your password is not going to give you access to System B’s service. So both systems
have to be configured to use passwords. But just because your password is sent to Sys-
tem B does not mean you have complete access to all of System B’s functionality. Sys-
tem B has access policies that dictate the operations specific subjects can carry out on
its resources. The access policies can be developed in the XACML format and enforced
by System B’s software. XACML is both an access control policy language and a process-
ing model that allows for policies to be interpreted and enforced in a standard manner.
When your password is sent to System B, there is a rules engine on that system that in-
terprets and enforces the XACML access control policies. If the access control policies
are created in the XACML format, they can be installed on both System A and System B
to allow for consistent security to be enforced and managed.
XACML uses a Subject element (requesting entity), a Resource element (requested
entity), and an Action element (types of access). So if you request access to your com-
pany’s CRM, you are the Subject, the CRM application is the Resource, and your access
parameters are outlined in the Action element.
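A simplified sketch of how Subject, Resource, and Action elements drive an access decision follows. Real XACML policies are XML documents evaluated by a policy decision point; the rule format below is an illustration only.

```python
# Sketch of XACML-style evaluation: each rule names a Subject,
# a Resource, and the Actions permitted. Rule contents are made up.

POLICIES = [
    {"subject": "senior_manager", "resource": "CRM", "actions": {"read", "write"}},
    {"subject": "contractor",     "resource": "CRM", "actions": {"read"}},
]

def evaluate(subject, resource, action):
    for rule in POLICIES:
        if (rule["subject"] == subject and rule["resource"] == resource
                and action in rule["actions"]):
            return "Permit"
    # No matching rule: a deny-biased enforcement point refuses access.
    return "Deny"
```

Because the policy is expressed in one standard format, the same rules could be installed on both System A and System B and enforced consistently.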
NOTE Who develops and keeps track of all of these standardized languages?
The Organization for the Advancement of Structured Information Standards (OASIS).
This organization develops and maintains the standards for how various
aspects of web-based communication are built and maintained.
Web services, SOA environments, and the implementation of these different XML-
based markup languages vary in nature because they allow for extensive flexibility. So
much of the world’s communication takes place through web-based processes that it is
becoming increasingly important for security professionals to understand these issues
and technologies.
Biometrics
I would like to prove who I am. Please look at the blood vessels at the back of my eyeball.
Response: Gross.
Biometrics verifies an individual’s identity by analyzing a unique personal attribute
or behavior, which is one of the most effective and accurate methods of verifying iden-
tification. Biometrics is a very sophisticated technology; thus, it is much more expen-
sive and complex than the other types of identity verification processes. A biometric
system can make authentication decisions based on an individual’s behavior, as in sig-
nature dynamics, but these can change over time and possibly be forged. Biometric
systems that base authentication decisions on physical attributes (such as iris, retina, or
fingerprint) provide more accuracy because physical attributes typically don’t change,
absent some disfiguring injury, and are harder to impersonate.

Biometrics is typically broken up into two different categories. The first is the phys-
iological. These are traits that are physical attributes unique to a specific individual.
Fingerprints are a common example of a physiological trait used in biometric systems.
The second category of biometrics is known as behavioral. This is based on a char-
acteristic of an individual to confirm his identity. An example is signature dynamics.
Physiological is “what you are” and behavioral is “what you do.”
A biometric system scans a person’s physiological attribute or behavioral trait and
compares it to a record created in an earlier enrollment process. Because this system
inspects the grooves of a person’s fingerprint, the pattern of someone’s retina, or the
pitches of someone’s voice, it must be extremely sensitive. The system must perform
accurate and repeatable measurements of anatomical or behavioral characteristics. This
type of sensitivity can easily cause false positives or false negatives. The system must be
calibrated so these false positives and false negatives occur infrequently and the results
are as accurate as possible.
When a biometric system rejects an authorized individual, it is called a Type I error
(false rejection rate). When the system accepts impostors who should be rejected, it is
called a Type II error (false acceptance rate). The goal is to obtain low numbers for each
type of error, but Type II errors are the most dangerous and thus the most important to
avoid.
When comparing different biometric systems, many different variables are used,
but one of the most important metrics is the crossover error rate (CER). This rating is
stated as a percentage and represents the point at which the false rejection rate equals
the false acceptance rate. This rating is the most important measurement when deter-
mining the system’s accuracy. A biometric system that delivers a CER of 3 will be more
accurate than a system that delivers a CER of 4.
NOTE Crossover error rate (CER) is also called equal error rate (EER).
What is the purpose of this CER value anyway? Using the CER as an impartial judg-
ment of a biometric system helps create standards by which products from different
vendors can be fairly judged and evaluated. If you are going to buy a biometric system,
you need a way to compare the accuracy between different systems. You can just go by
the different vendors’ marketing material (they all say they are the best), or you can
compare the different CER values of the products to see which one really is more accu-
rate than the others. It is also a way to keep the vendors honest. One vendor may tell
you, “We have absolutely no Type II errors.” This would mean that their product would
not allow any imposters to be improperly authenticated. But what if you asked the ven-
dor how many Type I errors their product had and she sheepishly replied, “Our Type I
error rate averages around 90 percent.” That would mean that 90 percent of the
authentication attempts would be rejected, which would negatively affect your employees’
productivity. So you can ask about their CER value, which represents when the Type I and Type
II errors are equal, to give you a better understanding of the product’s overall accuracy.

Individual environments have specific security level requirements, which will dictate
how many Type I and Type II errors are acceptable. For example, a military institution
that is very concerned about confidentiality would be prepared to accept a certain num-
ber of Type I errors, but would absolutely not accept any false accepts (Type II errors).
Because all biometric systems can be calibrated, if you lower the Type II error rate by
adjusting the system’s sensitivity, it will result in an increase in Type I errors. The military
institution would obviously calibrate the biometric system to lower the Type II errors to
zero, but that would mean it would have to accept a higher rate of Type I errors.
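This trade-off can be demonstrated with a short calculation. The match scores below are made-up illustration data, not from any real biometric system.

```python
# Sketch of calibration: raising the matching threshold lowers the
# Type II (false acceptance) rate but raises the Type I (false
# rejection) rate. Scores are illustrative only.

genuine_scores  = [0.91, 0.88, 0.95, 0.70, 0.85]   # legitimate users
impostor_scores = [0.40, 0.62, 0.55, 0.30, 0.72]   # impostors

def error_rates(threshold):
    # FRR: genuine users scoring below the threshold (rejected).
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # FAR: impostors scoring at or above the threshold (accepted).
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

def crossover(thresholds):
    # The CER sits where FRR and FAR are (approximately) equal.
    return min(thresholds,
               key=lambda t: abs(error_rates(t)[0] - error_rates(t)[1]))

frr_low,  far_low  = error_rates(0.50)   # lax setting: many false accepts
frr_high, far_high = error_rates(0.80)   # strict setting: more false rejects
```

With these sample scores, the strict setting drives the false acceptance rate to zero at the cost of rejecting some legitimate users, which is exactly the calibration choice the military institution in the example would make.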
Biometrics is the most expensive method of verifying a person’s identity, and it
faces other barriers to becoming widely accepted. These include user acceptance, enroll-
ment timeframe, and throughput. Many times, people are reluctant to let a machine
read the pattern of their retina or scan the geometry of their hand. This lack of enthusi-
asm has slowed down the widespread use of biometric systems within our society. The
enrollment phase requires an action to be performed several times to capture a clear
and distinctive reference record. People are not particularly fond of expending this time
and energy when they are used to just picking a password and quickly typing it into
their console. When a person attempts to be authenticated by a biometric system,
sometimes the system will request an action to be completed several times. If the sys-
tem was unable to get a clear reading of an iris scan or could not capture a full voice
verification print, the individual may have to repeat the action. This causes low through-
put, stretches the individual’s patience, and reduces acceptability.
During enrollment, the user provides the biometric data (e.g., fingerprint, voice print),
and the biometric reader converts this data into binary values. Depending on the system,
the reader may create a hash value of the biometric data, encrypt the data, or do
both. The biometric data then goes from the reader to a back-end authentication database
where the user’s account has been created. When the user later needs to authenticate
to a system, she provides the necessary biometric data (e.g., fingerprint, voice print),
and the binary format of this information is compared to what is in the authentication
database. If they match, then the user is authenticated.
In Figure 3-9, we see that biometric data can be stored on a smart card and used for
authentication. Also, you might notice that the match is 95 percent instead of 100 per-
cent. Obtaining a 100 percent match each and every time is very difficult because of the
level of sensitivity of the biometric systems. A smudge on the reader, oil on the person’s
finger, and other small environmental issues can stand in the way of matching 100
percent. If your biometric system was calibrated so it required 100 percent matches, this
would mean you would not allow any Type II errors and that users would commonly
not be authenticated in a timely manner.
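The calibration tradeoff described above can be sketched as a similarity score compared against a threshold. Everything here (the byte-matching similarity function, the sample values, the function names) is a made-up illustration of the concept, not how any real biometric product scores templates:

```python
# Hypothetical sketch: threshold-based biometric matching.

def similarity(template: bytes, sample: bytes) -> float:
    """Toy similarity score in [0, 1]: fraction of matching bytes."""
    matches = sum(1 for a, b in zip(template, sample) if a == b)
    return matches / max(len(template), len(sample))

def authenticate(template: bytes, sample: bytes, threshold: float) -> bool:
    # Raising the threshold lowers Type II errors (false accepts)
    # but raises Type I errors (false rejects), and vice versa.
    return similarity(template, sample) >= threshold

enrolled  = b"ridge-pattern-0042"
good_read = b"ridge-pattern-0042"   # clean scan
smudged   = b"ridge-pattern-OO42"   # partial read, e.g., oil on the finger

assert authenticate(enrolled, good_read, threshold=0.95)
assert not authenticate(enrolled, smudged, threshold=0.95)  # Type I error
assert authenticate(enrolled, smudged, threshold=0.80)      # looser calibration accepts
```

A system demanding a 100 percent match would set `threshold=1.0`, which is exactly the calibration that rejects legitimate users over a smudge.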
Processing Speed
When reviewing biometric devices for purchase, one component to take into consideration is the length of time it takes to actually authenticate users. The time from when a user presents her data until she receives an accept or reject response should be five to ten seconds.

CISSP All-in-One Exam Guide
190
The following is an overview of the different types of biometric systems and the
physiological or behavioral characteristics they examine.
Fingerprint Fingerprints are made up of ridge endings and bifurcations exhibited
by friction ridges and other detailed characteristics called minutiae. It is the distinctive-
ness of these minutiae that gives each individual a unique fingerprint. An individual
places his finger on a device that reads the details of the fingerprint and compares this
to a reference file. If the two match, the individual’s identity has been verified.
NOTE Fingerprint systems store the full fingerprint, which is actually a lot
of information that takes up hard drive space and resources. The finger-scan
technology extracts specific features from the fingerprint and stores just
that information, which takes up less hard drive space and allows for quicker
database lookups and comparisons.
Palm Scan The palm holds a wealth of information and has many aspects that are
used to identify an individual. The palm has creases, ridges, and grooves throughout
that are unique to a specific person. The palm scan also includes the fingerprints of
each finger. An individual places his hand on the biometric device, which scans and
captures this information. This information is compared to a reference file, and the
identity is either verified or rejected.
Hand Geometry The shape of a person’s hand (the shape, length, and width of
the hand and fingers) defines hand geometry. This trait differs significantly between
people and is used in some biometric systems to verify identity. A person places her
Figure 3-9 Biometric data is turned into binary data and compared for identity validation.

Chapter 3: Access Control
191
hand on a device that has grooves for each finger. The system compares the geometry of
each finger, and the hand as a whole, to the information in a reference file to verify that
person’s identity.
Retina Scan A system that reads a person's retina scans the blood-vessel pattern of the retina on the back of the eyeball. This pattern has been shown to differ markedly from person to person. A camera projects a beam into the eye, captures the pattern, and compares it to a previously recorded reference file.
Iris Scan The iris is the colored portion of the eye that surrounds the pupil. The iris
has unique patterns, rifts, colors, rings, coronas, and furrows. The uniqueness of each of
these characteristics within the iris is captured by a camera and compared with the in-
formation gathered during the enrollment phase. Of the biometric systems, iris scans
are the most accurate. The iris remains constant through adulthood, which reduces the
types of errors that can happen during the authentication process. Sampling the iris offers more reference coordinates than any other type of biometric, which mathematically gives it a higher accuracy potential.
NOTE When using an iris pattern biometric system, the optical unit must
be positioned so the sun does not shine into the aperture; thus, when
implemented, it must have proper placement within the facility.
Signature Dynamics When a person signs her signature, she usually does so in the same manner and at the same speed each time. The physical motions performed while signing a document produce electrical signals that a biometric system can capture, and these signals provide unique characteristics that can be used to distinguish one individual from another. Signature
dynamics provides more information than a static signature, so there are more vari-
ables to verify when confirming an individual’s identity and more assurance that this
person is who he claims to be.
Signature dynamics is different from a digitized signature. A digitized signature is
just an electronic copy of someone’s signature and is not a biometric system that cap-
tures the speed of signing, the way the person holds the pen, and the pressure the
signer exerts to generate the signature.
Keystroke Dynamics Whereas signature dynamics is a method that captures the
electrical signals when a person signs a name, keystroke dynamics captures electrical
signals when a person types a certain phrase. As a person types a specified phrase, the
biometric system captures the speed and motions of this action. Each individual has a
certain style and speed, which translate into unique signals. This type of authentication
is more effective than typing in a password, because a password is easily obtainable. It
is much harder to repeat a person’s typing style than it is to acquire a password.
Voice Print People’s speech sounds and patterns have many subtle distinguishing
differences. A biometric system that is programmed to capture a voice print and com-
pare it to the information held in a reference file can differentiate one individual from
another. During the enrollment process, an individual is asked to say several different

words. Later, when this individual needs to be authenticated, the biometric system jum-
bles these words and presents them to the individual. The individual then repeats the
sequence of words given. This technique is used so others cannot attempt to record the
session and play it back in hopes of obtaining unauthorized access.
Facial Scan A system that scans a person’s face takes many attributes and character-
istics into account. People have different bone structures, nose ridges, eye widths, fore-
head sizes, and chin shapes. These are all captured during a facial scan and compared
to an earlier captured scan held within a reference record. If the information is a match,
the person is positively identified.
Hand Topography Whereas hand geometry looks at the size and width of an individual's hand and fingers, hand topography looks at the different peaks and valleys of
the hand, along with its overall shape and curvature. When an individual wants to be
authenticated, she places her hand on the system. Off to one side of the system, a cam-
era snaps a side-view picture of the hand from a different view and angle than that of
systems that target hand geometry, and thus captures different data. This attribute is not
unique enough to authenticate individuals by itself and is commonly used in conjunc-
tion with hand geometry.
Biometrics are not without their own sets of issues and concerns. Because they de-
pend upon the specific and unique traits of living things, problems can arise. Living
things are notorious for not remaining the same, which means they won’t present static
biometric information for each and every login attempt. Voice recognition can be ham-
pered by a user with a cold. Pregnancy can change the patterns of the retina. Someone
could lose a finger. Or all three could happen. You just never know in this crazy world.
Some biometric systems actually check for the pulsation and/or heat of a body part
to make sure it is alive. So if you are planning to cut someone’s finger off or pluck out
someone’s eyeball so you can authenticate yourself as a legitimate user, it may not
work. Although not specifically stated, I am pretty sure this type of activity falls outside
the bounds of the CISSP ethics you will be responsible for upholding once you receive
your certification.
Passwords
User identification coupled with a reusable password is the most common form of system identification and authentication. A password is a protected string of
characters that is used to authenticate an individual. As stated previously, authentica-
tion factors are based on what a person knows, has, or is. A password is something the
user knows.
Passwords are one of the most often used authentication mechanisms employed today, so it is important that they are strong and properly managed.
Password Management Although passwords are the most commonly used au-
thentication mechanisms, they are also considered one of the weakest security mecha-
nisms available. Why? Users usually choose passwords that are easily guessed (a
spouse’s name, a user’s birth date, or a dog’s name), or tell others their passwords, and

many times write the passwords down on a sticky note and cleverly hide it under the
keyboard. To most users, security is usually not the most important or interesting part
of using their computers—except when someone hacks into their computer and steals
confidential information, that is. Then security is all the rage.
This is where password management steps in. If passwords are properly generated,
updated, and kept secret, they can provide effective security. Password generators can be
used to create passwords for users. This ensures that a user will not be using “Bob” or
“Spot” for a password, but if the generator spits out “kdjasijew284802h,” the user will
surely scribble it down on a piece of paper and safely stick it to the monitor, which
defeats the whole purpose. If a password generator is going to be used, the tools should
create uncomplicated, pronounceable, nondictionary words to help users remember
them so they aren’t tempted to write them down.
If the users can choose their own passwords, the operating system should enforce
certain password requirements. The operating system can require that a password con-
tain a certain number of characters, unrelated to the user ID, include special characters,
include upper- and lowercase letters, and not be easily guessable. The operating system
can keep track of the passwords a specific user generates so as to ensure no passwords
are reused. The users should also be forced to change their passwords periodically. All
of these factors make it harder for an attacker to guess or obtain passwords within the
environment.
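The operating-system requirements described above can be sketched as a simple policy function. The rule set, function names, and minimum length are illustrative assumptions, not any particular operating system's implementation:

```python
import re

# Illustrative password-policy check: length, unrelated to the user ID,
# mixed case, special character, and no reuse from password history.

def meets_policy(password: str, user_id: str, history: list,
                 min_length: int = 8) -> bool:
    if len(password) < min_length:
        return False
    if user_id.lower() in password.lower():        # must be unrelated to the user ID
        return False
    if not re.search(r"[A-Z]", password):          # uppercase letter required
        return False
    if not re.search(r"[a-z]", password):          # lowercase letter required
        return False
    if not re.search(r"[^A-Za-z0-9]", password):   # special character required
        return False
    if password in history:                        # no reuse of tracked passwords
        return False
    return True

assert not meets_policy("Spot", "bob", [])              # easily guessed, too short
assert not meets_policy("Bob#Secret1", "bob", [])       # contains the user ID
assert meets_policy("kD8!wq2#Lp", "bob", [])
assert not meets_policy("kD8!wq2#Lp", "bob", ["kD8!wq2#Lp"])  # reused
```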
If an attacker is after a password, she can try a few different techniques:
• Electronic monitoring Listening to network traffic to capture information, especially when a user is sending her password to an authentication server. The password can be copied and reused by the attacker at another time, which is called a replay attack.
• Access the password file Usually done on the authentication server. The password file contains many users' passwords and, if compromised, can be the source of a lot of damage. This file should be protected with access control mechanisms and encryption.
• Brute force attacks Performed with tools that cycle through many possible character, number, and symbol combinations to uncover a password.
• Dictionary attacks Files of thousands of words are compared to the user's password until a match is found.
• Social engineering An attacker falsely convinces an individual that she has the necessary authorization to access specific resources.
• Rainbow table An attacker uses a table that contains all possible passwords already in a hash format.
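As a rough sketch of a dictionary attack (a rainbow table is essentially the same comparison with the hashes precomputed ahead of time), the following tries candidate words against a captured unsalted hash. The wordlist and password values are invented for illustration:

```python
import hashlib

# Minimal sketch of a dictionary attack against an unsalted hash.

def dictionary_attack(target_hash: str, wordlist: list):
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word          # password recovered
    return None                  # not in the dictionary

wordlist = ["spot", "password", "fluffy123"]
stolen = hashlib.sha256(b"fluffy123").hexdigest()   # sniffed or read from a file

assert dictionary_attack(stolen, wordlist) == "fluffy123"
assert dictionary_attack(hashlib.sha256(b"kdjasijew284802h").hexdigest(),
                         wordlist) is None          # random strings resist this
```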
Certain techniques can be implemented to provide another layer of security for
passwords and their use. After each successful logon, a message can be presented to a
user indicating the date and time of the last successful logon, the location of this logon,
and whether there were any unsuccessful logon attempts. This alerts the user to any
suspicious activity and whether anyone has attempted to log on using his credentials.

An administrator can set operating parameters that allow a certain number of failed
logon attempts to be accepted before a user is locked out; this is a type of clipping level.
The user can be locked out for five minutes or a full day after the threshold (or clipping
level) has been exceeded. It depends on how the administrator configures this mecha-
nism. An audit trail can also be used to track password usage and both successful and
unsuccessful logon attempts. This audit information should include the date, time, user
ID, and workstation the user logged in from.
NOTE Clipping level is an older term that just means threshold. If the
number of acceptable failed login attempts is set to three, three is the
threshold (clipping level) value.
A password’s lifetime should be short but practical. Forcing a user to change a pass-
word on a more frequent basis provides more assurance that the password will not be
guessed by an intruder. If the lifetime is too short, however, it causes unnecessary man-
agement overhead, and users may forget which password is active. A balance between
protection and practicality must be decided upon and enforced.
As with many things in life, education is the key. Password requirements, protec-
tion, and generation should be addressed in security-awareness programs so users un-
derstand what is expected of them, why they should protect their passwords, and how
passwords can be stolen. Users should be an extension to a security team, not the op-
position.
NOTE Rainbow tables contain passwords already in their hashed format. The
attacker just compares a captured hashed password with one that is listed in
the table to uncover the plaintext password. This takes much less time than
carrying out a dictionary or brute force attack.
Password Checkers Several organizations test user-chosen passwords using tools
that perform dictionary and/or brute force attacks to detect the weak passwords. This
helps make the environment as a whole less susceptible to dictionary and exhaustive
attacks used to discover users’ passwords. Many times the same tools employed by an
attacker to crack a password are used by a network administrator to make sure the pass-
word is strong enough. Most security tools have this dual nature. They are used by se-
curity professionals and IT staff to test for vulnerabilities within their environment in
the hope of uncovering and fixing them before an attacker finds the vulnerabilities. An
attacker uses the same tools to uncover vulnerabilities to exploit before the security
professional can fix them. It is the never-ending cat-and-mouse game.
If a tool is called a password checker, it is used by a security professional to test the
strength of a password. If a tool is called a password cracker, it is usually used by a
hacker; however, most of the time, these tools are one and the same.
You need to obtain management’s approval before attempting to test (break) em-
ployees’ passwords with the intent of identifying weak passwords. Explaining you are

trying to help the situation, not hurt it, after you have uncovered the CEO’s password is
not a good situation to be in.
Password Hashing and Encryption In most situations, if an attacker sniffs
your password from the network wire, she still has some work to do before she actually
knows your password value because most systems hash the password with a hashing
algorithm, commonly MD4 or MD5, to ensure passwords are not sent in cleartext.
Although some people think the world is run by Microsoft, other types of operating
systems are out there, such as Unix and Linux. These systems do not use registries and
SAM databases, but contain their user passwords in a file cleverly called “shadow.”
Now, this shadow file does not contain passwords in cleartext; instead, your password
is run through a hashing algorithm, and the resulting value is stored in this file. Unix-
type systems zest things up by using salts in this process. Salts are random values added
to the encryption process to add more complexity and randomness. The more random-
ness entered into the encryption process, the harder it is for the bad guy to decrypt and
uncover your password. The use of a salt means that the same password can be en-
crypted into several thousand different formats. This makes it much more difficult for
an attacker to uncover the right format for your system.
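A minimal sketch of the salting scheme just described, using only the standard library. A real shadow file uses a deliberately slow, vetted algorithm (bcrypt, scrypt, PBKDF2) rather than a single SHA-256 pass, and the function names here are illustrative:

```python
import hashlib
import hmac
import os

# Sketch of salted password storage and verification.

def store_password(password: str):
    salt = os.urandom(16)                            # random per-user salt
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest                              # what the record would keep

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, digest)    # constant-time comparison

salt, digest = store_password("StickWithMe")
assert verify_password("StickWithMe", salt, digest)
assert not verify_password("stickwithme", salt, digest)

# Same password, different salt -> different stored value,
# which is what defeats precomputed (rainbow) tables.
salt2, digest2 = store_password("StickWithMe")
assert digest != digest2
```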
Password Aging Many systems enable administrators to set expiration dates for
passwords, forcing users to change them at regular intervals. The system may also keep
a list of the last five to ten passwords (password history) and not let the users revert
back to previously used passwords.
Limit Logon Attempts A threshold can be set to allow only a certain number of
unsuccessful logon attempts. After the threshold is met, the user’s account can be locked
for a period of time or indefinitely, which requires an administrator to manually un-
lock the account. This protects against dictionary and other exhaustive attacks that con-
tinually submit credentials until the right combination of username and password is
discovered.
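The threshold-and-lockout behavior (the clipping level) can be sketched as follows. The class name and parameter values are illustrative choices, not recommendations:

```python
import time

# Illustrative lockout counter; the threshold is the "clipping level."

class LockoutPolicy:
    def __init__(self, threshold: int = 3, lockout_seconds: int = 300):
        self.threshold = threshold
        self.lockout_seconds = lockout_seconds       # 300 s = a five-minute lockout
        self.failures = {}                           # user -> consecutive failures
        self.locked_until = {}                       # user -> unlock timestamp

    def is_locked(self, user: str) -> bool:
        return time.time() < self.locked_until.get(user, 0.0)

    def record_failure(self, user: str) -> None:
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.threshold:    # clipping level exceeded
            self.locked_until[user] = time.time() + self.lockout_seconds
            self.failures[user] = 0

    def record_success(self, user: str) -> None:
        self.failures[user] = 0                      # reset on a good logon

policy = LockoutPolicy(threshold=3)
for _ in range(3):
    policy.record_failure("buffy")
assert policy.is_locked("buffy")       # dictionary attack stalls here
assert not policy.is_locked("george")  # other accounts unaffected
```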
Cognitive Password
What is your mother’s name?
Response: Shucks, I don’t remember. I have it written down somewhere.
Cognitive passwords are fact- or opinion-based information used to verify an indi-
vidual’s identity. A user is enrolled by answering several questions based on her life
experiences. Passwords can be hard for people to remember, but that same person will
not likely forget her mother’s maiden name, favorite color, dog’s name, or the school
she graduated from. After the enrollment process, the user can answer the questions
asked of her to be authenticated instead of having to remember a password. This au-
thentication process is best for a service the user does not use on a daily basis because
it takes longer than other authentication mechanisms. This can work well for help-desk
services. The user can be authenticated via cognitive means. This way, the person at the
help desk can be sure he is talking to the right person, and the user in need of help does
not need to remember a password that may be used once every three months.

NOTE Authentication by knowledge means that a subject is authenticated
based upon something she knows. This could be a PIN, password, passphrase,
cognitive password, personal history information, or through the use of a
CAPTCHA, which is a graphical representation of data. A CAPTCHA is a skewed representation of characters a person must enter to prove that the subject is a human and not an automated tool such as a software robot.
One-Time Password
How many times is my one-time password good for?
Response: You are fired.
A one-time password (OTP) is also called a dynamic password. It is used for authen-
tication purposes and is only good once. After the password is used, it is no longer
valid; thus, if a hacker obtained this password, it could not be reused. This type of au-
thentication mechanism is used in environments that require a higher level of security
than static passwords provide. One-time password generating tokens come in two gen-
eral types: synchronous and asynchronous.
The token device is the most common implementation mechanism for OTP and
generates the one-time password for the user to submit to an authentication server. The
following sections explain these concepts.
The Token Device The token device, or password generator, is usually a handheld
device that has an LCD display and possibly a keypad. This hardware is separate from
the computer the user is attempting to access. The token device and authentication
service must be synchronized in some manner to be able to authenticate a user. The
token device presents the user with a list of characters to be entered as a password when
logging on to a computer. Only the token device and authentication service know the
meaning of these characters. Because the two are synchronized, the token device will
present the exact password the authentication service is expecting. This is a one-time
password, also called a token, and is no longer valid after initial use.
Synchronous A synchronous token device synchronizes with the authentication ser-
vice by using time or a counter as the core piece of the authentication process. If the
synchronization is time-based, the token device and the authentication service must
hold the same time within their internal clocks. The time value on the token device and
a secret key are used to create the one-time password, which is displayed to the user.
The user enters this value and a user ID into the computer, which then passes them to
the server running the authentication service. The authentication service decrypts this
value and compares it to the value it expected. If the two match, the user is authenti-
cated and allowed to use the computer and resources.
If the token device and authentication service use counter-synchronization, the user
will need to initiate the creation of the one-time password by pushing a button on the
token device. This causes the token device and the authentication service to advance to
the next authentication value. This value and a base secret are hashed and displayed to the
user. The user enters this resulting value along with a user ID to be authenticated. In
either time- or counter-based synchronization, the token device and authentication
service must share the same secret base key used for encryption and decryption.
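The time-synchronized scheme can be sketched in the spirit of the standard HOTP/TOTP construction: hashing a shared secret with a counter derived from the clock. Vendor products implement their own algorithms, so treat this as a conceptual illustration only:

```python
import hashlib
import hmac
import struct
import time

# Sketch of time-synchronized one-time password generation.

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    counter = int(timestamp) // step               # both sides derive the same counter
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # RFC 4226-style dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"shared-base-secret"                     # held by token and auth service
now = time.time()
token_side = totp(secret, now)                     # displayed on the token device
server_side = totp(secret, now)                    # computed by the auth service
assert token_side == server_side                   # synchronized clocks -> same OTP
# A later time window yields a different code, so a captured OTP soon expires.
```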

NOTE Synchronous token-based one-time password generation can be
time-based or counter-based. Another term for counter-based is event-based.
Counter-based and event-based are interchangeable terms, and you could see
either or both on the CISSP exam.
Asynchronous A token device using an asynchronous token–generating method
employs a challenge/response scheme to authenticate the user. In this situation, the
authentication server sends the user a challenge, a random value, also called a nonce.
The user enters this random value into the token device, which encrypts it and returns
a value the user uses as a one-time password. The user sends this value, along with a
username, to the authentication server. If the authentication server can decrypt the val-
ue and it is the same challenge value sent earlier, the user is authenticated, as shown in
Figure 3-10.
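A rough sketch of this challenge/response exchange, with a keyed hash standing in for the token's cryptographic transform (actual vendor implementations differ):

```python
import hashlib
import hmac
import os

# Sketch of asynchronous (challenge/response) token authentication.

shared_secret = b"base-secret-in-token-and-server"

def server_issue_challenge() -> bytes:
    return os.urandom(8)                           # the nonce sent to the user

def token_response(secret: bytes, challenge: bytes) -> str:
    # The token transforms the challenge with the shared secret.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()[:8]

def server_verify(secret: bytes, challenge: bytes, response: str) -> bool:
    expected = token_response(secret, challenge)
    return hmac.compare_digest(expected, response)

nonce = server_issue_challenge()                   # 1. server -> user
otp = token_response(shared_secret, nonce)         # 2. user keys nonce into token
assert server_verify(shared_secret, nonce, otp)    # 3. server checks the response
bad_nonce = bytes([nonce[0] ^ 1]) + nonce[1:]
assert not server_verify(shared_secret, bad_nonce, otp)  # replay with a new nonce fails
```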
NOTE The actual implementation and process that these devices follow
can differ between different vendors. What is important to know is that
asynchronous is based on challenge/response mechanisms, while synchronous
is based on time- or counter-driven mechanisms.
SecurID
SecurID, from RSA Security, Inc., is one of the most widely used time-based to-
kens. One version of the product generates the one-time password by using a
mathematical function on the time, date, and ID of the token card. Another ver-
sion of the product requires a PIN to be entered into the token device.

Both token systems can fall prey to masquerading if a user shares his identification
information (ID or username) and the token device is shared or stolen. The token de-
vice can also have battery failure or other malfunctions that would stand in the way of
a successful authentication. However, this type of system is not vulnerable to electronic
eavesdropping, sniffing, or password guessing.
If the user has to enter a password or PIN into the token device before it provides a
one-time password, then strong authentication is in effect because it is using two fac-
tors—something the user knows (PIN) and something the user has (the token device).
NOTE One-time passwords can also be generated in software, in which case a
piece of hardware such as a token device is not required. These are referred to
as soft tokens and require that the authentication service and application contain
the same base secrets, which are used to generate the one-time passwords.
Cryptographic Keys
Another way to prove one’s identity is to use a private key by generating a digital signa-
ture. A digital signature could be used in place of a password. Passwords are the weakest
form of authentication and can be easily sniffed as they travel over a network. Digital
signatures are forms of authentication used in environments that require higher secu-
rity protection than what is provided by passwords.
A private key is a secret value that should be in the possession of one person, and
one person only. It should never be disclosed to an outside party. A digital signature is
a technology that uses a private key to encrypt a hash value (message digest). The act of
encrypting this hash value with a private key is called digitally signing a message. A digi-
tal signature attached to a message proves the message originated from a specific source
and that the message itself was not changed while in transit.
Figure 3-10 Authentication using an asynchronous token device includes a workstation, token
device, and authentication service.

A public key can be made available to anyone without compromising the associat-
ed private key; this is why it is called a public key. We explore private keys, public keys,
digital signatures, and public key infrastructure (PKI) in Chapter 7, but for now, under-
stand that a private key and digital signatures are other mechanisms that can be used to
authenticate an individual.
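To make the sign-and-verify idea concrete, here is a toy textbook-RSA illustration. The key is absurdly small and there is no padding; it exists only to show a message digest being "encrypted" with a private key and checked with the matching public key, never to be used as a real implementation:

```python
import hashlib

# Toy textbook-RSA signing sketch (insecure parameters, for concept only).
p, q = 61, 53                      # tiny primes (never do this in practice)
n = p * q                          # modulus, 3233
e = 17                             # public exponent
d = 2753                           # private exponent: (e * d) mod lcm(p-1, q-1) == 1

def digest_int(message: bytes) -> int:
    # Message digest reduced mod n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest_int(message), d, n)          # "encrypt" hash with private key

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest_int(message)   # "decrypt" with public key

msg = b"transfer $40 to Buffy"
sig = sign(msg)
assert verify(msg, sig)                  # origin and integrity check out
assert not verify(msg, (sig + 1) % n)    # an altered signature fails verification
```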
Passphrase
A passphrase is a sequence of characters that is longer than a password (thus a “phrase”)
and, in some cases, takes the place of a password during an authentication process. The
user enters this phrase into an application, and the application transforms the value into
a virtual password, making the passphrase the length and format that is required by the
application. (For example, an application may require your virtual password to be 128
bits to be used as a key with the AES algorithm.) If a user wants to authenticate to an ap-
plication, such as Pretty Good Privacy (PGP), he types in a passphrase, let’s say StickWith-
MeKidAndYouWillWearDiamonds. The application converts this phrase into a virtual
password that is used for the actual authentication. The user usually generates the pass-
phrase in the same way a user creates a password the first time he logs on to a computer.
A passphrase is more secure than a password because it is longer, and thus harder to ob-
tain by an attacker. In many cases, the user is more likely to remember a passphrase than
a password.
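The passphrase-to-virtual-password conversion can be sketched as a key-derivation step. The salt value and iteration count here are illustrative assumptions; a real application such as PGP has its own derivation scheme:

```python
import hashlib

# Sketch: derive a fixed-length "virtual password" (here 128 bits, as in the
# AES example above) from a longer passphrase.

def virtual_password(passphrase: str, salt: bytes, bits: int = 128) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               100_000, dklen=bits // 8)

key = virtual_password("StickWithMeKidAndYouWillWearDiamonds", b"app-salt")
assert len(key) == 16              # 128 bits, sized for the application's needs
assert key != virtual_password("shortpass", b"app-salt")   # different phrase, different key
```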
Memory Cards
The main difference between memory cards and smart cards is their capacity to process
information. A memory card holds information but cannot process information. A
smart card holds information and has the necessary hardware and software to actually
process that information. A memory card can hold a user’s authentication information
so the user only needs to type in a user ID or PIN and present the memory card, and if
the data that the user entered matches the data on the memory card, the user is success-
fully authenticated. If the user presents a PIN value, then this is an example of two-
factor authentication—something the user knows and something the user has. A mem-
ory card can also hold identification data that are pulled from the memory card by a
reader. It travels with the PIN to a back-end authentication server. An example of a
memory card is a swipe card that must be used for an individual to be able to enter a
building. The user enters a PIN and swipes the memory card through a card reader. If
this is the correct combination, the reader flashes green and the individual can open
the door and enter the building. Another example is an ATM card. If Buffy wants to
withdraw $40 from her checking account, she needs to enter the correct PIN and slide
the ATM card (or memory card) through the reader.
Memory cards can be used with computers, but they require a reader to process the
information. The reader adds cost to the process, especially when one is needed per
computer, and card generation adds cost and effort to the whole authentication pro-
cess. Using a memory card provides a more secure authentication method than using a
password because the attacker would need to obtain the card and know the correct PIN.
Administrators and management must weigh the costs and benefits of a memory to-
ken–based card implementation to determine if it is the right authentication mecha-
nism for their environment.

Smart Card
My smart card is smarter than your memory card.
A smart card has the capability of processing information because it has a micropro-
cessor and integrated circuits incorporated into the card itself. Memory cards do not
have this type of hardware and lack this type of functionality. The only function they
can perform is simple storage. A smart card, which adds the capability to process infor-
mation stored on it, can also provide a two-factor authentication method because the
user may have to enter a PIN to unlock the smart card. This means the user must pro-
vide something she knows (PIN) and something she has (smart card).
Two general categories of smart cards are the contact and the contactless types. The
contact smart card has a gold seal on the face of the card. When this card is fully in-
serted into a card reader, electrical fingers wipe against the card in the exact position
that the chip contacts are located. This will supply power and data I/O to the chip for
authentication purposes. The contactless smart card has an antenna wire that surrounds
the perimeter of the card. When this card comes within an electromagnetic field of the
reader, the antenna within the card generates enough energy to power the internal chip.
Now, the results of the smart card processing can be broadcast through the same an-
tenna, and the conversation of authentication can take place. The authentication can be
completed by using a one-time password, by employing a challenge/response value, or
by providing the user’s private key if it is used within a PKI environment.

NOTE Two types of contactless smart cards are available: hybrid and combi.
The hybrid card has two chips, with the capability of utilizing both the contact
and contactless formats. A combi card has one microprocessor chip that can
communicate to contact or contactless readers.
The information held within the memory of a smart card is not readable until the
correct PIN is entered. This fact and the complexity of the smart token make these cards
resistant to reverse-engineering and tampering methods. If George loses the smart card he
uses to authenticate to the domain at work, the person who finds the card would need to
know his PIN to do any real damage. The smart card can also be programmed to store
information in an encrypted fashion, as well as detect any tampering with the card itself.
In the event that tampering is detected, the information stored on the smart card can be
automatically wiped.
The drawbacks to using a smart card are the extra cost of the readers and the overhead
of card generation, as with memory cards, although this cost is decreasing. The smart
cards themselves are more expensive than memory cards because of the extra integrated
circuits and microprocessor. Essentially, a smart card is a kind of computer, and because
of that it has many of the operational challenges and risks that can affect a computer.
Smart cards have several different capabilities, and as the technology develops and memory capacities increase, they will gain even more. They can store personal information in a tamper-resistant manner, which also allows them to isolate security-critical computations within themselves. They can be used in encryption systems to store keys, and they offer a high level of portability as well as security. The memory and integrated circuits also make it possible to run encryption algorithms on the card itself and use them for secure authorization throughout an entire organization.
Smart Card Attacks
Could I tickle your smart card with this needleless ultrasonic vibration thingy?
Response: Um, no.
Smart cards are more tamperproof than memory cards, but where there is sensitive
data there are individuals who are motivated to circumvent any countermeasure the
industry throws at them.
Over the years, people have become very inventive in the development of various
ways to attack smart cards. For example, individuals have introduced computational
errors into smart cards with the goal of uncovering the encryption keys used and stored
on the cards. These “errors” are introduced by manipulating some environmental com-
ponent of the card (changing input voltage, clock rate, temperature fluctuations). The
attacker reviews the result of an encryption function after introducing an error to the
card, and also reviews the correct result, which the card performs when no errors are
introduced. Analysis of these different results may allow an attacker to reverse-engineer
the encryption process, with the hope of uncovering the encryption key. This type of
attack is referred to as fault generation.
Side-channel attacks are nonintrusive and are used to uncover sensitive information
about how a component works, without trying to compromise any type of flaw or
weakness. As an analogy, suppose you want to figure out what your boss does each day

at lunch time but you feel too uncomfortable to ask her. So you follow her, and you see
she enters a building holding a small black bag and exits exactly 45 minutes later with
the same bag and her hair not looking as great as when she went in. You keep doing this
day after day and come to the conclusion that she must be working out. Now you could
have simply read the sign on the building that said “Gym,” but we will give you the
benefit of the doubt here and just not call you for any further private investigator work.
So a noninvasive attack is one in which the attacker watches how something works
and how it reacts in different situations instead of trying to “invade” it with more intru-
sive measures. Some examples of side-channel attacks that have been carried out on
smart cards are differential power analysis (examining the power emissions released dur-
ing processing), electromagnetic analysis (examining the frequencies emitted), and timing
(how long a specific process takes to complete). These types of attacks are used to un-
cover sensitive information about how a component works without trying to compro-
mise any type of flaw or weakness. They are commonly used for data collection. Attackers
monitor and capture the analog characteristics of all supply and interface connections
and any other electromagnetic radiation produced by the processor during normal op-
eration. They can also collect the time it takes for the smart card to carry out its function.
From the collected data, the attacker can deduce specific information she is after, which
could be a private key, sensitive financial data, or an encryption key stored on the card.
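Timing analysis can be illustrated with a minimal sketch. The hypothetical `naive_compare` function below returns as soon as a byte differs, so how long it runs leaks how many leading bytes of a guess are correct; a constant-time comparison such as Python's `hmac.compare_digest` is the standard countermeasure. The function names and the scenario are illustrative, not taken from any particular smart card implementation:

```python
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    # Returns at the first mismatched byte, so the comparison time
    # reveals how many leading bytes of the guess are correct.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where a
    # mismatch occurs, so timing reveals nothing about the content.
    return hmac.compare_digest(secret, guess)
```

An attacker with precise timing measurements could recover a secret one byte at a time against the naive version, which is why cryptographic comparisons should always be constant time.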
Software attacks are also considered noninvasive attacks. A smart card has software
just like any other device that does data processing, and anywhere there is software there
is the possibility of software flaws that can be exploited. The main goal of this type of
attack is to input instructions into the card that will allow the attacker to extract account
information, which he can use to make fraudulent purchases. Many of these types of
attacks can be disguised by using equipment that looks just like the legitimate reader.
If you would like to be more intrusive in your smart card attack, give microprobing
a try. Microprobing uses needleless ultrasonic vibration to remove the outer protective
material on the card’s circuits. Once this is completed, data can be accessed and
manipulated by directly tapping into the card’s ROM chips.
Interoperability
In the industry today, lack of interoperability is a big problem. Although vendors
claim to be “compliant with ISO/IEC 14443,” many have developed technologies
and methods in a more proprietary fashion. The lack of true standardization has
caused some large problems because smart cards are being used for so many dif-
ferent applications. In the United States, the DoD is rolling out smart cards across
all of their agencies, and NIST is developing a framework and conformance test-
ing programs specifically for interoperability issues.
An ISO/IEC standard, 14443, outlines the following items for smart card
standardization:
• ISO/IEC 14443-1 Physical characteristics
• ISO/IEC 14443-2 Radio frequency power and signal interface
• ISO/IEC 14443-3 Initialization and anticollision
• ISO/IEC 14443-4 Transmission protocol

Authorization
Now that I know who you are, let’s see if I will let you do what you want.
Although authentication and authorization are quite different, together they com-
prise a two-step process that determines whether an individual is allowed to access a
particular resource. In the first step, authentication, the individual must prove to the
system that he is who he claims to be—a permitted system user. After successful authen-
tication, the system must establish whether the user is authorized to access the particu-
lar resource and what actions he is permitted to perform on that resource.
Authorization is a core component of every operating system, but applications, se-
curity add-on packages, and resources themselves can also provide this functionality.
For example, suppose Marge has been authenticated through the authentication server
and now wants to view a spreadsheet that resides on a file server. When she finds this
spreadsheet and double-clicks the icon, she will see an hourglass instead of a mouse
pointer. At this stage, the file server is seeing if Marge has the rights and permissions to
view the requested spreadsheet. It also checks to see if Marge can modify, delete, move,
or copy the file. Once the file server searches through an access matrix and finds that
Marge does indeed have the necessary rights to view this file, the file opens up on
Marge’s desktop. The decision of whether or not to allow Marge to see this file was
based on access criteria. Access criteria are the crux of authorization.
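The file server's decision can be sketched as a lookup in an access matrix. The matrix contents and names below are hypothetical, and a real operating system consults ACLs attached to the object rather than a single global table, but the default-deny lookup logic is the same:

```python
# Hypothetical access matrix: each subject maps to the set of
# permissions it holds on each object. Names are illustrative only.
ACCESS_MATRIX = {
    "marge": {"budget.xlsx": {"read"}},
    "homer": {"budget.xlsx": {"read", "modify", "delete"}},
}

def is_authorized(subject: str, obj: str, action: str) -> bool:
    # An unknown subject or object yields an empty permission set,
    # so anything not explicitly granted is implicitly denied.
    return action in ACCESS_MATRIX.get(subject, {}).get(obj, set())
```

Marge's read request succeeds because "read" appears in her entry for the spreadsheet; a delete request from the same account fails because it was never granted.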
Access Criteria
You can perform that action only because we like you and you wear a funny hat.
We have gone over the basics of access control. This subject can get very granular in
its level of detail when it comes to dictating what a subject can or cannot do to an object
or resource. This is a good thing for network administrators and security professionals,
because they want to have as much control as possible over the resources they have been
put in charge of protecting, and a fine level of detail enables them to give individuals just
Radio-Frequency Identification (RFID)
Radio-frequency identification (RFID) is a technology that provides data commu-
nication through the use of radio waves. An object contains an electronic tag,
which can be identified and communicated with through a reader. The tag has an
integrated circuit for storing and processing data, modulating and demodulating
a radio-frequency (RF) signal, and other specialized functions. The reader has a
built-in antenna for receiving and transmitting the signal. This type of technology
can be integrated into smart cards or other mobile transport mechanisms for ac-
cess control purposes. A common security issue with RFID is that the data can be
captured as it moves from the tag to the reader. While encryption can be inte-
grated as a countermeasure, it is not common because RFID is implemented in
technology that has low processing capabilities and encryption is very processor-
intensive.

the precise level of access they need. It would be frustrating if access control permissions
were based only on full control or no access. These choices are very limiting, and an
administrator would end up giving everyone full control, which would provide no pro-
tection. Instead, different ways of limiting access to resources exist, and if they are under-
stood and used properly, they can give just the right level of access desired.
Granting access rights to subjects should be based on the level of trust a company
has in a subject and the subject’s need-to-know. Just because a company completely
trusts Joyce with its files and resources does not mean she fulfills the need-to-know
criteria to access the company’s tax returns and profit margins. If Maynard fulfills the
need-to-know criteria to access employees’ work histories, it does not mean the com-
pany trusts him to access all of the company’s other files. These issues must be identi-
fied and integrated into the access criteria. The different access criteria can be enforced
by roles, groups, location, time, and transaction types.
Using roles is an efficient way to assign rights to a type of user who performs a cer-
tain task. This role is based on a job assignment or function. If there is a position
within a company for a person to audit transactions and audit logs, the role this person
fills would only need a read function to those types of files. This role would not need
full control, modify, or delete privileges.
Using groups is another effective way of assigning access control rights. If several
users require the same type of access to information and resources, putting them into
a group and then assigning rights and permissions to that group is easier to manage
than assigning rights and permissions to each and every individual separately. If a
specific printer is available only to the accounting group, when a user attempts to print
to it, the group membership of the user will be checked to see if she is indeed in the
accounting group. This is one way that access control is enforced through a logical ac-
cess control mechanism.
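The printer example can be sketched as a simple group-membership check. The group names and user names below are invented for illustration; the point is that rights attach to the group, and the user is evaluated only through membership:

```python
# Illustrative group assignments. Rights are granted to groups,
# not to individual users.
GROUP_MEMBERS = {
    "accounting": {"alice", "bob"},
    "engineering": {"carol"},
}
PRINTER_ALLOWED_GROUPS = {"accounting"}

def can_print(user: str) -> bool:
    # The user gains access only through membership in a group
    # that the printer's ACL explicitly allows.
    return any(user in GROUP_MEMBERS.get(g, set())
               for g in PRINTER_ALLOWED_GROUPS)
```

Adding a new accountant then requires one group-membership change instead of editing the permissions on every resource the accounting group uses.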
Physical or logical location can also be used to restrict access to resources. Some files
may be available only to users who can log on interactively to a computer. This means
the user must be physically at the computer and enter the credentials locally versus log-
ging on remotely from another computer. This restriction is implemented on several
server configurations to restrict unauthorized individuals from being able to get in and
reconfigure the server remotely.
Logical location restrictions are usually done through network address restrictions.
If a network administrator wants to ensure that status requests of an intrusion detection
management console are accepted only from certain computers on the network, the
network administrator can configure this within the software.
Time of day, or temporal isolation, is another access control mechanism that can be
used. If a security professional wants to ensure no one is accessing payroll files between
the hours of 8:00 P.M. and 4:00 A.M., that configuration can be implemented to ensure
access at these times is restricted. If the same security professional wants to ensure no
bank account transactions happen during days on which the bank is not open, she can
indicate in the logical access control mechanism this type of action is prohibited on
Sundays.

Temporal access can also be based on the creation date of a resource. Let’s say Rus-
sell started working for his company in March 2011. There may be a business need to
allow Russell to only access files that have been created after this date and not before.
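Both temporal restrictions can be sketched in a few lines. The cutoff times and the hire-date rule below mirror the examples in the text but are otherwise hypothetical; note that the blocked window (8:00 P.M. to 4:00 A.M.) wraps past midnight, so both sides must be tested:

```python
from datetime import time, date

def payroll_access_allowed(now: time) -> bool:
    # Deny access between 8:00 P.M. and 4:00 A.M. The window wraps
    # past midnight, so a request is blocked if it falls after the
    # start OR before the end.
    blocked_start, blocked_end = time(20, 0), time(4, 0)
    return not (now >= blocked_start or now < blocked_end)

def file_visible(created: date, hire_date: date) -> bool:
    # Temporal restriction based on the resource's creation date:
    # only files created on or after the user's start date are visible.
    return created >= hire_date
```

A request at noon passes, while requests at 10:00 P.M. or 2:00 A.M. are rejected; likewise, a file created before Russell's March 2011 start date stays hidden from him.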
Transaction-type restrictions can be used to control what data is accessed during
certain types of functions and what commands can be carried out on the data. An on-
line banking program may allow a customer to view his account balance, but may not
allow the customer to transfer money until he has a certain security level or access right.
A bank teller may be able to cash checks of up to $2,000, but would need a supervisor’s
access code to retrieve more funds for a customer. A database administrator may be able
to build a database for the human resources department, but may not be able to read
certain confidential files within that database. These are all examples of transaction-
type restrictions to control the access to data and resources.
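The bank teller example can be expressed as a per-role transaction limit. The dollar amounts and role names are illustrative only; real banking systems layer this with supervisor-override workflows rather than a flat table:

```python
# Hypothetical per-role cash limits, echoing the teller example:
# a teller can cash checks up to $2,000; larger amounts require
# a role with higher authority.
CASH_LIMITS = {"teller": 2000, "supervisor": 10000}

def can_cash_check(role: str, amount: float) -> bool:
    # Unknown roles default to a zero limit (default to no access).
    return amount <= CASH_LIMITS.get(role, 0)
```

The same pattern applies to the database administrator example: the transaction type (build vs. read) determines which checks run, not just the user's identity.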
Default to No Access
If you’re unsure, just say no.
Access control mechanisms should default to no access so as to provide the neces-
sary level of security and ensure no security holes go unnoticed. A wide range of access
levels is available to assign to individuals and groups, depending on the application
and/or operating system. A user can have read, change, delete, full control, or no access
permissions. The statement that security mechanisms should default to no access
means that if nothing has been specifically configured for an individual or the group
she belongs to, that user should not be able to access that resource. If access is not ex-
plicitly allowed, it should be implicitly denied. Security is all about being safe, and this
is the safest approach to practice when dealing with access control methods and mech-
anisms. In other words, all access controls should be based on the concept of starting
with zero access, and building on top of that. Instead of giving access to everything, and
then taking away privileges based on need to know, the better approach is to start with
nothing and add privileges based on need to know.
Most access control lists (ACLs) that work on routers and packet-filtering firewalls
default to no access. Figure 3-11 shows that traffic from Subnet A is allowed to access
Subnet B, traffic from Subnet D is not allowed to access Subnet A, and Subnet B is al-
lowed to talk to Subnet A. All other traffic transmission paths not listed here are not
allowed by default. Subnet D cannot talk to Subnet B because such access is not explic-
itly indicated in the router’s ACL.
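The router ACL in Figure 3-11 can be modeled as a set of explicitly permitted paths, with everything else implicitly denied. This is a sketch of the concept, not any vendor's ACL syntax:

```python
# Explicitly allowed (source, destination) subnet pairs, mirroring
# the Figure 3-11 example. Anything absent from this set is denied.
ALLOWED_PATHS = {("A", "B"), ("B", "A")}

def permit(src: str, dst: str) -> bool:
    # Default to no access: only listed paths pass.
    return (src, dst) in ALLOWED_PATHS
```

Subnet D's traffic to Subnet B is dropped not because a rule forbids it, but because no rule allows it, which is exactly the "implicitly denied" behavior the text describes.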
Need to Know
If you need to know, I will tell you. If you don’t need to know, leave me alone.
The need-to-know principle is similar to the least-privilege principle. It is based on
the concept that individuals should be given access only to the information they abso-
lutely require in order to perform their job duties. Giving any more rights to a user just
asks for headaches and the possibility of that user abusing the permissions assigned to
him. An administrator wants to give a user the least amount of privileges she can, but
just enough for that user to be productive when carrying out tasks. Management will

decide what a user needs to know, or what access rights are necessary, and the adminis-
trator will configure the access control mechanisms to allow this user to have that level
of access and no more, and thus the least privilege.
For example, if management has decided that Dan, the copy boy, needs to know
where the files he needs to copy are located and needs to be able to print them, this
fulfills Dan’s need-to-know criteria. Now, an administrator could give Dan full control
of all the files he needs to copy, but that would not be practicing the least-privilege
principle. The administrator should restrict Dan’s rights and permissions to only allow
him to read and print the necessary files, and no more. Besides, if Dan accidentally
deletes all the files on the whole file server, whom do you think management will hold
ultimately responsible? Yep, the administrator.
It is important to understand that it is management’s job to determine the security
requirements of individuals and how access is authorized. The security administrator
configures the security mechanisms to fulfill these requirements, but it is not her job to
determine security requirements of users. Those should be left to the owners. If there is
a security breach, management will ultimately be held responsible, so it should make
these decisions in the first place.
Figure 3-11 What is not explicitly allowed should be implicitly denied.

Single Sign-On
I only want to have to remember one username and one password for everything in the world!
Many times employees need to access many different computers, servers, databases,
and other resources in the course of a day to complete their tasks. This often requires
the employees to remember multiple user IDs and passwords for these different com-
puters. In a utopia, a user would need to enter only one user ID and one password to
be able to access all resources in all the networks this user is working in. In the real
world, this is hard to accomplish for all system types.
Because of the proliferation of client/server technologies, networks have migrated
from centrally controlled networks to heterogeneous, distributed environments. The
propagation of open systems and the increased diversity of applications, platforms, and
operating systems have caused the end user to have to remember several user IDs and
passwords just to be able to access and use the different resources within his own net-
work. Although the different IDs and passwords are supposed to provide a greater level of
security, they often end up compromising security (because users write them down) and
causing more effort and overhead for the staff that manages and maintains the network.
As any network staff member or administrator can attest to, too much time is de-
voted to resetting passwords for users who have forgotten them. More than one em-
ployee’s productivity is affected when forgotten passwords have to be reassigned. The
network staff member who has to reset the password could be working on other tasks,
and the user who forgot the password cannot complete his task until the network staff
member is finished resetting the password. Many help-desk employees report that a
Authorization Creep
I think Mike’s a creep. Let’s not give him any authorization to access company stuff.
Response: Sounds like a great criterion. All creeps—no access.
As employees work at a company over time and move from one department
to another, they often are assigned more and more access rights and permissions.
This is commonly referred to as authorization creep. It can be a large risk for a
company, because too many users have too much privileged access to company
assets. In the past, it has usually been easier for network administrators to give
more access than less, because then the user would not come back and require
more work to be done on her profile. It is also difficult to know the exact access
levels different individuals require. This is why user management and user provi-
sioning are becoming more prevalent in identity management products today
and why companies are moving more toward role-based access control imple-
mentation. Enforcing least privilege on user accounts should be an ongoing job,
which means each user’s rights and permissions should be reviewed to ensure
the company is not putting itself at risk.
NOTE Rights and permission reviews have been incorporated into
many regulatory-induced processes. As part of the SOX regulations,
managers have to review their employees’ permissions to data on an
annual basis.

majority of their time is spent on users forgetting their passwords. System administra-
tors have to manage multiple user accounts on different platforms, which all need to be
coordinated in a manner that maintains the integrity of the security policy. At times the
complexity can be overwhelming, which results in poor access control management
and the generation of many security vulnerabilities. A lot of time is spent on multiple
passwords, and in the end they do not provide us with more security.
The increased cost of managing a diverse environment, security concerns, and user
habits, coupled with the users’ overwhelming desire to remember one set of credentials,
has brought about the idea of single sign-on (SSO) capabilities. These capabilities
would allow a user to enter credentials one time and be able to access all resources in
primary and secondary network domains. This reduces the amount of time users spend
authenticating to resources and enables the administrator to streamline user accounts
and better control access rights. It improves security by reducing the probability that
users will write down passwords and also reduces the administrator’s time spent on
adding and removing user accounts and modifying access permissions. If an adminis-
trator needs to disable or suspend a specific account, she can do it uniformly instead of
having to alter configurations on each and every platform.
So that is our utopia: log on once and you are good to go. What bursts this bubble?
Mainly interoperability issues. For SSO to actually work, every platform, application, and
resource needs to accept the same type of credentials, in the same format, and interpret
their meanings the same. When Steve logs on to his Windows XP workstation and gets
authenticated by a mixed-mode Windows 2000 domain controller, it must authenticate
him to the resources he needs to access on the Apple computer, the Unix server running
NIS, the mainframe host server, the MICR print server, and the Windows XP computer in
the secondary domain that has the plotter connected to it. A nice idea, until reality hits.

There is also a security issue to consider in an SSO environment. Once an individ-
ual is in, he is in. If an attacker was able to uncover one credential set, he would have
access to every resource within the environment that the compromised account has
access to. This is certainly true, but one of the goals is that if a user only has to remem-
ber one password, and not ten, then a more robust password policy can be enforced. If
the user has just one password to remember, then it can be more complicated and se-
cure because he does not have nine other ones to remember also.
SSO technologies come in different types. Each has its own advantages and disad-
vantages, shortcomings, and quality features. It is rare to see a real SSO environment;
rather, you will see a cluster of computers and resources that accept the same creden-
tials. Other resources, however, still require more work by the administrator or user side
to access the systems. The SSO technologies that may be addressed in the CISSP exam
are described in the next sections.
Kerberos
Sam, there is a three-headed dog in front of the server!
Kerberos is the name of a three-headed dog that guards the entrance to the under-
world in Greek mythology. This is a great name for a security technology that provides
authentication functionality, with the purpose of protecting a company’s assets. Kerbe-
ros is an authentication protocol and was designed in the mid-1980s as part of MIT’s
Project Athena. It works in a client/server model and is based on symmetric key cryp-
tography. The protocol has been used for years in Unix systems and is currently the
default authentication method for Windows 2000, 2003, and 2008 operating systems.
In addition, Apple’s Mac OS X, Sun’s Solaris, and Red Hat Enterprise Linux all use
Kerberos authentication. Commercial products supporting Kerberos are becoming
more frequent, so this one might be a keeper.
Kerberos is an example of a single sign-on system for distributed environments, and
is a de facto standard for heterogeneous networks. Kerberos incorporates a wide range
of security capabilities, which gives companies much more flexibility and scalability
when they need to provide an encompassing security architecture. It has four elements
necessary for enterprise access control: scalability, transparency, reliability, and security.
However, this open architecture also invites interoperability issues. When vendors have
a lot of freedom to customize a protocol, it usually means no two vendors will custom-
ize it in the same fashion. This creates interoperability and incompatibility issues.
Kerberos uses symmetric key cryptography and provides end-to-end security. Al-
though it allows the use of passwords for authentication, it was designed specifically to
eliminate the need to transmit passwords over the network. Most Kerberos implemen-
tations work with shared secret keys.
Main Components in Kerberos The Key Distribution Center (KDC) is the most
important component within a Kerberos environment. The KDC holds all users’ and
services’ secret keys. It provides an authentication service, as well as key distribution
functionality. The clients and services trust the integrity of the KDC, and this trust is the
foundation of Kerberos security.

The KDC provides security services to principals, which can be users, applications,
or network services. The KDC must have an account for, and share a secret key with,
each principal. For users, a password is transformed into a secret key value. The secret
key can be used to send sensitive data back and forth between the principal and the
KDC, and is used for user authentication purposes.
A ticket is generated by the ticket granting service (TGS) on the KDC and given to a
principal when that principal, let’s say a user, needs to authenticate to another princi-
pal, let’s say a print server. The ticket enables one principal to authenticate to another
principal. If Emily needs to use the print server, she must prove to the print server she
is who she claims to be and that she is authorized to use the printing service. So Emily
requests a ticket from the TGS. The TGS gives Emily the ticket, and in turn, Emily pass-
es this ticket on to the print server. If the print server approves this ticket, Emily is al-
lowed to use the print service.
A KDC provides security services for a set of principals. This set is called a realm in
Kerberos. The KDC is the trusted authentication server for all users, applications, and
services within a realm. One KDC can be responsible for one realm or several realms.
Realms are used to allow an administrator to logically group resources and users.
So far, we know that principals (users and services) require the KDC’s services to
authenticate to each other; that the KDC has a database filled with information about
each and every principal within its realm; that the KDC holds and delivers cryptograph-
ic keys and tickets; and that tickets are used for principals to authenticate to each other.
So how does this process work?
The Kerberos Authentication Process The user and the KDC share a secret
key, while the service and the KDC share a different secret key. The user and the re-
quested service do not share a symmetric key in the beginning. The user trusts the KDC
because they share a secret key. They can encrypt and decrypt data they pass between
each other, and thus have a protected communication path. Once the user authenti-
cates to the service, they, too, will share a symmetric key (session key) that is used for
authentication purposes.
Here are the exact steps:
1. Emily comes in to work and enters her username and password into her
workstation at 8:00 A.M.
The Kerberos software on Emily’s computer sends the username to the
authentication service (AS) on the KDC, which in turn sends Emily a ticket
granting ticket (TGT) that is encrypted with Emily’s password (secret key).
2. If Emily has entered her correct password, then this TGT is decrypted and
Emily gains access to her local workstation desktop.
3. When Emily needs to send a print job to the print server, her system sends
the TGT to the ticket granting service (TGS), which runs on the KDC, and a
request to access the print server. (The TGT allows Emily to prove she has been
authenticated and allows her to request access to the print server.)
4. The TGS creates and sends a second ticket to Emily, which she will use to
authenticate to the print server. This second ticket contains two instances of

the same session key, one encrypted with Emily’s secret key and the other
encrypted with the print server’s secret key. The second ticket also contains
an authenticator, which contains identification information on Emily, her
system’s IP address, sequence number, and a timestamp.
5. Emily’s system receives the second ticket, decrypts and extracts the embedded
session key, adds a second authenticator set of identification information to
the ticket, and sends the ticket on to the print server.
6. The print server receives the ticket, decrypts and extracts the session key, and
decrypts and extracts the two authenticators in the ticket. If the print server
can decrypt and extract the session key, it knows the KDC created the ticket,
because only the KDC has the secret key used to encrypt the session key. If
the authenticator information that the KDC and the user put into the ticket
matches, then the print server knows it received the ticket from the correct
principal.
7. Once this is completed, it means Emily has been properly authenticated to
the print server and the server prints her document.
This is an extremely simplistic overview of what is going on in any Kerberos ex-
change, but it gives you an idea of the dance taking place behind the scenes whenever
you interact with any network service in an environment that uses Kerberos. Figure 3-12
provides a simplistic view of this process.
Figure 3-12 The user must receive a ticket from the KDC before being able to use the requested
resource.
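The core idea of steps 4 through 6, that the KDC wraps one fresh session key twice, once under each principal's secret key, can be sketched as follows. This simulation stands in for real symmetric encryption (Kerberos uses ciphers such as AES) purely to show the key flow; the `encrypt`/`decrypt` helpers and all names are illustrative:

```python
import os

def encrypt(key: bytes, payload: dict) -> tuple:
    # Stand-in for symmetric encryption: only a holder of `key`
    # can later "decrypt" the blob. Real Kerberos uses a real cipher.
    return (key, dict(payload))

def decrypt(key: bytes, blob: tuple) -> dict:
    blob_key, payload = blob
    if blob_key != key:
        raise ValueError("wrong secret key")
    return dict(payload)

# Long-term secret keys shared with the KDC. Emily's key would be
# derived from her password; the print server's is provisioned.
emily_key = os.urandom(16)
printer_key = os.urandom(16)

# TGS step: the KDC generates a fresh session key and encrypts it
# twice -- once for Emily, once for the print server.
session_key = os.urandom(16)
ticket_for_emily = encrypt(emily_key, {"session_key": session_key})
ticket_for_printer = encrypt(printer_key, {"session_key": session_key,
                                           "principal": "emily"})

# Each side recovers the same session key with its own secret key.
k1 = decrypt(emily_key, ticket_for_emily)["session_key"]
k2 = decrypt(printer_key, ticket_for_printer)["session_key"]
```

Because only the KDC knows both long-term keys, the print server's ability to decrypt its copy proves the ticket came from the KDC, and both principals end up sharing the session key without ever exchanging it in the clear.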

The authentication service is the part of the KDC that authenticates a principal, and
the TGS is the part of the KDC that makes the tickets and hands them out to the prin-
cipals. TGTs are used so the user does not have to enter his password each time he needs
to communicate with another principal. After the user enters his password, it is tempo-
rarily stored on his system, and any time the user needs to communicate with another
principal, he just reuses the TGT.
Be sure you understand that a session key is different from a secret key. A secret key
is shared between the KDC and a principal and is static in nature. A session key is
shared between two principals and is generated when needed and destroyed after the
session is completed.
If a Kerberos implementation is configured to use an authenticator, the user sends
to the print server her identification information and a timestamp and sequence num-
ber encrypted with the session key they share. The print server decrypts this information
and compares it with the identification data the KDC sent to it about this requesting
user. If the data is the same, the print server allows the user to send print jobs. The time-
stamp is used to help fight against replay attacks. The print server compares the sent
timestamp with its own internal time, which helps determine if the ticket has been
sniffed and copied by an attacker and then submitted at a later time in hopes of imper-
sonating the legitimate user and gaining unauthorized access. The print server checks
the sequence number to make sure that this ticket has not been submitted previously.
This is another countermeasure to protect against replay attacks.
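The print server's two replay checks, clock skew on the timestamp and uniqueness of the sequence number, can be sketched together. The five-minute skew window below is a common Kerberos default but is an assumption here, as are the function and variable names:

```python
from datetime import datetime, timedelta

SEEN_SEQUENCE_NUMBERS = set()
MAX_SKEW = timedelta(minutes=5)  # assumed tolerance for clock drift

def accept_authenticator(timestamp: datetime, seq: int,
                         now: datetime) -> bool:
    # Reject stale timestamps: a sniffed ticket replayed later
    # will fall outside the allowed skew window.
    if abs(now - timestamp) > MAX_SKEW:
        return False
    # Reject sequence numbers that have already been presented,
    # catching replays submitted within the skew window.
    if seq in SEEN_SEQUENCE_NUMBERS:
        return False
    SEEN_SEQUENCE_NUMBERS.add(seq)
    return True
```

The two checks complement each other: the timestamp bounds how long a captured ticket stays usable, and the sequence number stops an attacker who replays it quickly enough to beat the clock check.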
NOTE A replay attack is when an attacker captures and resubmits data
(commonly a credential) with the goal of gaining unauthorized access to
an asset.
The primary reason to use Kerberos is that the principals do not trust each other
enough to communicate directly. In our example, the print server will not print any-
one’s print job without that entity authenticating itself. So none of the principals trust
each other directly; they only trust the KDC. The KDC creates tickets to vouch for the
individual principals when they need to communicate. Suppose I need to communi-
cate directly with you, but you do not trust me enough to listen and accept what I am
saying. If I first give you a ticket from something you do trust (KDC), this basically says,
“Look, the KDC says I am a trustworthy person. The KDC asked me to give this ticket to
you to prove it.” Once that happens, then you will communicate directly with me.
The same type of trust model is used in PKI environments. (More information on
PKI is presented in Chapter 7.) In a PKI environment, users do not trust each other di-
rectly, but they all trust the certificate authority (CA). The CA vouches for the individu-
als’ identities by using digital certificates, the same as the KDC vouches for the
individuals’ identities by using tickets.
So why are we talking about Kerberos? Because it is one example of a single sign-on
technology. The user enters a user ID and password one time and one time only. The

Chapter 3: Access Control
213
tickets have time limits on them that administrators can configure. Many times, the
lifetime of a TGT is eight to ten hours, so when the user comes in the next day, he will
have to present his credentials again.
NOTE Kerberos is an open protocol, meaning that vendors can manipulate
it to work properly within their products and environments. The industry
has different “flavors” of Kerberos, since various vendors require different
functionality.
Weaknesses of Kerberos The following are some of the potential weaknesses of
Kerberos:
• The KDC can be a single point of failure. If the KDC goes down, no one can
access needed resources. Redundancy is necessary for the KDC.
• The KDC must be able to handle the number of requests it receives in a timely
manner. It must be scalable.
• Secret keys are temporarily stored on the users’ workstations, which means it
is possible for an intruder to obtain these cryptographic keys.
• Session keys are decrypted and reside on the users’ workstations, either in a
cache or in a key table. Again, an intruder can capture these keys.
• Kerberos is vulnerable to password guessing. The KDC does not know if a
dictionary attack is taking place.
• Network traffic is not protected by Kerberos if encryption is not enabled.
• If the keys are too short, they can be vulnerable to brute force attacks.
• Kerberos needs all client and server clocks to be synchronized.
Kerberos must be transparent (work in the background without the user needing to
understand it), scalable (work in large, heterogeneous environments), reliable (use dis-
tributed server architecture to ensure there is no single point of failure), and secure
(provide authentication and confidentiality).
Kerberos and Password-Guessing Attacks
Just because an environment uses Kerberos does not mean the systems are vul-
nerable to password-guessing attacks. The operating system itself will (should)
provide the protection of tracking failed login attempts. The Kerberos protocol
does not have this type of functionality, so another component must be in place
to counter these types of attacks. No need to start ripping Kerberos out of your
network environment after reading this section; your operating system provides
the protection mechanism for this type of attack.

CISSP All-in-One Exam Guide
214
SESAME
I said, “Open Sesame,” and nothing happened.
Response: It is broken then.
The Secure European System for Applications in a Multi-vendor Environment (SESAME)
project is a single sign-on technology developed to extend Kerberos functionality and
improve upon its weaknesses. SESAME uses symmetric and asymmetric cryptographic
techniques to authenticate subjects to network resources.
NOTE Kerberos is a strictly symmetric key–based technology, whereas
SESAME is based on both asymmetric and symmetric key cryptography.
Kerberos uses tickets to authenticate subjects to objects, whereas SESAME uses Priv-
ileged Attribute Certificates (PACs), which contain the subject’s identity, access capa-
bilities for the object, access time period, and lifetime of the PAC. The PAC is digitally
signed so the object can validate it came from the trusted authentication server, which
is referred to as the Privileged Attribute Server (PAS). The PAS holds a similar role to
that of the KDC within Kerberos. After a user successfully authenticates to the authen-
tication service (AS), he is presented with a token to give to the PAS. The PAS then cre-
ates a PAC for the user to present to the resource he is trying to access. Figure 3-13
shows a basic overview of the SESAME process.
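The fields the text attributes to a PAC can be sketched as a simple structure. This is an illustrative assumption, not the real SESAME encoding: the class and function names are invented, and the PAS's digital signature verification is deliberately left as a placeholder field.

```python
from dataclasses import dataclass
import time

# Hypothetical sketch of the contents of a Privileged Attribute Certificate
# as described in the text: identity, access capabilities, time period, lifetime.
@dataclass
class PrivilegedAttributeCertificate:
    subject_identity: str
    capabilities: frozenset     # operations permitted on the object
    valid_from: float           # start of the access time period (epoch seconds)
    valid_until: float          # end of the period / lifetime of the PAC
    pas_signature: bytes = b""  # would be verified against the PAS's public key

def pac_permits(pac, operation, now=None):
    """Check the PAC's time window and capabilities (signature check omitted)."""
    now = time.time() if now is None else now
    if not (pac.valid_from <= now <= pac.valid_until):
        return False
    return operation in pac.capabilities
```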
NOTE Kerberos and SESAME can be accessed through the Generic
Security Services Application Programming Interface (GSS-API), which is a
generic API for client-to-server authentication. Using standard APIs enables
vendors to communicate with and use each other’s functionality and security.
Kerberos Version 5 and SESAME implementations allow any application to
use their authentication functionality as long as the application knows how to
communicate via GSS-API.
The SESAME technology is currently in version 4, and while it can be implemented as a full SSO solution in its own right, it can also be integrated as an “add-on” to Kerberos. SESAME can be added to provide the functionality of public key cryptography and role-based access control.
Security Domains
I am highly trusted and have access to many resources.
Response: So what.
The term “domain” has been around a lot longer than Microsoft, but when people
hear this term, they often think of a set of computers and devices on a network segment
being controlled by a server that runs Microsoft software, referred to as a domain con-
troller. A domain is really just a set of resources available to a subject. Remember that a
subject can be a user, process, or application. Within an operating system, a process has
a domain, which is the set of system resources available to the process to carry out its
tasks. These resources can be memory segments, hard drive space, operating system
services, and other processes. In a network environment, a domain is a set of physical
and logical resources that is available, which can include routers, file servers, FTP ser-
vice, web servers, and so forth.
The term security domain just builds upon the definition of domain by adding the fact
that resources within this logical structure (domain) are working under the same security
policy and managed by the same group. So, a network administrator may put all of the
accounting personnel, computers, and network resources in Domain 1 and all of
the management personnel, computers, and network resources in Domain 2. These items
fall into these individual containers because they not only carry out similar types of busi-
ness functions, but also, and more importantly, have the same type of trust level. It is this
common trust level that allows entities to be managed by one single security policy.
The different domains are separated by logical boundaries, such as firewalls with
ACLs, directory services making access decisions, and objects that have their own ACLs
indicating which individuals and groups can carry out operations on them. All of these
security mechanisms are examples of components that enforce the security policy for
each domain.
Figure 3-13 SESAME is very similar to Kerberos.
Domains can be architected in a hierarchical manner that dictates the relationship
between the different domains and the ways in which subjects within the different do-
mains can communicate. Subjects can access resources in domains of equal or lower trust
levels. Figure 3-14 shows an example of hierarchical network domains. Their communi-
cation channels are controlled by security agents (firewalls, router ACLs, directory servic-
es), and the individual domains are isolated by using specific subnet mask addresses.
Remember that a domain does not necessarily pertain only to network devices and
segmentations, but can also apply to users and processes. Figure 3-15 shows how users
and processes can have more granular domains assigned to them individually based
on their trust level. Group 1 has a high trust level and can access both a domain of its
own trust level (Domain 1) and a domain of a lower trust level (Domain 2). User 1,
who has a lower trust level, can access only the domain at his trust level and nothing
higher. The system enforces these domains with access privileges and rights provided
by the file system and operating system security kernel.
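The equal-or-lower trust rule for hierarchical domains can be reduced to a single comparison. The sketch below is illustrative only; the domain names follow the figure's example, but the numeric trust levels are an assumption made for the code.

```python
# Illustrative sketch: a subject may access resources in domains of equal
# or lower trust level. Higher number = more trusted (invented scale).
DOMAIN_TRUST = {"Domain 1": 2, "Domain 2": 1}

def can_access(subject_trust_level, domain):
    """Allow access only to domains at the subject's trust level or below."""
    return subject_trust_level >= DOMAIN_TRUST[domain]
```

So Group 1 (trust level 2) reaches both domains, while User 1 (trust level 1) reaches only Domain 2.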
Figure 3-14 Network domains are used to separate different network segments.
So why are domains in the “Single Sign-On” section? Because several different types
of technologies available today are used to define and enforce these domains and secu-
rity policies mapped to them: domain controllers in a Windows environment, enter-
prise resource management (ERM) products, Microsoft Passport (now Windows Live
ID), and the various products that provide SSO functionality. The goal of each of them
is to allow a user (subject) to sign in one time and be able to access the different do-
mains available without having to reenter any other credentials.
Directory Services
While we covered directory services in the “Identity Management” section, it is also important for you to realize that a directory service is considered a single sign-on technology in its own right, so we will review its characteristics again within this section.
Figure 3-15 Subjects can access specific domains based on their trust levels.
A network service is a mechanism that identifies resources (printers, file servers,
domain controllers, and peripheral devices) on a network. A network directory service
contains information about these different resources, and the subjects that need to ac-
cess them, and carries out access control activities. If the directory service is working in
a database based on the X.500 standard, it works in a hierarchical schema that outlines
the resources’ attributes, such as name, logical and physical location, subjects that can
access them, and the operations that can be carried out on them.
In a database based on the X.500 standard, access requests are made from users and
other systems using the LDAP protocol. This type of database provides a hierarchical
structure for the organization of objects (subjects and resources). The directory service
develops unique distinguished names for each object and appends the corresponding
attribute to each object as needed. The directory service enforces a security policy (con-
figured by the administrator) to control how subjects and objects interact.
Network directory services provide users access to network resources transparently,
meaning that users don’t need to know the exact location of the resources or the steps
required to access them. The network directory services handle these issues for the user
in the background. Some examples of directory services are Lightweight Directory Ac-
cess Protocol (LDAP), Novell NetWare Directory Service (NDS), and Microsoft Active
Directory (AD).
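The hierarchical naming idea can be sketched with a tiny in-memory "directory": each object is identified by an X.500-style distinguished name built from attribute pairs, and the service resolves the name for the user. The entries, attribute names, and DNs below are invented for illustration; a real implementation would speak LDAP to an actual directory server.

```python
# Minimal sketch of X.500-style naming: every object has a unique
# distinguished name (DN) composed of hierarchical attribute pairs.
directory = {
    "cn=PrintServer1,ou=Printers,o=Acme": {
        "location": "Building 2",
        "allowed_subjects": {"cn=Bob,ou=Users,o=Acme"},
    },
}

def lookup(dn):
    """The user asks for a resource by name; the service resolves the details."""
    return directory.get(dn)
```

The point of the transparency the text describes is that the user never needs to know `location` or the access steps; the directory service returns them on demand.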
Thin Clients
Hey, where’s my operating system?
Response: You don’t deserve one.
Diskless computers and thin clients cannot store much information because of
their lack of onboard storage space and necessary resources. This type of client/server
technology forces users to log on to a central server just to use the computer and access
network resources. When the user starts the computer, it runs a short list of instructions
and then points itself to a server that will actually download the operating system, or
interactive operating software, to the terminal. This enforces a strict type of access con-
trol, because the computer cannot do anything on its own until it authenticates to a
centralized server, and then the server gives the computer its operating system, profile,
and functionality. Thin-client technology provides another type of SSO access for users
because users authenticate only to the central server or mainframe, which then provides
them access to all authorized and necessary resources.
In addition to providing an SSO solution, a thin-client technology offers several
other advantages. A company can save money by purchasing thin clients instead of
powerful and expensive PCs. The central server handles all application execution, pro-
cessing, and data storage. The thin client displays the graphical representation and
sends mouse clicks and keystroke inputs to the central server. Having all of the software
in one location, instead of distributed throughout the environment, allows for easier
administration, centralized access control, easier updates, and standardized configurations. It is also easier to control malware infestations and the theft of confidential data
because the thin clients often do not have CD-ROM, DVD, or USB ports.
NOTE The technology industry came from a centralized model, with the
use of mainframes and dumb terminals, and is in some ways moving back
toward this model with the use of terminal services, Citrix, Service Oriented
Architecture, and cloud computing.
Access Control Models
An access control model is a framework that dictates how subjects access objects. It uses
access control technologies and security mechanisms to enforce the rules and objec-
tives of the model. There are three main types of access control models: discretionary,
mandatory, and role based. Each model type uses different methods to control how
subjects access objects, and each has its own merits and limitations. The business and
security goals of an organization will help prescribe what access control model it should
use, along with the culture of the company and the habits of conducting business.
Some companies use one model exclusively, whereas others combine them to be able
to provide the necessary level of protection.
These models are built into the core or the kernel of the different operating systems
and possibly their supporting applications. Every operating system has a security kernel
that enforces a reference monitor concept, which differs depending upon the type of
access control model embedded into the system. For every access attempt, before a
subject can communicate with an object, the security kernel reviews the rules of the ac-
cess control model to determine whether the request is allowed.
Examples of Single Sign-On Technologies
• Kerberos Authentication protocol that uses a KDC and tickets, and is based on symmetric key cryptography
• SESAME Authentication protocol that uses a PAS and PACs, and is based on symmetric and asymmetric cryptography
• Security domains Resources working under the same security policy and managed by the same group
• Directory services Technology that allows resources to be named in a standardized manner and access control to be maintained centrally
• Thin clients Terminals that rely upon a central server for access control, processing, and storage
The following sections explain these different models, their supporting technolo-
gies, and where they should be implemented.
Discretionary Access Control
Only I can let you access my files.
Response: Mother, may I?
If a user creates a file, he is the owner of that file. An identifier for this user is placed
in the file header and/or in an access control matrix within the operating system. Own-
ership might also be granted to a specific individual. For example, a manager for a cer-
tain department might be made the owner of the files and resources within her
department. A system that uses discretionary access control (DAC) enables the owner of
the resource to specify which subjects can access specific resources. This model is called
discretionary because the control of access is based on the discretion of the owner.
Many times department managers, or business unit managers, are the owners of the
data within their specific department. Being the owner, they can specify who should
have access and who should not.
In a DAC model, access is restricted based on the authorization granted to the users.
This means users are allowed to specify what type of access can occur to the objects they
own. If an organization is using a DAC model, the network administrator can allow
resource owners to control who has access to their files. The most common implemen-
tation of DAC is through ACLs, which are dictated and set by the owners and enforced
by the operating system. This can make a user’s ability to access information dynamic
versus the more static role of mandatory access control (MAC).
Most of the operating systems you may be used to dealing with are based on DAC
models, such as all Windows, Linux, and Macintosh systems, and most flavors of Unix.
When you look at the properties of a file or directory and see the choices that allow you
to control which users can have access to this resource and to what degree, you are wit-
nessing an instance of ACLs enforcing a DAC model.
DACs can be applied to both the directory tree structure and the files it contains.
The PC world has access permissions of No Access, Read (r), Write (w), Execute (x),
Delete (d), Change (c), and Full Control. The Read attribute lets you read the file but
not make changes. The Change attribute allows you to read, write, execute, and delete
the file but does not let you change the ACLs or the owner of the files. Obviously, the
attribute of Full Control lets you make any changes to the file and its permissions and
ownership.
Identity-Based Access Control
DAC systems grant or deny access based on the identity of the subject. The iden-
tity can be a user identity or a group membership. So, for example, a data owner
can choose to allow Bob (user identity) and the Accounting group (group mem-
bership identity) to access his file.
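The owner-controlled ACL check described above can be sketched briefly. The permission names mirror the Read/Change/Full Control semantics in the text, but the data layout, function name, and the Bob/Accounting entries are assumptions made for this example, not any particular operating system's ACL format.

```python
# Hedged sketch of a DAC check: compare the subject's identity (or group
# membership) to the owner-configured ACL on the object.
PERMISSION_GRANTS = {
    "Read":         {"read"},
    "Change":       {"read", "write", "execute", "delete"},
    "Full Control": {"read", "write", "execute", "delete", "change_acl"},
}

def dac_allows(acl, subject, groups, operation):
    """Grant the operation if any identity the subject holds is in the ACL."""
    for identity in [subject, *groups]:
        granted = PERMISSION_GRANTS.get(acl.get(identity, "No Access"), set())
        if operation in granted:
            return True
    return False
```

For instance, an owner might set `acl = {"Bob": "Read", "Accounting": "Change"}`: Bob can read but not write, and any member of Accounting can read, write, execute, and delete, but cannot change the ACL or the owner.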
While DAC systems provide a lot of flexibility to the user and less administration for IT, they are also the Achilles heel of operating systems. Malware can install itself and
work under the security context of the user. For example, if a user opens an attachment
that is infected with a virus, the code can install itself in the background without the
user being aware of this activity. This code basically inherits all the rights and permis-
sions that the user has and can carry out all the activities the user can on the system. It
can send copies of itself out to all the contacts listed in the user’s e-mail client, install a
back door, attack other systems, delete files on the hard drive, and more. The user is
actually giving rights to the virus to carry out its dirty deeds, because the user has very
powerful discretionary rights and is considered the owner of many objects on the sys-
tem. And the fact that many users are assigned local administrator or root accounts
means that once malware is installed, it can do anything on a system.
As we have said before, there is a constant battle between functionality and security.
To allow for the amount of functionality we demand of our operating systems today,
they have to work within a DAC model—but because they work in a DAC model, ex-
tensive compromises are always possible.
While we may want to give users some freedom to indicate who can access the files
that they create and other resources on their systems that they are configured to be
“owners” of, we really don’t want them dictating all access decisions in environments
with assets that need to be protected. We just don’t trust them that much, and we
shouldn’t. In most environments user profiles are created and loaded on user worksta-
tions that indicate the level of control the user does and does not have. As a security
administrator you might configure user profiles so that users cannot change the sys-
tem’s time, alter system configuration files, access a command prompt, or install unap-
proved applications. This type of access control is referred to as nondiscretionary,
meaning that access decisions are not made at the discretion of the user. Nondiscretion-
ary access controls are put into place by an authoritative entity (usually a security ad-
ministrator) with the goal of protecting the organization’s most critical assets.
Mandatory Access Control
This system holds sensitive, super-duper, secret stuff.
In a mandatory access control (MAC) model, users do not have the discretion of
determining who can access objects as in a DAC model. An operating system that is
based upon a MAC model greatly reduces the amount of rights, permissions, and func-
tionality a user has for security purposes. In most systems based upon the MAC model,
a user cannot install software, change file permissions, add new users, etc. The system
can be used by the user for very focused and specific purposes, and that is it. These systems are usually very specialized and are in place to protect highly classified data.
Most people have never interacted with a MAC-based system because they are used by
government-oriented agencies that maintain top secret information.
The MAC model is much more structured and strict than the DAC model and is
based on a security label system. Users are given a security clearance (secret, top secret,
confidential, and so on), and data is classified in the same way. The clearance and clas-
sification data are stored in the security labels, which are bound to the specific subjects
and objects. When the system makes a decision about fulfilling a request to access an
object, it is based on the clearance of the subject, the classification of the object, and the
security policy of the system. The rules for how subjects access objects are made by the
organization’s security policy, configured by the security administrator, enforced by the
operating system, and supported by security technologies.
NOTE Traditional MAC systems are based upon multilevel security policies,
which outline how data at different classification levels are to be protected.
Multilevel security (MLS) systems allow data at different classification levels
to be accessed and interacted with by users with different clearance levels
simultaneously.
Security labels are attached to all objects; thus, every file, directory, and device has
its own security label with its classification information. A user may have a security
clearance of secret, and the data he requests may have a security label with the classifi-
cation of top secret. In this case, the user will be denied because his clearance is not
equivalent or does not dominate (is not equal or higher than) the classification of the
object.
NOTE The terms “security labels” and “sensitivity labels” can be used
interchangeably.
Each subject and object must have an associated label with attributes at all times,
because this is part of the operating system’s access-decision criteria. Each subject and
object does not require a physically unique label, but can be logically associated. For
example, all subjects and objects on Server 1 can share the same label of secret clearance
and classification.
This type of model is used in environments where information classification and
confidentiality is of utmost importance, such as military institutions, government agen-
cies, and government contract companies. Special types of Unix systems are developed
based on the MAC model. A company cannot simply choose to turn on either DAC or
MAC. It has to purchase an operating system that has been specifically designed to en-
force MAC rules. DAC systems do not understand security labels, classifications, or
clearances, and thus cannot be used in institutions that require this type of structure for
access control. A publicly released MAC system is SELinux, developed by the NSA and Secure Computing. Trusted Solaris is a product based on the MAC model that most people are familiar with (relative to other MAC products).
While MAC systems enforce strict access control, they also provide a high degree of security, particularly where malware is concerned. Malware is the bane of DAC systems. Viruses, worms, and rootkits can be installed and run as applications on DAC systems.
Since users that work within a MAC system cannot install software, the operating sys-
tem does not allow any type of software, including malware, to be installed while the
user is logged in. But while MAC systems might seem an answer to all our security
prayers, they have very limited user functionality, require a lot of administrative over-
head, are very expensive, and are not user-friendly. DAC systems are general-purpose
computers, while MAC systems serve a very specific purpose.
NOTE DAC systems are discretionary and MAC systems are considered
nondiscretionary because the users cannot make access decisions based upon
their own discretion (choice).
Sensitivity Labels
I am very sensitive. Can I have a label?
Response: Nope.
When the MAC model is being used, every subject and object must have a sensitiv-
ity label, also called a security label. It contains a classification and different categories.
The classification indicates the sensitivity level, and the categories enforce need-to-
know rules. Figure 3-16 illustrates a sensitivity label.
The classifications follow a hierarchical structure, with one level being more trusted
than another. However, the categories do not follow a hierarchical scheme, because
they represent compartments of information within a system. The categories can cor-
respond to departments (UN, Information Warfare, Treasury), projects (CRM, Air-
portSecurity, 2011Budget), or management levels. In a military environment, the
classifications could be top secret, secret, confidential, and unclassified. Each classifica-
tion is more trusted than the one below it. A commercial organization might use con-
fidential, proprietary, corporate, and sensitive. The definition of the classification is up
to the organization and should make sense for the environment in which it is used.
The categories portion of the label enforces need-to-know rules. Just because some-
one has a top-secret clearance does not mean she now has access to all top-secret infor-
mation. She must also have a need to know. As shown in Figure 3-16, if Cheryl has a
top-secret clearance but does not have a need to know that is sufficient to access any of
the listed categories (Dallas, Max, Cricket), she cannot look at this object.
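The MAC decision just described combines two tests: the clearance must dominate the classification, and the subject must hold every category on the label (need-to-know). The sketch below uses the military levels and the Cheryl example from the text; the numeric ordering and function name are assumptions for illustration.

```python
# Sketch of a MAC access decision: clearance dominates classification
# AND the subject's categories cover the object's categories.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows(subject_clearance, subject_categories, object_label):
    """object_label is a (classification, categories) pair from the sensitivity label."""
    classification, object_categories = object_label
    dominates = LEVELS[subject_clearance] >= LEVELS[classification]
    need_to_know = object_categories <= set(subject_categories)
    return dominates and need_to_know
```

So Cheryl, with a top-secret clearance but none of the Dallas, Max, or Cricket categories, is denied even though her clearance dominates the classification.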
NOTE In MAC implementations, the system makes access decisions by
comparing the subject’s clearance and need-to-know level to the object’s
security label. In DAC, the system compares the subject’s identity to the ACL
on the resource.
Figure 3-16 A sensitivity label is made up of a classification and categories.
Software and hardware guards allow the exchange of data between trusted (high
assurance) and less trusted (low assurance) systems and environments. For instance, if
you were working on a MAC system (working in the dedicated security mode of secret)
and you needed it to communicate to a MAC database (working in multilevel security
mode, which goes up to top secret), the two systems would provide different levels of
protection. If a system with lower assurance can directly communicate with a system of
high assurance, then security vulnerabilities and compromises could be introduced. A
software guard is really just a front-end product that allows interconnectivity between
systems working at different security levels. Different types of guards can be used to
carry out filtering, processing requests, data blocking, and data sanitization. A hardware
guard can be implemented, which is a system with two NICs connecting the two sys-
tems that need to communicate with one another. Guards can be used to connect dif-
ferent MAC systems working in different security modes, and they can be used to
connect different networks working at different security levels. In many cases, the less
trusted system can send messages to the more trusted system and can only receive ac-
knowledgments back. This is common when e-mail messages need to go from less
trusted systems to more trusted classified systems.
Role-Based Access Control
I am in charge of chalk; thus, I need full control of all servers!
Response: Good try.
A role-based access control (RBAC) model uses a centrally administered set of controls to determine how subjects and objects interact. The access control levels can be based upon the necessary operations and tasks a user needs to carry out to fulfill her responsibilities within an organization. This type of model lets access to resources be
based on the role the user holds within the company. The more traditional access con-
trol administration is based on just the DAC model, where access control is specified
at the object level with ACLs. This approach is more complex because the administrator must translate an organizational authorization policy into permissions when configuring ACLs. As the number of objects and users grows within an environment, users
are bound to be granted unnecessary access to some objects, thus violating the least-
privilege rule and increasing the risk to the company. The RBAC approach simplifies
access control administration by allowing permissions to be managed in terms of user
job roles.
In an RBAC model, a role is defined in terms of the operations and tasks the role
will carry out, whereas a DAC model outlines which subjects can access what objects
based upon the individual user identity.
Let’s say we need a research and development analyst role. We develop this role not
only to allow an individual to have access to all product and testing data, but also, and
more importantly, to outline the tasks and operations that the role can carry out on this
data. When the analyst role makes a request to access the new testing results on the file
server, in the background the operating system reviews the role’s access levels before
allowing this operation to take place.
NOTE Introducing roles also introduces the difference between rights
being assigned explicitly and implicitly. If rights and permissions are assigned
explicitly, it indicates they are assigned directly to a specific individual. If they
are assigned implicitly, it indicates they are assigned to a role or group and the
user inherits those attributes.
An RBAC model is the best system for a company that has high employee turnover.
If John, who is mapped to the contractor role, leaves the company, then Chrissy, his
replacement, can be easily mapped to this role. That way, the administrator does not
need to continually change the ACLs on the individual objects. He only needs to create
a role (contractor), assign permissions to this role, and map the new user to this role.
As discussed in the “Identity Management” section, organizations are moving more
toward role-based access models to properly control identity and provisioning activi-
ties. The formal RBAC model has several approaches to security that can be used in
software and organizations.
Core RBAC
This component will be integrated in every RBAC implementation because it is the
foundation of the model. Users, roles, permissions, operations, and sessions are de-
fined and mapped according to the security policy.
• Has a many-to-many relationship among individual users and privileges
• Session is a mapping between a user and a subset of assigned roles
• Accommodates traditional but robust group-based access control
Many users can belong to many groups with various privileges outlined for each
group. When the user logs in (this is a session), the various roles and groups this user
has been assigned will be available to the user at one time. If I am a member of the Ac-
counting role, RD group, and Administrative role, when I log on, all of the permissions
assigned to these various groups are available to me.
This model provides robust options because it can include other components when
making access decisions, instead of just basing the decision on a credential set. The
RBAC system can be configured to also include time of day, location of role, day of the
week, and so on. This means other information, not just the user ID and credential, is
used for access decisions.
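The core relationships above (users, roles, permissions, and sessions) can be sketched in a few lines. This is a hypothetical illustration, not code from any RBAC product, and every name in it is invented:

```python
# Minimal Core RBAC sketch: users map to roles (many-to-many), roles map
# to permissions, and a session activates a subset of the user's assigned
# roles. All names here are invented for illustration.

class CoreRBAC:
    def __init__(self):
        self.user_roles = {}   # user -> set of assigned roles
        self.role_perms = {}   # role -> set of (operation, object) pairs

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def grant_permission(self, role, operation, obj):
        self.role_perms.setdefault(role, set()).add((operation, obj))

    def open_session(self, user, requested_roles):
        # A session is a mapping between a user and a subset of his roles.
        assigned = self.user_roles.get(user, set())
        return {"user": user, "roles": set(requested_roles) & assigned}

    def check_access(self, session, operation, obj):
        # Allowed if any role active in this session carries the permission.
        return any((operation, obj) in self.role_perms.get(role, set())
                   for role in session["roles"])

rbac = CoreRBAC()
rbac.assign_role("joan", "Accounting")
rbac.grant_permission("Accounting", "read", "TestResults")
session = rbac.open_session("joan", ["Accounting"])
print(rbac.check_access(session, "read", "TestResults"))    # True
```

Note that the access decision never references Joan directly; revoking her access is just a matter of unmapping her from the role.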
Hierarchical RBAC
This component allows the administrator to set up an organizational RBAC model that
maps to the organizational structures and functional delineations required in a specific
environment. This is very useful since businesses are already set up in a personnel hier-
archical structure. In most cases, the higher you are in the chain of command, the more
access you will most likely have.
• Role relation defining user membership and privilege inheritance. For
example, the nurse role can access a certain amount of files, and the lab
technician role can access another set of files. The doctor role inherits the
permissions and access rights of these two roles and has more elevated rights
already assigned to the doctor role. So hierarchical is an accumulation of
rights and permissions of other roles.
• Reflects organizational structures and functional delineations.
• Two types of hierarchies:
• Limited hierarchies—Only one level of hierarchy is allowed (Role 1 inherits
from Role 2 and no other role)
• General hierarchies—Allows for many levels of hierarchies (Role 1 inherits
Role 2 and Role 3’s permissions)
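The nurse/lab technician/doctor example above can be sketched as role inheritance, where a senior role accumulates the rights of the roles beneath it. This is an illustrative sketch; the role and permission names are invented:

```python
# Hierarchical RBAC sketch: a role's effective permissions are its own
# plus everything inherited from its parent roles. Names are illustrative.

role_parents = {
    "doctor": ["nurse", "lab_technician"],   # doctor inherits from both
    "nurse": [],
    "lab_technician": [],
}

role_perms = {
    "nurse": {"patient_charts"},
    "lab_technician": {"lab_results"},
    "doctor": {"prescriptions"},             # rights assigned directly
}

def effective_perms(role):
    perms = set(role_perms.get(role, set()))
    for parent in role_parents.get(role, []):
        perms |= effective_perms(parent)     # accumulate inherited rights
    return perms

print(sorted(effective_perms("doctor")))
# ['lab_results', 'patient_charts', 'prescriptions']
```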
Hierarchies are a natural means of structuring roles to reflect an organization’s lines
of authority and responsibility. Role hierarchies define an inheritance relation among
roles. Different separations of duties are provided under this model.
• Static Separation of Duty (SSD) Relations through RBAC This would be
used to deter fraud by constraining the combination of privileges (for
example, the user cannot be a member of both the Cashier and Accounts
Receivable groups).
• Dynamic Separation of Duties (DSD) Relations through RBAC This
would be used to deter fraud by constraining the combination of privileges
that can be activated in any session (for instance, the user cannot be in both
the Cashier and Cashier Supervisor roles at the same time, but the user can
be a member of both). This one is a little more confusing. It means Joe is a
member of both the Cashier and Cashier Supervisor roles. If he logs in as a
Cashier, the Supervisor role is unavailable to him during that session. If he
logs in as Cashier Supervisor, the Cashier role is unavailable to him during
that session.
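The SSD and DSD constraints can be sketched as two different checks, one at role-assignment time and one at session-activation time. This is a hypothetical sketch using the chapter's Cashier example:

```python
# Separation-of-duty sketch. SSD constrains role *assignment*;
# DSD constrains which roles can be *active in the same session*.
# Role names are illustrative.

SSD_EXCLUSIVE = [{"Cashier", "AccountsReceivable"}]    # cannot hold both
DSD_EXCLUSIVE = [{"Cashier", "CashierSupervisor"}]     # cannot activate both

def can_assign(current_roles, new_role):
    # Static SD: reject the assignment if it would complete a forbidden pair.
    for pair in SSD_EXCLUSIVE:
        if new_role in pair and (pair - {new_role}) & set(current_roles):
            return False
    return True

def can_activate(active_roles, role):
    # Dynamic SD: reject activation within one session, even though the
    # user may legitimately be assigned both roles.
    for pair in DSD_EXCLUSIVE:
        if role in pair and (pair - {role}) & set(active_roles):
            return False
    return True

assert can_assign({"Cashier"}, "AccountsReceivable") is False
assert can_activate({"Cashier"}, "CashierSupervisor") is False  # same session
assert can_activate(set(), "CashierSupervisor") is True
```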
Role-based access control can be managed in the following ways:
• Non-RBAC Users are mapped directly to applications and no roles are used.
• Limited RBAC Users are mapped to multiple roles and mapped directly to
other types of applications that do not have role-based access functionality.
• Hybrid RBAC Users are mapped to multiapplication roles with only selected
rights assigned to those roles.
• Full RBAC Users are mapped to enterprise roles.
NOTE The privacy of many different types of data needs to be protected,
which is why many organizations have privacy officers and privacy policies
today. The current access control models (MAC, DAC, RBAC) do not lend
themselves to protecting data of a given sensitivity level, but instead limit the
functions that the users can carry out. For example, managers may be able to
access a Privacy folder, but there needs to be more detailed access control
that indicates that they can access customers’ home addresses but not Social
Security numbers. This is referred to as Privacy Aware Role-Based Access Control.

Access Control Techniques and Technologies
Once an organization determines what type of access control model it is going to use,
it needs to identify and refine its technologies and techniques to support that model.
The following sections describe the different access controls and technologies available
to support different access control models.
RBAC, MAC, and DAC
A lot of confusion exists regarding whether RBAC is a type of DAC model or a
type of MAC model. Different sources claim different things, but in fact it is a
model in its own right. In the 1960s and 1970s, the U.S. military and NSA did a
lot of research on the MAC model. DAC, which also sprang to life in the ’60s and
’70s, has its roots in the academic and commercial research laboratories. The
RBAC model, which started gaining popularity in the 1990s, can be used in
combination with MAC and DAC systems. For the most up-to-date information on
the RBAC model, go to http://csrc.nist.gov/rbac, which has documents that
describe an RBAC standard and independent model, with the goal of clearing up
this continual confusion.
In reality, operating systems can be created to use one, two, or all three of
these models in some form, but just because they can be used together does not
mean that they are not their own individual models with their own strict access
control rules.

Access Control Models
The main characteristics of the three different access control models are
important to understand.
• DAC Data owners decide who has access to resources, and ACLs are
used to enforce these access decisions.
• MAC Operating systems enforce the system’s security policy through
the use of security labels.
• RBAC Access decisions are based on each subject’s role and/or
functional position.

Rule-Based Access Control
Everyone will adhere to my rules.
Response: Who are you again?
Rule-based access control uses specific rules that indicate what can and cannot
happen between a subject and an object. It is based on the simple concept of “if X
then Y” programming rules, which can be used to provide finer-grained access
control to resources. Before a subject can access an object in a certain
circumstance, it must meet a set of predefined rules. This can be simple and
straightforward, as in, “If the user’s ID
matches the unique user ID value in the provided digital certificate, then the user can
gain access.” Or there could be a set of complex rules that must be met before a subject
can access an object. For example, “If the user is accessing the system between Monday
and Friday and between 8 A.M. and 5 P.M., and if the user’s security clearance equals or
dominates the object’s classification, and if the user has the necessary need to know,
then the user can access the object.”
Rule-based access control is not necessarily identity-based. The DAC model is iden-
tity-based. For example, an identity-based control would stipulate that Tom Jones can
read File1 and modify File2. So when Tom attempts to access one of these files, the
operating system will check his identity and compare it to the values within an ACL to
see if Tom can carry out the operations he is attempting. In contrast, here is a rule-based
example: A company may have a policy that dictates that e-mail attachments can only
be 5MB or smaller. This rule affects all users. If rule-based access control were identity-based, it would
mean that Sue can accept attachments of 10MB and smaller, Bob can accept attach-
ments 2MB and smaller, and Don can only accept attachments 1MB and smaller. This
would be a mess and too confusing. Rule-based access controls simplify this by setting
a rule that will affect all users across the board—no matter what their identity is.
Rule-based access allows a developer to define specific and detailed situations in
which a subject can or cannot access an object, and what that subject can do once access
is granted. Traditionally, rule-based access control has been used in MAC systems as an
enforcement mechanism of the complex rules of access that MAC systems provide. Today,
rule-based access is used in other types of systems and applications as well. Content
filtering uses if-then rules, which compare data or an activity to a long list of
conditions. For example, “If an e-mail message contains the word
‘Viagra,’ then disregard. If an e-mail message contains the words ‘sex’ and ‘free,’ then
disregard,” and so on.
Many routers and firewalls use rules to determine which types of packets are al-
lowed into a network and which are rejected. Rule-based access control is a type of
compulsory control, because the administrator sets the rules and the users cannot mod-
ify these controls.
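The 5MB-attachment and Monday-through-Friday rules described above might be sketched like this (illustrative only; the thresholds are the chapter's examples, and the function names are invented):

```python
# Rule-based access sketch: "if X then Y" rules that apply to everyone,
# regardless of identity.
from datetime import datetime

MAX_ATTACHMENT_MB = 5

def attachment_allowed(size_mb):
    # One rule for all users, no matter who sends the mail.
    return size_mb <= MAX_ATTACHMENT_MB

def login_window_open(now):
    # Allow access Monday through Friday, 8 A.M. to 5 P.M.
    return now.weekday() < 5 and 8 <= now.hour < 17

assert attachment_allowed(4.5) is True
assert attachment_allowed(10) is False
assert login_window_open(datetime(2024, 1, 3, 9, 0)) is True  # a Wednesday
```

Notice that no user identity appears anywhere in the decision, which is the defining trait of rule-based control.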
Constrained User Interfaces
Constrained user interfaces restrict users’ access abilities by not allowing them to re-
quest certain functions or information, or to have access to specific system resources.
Three major types of restricted interfaces exist: menus and shells, database views, and
physically constrained interfaces.
When menu and shell restrictions are used, the options users are given are the com-
mands they can execute. For example, if an administrator wants users to be able to ex-
ecute only one program, that program would be the only choice available on the menu.
This limits the users’ functionality. A shell is a type of virtual environment within a
system. It is the users’ interface to the operating system and works as a command inter-
preter. If restricted shells were used, the shell would contain only the commands the
administrator wants the users to be able to execute.

Many times, a database administrator will configure a database so users cannot see
fields that require a level of confidentiality. Database views are mechanisms used to re-
strict user access to data contained in databases. If the database administrator wants
managers to be able to view their employees’ work records but not their salary informa-
tion, then the salary fields would not be available to these types of users. Similarly, when
payroll employees look at the same database, they will be able to view the salary infor-
mation but not the work history information. This example is illustrated in Figure 3-17.
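The manager/payroll example can be sketched as two views projecting different columns of the same records. This is a hypothetical illustration; the field names and values are invented:

```python
# Database-view sketch: the same employee records, projected differently
# for managers and for payroll (as in Figure 3-17).
employees = [
    {"name": "Sam", "work_history": "5 years, Ops", "salary": 52000},
]

def manager_view(rows):
    # Managers see work records but not salary.
    return [{k: r[k] for k in ("name", "work_history")} for r in rows]

def payroll_view(rows):
    # Payroll sees salary but not work history.
    return [{k: r[k] for k in ("name", "salary")} for r in rows]

assert "salary" not in manager_view(employees)[0]
assert "work_history" not in payroll_view(employees)[0]
```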
Physically constraining a user interface can be implemented by providing only cer-
tain keys on a keypad or certain touch buttons on a screen. You see this when you get
money from an ATM machine. This device has a type of operating system that can ac-
cept all kinds of commands and configuration changes, but you are physically con-
strained from being able to carry out these functions. You are presented with buttons
that only enable you to withdraw, view your balance, or deposit funds. Period.
Access Control Matrix
The matrix—let’s see, should I take the red pill or the blue pill?
An access control matrix is a table of subjects and objects indicating what actions
individual subjects can take upon individual objects. Matrices are data structures that
programmers implement as table lookups that will be used and enforced by the operat-
ing system. Table 3-1 provides an example of an access control matrix.
This type of access control is usually an attribute of DAC models. The access rights
can be assigned directly to the subjects (capabilities) or to the objects (ACLs).
Capability Table
A capability table specifies the access rights a certain subject possesses pertaining to
specific objects. A capability table is different from an ACL because the subject is bound
to the capability table, whereas the object is bound to the ACL.
Figure 3-17 Different database views of the same tables

The capability corresponds to the subject’s row in the access control matrix. In Table
3-1, Diane’s capabilities are File1: read and execute; File2: read, write, and execute;
File3: no access. This outlines what Diane is capable of doing to each resource. An ex-
ample of a capability-based system is Kerberos. In this environment, the user is given a
ticket, which is his capability table. This ticket is bound to the user and dictates what
objects that user can access and to what extent. The access control is based on this
ticket, or capability table. Figure 3-18 shows the difference between a capability table
and an ACL.
A capability can be in the form of a token, ticket, or key. When a subject presents a
capability component, the operating system (or application) will review the access
rights and operations outlined in the capability component and allow the subject to
carry out just those functions. A capability component is a data structure that contains
a unique object identifier and the access rights the subject has to that object. The object
may be a file, array, memory segment, or port. Each user, process, and application in a
capability system has a list of capabilities.
Access Control Lists
Access control lists (ACLs) are used in several operating systems, applications, and router
configurations. They are lists of subjects that are authorized to access a specific
object, and they define what level of authorization is granted. Authorization can be
specific to an individual, group, or role.

Figure 3-18 A capability table is bound to a subject, whereas an ACL is bound to an object.

User      File1                     File2                     File3
Diane     Read and execute          Read, write, and execute  No access
Katie     Read and execute          Read                      No access
Chrissy   Read, write, and execute  Read and execute          Read
John      Read and execute          No access                 Read and write
Table 3-1 An Example of an Access Control Matrix
ACLs map values from the access control matrix to the object. Whereas a capability
corresponds to a row in the access control matrix, the ACL corresponds to a column of
the matrix. The ACL for File1 in Table 3-1 is shown in Table 3-2.
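The relationship between the matrix, capabilities, and ACLs can be sketched directly from Table 3-1: a capability is a row bound to a subject, and an ACL is a column bound to an object. The values below are the table's own; the code structure is an illustration:

```python
# Access control matrix from Table 3-1: subject rows x object columns.
matrix = {
    "Diane":   {"File1": {"read", "execute"},
                "File2": {"read", "write", "execute"},
                "File3": set()},
    "Katie":   {"File1": {"read", "execute"},
                "File2": {"read"},
                "File3": set()},
    "Chrissy": {"File1": {"read", "write", "execute"},
                "File2": {"read", "execute"},
                "File3": {"read"}},
    "John":    {"File1": {"read", "execute"},
                "File2": set(),
                "File3": {"read", "write"}},
}

def capability(subject):
    # A capability is bound to the subject: one row of the matrix.
    return matrix[subject]

def acl(obj):
    # An ACL is bound to the object: one column of the matrix.
    return {user: rights[obj] for user, rights in matrix.items() if rights[obj]}

assert capability("Diane")["File2"] == {"read", "write", "execute"}
assert acl("File1")["Katie"] == {"read", "execute"}   # matches Table 3-2
```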
Content-Dependent Access Control
This is sensitive information, so only Bob and I can look at it.
Response: Well, since Bob is your imaginary friend, I think I can live by that rule.
As the name suggests, with content-dependent access control, access to objects is deter-
mined by the content within the object. The earlier example pertaining to database views
showed how content-dependent access control can work. The content of the database
fields dictates which users can see specific information within the database tables.
Content-dependent filtering is used when corporations employ e-mail filters that
look for specific strings, such as “confidential,” “social security number,” “top secret,”
and any other types of words the company deems suspicious. Corporations also have
this in place to control web surfing—where filtering is done to look for specific words—
to try to figure out whether employees are gambling or looking at pornography.
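A content-dependent e-mail filter of this kind might be sketched as follows (illustrative only; the keyword list follows the chapter's examples):

```python
# Content-dependent filtering sketch: the decision depends on what is
# *inside* the object, not on who is sending or receiving it.
SUSPICIOUS = ["confidential", "social security number", "top secret"]

def release_email(body):
    lowered = body.lower()
    # Block the message if it contains any flagged string.
    return not any(term in lowered for term in SUSPICIOUS)

assert release_email("Lunch at noon?") is True
assert release_email("This is Top Secret material") is False
```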
Context-Dependent Access Control
First you kissed a parrot, then you threw your shoe, and then you did a jig. That’s the right se-
quence; you are allowed access.
Context-dependent access control differs from content-dependent access control in that
it makes access decisions based on the context of a collection of information rather than
on the sensitivity of the data. A system that is using context-dependent access control
“reviews the situation” and then makes a decision. For example, firewalls make context-
based access decisions when they collect state information on a packet before allowing it
into the network. A stateful firewall understands the necessary steps of communication
for specific protocols. For example, in a TCP connection, the sender sends a SYN packet,
the receiver sends a SYN/ACK, and then the sender acknowledges that packet with an
User File1
Diane Read and execute
Katie Read and execute
Chrissy Read, write, and execute
John Read and execute
Table 3-2
The ACL for File1

ACK packet. A stateful firewall understands these different steps and will not allow pack-
ets to go through that do not follow this sequence. So, if a stateful firewall receives a SYN/
ACK and there was not a previous SYN packet that correlates with this connection, the
firewall understands this is not right and disregards the packet. This is what stateful
means—something that understands the necessary steps of a dialog session. And this is
an example of context-dependent access control, where the firewall understands the con-
text of what is going on and includes that as part of its access decision.
Some software can track a user’s access requests in sequence and make decisions
based upon the previous access requests. For example, let’s say that we have a database
that contains information about our military’s mission and efforts. A user might have a
Secret clearance, and thus can access data with this level of classification. But if he ac-
cesses a data set that indicates a specific military troop location, then accesses a differ-
ent data set that indicates the new location this military troop will be deployed to, and
then accesses another data set that specifies the types of weapons that are being shipped
to the new troop location, he might be able to figure out information that is classified
as top secret, which is above his classification level. While it is okay that he knows that
there is a military troop located in Kuwait, it is not okay that he knows that this troop
is being deployed to Libya with fully armed drones. This is top secret information that
is outside his clearance level.
To ensure that a user cannot piece these different data sets together and figure out a
secret we don’t want him to know, but still allow him access to specific data sets so he
can carry out his job, we would need to implement software that can track his access
requests. Each access request he makes is based upon his previous requests. So while he
could access data set A, then data set B, he cannot access data sets A, B, and C.
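Such tracking software might be sketched as follows. This is a hypothetical illustration; the data set names and the forbidden combination are invented stand-ins for the troop-location example:

```python
# Context-dependent sketch: each request is judged against the subject's
# previous requests, to block an inference-enabling combination.
FORBIDDEN_COMBOS = [{"troop_location", "deployment_site", "weapons_shipment"}]

history = {}   # subject -> set of data sets already accessed

def request_access(subject, data_set):
    accessed = history.setdefault(subject, set())
    for combo in FORBIDDEN_COMBOS:
        # Deny if this request would complete a forbidden combination.
        if data_set in combo and combo <= accessed | {data_set}:
            return False
    accessed.add(data_set)
    return True

assert request_access("analyst", "troop_location") is True    # data set A
assert request_access("analyst", "deployment_site") is True   # data set B
assert request_access("analyst", "weapons_shipment") is False # A, B, and C
```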
Access Control Administration
Once an organization develops a security policy, supporting procedures, standards, and
guidelines (described in Chapter 2), it must choose the type of access control model:
DAC, MAC, or RBAC. After choosing a model, the organization must select and imple-
ment different access control technologies and techniques. Access control matrices; re-
stricted interfaces; and content-dependent, context-dependent, and rule-based controls
are just a few of the choices.
If the environment does not require a high level of security, the organization will
choose discretionary and/or role-based. The DAC model enables data owners to allow
other users to access their resources, so an organization should choose the DAC model
only if it is fully aware of what it entails. If an organization has a high turnover rate and/
or requires a more centralized access control method, the role-based model is more
appropriate. If the environment requires a higher security level and only the adminis-
trator should be able to grant access to resources, then a MAC model is the best choice.
What is left to work out is how the organization will administer the access control
model. Access control administration comes in two basic flavors: centralized and de-
centralized. The decision makers should understand both approaches so they choose
and implement the proper one to achieve the level of protection required.

Centralized Access Control Administration
I control who can touch the carrots and who can touch the peas.
Response: Could you leave now?
A centralized access control administration method is basically what it sounds like:
one entity (department or individual) is responsible for overseeing access to all
corporate resources. This entity configures the mechanisms that enforce access control;
processes any changes that are needed to a user’s access control profile; disables access
when necessary; and completely removes these rights when a user is terminated, leaves
the company, or moves to a different position. This type of administration provides a
consistent and uniform method of controlling users’ access rights. It supplies strict con-
trol over data because only one entity (department or individual) has the necessary
rights to change access control profiles and permissions. Although this provides for a
more consistent and reliable environment, it can be a slow one, because all changes
must be processed by one entity.
The following sections present some examples of centralized remote access control
technologies. Each of these authentication protocols is referred to as an AAA protocol,
which stands for authentication, authorization, and auditing. (Some resources have the
last A stand for accounting, but it is the same functionality—just a different name.)
Depending upon the protocol, there are different ways to authenticate a user in this
client/server architecture. The traditional authentication protocols are Password Au-
thentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP),
and a newer method referred to as Extensible Authentication Protocol (EAP). Each of
these authentication protocols is discussed at length in Chapter 6.
Access Control Techniques
Access control techniques are used to support the access control models.
• Access control matrix Table of subjects and objects that outlines their
access relationships
• Access control list Bound to an object and indicates what subjects
can access it and what operations they can carry out
• Capability table Bound to a subject and indicates what objects that
subject can access and what operations it can carry out
• Content-based access Bases access decisions on the sensitivity of the
data, not solely on subject identity
• Context-based access Bases access decisions on the state of the
situation, not solely on identity or content sensitivity
• Restricted interface Limits the user’s environment within the system,
thus limiting access to objects
• Rule-based access Restricts subjects’ access attempts by predefined rules

RADIUS
So, I have to run across half of a circle to be authenticated?
Response: Don’t know. Give it a try.
Remote Authentication Dial-In User Service (RADIUS) is a network protocol that
provides client/server authentication and authorization, and audits remote users. A net-
work may have access servers, a modem pool, DSL, ISDN, or T1 line dedicated for re-
mote users to communicate through. The access server requests the remote user’s logon
credentials and passes them back to a RADIUS server, which houses the usernames and
password values. The remote user is a client to the access server, and the access server is
a client to the RADIUS server.
Most ISPs today use RADIUS to authenticate customers before they are allowed ac-
cess to the Internet. The access server and customer’s software negotiate through a
handshake procedure and agree upon an authentication protocol (PAP, CHAP, or EAP).
The customer provides to the access server a username and password. This communica-
tion takes place over a PPP connection. The access server and RADIUS server commu-
nicate over the RADIUS protocol. Once the authentication is completed properly, the
customer’s system is given an IP address and connection parameters, and is allowed
access to the Internet. The access server notifies the RADIUS server when the session
starts and stops, for billing purposes.
RADIUS is also used within corporate environments to provide road warriors and
home users access to network resources. RADIUS allows companies to maintain user
profiles in a central database. When a user dials in and is properly authenticated, a
preconfigured profile is assigned to him to control what resources he can and cannot
access. This technology allows companies to have a single administered entry point,
which provides standardization in security and a simplistic way to track usage and net-
work statistics.
RADIUS was developed by Livingston Enterprises for its network access server prod-
uct series, but was then published as a standard. This means it is an open protocol that
any vendor can use and manipulate so it will work within its individual products. Be-
cause RADIUS is an open protocol, it can be used in different types of implementa-
tions. The format of configurations and user credentials can be held in LDAP servers,
various databases, or text files. Figure 3-19 shows some examples of possible RADIUS
implementations.
TACACS
Terminal Access Controller Access Control System (TACACS) has a very funny name. Not
funny ha-ha, but funny “huh?” TACACS has been through three generations: TACACS,
Extended TACACS (XTACACS), and TACACS+. TACACS combines its authentication
and authorization processes; XTACACS separates authentication, authorization, and
auditing processes; and TACACS+ is XTACACS with extended two-factor user authenti-
cation. TACACS uses fixed passwords for authentication, while TACACS+ allows users
to employ dynamic (one-time) passwords, which provides more protection.

NOTE TACACS+ is really not a new generation of TACACS and XTACACS;
it is a brand-new protocol that provides similar functionality and shares
the same naming scheme. Because it is a totally different protocol, it is not
backward-compatible with TACACS or XTACACS.
TACACS+ provides basically the same functionality as RADIUS with a few differ-
ences in some of its characteristics. First, TACACS+ uses TCP as its transport protocol,
while RADIUS uses UDP. “So what?” you may be thinking. Well, any software that is
developed to use UDP as its transport protocol has to be “fatter” with intelligent code
that will look out for the items that UDP will not catch. Since UDP is a connectionless
protocol, it will not detect or correct transmission errors. So RADIUS must have the
necessary code to detect packet corruption, long timeouts, or dropped packets. Since
the developers of TACACS+ chose to use TCP, the TACACS+ software does not need to
have the extra code to look for and deal with these transmission problems. TCP is a
connection-oriented protocol, and that is its job and responsibility.

Figure 3-19 Environments can implement different RADIUS infrastructures.
RADIUS encrypts the user’s password only as it is being transmitted from the RADIUS
client to the RADIUS server. Other information, such as the username, accounting,
and authorized services, is passed in cleartext. This is an open invitation for attackers to
capture session information for replay attacks. Vendors who integrate RADIUS into
their products need to understand these weaknesses and integrate other security mech-
anisms to protect against these types of attacks. TACACS+ encrypts all of this data be-
tween the client and server and thus does not have the vulnerabilities inherent in the
RADIUS protocol.
The RADIUS protocol combines the authentication and authorization functional-
ity. TACACS+ uses a true authentication, authorization, and accounting/audit (AAA)
architecture, which separates the authentication, authorization, and accounting func-
tionalities. This gives a network administrator more flexibility in how remote users are
authenticated. For example, if Tom is a network administrator and has been assigned
the task of setting up remote access for users, he must decide between RADIUS and
TACACS+. If the current environment already authenticates all of the local users through
a domain controller using Kerberos, then Tom can configure the remote users to be
authenticated in this same manner, as shown in Figure 3-20. Instead of having to main-
tain a remote access server database of remote user credentials and a database within
Active Directory for local users, Tom can just configure and maintain one database. The
separation of authentication, authorization, and accounting functionality provides this
capability. TACACS+ also enables the network administrator to define more granular
user profiles, which can control the actual commands users can carry out.
Remember that RADIUS and TACACS+ are both protocols, and protocols are just
agreed-upon ways of communication. When a RADIUS client communicates with a
RADIUS server, it does so through the RADIUS protocol, which is really just a set of
defined fields that will accept certain values. These fields are referred to as attribute-
value pairs (AVPs). As an analogy, suppose I send you a piece of paper that has several
different boxes drawn on it. Each box has a headline associated with it: first name, last
name, hair color, shoe size. You fill in these boxes with your values and send it back to
me. This is basically how protocols work; the sending system just fills in the boxes
(fields) with the necessary information for the receiving system to extract and process.
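The fill-in-the-boxes idea can be sketched as follows. This is a loose illustration of attribute-value pairs, not the actual RADIUS packet format; the two attribute names mirror common RADIUS attributes, and the function names are invented:

```python
# Attribute-value pair sketch: a protocol message is just a set of
# defined fields (attributes) that the sender fills with values and
# the receiver extracts.
def build_request(user_name, nas_ip):
    return {
        "User-Name": user_name,      # the "boxes" the client fills in
        "NAS-IP-Address": nas_ip,
    }

def extract(message, attribute):
    # The receiving system pulls the value out of the named field.
    return message.get(attribute)

msg = build_request("tom", "10.0.0.1")
assert extract(msg, "User-Name") == "tom"
```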
Since TACACS+ allows for more granular control on what users can and cannot do,
TACACS+ has more AVPs, which allows the network administrator to define ACLs, fil-
ters, user privileges, and much more. Table 3-3 points out the differences between RA-
DIUS and TACACS+.

So, RADIUS is the appropriate protocol when simplistic username/password au-
thentication can take place and users only need an Accept or Deny for obtaining access,
as in ISPs. TACACS+ is the better choice for environments that require more sophisti-
cated authentication steps and tighter control over more complex authorization activi-
ties, as in corporate networks.
Figure 3-20 TACACS+ works in a client/server model.

Diameter
If we create our own technology, we get to name it any goofy thing we want!
Response: I like Snizzernoodle.
Diameter is a protocol that has been developed to build upon the functionality of
RADIUS and overcome many of its limitations. The creators of this protocol decided to
call it Diameter as a play on the term RADIUS—as in the diameter is twice the radius.
Diameter is another AAA protocol that provides the same type of functionality as
RADIUS and TACACS+ but also provides more flexibility and capabilities to meet the
new demands of today’s complex and diverse networks. At one time, all remote com-
munication took place over PPP and SLIP connections and users authenticated them-
selves through PAP or CHAP. Those were simpler, happier times when our parents had
to walk uphill both ways to school wearing no shoes. As with life, technology has be-
come much more complicated and there are more devices and protocols to choose
from than ever before. Today, we want our wireless devices and smart phones to be able
to authenticate themselves to our networks, and we use roaming protocols, Mobile IP,
Ethernet over PPP, Voice over IP (VoIP), and other crazy stuff that the traditional AAA
protocols cannot keep up with. So the smart people came up with a new AAA protocol,
Diameter, that can deal with these issues and many more.
                     RADIUS                               TACACS+
Packet delivery      UDP                                  TCP
Packet encryption    Encrypts only the password from      Encrypts all traffic between
                     the RADIUS client to the server.     the client and server.
AAA support          Combines authentication and          Uses the AAA architecture,
                     authorization services.              separating authentication,
                                                          authorization, and auditing.
Multiprotocol        Works over PPP connections.          Supports other protocols, such
support                                                   as AppleTalk, NetBIOS, and IPX.
Responses            Uses single-challenge response       Uses multiple-challenge response
                     when authenticating a user, which    for each of the AAA processes.
                     is used for all AAA activities.      Each AAA activity must be
                                                          authenticated.
Table 3-3 Specific Differences Between These Two AAA Protocols
Mobile IP
This technology allows a user to move from one network to another and still use
the same IP address. It is an improvement upon the IP protocol because it allows
a user to have a home IP address, associated with his home network, and a care-of
address. The care-of address changes as he moves from one network to the other.
All traffic that is addressed to his home IP address is forwarded to his care-of
address.

The Diameter protocol consists of two portions. The first is the base protocol, which
provides the secure communication among Diameter entities, feature discovery, and
version negotiation. The second is the extensions, which are built on top of the base
protocol to allow various technologies to use Diameter for authentication.
Until the conception of Diameter, the IETF had individual working groups that
defined how Voice over IP (VoIP), Fax over IP (FoIP), Mobile IP, and remote authentica-
tion protocols work. Defining and implementing them individually in any network can
easily result in confusion and interoperability problems. It requires customers to roll
out and configure several different policy servers and increases the cost with each new
added service. Diameter provides a base protocol, which defines header formats, secu-
rity options, commands, and AVPs. This base protocol allows for extensions to tie in
other services, such as VoIP, FoIP, Mobile IP, wireless, and cell phone authentication. So
Diameter can be used as an AAA protocol for all of these different uses.
As an analogy, consider a scenario in which ten people all need to get to the same
hospital, which is where they all work. They all have different jobs (doctor, lab techni-
cian, nurse, janitor, and so on), but they all need to end up at the same location. So,
they can either all take their own cars and their own routes to the hospital, which takes
up more hospital parking space and requires the gate guard to authenticate each and
every car, or they can take a bus. The bus is the common element (base protocol) to get
the individuals (different services) to the same location (networked environment). Di-
ameter provides the common AAA and security framework that different services can
work within, as illustrated in Figure 3-21.
RADIUS and TACACS+ are client/server protocols, which means the server portion
cannot send unsolicited commands to the client portion. The server portion can only
speak when spoken to. Diameter is a peer-based protocol that allows either end to initi-
ate communication. This functionality allows the Diameter server to send a message to
the access server to request the user to provide another authentication credential if she
is attempting to access a secure resource.
Diameter is not directly backward-compatible with RADIUS but provides an up-
grade path. Diameter uses TCP and AVPs, and provides proxy server support. It has
better error detection and correction functionality than RADIUS, as well as better
failover properties, and thus provides better network resilience.
Figure 3-21 Diameter provides an AAA architecture for several services.

CISSP All-in-One Exam Guide
240
Diameter can provide AAA functionality for other protocols and services because
it has a large AVP set. RADIUS has 2^8 (256) possible AVPs, while Diameter has 2^32
(over four billion). Recall from earlier in the chapter that AVPs are
like boxes drawn on a piece of paper that outline how two entities can communicate
back and forth. So, more AVPs allow for more functionality and services to exist and
communicate between systems.
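To make the AVP idea concrete, here is a simplified sketch of how a single Diameter AVP is laid out on the wire, following the base protocol's header layout (a 32-bit AVP code, 8 flag bits, and a 24-bit length covering header plus data, with the data padded to a 4-byte boundary). The helper function name is invented for illustration, and vendor-specific fields are omitted:

```python
import struct

def encode_avp(code, data, flags=0x40):
    """Pack a simplified Diameter AVP: 32-bit code, 8-bit flags,
    24-bit length (header + data), then data padded to a 4-byte
    boundary. Mirrors the base-protocol AVP layout; omits vendor IDs."""
    length = 8 + len(data)                       # 8-byte header + payload
    header = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    padding = b"\x00" * ((4 - len(data) % 4) % 4)
    return header + data + padding

avp = encode_avp(1, b"alice")   # AVP code 1 is User-Name in the base protocol
print(len(avp))                 # 8 header + 5 data + 3 padding bytes = 16
```

Because every service extension speaks in AVPs like this one, the enormous AVP code space is what lets Diameter carry authentication data for VoIP, Mobile IP, and the rest within one framework.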
Diameter provides the following AAA functionality:
• Authentication
• PAP, CHAP, EAP
• End-to-end protection of authentication information
• Replay attack protection
• Authorization
• Redirects, secure proxies, relays, and brokers
• State reconciliation
• Unsolicited disconnect
• Reauthorization on demand
• Accounting
• Reporting, roaming operations (ROAMOPS) accounting, event monitoring
You may not be familiar with Diameter because it is relatively new. It probably won’t
be taking over the world tomorrow, but it will be used by environments that need to
provide the type of services being demanded of them, and then slowly seep down into
corporate networks as more products are available. RADIUS has been around for a long
time and has served its purpose well, so don’t expect it to exit the stage any time soon.
Decentralized Access Control Administration
Okay, everyone just do whatever you want.
A decentralized access control administration method gives control of access to the
people closer to the resources—the people who may better understand who should and
should not have access to certain files, data, and resources. In this approach, it is often
the functional manager who assigns access control rights to employees. An organiza-
tion may choose to use a decentralized model if its managers have better judgment re-
garding which users should be able to access different resources, and there is no business
requirement that dictates strict control through a centralized body is necessary.
Changes can happen faster through this type of administration because not just one
entity is making changes for the whole organization. However, there is a possibility that
conflicts of interest could arise that may not benefit the organization. Because no single
entity controls access as a whole, different managers and departments can practice se-
curity and access control in different ways. This does not provide uniformity and fair-
ness across the organization. One manager could be too busy with daily tasks and
decide it is easier to let everyone have full control over all the systems in the depart-
ment. Another department may practice a stricter and detail-oriented method of con-
trol by giving employees only the level of permissions needed to fulfill their tasks.

Also, certain controls can overlap, in which case actions may not be properly pro-
scribed or restricted. If Mike is part of the accounting group and recently has been under
suspicion for altering personnel account information, the accounting manager may re-
strict his access to these files to read-only access. However, the accounting manager does
not realize that Mike still has full-control access under the network group he is also a
member of. This type of administration does not provide methods for consistent con-
trol, as a centralized method would. Another issue that comes up with decentralized
administration is lack of proper consistency pertaining to the company’s protection. For
example, when Sean is fired for looking at pornography on his computer, some of the
groups Sean is a member of may not disable his account. So, Sean may still have access
after he is terminated, which could cause the company heartache if Sean is vindictive.
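The Mike and Sean problems above boil down to a user's effective rights being the union of every group's grants, which no single manager sees in isolation. A hypothetical sketch, with invented group names and permissions:

```python
# Sketch of why overlapping group memberships undermine decentralized
# control: effective rights are the union of every group's grants, so one
# manager's restriction can be silently overridden by another group.

group_permissions = {
    "accounting": {"personnel_files": {"read"}},           # manager cut Mike to read-only
    "network":    {"personnel_files": {"read", "write"}},  # forgotten full access
}

def effective_rights(user_groups, resource):
    rights = set()
    for group in user_groups:
        rights |= group_permissions.get(group, {}).get(resource, set())
    return rights

print(effective_rights(["accounting"], "personnel_files"))             # read only
print(effective_rights(["accounting", "network"], "personnel_files"))  # write sneaks back in
```

A centralized administration body, by contrast, would compute and review these effective rights in one place.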
Access Control Methods
Access controls can be implemented at various layers of a network and individual sys-
tems. Some controls are core components of operating systems or embedded into ap-
plications and devices, and some security controls require third-party add-on packages.
Although different controls provide different functionality, they should all work to-
gether to keep the bad guys out and the good guys in, and to provide the necessary
quality of protection.
Most companies do not want people to be able to walk into their building arbi-
trarily, sit down at an employee’s computer, and access network resources. Companies
also don’t want every employee to be able to access all information within the compa-
ny, such as human resource records, payroll information, and trade secrets. Companies
want some assurance that employees who can access confidential information will
have some restrictions put upon them, making sure, say, a disgruntled employee does
not have the ability to delete financial statements, tax information, and top-secret data
that would put the company at risk. Several types of access controls prevent these things
from happening, as discussed in the sections that follow.
Access Control Layers
Access control consists of three broad categories: administrative, technical, and physi-
cal. Each category has different access control mechanisms that can be carried out man-
ually or automatically. All of these access control mechanisms should work in concert
with each other to protect an infrastructure and its data.
Each category of access control has several components that fall within it, as
shown next:
• Administrative Controls
• Policy and procedures
• Personnel controls
• Supervisory structure
• Security-awareness training
• Testing

• Physical Controls
• Network segregation
• Perimeter security
• Computer controls
• Work area separation
• Data backups
• Cabling
• Control zone
• Technical Controls
• System access
• Network architecture
• Network access
• Encryption and protocols
• Auditing
The following sections explain each of these categories and components and how
they relate to access control.
Administrative Controls
Senior management must decide what role security will play in the organization, in-
cluding the security goals and objectives. These directives will dictate how all the sup-
porting mechanisms will fall into place. Basically, senior management provides the
skeleton of a security infrastructure and then appoints the proper entities to fill in
the rest.
The first piece to building a security foundation within an organization is a security
policy. It is management’s responsibility to construct a security policy and delegate the
development of the supporting procedures, standards, and guidelines; indicate which
personnel controls should be used; and specify how testing should be carried out to
ensure all pieces fulfill the company’s security goals. These items are administrative
controls and work at the top layer of a hierarchical access control model. (Administra-
tive controls are examined in detail in Chapter 2, but are mentioned here briefly to
show the relationship to logical and physical controls pertaining to access control.)
Personnel Controls
Personnel controls indicate how employees are expected to interact with security mech-
anisms and address noncompliance issues pertaining to these expectations. These con-
trols indicate what security actions should be taken when an employee is hired, termi-
nated, suspended, moved into another department, or promoted. Specific procedures
must be developed for each situation, and many times the human resources and legal
departments are involved with making these decisions.

Supervisory Structure
Management must construct a supervisory structure in which each employee has a supe-
rior to report to, and that superior is responsible for that employee’s actions. This forces
management members to be responsible for employees and take a vested interest in
their activities. If an employee is caught hacking into a server that holds customer credit
card information, that employee and her supervisor will face the consequences. This is
an administrative control that aids in fighting fraud and enforcing proper control.
Security-Awareness Training
How do you know they know what they are supposed to know?
In many organizations, management has a hard time spending money and allocat-
ing resources for items that do not seem to affect the bottom line: profitability. This is
why training traditionally has been given low priority, but as computer security be-
comes more and more of an issue to companies, they are starting to recognize the value
of security-awareness training.
A company’s security depends upon technology and people, and people are usually
the weakest link and cause the most security breaches and compromises. If users under-
stand how to properly access resources, why access controls are in place, and the rami-
fications for not using the access controls properly, a company can reduce many types
of security incidents.
Testing
All security controls, mechanisms, and procedures must be tested on a periodic basis to
ensure they properly support the security policy, goals, and objectives set for them. This
testing can be a drill to test reactions to a physical attack or disruption of the network,
a penetration test of the firewalls and perimeter network to uncover vulnerabilities, a
query to employees to gauge their knowledge, or a review of the procedures and stan-
dards to make sure they still align with implemented business or technology changes.
Because change is constant and environments continually evolve, security procedures
and practices should be continually tested to ensure they align with management’s ex-
pectations and stay up-to-date with each addition to the infrastructure. It is manage-
ment’s responsibility to make sure these tests take place.
Physical Controls
We will go much further into physical security in Chapter 5, but it is important to un-
derstand certain physical controls must support and work with administrative and
technical (logical) controls to supply the right degree of access control. Examples of
physical controls include having a security guard verify individuals’ identities prior to
entering a facility, erecting fences around the exterior of the facility, making sure server
rooms and wiring closets are locked and protected from environmental elements (hu-
midity, heat, and cold), and allowing only certain individuals to access work areas that
contain confidential information. Some physical controls are introduced next, but
again, these and more physical mechanisms are explored in depth in Chapter 5.

Network Segregation
I have used my Lego set to outline the physical boundaries between you and me.
Response: Can you make the walls a little higher please?
Network segregation can be carried out through physical and logical means. A net-
work might be physically designed to have all AS/400 computers and databases in a
certain area. This area may have doors with security swipe cards that allow only indi-
viduals who have a specific clearance to access this section and these computers. An-
other section of the network may contain web servers, routers, and switches, and yet
another network portion may have employee workstations. Each area would have the
necessary physical controls to ensure that only the permitted individuals have access
into and out of those sections.
Perimeter Security
How perimeter security is implemented depends upon the company and the security
requirements of that environment. One environment may require employees to be
authorized by a security guard after showing a security badge that contains picture
identification before being allowed to enter a section. Another environment may require no
authentication process and let anyone and everyone into different sections. Perimeter
security can also encompass closed-circuit TVs that scan the parking lots and waiting
areas, fences surrounding a building, the lighting of walkways and parking areas, motion
detectors, sensors, alarms, and the location and visual appearance of a building. These
are examples of perimeter security mechanisms that provide physical access control by
providing protection for individuals, facilities, and the components within facilities.
Computer Controls
Each computer can have physical controls installed and configured, such as locks on
the cover so the internal parts cannot be stolen, removal of USB and CD-ROM drives
to prevent copying of confidential information, or implementation of a
protection device that reduces the electrical emissions to thwart attempts to gather in-
formation through airwaves.
Work Area Separation
Some environments might dictate that only particular individuals can access certain
areas of the facility. For example, research companies might not want office personnel
to be able to enter laboratories so they can’t disrupt experiments or access test data.
Most network administrators allow only network staff in the server rooms and wiring
closets to reduce the possibilities of errors or sabotage attempts. In financial institu-
tions, only certain employees can enter the vaults or other restricted areas. These ex-
amples of work area separation are physical controls used to support access control and
the overall security policy of the company.
Cabling
Different types of cabling can be used to carry information throughout a network.
Some cable types have sheaths that protect the data from being affected by the electrical
interference of other devices that emit electrical signals. Some types of cable have
protection material around each individual wire to ensure there is no crosstalk between the
different wires. All cables need to be routed throughout the facility so they are not in
the way of employees or exposed to dangers such as being cut, burnt, crimped, or
eavesdropped upon.
Control Zone
The company facility should be split up into zones depending upon the sensitivity of the
activity that takes place per zone. The front lobby could be considered a public area,
the product development area could be considered top secret, and the executive of-
fices could be considered secret. It does not matter what classifications are used, but it
should be understood that some areas are more sensitive than others, which will re-
quire different access controls based on the needed protection level. The same is true
of the company network. It should be segmented, and access controls should be cho-
sen for each zone based on the criticality of devices and the sensitivity of data being
processed.
Technical Controls
Technical controls are the software tools used to restrict subjects’ access to objects. They
are core components of operating systems, add-on security packages, applications, net-
work hardware devices, protocols, encryption mechanisms, and access control matri-
ces. These controls work at different layers within a network or system and need to
maintain a synergistic relationship to ensure there is no unauthorized access to re-
sources and that the resources’ availability, integrity, and confidentiality are guaranteed.
Technical controls protect the integrity and availability of resources by limiting the
number of subjects that can access them and protecting the confidentiality of resources
by preventing disclosure to unauthorized subjects. The following sections explain how
some technical controls work and where they are implemented within an environment.
System Access
Different types of controls and security mechanisms control how a computer is ac-
cessed. If an organization is using a MAC architecture, the clearance of a user is identi-
fied and compared to the resource’s classification level to verify that this user can access
the requested object. If an organization is using a DAC architecture, the operating sys-
tem checks to see if a user has been granted permission to access this resource. The
sensitivity of data, clearance level of users, and users’ rights and permissions are used as
logical controls to control access to a resource.
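The two checks just described can be sketched side by side. The levels, ACL layout, and function names below are illustrative simplifications, not a full MAC model (a real one, such as Bell-LaPadula, also constrains writes and uses compartments):

```python
# Minimal sketch of the MAC vs. DAC access decisions described above.
# Labels and rules are illustrative only.

LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def mac_can_read(subject_clearance, object_classification):
    """MAC: the system compares the subject's clearance to the
    object's classification label; users cannot change this policy."""
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def dac_can_access(acl, user, permission):
    """DAC: an owner-maintained ACL decides, identity by identity."""
    return permission in acl.get(user, set())

print(mac_can_read("secret", "confidential"))             # clearance dominates label
print(dac_can_access({"bob": {"read"}}, "bob", "write"))  # not granted in the ACL
```

The structural difference is visible in the sketch: the MAC decision is computed from labels the operating system enforces, while the DAC decision is looked up in data a resource owner controls.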
Many types of technical controls enable a user to access a system and the resources
within that system. A technical control may be a username and password combination,
a Kerberos implementation, biometrics, public key infrastructure (PKI), RADIUS,
TACACS+, or authentication using a smart card through a reader connected to a system.
These technologies verify the user is who he says he is by using different types of au-
thentication methods. Once a user is properly authenticated, he can be authorized and
allowed access to network resources. These technologies are addressed in further detail
in future chapters, but for now understand that system access is a type of technical con-
trol that can enforce access control objectives.

Network Architecture
The architecture of a network can be constructed and enforced through several logical
controls to provide segregation and protection of an environment. Whereas a network
can be segregated physically by walls and location, it can also be segregated logically
through IP address ranges and subnets and by controlling the communication flow
between the segments. Often, it is important to control how one segment of a network
communicates with another segment.
Figure 3-22 is an example of how an organization may segregate its network and
determine how network segments can communicate. This example shows that the or-
ganization does not want the internal network and the demilitarized zone (DMZ) to
have open and unrestricted communication paths. There is usually no reason for inter-
nal users to have direct access to the systems in the DMZ, and cutting off this type of
communication reduces the possibilities of internal attacks on those systems. Also, if
an attack comes from the Internet and successfully compromises a system on the DMZ,
the attacker must not be able to easily access the internal network, which this type of
logical segregation protects against.
This example also shows how the management segment can communicate with all
other network segments, but those segments cannot communicate in return. The seg-
mentation is implemented because the management consoles that control the firewalls
and IDSs reside in the management segment, and there is no reason for users, other
than the administrator, to have access to these computers.
A network can be segregated physically and logically. This type of segregation and
restriction is accomplished through logical controls.
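The one-way communication policy described for Figure 3-22 amounts to a whitelist of which segment may initiate traffic to which. A minimal sketch, with invented segment names and rules:

```python
# Sketch of the one-way segment policy described above: the management
# segment may initiate traffic to any segment, but no segment may
# initiate traffic back to it, and the internal network may not reach
# the DMZ. Segment names and rules are illustrative.

ALLOWED = {
    ("management", "internal"),
    ("management", "dmz"),
    ("internet", "dmz"),        # only public-facing services are reachable
}

def may_initiate(src_segment, dst_segment):
    """Allow a new connection only if the (src, dst) pair is whitelisted."""
    return (src_segment, dst_segment) in ALLOWED

print(may_initiate("management", "dmz"))  # True: consoles manage the DMZ devices
print(may_initiate("dmz", "management"))  # False: a compromised DMZ host is contained
print(may_initiate("internal", "dmz"))    # False: internal users are cut off
```

In practice these pairs become firewall and router rules, but the decision logic is the same default-deny lookup.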
Network Access
Systems have logical controls that dictate who can and cannot access them and what
those individuals can do once they are authenticated. This is also true for networks.
Routers, switches, firewalls, and gateways all work as technical controls to enforce ac-
cess restriction into and out of a network and access to the different segments within
the network. If an attacker from the Internet wants to gain access to a specific computer,
chances are she will have to hack through a firewall, router, and a switch just to be able
to start an attack on a specific computer that resides within the internal network. Each
device has its own logical controls that make decisions about what entities can access
them and what type of actions they can carry out.
Access to different network segments should be granular in nature. Routers and fire-
walls can be used to ensure that only certain types of traffic get through to each segment.
Encryption and Protocols
Encryption and protocols work as technical controls to protect information as it passes
throughout a network and resides on computers. They ensure that the information is
received by the correct entity, and that it is not modified during transmission. These
logical controls can preserve the confidentiality and integrity of data and enforce spe-
cific paths for communication to take place. (Chapter 7 is dedicated to cryptography
and encryption mechanisms.)

Auditing
Auditing tools are technical controls that track activity within a network, on a network
device, or on a specific computer. Even though auditing is not an activity that will deny
an entity access to a network or computer, it will track activities so a network adminis-
trator can understand the types of access that took place, identify a security breach, or
Figure 3-22 Technical network segmentation controls how different network segments
communicate.

warn the administrator of suspicious activity. This information can be used to point out
weaknesses of other technical controls and help the administrator understand where
changes must be made to preserve the necessary security level within the environment.
NOTE Many of the subjects touched on in these sections will be fully
addressed and explained in later chapters. What is important to understand is
that there are administrative, technical, and physical controls that work toward
providing access control, and you should know several examples of each for
the exam.
Accountability
If you do wrong, you will pay.
Auditing capabilities ensure users are accountable for their actions, verify that the
security policies are enforced, and can be used as investigation tools. There are several
reasons why network administrators and security professionals want to make sure ac-
countability mechanisms are in place and configured properly: to be able to track bad
deeds back to individuals, detect intrusions, reconstruct events and system conditions,
provide legal recourse material, and produce problem reports. Audit documentation
and log files hold a mountain of information—the trick is usually deciphering it and
presenting it in a useful and understandable format.
Accountability is tracked by recording user, system, and application activities. This
recording is done through auditing functions and mechanisms within an operating
system or application. Audit trails contain information about operating system activi-
ties, application events, and user actions. Audit trails can be used to verify the health of
a system by checking performance information or certain types of errors and condi-
tions. After a system crashes, a network administrator often will review audit logs to try
and piece together the status of the system and attempt to understand what events
could be attributed to the disruption.
Audit trails can also be used to provide alerts about any suspicious activities that
can be investigated at a later time. In addition, they can be valuable in determining
exactly how far an attack has gone and the extent of the damage that may have been
caused. It is important to make sure a proper chain of custody is maintained to ensure
any data collected can later be properly and accurately represented in case it needs to be
used for later events such as criminal proceedings or investigations.
It is a good idea to keep the following in mind when dealing with auditing:
• Store the audits securely.
• The right audit tools will keep the size of the logs under control.
• The logs must be protected from any unauthorized changes in order to
safeguard data.
• Train the right people to review the data in the right manner.
• Make sure the ability to delete logs is only available to administrators.
• Logs should contain activities of all high-privileged accounts (root,
administrator).

An administrator configures what actions and events are to be audited and logged.
In a high-security environment, the administrator would configure more activities to be
captured and set the threshold of those activities to be more sensitive. The events can
be reviewed to identify where breaches of security occurred and if the security policy
has been violated. If the environment does not require such levels of security, the events
analyzed would be fewer, with less demanding thresholds.
Items and actions to be audited can become an endless list. A security professional
should be able to assess an environment and its security goals, know what actions
should be audited, and know what is to be done with that information after it is cap-
tured—without wasting too much disk space, CPU power, and staff time. The following
gives a broad overview of the items and actions that can be audited and logged:
• System-level events
• System performance
• Logon attempts (successful and unsuccessful)
• Logon ID
• Date and time of each logon attempt
• Lockouts of users and terminals
• Use of administration utilities
• Devices used
• Functions performed
• Requests to alter configuration files
• Application-level events
• Error messages
• Files opened and closed
• Modifications of files
• Security violations within application
• User-level events
• Identification and authentication attempts
• Files, services, and resources used
• Commands initiated
• Security violations
The threshold (clipping level) and parameters for each of these items must be con-
figured. For example, an administrator can audit each logon attempt or just each failed
logon attempt. System performance can look at the amount of memory used within an
eight-hour period or the memory, CPU, and hard drive space used within an hour.
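The clipping-level idea can be sketched as a counter that stays quiet until the threshold is exceeded; the threshold value and event names below are illustrative:

```python
from collections import Counter

# Sketch of a clipping level for failed logons: individual failures are
# tolerated as normal noise; only when a user's count exceeds the
# threshold is an auditable alert raised. Threshold is illustrative.

CLIPPING_LEVEL = 3
failed = Counter()

def record_failed_logon(user):
    """Count a failed logon; return an alert only past the clipping level."""
    failed[user] += 1
    if failed[user] > CLIPPING_LEVEL:
        return f"ALERT: {user} exceeded {CLIPPING_LEVEL} failed logons"
    return None

alerts = [record_failed_logon("mike") for _ in range(5)]
print([a for a in alerts if a])  # only the 4th and 5th attempts trigger alerts
```

Raising or lowering `CLIPPING_LEVEL` is exactly the sensitivity tuning described above for high- versus low-security environments.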
Intrusion detection systems (IDSs) continually scan audit logs for suspicious activ-
ity. If an intrusion or harmful event takes place, audit logs are usually kept to be used
later to prove guilt and prosecute if necessary. If severe security events take place, many

times the IDS will alert the administrator or staff member so they can take proper ac-
tions to end the destructive activity. If a dangerous virus is identified, administrators
may take the mail server offline. If an attacker is accessing confidential information
within the database, this computer may be temporarily disconnected from the network
or Internet. If an attack is in progress, the administrator may want to watch the actions
taking place so she can track down the intruder. IDSs can watch for this type of activity
in real time and/or scan audit logs for specific patterns or behaviors.
Review of Audit Information
It does no good to collect it if you don’t look at it.
Audit trails can be reviewed manually or through automated means—either way,
they must be reviewed and interpreted. If an organization reviews audit trails manually,
it needs to establish a system of how, when, and why they are viewed. Usually audit logs
are very popular items right after a security breach, unexplained system action, or sys-
tem disruption. An administrator or staff member rapidly tries to piece together the
activities that led up to the event. This type of audit review is event-oriented. Audit trails
can also be viewed periodically to watch for unusual behavior of users or systems, and
to help understand the baseline and health of a system. Then there is a real-time, or
near real-time, audit analysis that can use an automated tool to review audit informa-
tion as it is created. Administrators should have a scheduled task of reviewing audit
data. The audit material usually needs to be parsed and saved to another location for a
certain time period. This retention information should be stated in the company’s se-
curity policy and procedures.
Reviewing audit information manually can be overwhelming. There are applica-
tions and audit trail analysis tools that reduce the volume of audit logs to review and
improve the efficiency of manual review procedures. A majority of the time, audit logs
contain information that is unnecessary, so these tools parse out specific events and
present them in a useful format.
An audit-reduction tool does just what its name suggests—reduces the amount of
information within an audit log. This tool discards mundane task information and re-
cords system performance, security, and user functionality information that can be use-
ful to a security professional or administrator.
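An audit-reduction pass can be sketched as a simple filter that keeps only the event types a reviewer cares about; the event names below are invented for illustration:

```python
# Sketch of an audit-reduction pass: discard routine entries and keep
# only security-relevant event types. Event names are illustrative.

INTERESTING = {"logon_failure", "privilege_use", "policy_violation"}

def reduce_audit(entries):
    """Return only the entries whose event type a reviewer cares about."""
    return [e for e in entries if e["event"] in INTERESTING]

log = [
    {"event": "heartbeat", "host": "srv1"},
    {"event": "logon_failure", "host": "srv1", "user": "mike"},
    {"event": "file_open", "host": "srv2"},
    {"event": "privilege_use", "host": "srv2", "user": "root"},
]
print(reduce_audit(log))  # only two entries survive the reduction
```

Real audit-reduction tools apply far richer rules, but the effect is the same: a reviewer sees a short, relevant list instead of the raw log volume.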
Today, more organizations are implementing security event management (SEM) sys-
tems, also called security information and event management (SIEM) systems. These
products gather logs from various devices (servers, firewalls, routers, etc.) and attempt
to correlate the log data and provide analysis capabilities. Continuously reviewing logs
manually for suspicious activity is not only mind-numbing, it is close to impossible to
do successfully. So many packets and network communication data sets pass along a
network that humans cannot collect all the data in real or near-real time, analyze it,
identify current attacks, and react; it is simply overwhelming. We also have different
types of systems on a network (routers, firewalls, IDS, IPS,
servers, gateways, proxies) collecting logs in various proprietary formats, which requires
centralization, standardization, and normalization. Log formats are different per prod-
uct type and vendor. Juniper network device systems create logs in a different format
than Cisco systems, which are different from Palo Alto and Barracuda firewalls. It is

important to gather logs from various systems within an environment so that
some type of situational awareness can take place. Once the logs are gathered,
intelligence routines need to be run against them so that data mining (identifying patterns)
can take place. The goal is to piece together seemingly unrelated event data so that the
security team can fully understand what is taking place within the network and react
properly.
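The centralization and normalization step can be sketched as follows. The two vendor log formats and field names below are invented for illustration (they are not actual Juniper or Cisco syntax); the point is only that heterogeneous records get mapped into one common schema before correlation.

```python
import re

# Hypothetical log formats for two vendors; real Juniper, Cisco, or
# Palo Alto records look different. Only the mapping of heterogeneous
# fields into one common schema is being illustrated.
PATTERNS = {
    "vendor_a": re.compile(r"(?P<ts>\d{10}) (?P<sev>\w+) src=(?P<src>\S+) msg=(?P<msg>.+)"),
    "vendor_b": re.compile(r"<(?P<sev>\d)> (?P<src>\S+) \[(?P<ts>[^\]]+)\] (?P<msg>.+)"),
}

# Map each vendor's severity vocabulary onto one normalized scale.
SEVERITY_MAP = {"warn": "warning", "3": "warning", "err": "error", "1": "error"}

def normalize(line, vendor):
    """Parse a vendor-specific log line into the common schema."""
    m = PATTERNS[vendor].match(line)
    if m is None:
        return None  # unparsable lines would be queued for review
    f = m.groupdict()
    return {
        "source_ip": f["src"],
        "severity": SEVERITY_MAP.get(f["sev"].lower(), "info"),
        "message": f["msg"],
        "vendor": vendor,
    }

events = [
    normalize("1700000000 warn src=10.0.0.5 msg=failed login", "vendor_a"),
    normalize("<3> 10.0.0.9 [2024-01-01T00:00:00] port scan detected", "vendor_b"),
]
# Both records now share one schema and can be correlated, e.g. by source_ip.
```

Once records are normalized this way, correlation rules can group events across device types, for instance by source address within a time window.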
NOTE Situational awareness means that you understand the current
environment even though it is complex, dynamic, and made up of seemingly
unrelated data points. You need to be able to understand each data point in
its own context within the surrounding environment so that the best possible
decisions can be made.
Protecting Audit Data and Log Information
I hear that logs can contain sensitive data, so I just turned off all logging capabilities.
Response: Brilliant.
If an intruder breaks into your house, he will do his best to cover his tracks by not
leaving fingerprints or any other clues that can be used to tie him to the criminal activ-
ity. The same is true in computer fraud and illegal activity. The intruder will work to
cover his tracks. Attackers often delete audit logs that hold this incriminating informa-
tion. (Deleting specific incriminating data within audit logs is called scrubbing.) Delet-
ing this information can cause the administrator to not be alerted or aware of the
security breach, and can destroy valuable data. Therefore, audit logs should be pro-
tected by strict access control.
Only certain individuals (the administrator and security personnel) should be able
to view, modify, and delete audit trail information. No other individuals should be able
to view this data, much less modify or delete it. The integrity of the data can be ensured
with the use of digital signatures, hashing tools, and strong access controls. Its confi-
dentiality can be protected with encryption and access controls, if necessary, and it can
be stored on write-once media (CD-ROMs) to prevent loss or modification of the data.
Unauthorized access attempts to audit logs should be captured and reported.
Audit logs may be used in a trial to prove an individual’s guilt, demonstrate how an
attack was carried out, or corroborate a story. The integrity and confidentiality of these
logs will be under scrutiny. Proper steps need to be taken to ensure that the confidenti-
ality and integrity of the audit information is not compromised in any way.
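One way to back the integrity claim cryptographically is a keyed hash chain over the log entries, so that scrubbing or altering any entry invalidates every later tag. This is a minimal sketch, not a production logging scheme; in practice the key and the verifier would live on a separate, hardened host.

```python
import hashlib
import hmac

SECRET = b"demo-key"  # assumption: kept off the logged system in real deployments

def chain(entries, key=SECRET):
    """Compute an HMAC hash chain over log entries. Each tag covers the
    entry plus the previous tag, so deleting or modifying any entry
    changes every subsequent tag."""
    tags, prev = [], b""
    for entry in entries:
        prev = hmac.new(key, prev + entry.encode(), hashlib.sha256).digest()
        tags.append(prev.hex())
    return tags

log = ["login root ok", "read /etc/shadow", "logout root"]
good = chain(log)
scrubbed = chain(["login root ok", "logout root"])  # incriminating entry removed
assert good[-1] != scrubbed[-1]  # verification detects the scrubbing
```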
Keystroke Monitoring
Oh, you typed an L. Let me write that down. Oh, and a P, and a T, and an S—hey, slow down!
Keystroke monitoring is a type of monitoring that can review and record keystrokes
entered by a user during an active session. The person using this type of monitoring can
have the characters written to an audit log to be reviewed at a later time. This type of
auditing is usually done only for special cases and only for a specific amount of time,
because the amount of information captured can be overwhelming and/or unimport-
ant. If a security professional or administrator is suspicious of an individual and his

activities, she may invoke this type of monitoring. In some authorized investigative
stages, a keyboard dongle (hardware key logger) may be unobtrusively inserted be-
tween the keyboard and the computer to capture all the keystrokes entered, including
power-on passwords.
A hacker can also use this type of monitoring. If an attacker can successfully install a
Trojan horse on a computer, the Trojan horse can install an application that captures data
as it is typed into the keyboard. Typically, these programs are most interested in user cre-
dentials and can alert the attacker when credentials have been successfully captured.
Privacy issues are involved with this type of monitoring, and administrators could
be subject to criminal and civil liabilities if it is done without proper notification to the
employees and authorization from management. If a company wants to use this type
of auditing, it should state so in the security policy, address the issue in security-aware-
ness training, and present a banner notice to the user warning that the activities at that
computer may be monitored in this fashion. These steps should be taken to protect the
company from violating an individual’s privacy, and they should inform the users
where their privacy boundaries start and stop pertaining to computer use.
Access Control Practices
The fewest number of doors open allows the fewest number of flies in.
We have gone over how users are identified, authenticated, and authorized, and
how their actions are audited. These are necessary parts of a healthy and safe network
environment. You also want to take steps to ensure there are no unnecessary open
doors and that the environment stays at the same security level you have worked so
hard to achieve. This means you need to implement good access control practices. Not
keeping up with daily or monthly tasks usually causes the most vulnerabilities in an
environment. It is hard to put out all the network fires, fight the political battles, fulfill
all the users’ needs, and still keep up with small maintenance tasks. However, many
companies have found that not doing these small tasks caused them the greatest heart-
ache of all.
The following is a list of tasks that must be done on a regular basis to ensure secu-
rity stays at a satisfactory level:
• Deny access to systems to undefined users or anonymous accounts.
• Limit and monitor the usage of administrator and other powerful accounts.
• Suspend or delay access capability after a specific number of unsuccessful
logon attempts.
• Remove obsolete user accounts as soon as the user leaves the company.
• Suspend inactive accounts after 30 to 60 days.
• Enforce strict access criteria.
• Enforce the need-to-know and least-privilege practices.
• Disable unneeded system features, services, and ports.
• Replace default password settings on accounts.

• Limit and monitor global access rules.
• Remove redundant resource rules from accounts and group memberships.
• Remove redundant user IDs, accounts, and role-based accounts from resource
access lists.
• Enforce password rotation.
• Enforce password requirements (length, contents, lifetime, distribution,
storage, and transmission).
• Audit system and user events and actions, and review reports periodically.
• Protect audit logs.
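Several of these practices, such as suspending access after a set number of unsuccessful logon attempts, reduce to small bookkeeping routines. A minimal sketch follows; the threshold and lockout window are arbitrary illustrative values, not prescribed numbers.

```python
import time

# Illustrative policy values, not prescribed numbers.
MAX_ATTEMPTS = 3
LOCKOUT_SECONDS = 900  # 15-minute suspension window

failures = {}  # username -> (failure count, time of first failure)

def record_failure(user, now=None):
    """Record one unsuccessful logon attempt for `user`."""
    now = time.time() if now is None else now
    count, first = failures.get(user, (0, now))
    failures[user] = (count + 1, first)

def is_locked(user, now=None):
    """Access is suspended after MAX_ATTEMPTS failures, for the window."""
    now = time.time() if now is None else now
    count, first = failures.get(user, (0, now))
    return count >= MAX_ATTEMPTS and (now - first) < LOCKOUT_SECONDS

# Three bad logons in quick succession suspend the account...
t0 = 1000.0
for _ in range(MAX_ATTEMPTS):
    record_failure("alice", now=t0)
assert is_locked("alice", now=t0 + 10)
# ...and the suspension lapses after the window (delayed access).
assert not is_locked("alice", now=t0 + LOCKOUT_SECONDS + 1)
```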
Even if all of these countermeasures are in place and properly monitored, data can
still be lost in an unauthorized manner in other ways. The next section looks at these
issues and their corresponding countermeasures.
Unauthorized Disclosure of Information
Several things can make information available to others for whom it is not intended,
which can bring about unfavorable results. Sometimes this is done intentionally; other
times, unintentionally. Information can be disclosed unintentionally when one falls
prey to attacks that specialize in causing this disclosure. These attacks include social
engineering, covert channels, malicious code, and electrical airwave sniffing. Informa-
tion can be disclosed accidentally through object reuse methods, which are explained
next. (Social engineering was discussed in Chapter 2, while covert channels will be
discussed in Chapter 4.)
Object Reuse
Can I borrow this thumb drive?
Response: Let me destroy it first.
Object reuse issues pertain to reassigning to a subject media that previously con-
tained one or more objects. Huh? This means before someone uses a hard drive, USB
drive, or tape, it should be cleared of any residual information still on it. This concept
also applies to objects reused by computer processes, such as memory locations, vari-
ables, and registers. Any sensitive information that may be left by a process should be
securely cleared before allowing another process the opportunity to access the object.
This ensures that information not intended for this individual or any other subject is
not disclosed. Many times, USB drives are exchanged casually in a work environment.
What if a supervisor lent a USB drive to an employee without erasing it and it contained
confidential employee performance reports and salary raises forecasted for the next
year? This could prove to be a bad decision and may turn into a morale issue if the in-
formation was passed around. Formatting a disk or deleting files only removes the
pointers to the files; it does not remove the actual files. This information will still be on
the disk and available until the operating system needs that space and overwrites those
files. So, for media that holds confidential information, more extreme methods should
be taken to ensure the files are actually gone, not just their pointers.
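To make the pointer-versus-content distinction concrete, the sketch below overwrites a file's blocks before unlinking it. This is illustrative only: on SSDs and on journaling or copy-on-write filesystems, in-place overwrites do not guarantee the old blocks are destroyed, which is part of why degaussing or physical destruction is prescribed for sensitive media.

```python
import os
import tempfile

def overwrite_file(path, passes=3):
    """Overwrite a file's contents in place before unlinking it.
    Caveat: on SSDs and journaling/copy-on-write filesystems this does
    not guarantee the old blocks are gone; it only illustrates why
    deleting a file merely removes the pointer, not the data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway file standing in for the supervisor's USB data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"forecasted salary raises")
overwrite_file(path)
assert not os.path.exists(path)
```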

Sensitive data should be classified (secret, top secret, confidential, unclassified, and
so on) by the data owners. How the data are stored and accessed should also be strictly
controlled and audited by software controls. However, it does not end there. Before al-
lowing someone to use previously used media, it should be erased or degaussed. (This
responsibility usually falls on the operations department.) If media holds sensitive in-
formation and cannot be purged, steps should be created describing how to properly
destroy it so no one else can obtain this information.
NOTE Sometimes hackers configure a sector on a hard drive so it is marked as bad
and unusable to an operating system, even though the sector is actually fine
and may hold malicious data. The operating system will not write information
to this sector because it thinks it is corrupted. This is a form of data hiding.
Some boot-sector virus routines are capable of putting the main part of their
code (payload) into a specific sector of the hard drive, overwriting any data
that may have been there, and then protecting it as a bad block.
Emanation Security
Quick, cover your computer and your head in tinfoil!
All electronic devices emit electrical signals. These signals can hold important infor-
mation, and if an attacker buys the right equipment and positions himself in the right
place, he could capture this information from the airwaves and access data transmis-
sions as if he had a tap directly on the network wire.
Several incidents have occurred in which intruders have purchased inexpensive
equipment and used it to intercept electrical emissions as they radiated from a com-
puter. This equipment can reproduce data streams and display the data on the intrud-
er’s monitor, enabling the intruder to learn of covert operations, find out military
strategies, and uncover and exploit confidential information. This is not just stuff found
in spy novels. It really happens. So, the proper countermeasures have been devised.
TEMPEST TEMPEST started out as a study carried out by the DoD and then turned
into a standard that outlines how to develop countermeasures that control spurious
electrical signals emitted by electrical equipment. Special shielding is used on equip-
ment to suppress the signals as they are radiated from devices. TEMPEST equipment is
implemented to prevent intruders from picking up information through the airwaves
with listening devices. This type of equipment must meet specific standards to be rated
as providing TEMPEST shielding protection. TEMPEST refers to standardized technol-
ogy that suppresses signal emanations with shielding material. Vendors who manufac-
ture this type of equipment must be certified to this standard.
The devices (monitors, computers, printers, and so on) have an outer metal coating,
referred to as a Faraday cage. This is made of metal with the necessary depth to ensure
only a certain amount of radiation is released. In devices that are TEMPEST rated, other
components are also modified, especially the power supply, to help reduce the amount
of electricity used.
Even within the allowable limits, devices still radiate some emissions and are considered safe.
The approved products must ensure only this level of emissions is allowed to escape the

devices. This type of protection is usually needed only in military institutions, although
other highly secured environments do utilize this kind of safeguard.
Many military organizations are concerned with stray radio frequencies emitted by
computers and other electronic equipment because an attacker may be able to pick
them up, reconstruct them, and give away secrets meant to stay secret.
TEMPEST technology is complex, cumbersome, and expensive, and therefore only
used in highly sensitive areas that really need this high level of protection.
Two alternatives to TEMPEST exist: use white noise or use a control zone concept,
both of which are explained next.
NOTE TEMPEST is the name of a program, and now a standard, that was
developed in the late 1950s by the U.S. and British governments to deal with
electrical and electromagnetic radiation emitted from electrical equipment,
mainly computers. This type of equipment is usually used by intelligence,
military, government, and law enforcement agencies, and the selling of such
items is under constant scrutiny.
White Noise A countermeasure used to keep intruders from extracting information
from electrical transmissions is white noise. White noise is a uniform spectrum of ran-
dom electrical signals. It is distributed over the full spectrum so the bandwidth is con-
stant and an intruder is not able to decipher real information from random noise or
random information.
Control Zone Another alternative to using TEMPEST equipment is to use the zone
concept, which was addressed earlier in this chapter. Some facilities use material in
their walls to contain electrical signals, which acts like a large Faraday cage. This pre-
vents intruders from being able to access information emitted via electrical signals from
network devices. This control zone creates a type of security perimeter and is construct-
ed to protect against unauthorized access to data or the compromise of sensitive infor-
mation.
Access Control Monitoring
Access control monitoring is a method of keeping track of who attempts to access spe-
cific company resources. It is an important detective mechanism, and different tech-
nologies exist that can fill this need. It is not enough to invest in antivirus and firewall
solutions. Companies are finding that monitoring their own internal network has be-
come a way of life.
Intrusion Detection
Intrusion detection systems (IDSs) are different from traditional firewall products be-
cause they are designed to detect a security breach. Intrusion detection is the process of
detecting an unauthorized use of, or attack upon, a computer, network, or telecommu-
nications infrastructure. IDSs are designed to aid in mitigating the damage that can be

caused by hacking, or by breaking into sensitive computer and network systems. The
basic intent of the IDS tool is to spot something suspicious happening on the network
and sound an alarm by flashing a message on a network manager’s screen, or possibly
sending an e-mail or even reconfiguring a firewall’s ACL setting. The IDS tools can look
for sequences of data bits that might indicate a questionable action or event, or moni-
tor system log and activity recording files. The event does not need to be an intrusion
to sound the alarm—any kind of “non-normal” behavior may do the trick.
Although different types of IDS products are available, they all have three common
components: sensors, analyzers, and administrator interfaces. The sensors collect traffic
and user activity data and send them to an analyzer, which looks for suspicious activity.
If the analyzer detects an activity it is programmed to deem as fishy, it sends an alert to
the administrator’s interface.
IDSs come in two main types: network-based, which monitor network communica-
tions, and host-based, which can analyze the activity within a particular computer system.
IDSs can be configured to watch for attacks, parse audit logs, terminate a connec-
tion, alert an administrator as attacks are happening, expose a hacker’s techniques,
illustrate which vulnerabilities need to be addressed, and possibly help track down
individual hackers.
Network-Based IDSs
A network-based IDS (NIDS) uses sensors, which are either host computers with the
necessary software installed or dedicated appliances—each with its network interface
card (NIC) in promiscuous mode. Normally, NICs watch for traffic that has the address
of its host system, broadcasts, and sometimes multicast traffic. The NIC driver copies
the data from the transmission medium and sends them up the network protocol stack
for processing. When a NIC is put into promiscuous mode, the NIC driver captures all
traffic, makes a copy of all packets, and then passes one copy to the TCP stack and one
copy to an analyzer to look for specific types of patterns.
An NIDS monitors network traffic and cannot “see” the activity going on inside a
computer itself. To monitor the activities within a computer system, a company would
need to implement a host-based IDS.
Host-Based IDSs
A host-based IDS (HIDS) can be installed on individual workstations and/or servers to
watch for inappropriate or anomalous activity. HIDSs are usually used to make sure
users do not delete system files, reconfigure important settings, or put the system at risk
in any other way. So, whereas the NIDS understands and monitors the network traffic,
a HIDS’s universe is limited to the computer itself. A HIDS does not understand or re-
view network traffic, and a NIDS does not “look in” and monitor a system’s activity.
Each has its own job and stays out of the other’s way.
In most environments, HIDS products are installed only on critical servers, not on
every system on the network, because of the resource overhead and the administration
nightmare that such an installation would cause.

Just to make life a little more confusing, HIDS and NIDS can be one of the follow-
ing types:
• Signature-based
• Pattern matching
• Stateful matching
• Anomaly-based
• Statistical anomaly–based
• Protocol anomaly–based
• Traffic anomaly–based
• Rule- or heuristic-based
Knowledge- or Signature-Based Intrusion Detection
Knowledge is accumulated by the IDS vendors about specific attacks and how they are
carried out. Models of how the attacks are carried out are developed and called signa-
tures. Each identified attack has a signature, which is used to detect an attack in progress
or determine if one has occurred within the network. Any action that is not recognized
as an attack is considered acceptable.
NOTE Signature-based is also known as pattern matching.
An example of a signature is a packet that has the same source and destination IP
address. All packets should have a different source and destination IP address, and if
they have the same address, this means a Land attack is under way. In a Land attack, a
hacker modifies the packet header so that when a receiving system responds to the
sender, it is responding to its own address. Now that seems as though it should be be-
nign enough, but vulnerable systems just do not have the programming code to know
what to do in this situation, so they freeze or reboot. Once this type of attack was dis-
covered, the signature-based IDS vendors wrote a signature that looks specifically for
packets that contain the same source and destination address.
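The Land signature reduces to a simple equality check on packet header fields. The dictionary representation below is a stand-in for illustration, not any particular IDS's internal packet format.

```python
def is_land_attack(packet):
    """Land attack signature: source and destination addresses (and,
    classically, ports) are identical, so the victim's reply would be
    addressed to itself."""
    return (packet["src_ip"] == packet["dst_ip"]
            and packet.get("src_port") == packet.get("dst_port"))

# A spoofed packet addressed to itself matches the signature...
assert is_land_attack(
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.1", "src_port": 139, "dst_port": 139})
# ...while ordinary traffic does not.
assert not is_land_attack(
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 1025, "dst_port": 80})
```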
Signature-based IDSs are the most popular IDS products today, and their effective-
ness depends upon regularly updating the software with new signatures, as with antivi-
rus software. This type of IDS is weak against new types of attacks because it can
recognize only the ones that have been previously identified and have had signatures
written for them. Attacks or viruses discovered in production environments are referred
to as being “in the wild.” Attacks and viruses that exist but that have not been released
are referred to as being “in the zoo.” No joke.

State-Based IDSs
Before delving too deep into how a state-based IDS works, you need to understand
what the state of a system or application actually is. Every change that an operating
system experiences (user logs on, user opens application, application communicates to
another application, user inputs data, and so on) is considered a state transition. In a
very technical sense, all operating systems and applications are just lines and lines of
instructions written to carry out functions on data. The instructions have empty vari-
ables, which is where the data is held. So when you use the calculator program and type
in 5, an empty variable is instantly populated with this value. By entering that value,
you change the state of the application. When applications communicate with each
other, they populate empty variables provided in each application’s instruction set. So,
a state transition is when a variable’s value changes, which usually happens continu-
ously within every system.
Specific state changes (activities) take place with specific types of attacks. If an at-
tacker will carry out a remote buffer overflow, then the following state changes will
occur:
1. The remote user connects to the system.
2. The remote user sends data to an application (the data exceed the allocated
buffer for this empty variable).
3. The data are executed and overwrite the buffer and possibly other memory
segments.
4. A malicious code executes.
So, state is a snapshot of an operating system’s values in volatile, semipermanent,
and permanent memory locations. In a state-based IDS, the initial state is the state
prior to the execution of an attack, and the compromised state is the state after success-
ful penetration. The IDS has rules that outline which state transition sequences should
sound an alarm. The activity that takes place between the initial and compromised state
is what the state-based IDS looks for, and it sends an alert if any of the state-transition
sequences match its preconfigured rules.
This type of IDS scans for attack signatures in the context of a stream of activity in-
stead of just looking at individual packets. It can only identify known attacks and re-
quires frequent updates of its signatures.
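The state-transition matching above can be sketched as an ordered-subsequence check over observed state changes; the event names here are illustrative placeholders, not any product's actual transition vocabulary.

```python
# The buffer-overflow sequence from the numbered steps, as placeholder names.
ATTACK_SEQUENCE = ["remote_connect", "oversized_input", "buffer_overwrite", "code_exec"]

def sequence_matches(events, pattern=ATTACK_SEQUENCE):
    """Alert if `pattern` occurs as an ordered subsequence of the
    observed state transitions (unrelated events may be interleaved)."""
    it = iter(events)
    return all(step in it for step in pattern)

observed = ["login", "remote_connect", "file_open", "oversized_input",
            "buffer_overwrite", "code_exec", "logout"]
assert sequence_matches(observed)  # the configured rule fires an alert
assert not sequence_matches(["remote_connect", "logout"])
```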
Statistical Anomaly–Based IDS
Through statistical analysis I have determined I am an anomaly in nature.
Response: You have my vote.
A statistical anomaly–based IDS is a behavioral-based system. Behavioral-based IDS
products do not use predefined signatures, but rather are put in a learning mode to
build a profile of an environment’s “normal” activities. This profile is built by continu-
ally sampling the environment’s activities. The longer the IDS is put in a learning mode,

in most instances, the more accurate a profile it will build and the better protection it
will provide. After this profile is built, all future traffic and activities are compared to it.
The same type of sampling that was used to build the profile takes place, so the same
type of data is being compared. Anything that does not match the profile is seen as an
attack, in response to which the IDS sends an alert. With the use of complex statistical
algorithms, the IDS looks for anomalies in the network traffic or user activity. Each
packet is given an anomaly score, which indicates its degree of irregularity. If the score
is higher than the established threshold of “normal” behavior, then the preconfigured
action will take place.
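The anomaly-score idea can be illustrated with a simple z-score against a learned baseline. Commercial products use far more elaborate statistics, so treat this purely as a sketch; the baseline values and threshold are invented.

```python
import statistics

def anomaly_score(value, baseline):
    """Degree of irregularity as a z-score against the learned profile."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against a flat baseline
    return abs(value - mean) / stdev

# Profile: packets per second sampled during the learning period.
baseline = [100, 95, 105, 98, 102, 97, 103]
THRESHOLD = 3.0  # tuning this trades false positives against false negatives

assert anomaly_score(101, baseline) < THRESHOLD   # matches the profile: ignored
assert anomaly_score(500, baseline) >= THRESHOLD  # flagged; preconfigured action fires
```

Setting THRESHOLD lower catches more attacks but generates more false positives; setting it higher does the reverse, which is exactly the tuning problem described below.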
The benefit of using a statistical anomaly–based IDS is that it can react to new at-
tacks. It can detect “0 day” attacks, which means an attack is new to the world and no
signature or fix has been developed yet. These products are also capable of detecting the
“low and slow” attacks, in which the attacker is trying to stay under the radar by sending
packets little by little over a long period of time. The IDS should be able to detect these
types of attacks because they are different enough from the contrasted profile.
Now for the bad news. Since the only thing that is “normal” about a network is that
it is constantly changing, developing the correct profile that will not provide an over-
whelming number of false positives can be difficult. Many IT staff members know all
too well this dance of chasing down alerts that end up being benign traffic or activity.
In fact, some environments end up turning off their IDS because of the amount of time
these activities take up. (Proper education on tuning and configuration will reduce the
number of false positives.)
If an attacker detects there is an IDS on a network, she will then try to detect the type
of IDS it is so she can properly circumvent it. With a behavioral-based IDS, the attacker
could attempt to integrate her activities into the behavior pattern of the network traffic.
That way, her activities are seen as “normal” by the IDS and thus go undetected. It is a
good idea to ensure no attack activity is under way when the IDS is in learning mode.
If this takes place, the IDS will never alert you of this type of attack in the future because
it sees this traffic as typical of the environment.
If a corporation decides to use a statistical anomaly–based IDS, it must ensure that
the staff members who are implementing and maintaining it understand protocols and
packet analysis. Because this type of IDS sends generic alerts, compared to other
types of IDSs, it is up to the network engineer to figure out what the actual issue is. For
example, a signature-based IDS reports the type of attack that has been identified, while
a rule-based IDS identifies the actual rule the packet does not comply with. In a statisti-
cal anomaly–based IDS, all the product really understands is that something “abnor-
mal” has happened, which just means the event does not match the profile.
NOTE IDS and some antimalware products are said to have “heuristic”
capabilities. The term heuristic means to create new information from
different data sources. The IDS gathers different “clues” from the network or
system and calculates the probability an attack is taking place. If the probability
hits a set threshold, then the alarm sounds.

Determining the proper thresholds for statistically significant deviations is really
the key for the successful use of a behavioral-based IDS. If the threshold is set too low,
nonintrusive activities are considered attacks (false positives). If the threshold is set too
high, some malicious activities won’t be identified (false negatives).
Once an IDS discovers an attack, several things can happen, depending upon the
capabilities of the IDS and the policy assigned to it. The IDS can send an alert to a con-
sole to tell the right individuals an attack is being carried out; send an e-mail or text to
the individual assigned to respond to such activities; kill the connection of the detected
attack; or reconfigure a router or firewall to try to stop any further similar attacks. A
modifiable response condition might include anything from blocking a specific IP ad-
dress to redirecting or blocking a certain type of activity.
Protocol Anomaly–Based IDS
A statistical anomaly–based IDS can use protocol anomaly–based filters. These types of
IDSs have specific knowledge of each protocol they will monitor. A protocol anomaly
pertains to the format and behavior of a protocol. The IDS builds a model (or profile)
of each protocol’s “normal” usage. Keep in mind, however, that protocols have theoreti-
cal usage, as outlined in their corresponding RFCs, and real-world usage, which refers to
the fact that vendors seem to always “color outside the boxes” and don’t strictly follow
the RFCs in their protocol development and implementation. So, most profiles of indi-
vidual protocols are a mix between the official and real-world versions of the protocol
and its usage. When the IDS is activated, it looks for anomalies that do not match the
profiles built for the individual protocols.
Although several vulnerabilities within operating systems and applications are
available to be exploited, many more successful attacks take place by exploiting vulner-
abilities in the protocols themselves. At the OSI data link layer, the Address Resolution
Protocol (ARP) does not have any protection against ARP attacks where bogus data is
inserted into its table. At the network layer, the Internet Control Message Protocol
(ICMP) can be used in a Loki attack to move data from one place to another, when this
protocol was designed to only be used to send status information—not user data. IP
headers can be easily modified for spoofed attacks. At the transport layer, TCP packets
can be injected into the connection between two systems for a session hijacking attack.

Attack Techniques
It is common for hackers to first identify whether an IDS is present on the network
they are preparing to attack. If one is present, the attacker may implement a
denial-of-service attack to bring it offline. Another tactic is to send the IDS
incorrect data, which will make the IDS send specific alerts indicating a certain
attack is under way, when in truth it is not. The goal of these activities is either
to disable the IDS or to distract the network and security individuals so they will
be busy chasing the wrong packets, while the real attack takes place.

What's in a Name?
Signature-based IDSs are also known as misuse-detection systems, and
behavioral-based IDSs are also known as profile-based systems.
NOTE When an attacker compromises a computer and loads a back door
on the system, he will need to have a way to communicate to this computer
through this back door and stay “under the radar” of the network firewall and
IDS. Hackers have figured out that a small amount of code can be inserted
into an ICMP packet, which is then interpreted by the backdoor software
loaded on a compromised system. Security devices are usually not configured to
monitor this type of traffic because ICMP is a protocol that is supposed to be
used just to send status information—not commands to a compromised system.
Because every packet formation and delivery involves many protocols, and because
more attack vectors exist in the protocols than in the software itself, it is a good idea to
integrate protocol anomaly–based filters in any network behavioral-based IDS.
Traffic Anomaly–Based IDS
Most behavioral-based IDSs have traffic anomaly–based filters, which detect changes in
traffic patterns, as in DoS attacks or a new service that appears on the network. Once a
profile is built that captures the baselines of an environment’s ordinary traffic, all future
traffic patterns are compared to that profile. As with all filters, the thresholds are tunable
to adjust the sensitivity, and to reduce the number of false positives and false negatives.
Since this is a type of statistical anomaly–based IDS, it can detect unknown attacks.
Rule-Based IDS
A rule-based IDS takes a different approach than a signature-based or statistical anom-
aly–based system. A signature-based IDS is very straightforward. For example, if a signa-
ture-based IDS detects a packet that has all of its TCP header flags with the bit value of
1, it knows that an xmas attack is under way—so it sends an alert. A statistical anomaly–
based IDS is also straightforward. For example, if Bob has logged on to his computer at
6 A.M. and the profile indicates this is abnormal, the IDS sends an alert, because this is
seen as an activity that needs to be investigated. Rule-based intrusion detection gets a
little trickier, depending upon the complexity of the rules used.
Rule-based intrusion detection is commonly associated with the use of an expert
system. An expert system is made up of a knowledge base, inference engine, and rule-
based programming. Knowledge is represented as rules, and the data to be analyzed are
referred to as facts. The knowledge of the system is written in rule-based programming
(IF situation THEN action). These rules are applied to the facts, the data that comes in
from a sensor, or a system that is being monitored. For example, in scenario 1 the IDS pulls data from a system’s audit log and stores it temporarily in its fact database, as illustrated in Figure 3-23. Then, the preconfigured rules are applied to this data to indicate whether anything suspicious is taking place. In our scenario, the rule states “IF a root user creates File1 AND creates File2 SUCH THAT they are in the same directory THEN there is a call to Administrative Tool1 TRIGGER send alert.” This rule has been defined such that if a root user creates two files in the same directory and then makes a call to a specific administrative tool, an alert should be sent.
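A minimal sketch of how such a rule might be applied to audit-log facts follows. The event records and field names here are hypothetical, not taken from any real IDS product:

```python
import os

# Hypothetical facts pulled from a system's audit log
facts = [
    {"user": "root", "action": "create", "file": "/etc/x/File1"},
    {"user": "root", "action": "create", "file": "/etc/x/File2"},
    {"user": "root", "action": "exec",   "tool": "AdminTool1"},
]

def rule_fires(facts):
    """IF a root user creates two files in the same directory
    AND then calls a specific administrative tool THEN alert."""
    created = [os.path.dirname(f["file"]) for f in facts
               if f["user"] == "root" and f["action"] == "create"]
    same_dir = any(created.count(d) >= 2 for d in set(created))
    tool_called = any(f.get("tool") == "AdminTool1" for f in facts
                      if f["action"] == "exec")
    return same_dir and tool_called

print("ALERT" if rule_fires(facts) else "ok")  # -> ALERT
```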

CISSP All-in-One Exam Guide
262
It is the inference engine that provides some artificial intelligence into this process.
An inference engine can infer new information from provided data by using inference
rules. To understand what inferring means in the first place, let’s look at the following:
Socrates is a man.
All men are mortals.
Thus, we can infer that Socrates is mortal. If you are asking, “What does this have to
do with a hill of beans?” just hold on to your hat—here we go.
Regular programming languages deal with the “black and white” of life. The answer
is either yes or no, not maybe this or maybe that. Although computers can carry out
complex computations at a much faster rate than humans, they have a harder time
guessing, or inferring, answers because they are very structured. The fifth-generation
programming languages (artificial intelligence languages) are capable of dealing with
the grayer areas of life and can attempt to infer the right solution from the provided
data.
So, in a rule-based IDS founded on an expert system, the IDS gathers data from a sensor or log, and the inference engine applies its preprogrammed rules to it. If the characteristics of the rules are met, an alert or solution is provided.
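Forward chaining, the mechanism behind such inference, can be illustrated with the Socrates example. This is a toy engine, far simpler than a real expert system shell:

```python
# A minimal forward-chaining inference sketch (illustrative only).
# Facts are (predicate, subject) pairs; a rule says IF premise THEN conclusion.
facts = {("man", "Socrates")}
rules = [(("man",), "mortal")]  # IF X is a man THEN X is mortal

def infer(facts, rules):
    """Repeatedly apply rules to facts until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subject in list(derived):
                if pred in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "Socrates") in infer(facts, rules))  # -> True
```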
Figure 3-23 Rule-based IDS and expert system components

Chapter 3: Access Control
263
IDS Types
It is important to understand the characteristics that make the different types of
IDS technologies distinct. The following is a summary:
• Signature-based
  • Pattern matching, similar to antivirus software
  • Signatures must be continuously updated
  • Cannot identify new attacks
  • Two types:
    • Pattern matching  Compares packets to signatures
    • Stateful matching  Compares patterns to several activities at once
• Anomaly-based
  • Behavioral-based system that learns the “normal” activities of an environment
  • Can detect new attacks
  • Also called behavior- or heuristic-based
  • Three types:
    • Statistical anomaly–based  Creates a profile of “normal” and compares activities to this profile
    • Protocol anomaly–based  Identifies protocols used outside of their common bounds
    • Traffic anomaly–based  Identifies unusual activity in network traffic
• Rule-based
  • Use of IF/THEN rule-based programming within expert systems
  • Use of an expert system allows for artificial intelligence characteristics
  • The more complex the rules, the higher the demands on software and hardware processing
  • Cannot detect new attacks

IDS Sensors
Network-based IDSs use sensors for monitoring purposes. A sensor, which works as an analysis engine, is placed on the network segment the IDS is responsible for monitoring. The sensor receives raw data from an event generator, as shown in Figure 3-24, and
Application-Based IDS
There are specialized IDS products that can monitor specific applications for malicious activities. Since their scope is very focused (only one application), they can gather fine-grained and detailed activities. They can be used to capture very specific application attack types, but it is important to realize that these product types will miss more general operating system–based attacks because this is not what they are programmed to detect.
It might be important to implement this type of IDS if a critical application is carrying out encryption functions that would obfuscate its communication channels and activities from other types of IDS (host, network).
Figure 3-24 The basic architecture of an NIDS

compares it to a signature database, profile, or model, depending upon the type of IDS. If there is some type of match, which indicates suspicious activity, the sensor works with the response module to determine what type of action must take place (alerting via instant message, page, or e-mail; reconfiguring the firewall; and so on). The sensor’s role is to filter received data, discard irrelevant information, and detect suspicious activity.
A monitoring console monitors all sensors and supplies the network staff with an
overview of the activities of all the sensors in the network. These are the components
that enable network-based intrusion detection to actually work. Sensor placement is a
critical part of configuring an effective IDS. An organization can place a sensor outside
of the firewall to detect attacks and place a sensor inside the firewall (in the perimeter
network) to detect actual intrusions. Sensors should also be placed in highly sensitive
areas, DMZs, and on extranets. Figure 3-25 shows the sensors reporting their findings
to the central console.
The IDS can be centralized (as in firewall products that have IDS functionality integrated within them) or distributed, with multiple sensors placed throughout the network.
Network Traffic
If the network traffic volume exceeds the IDS system’s threshold, attacks may go unnoticed. Each vendor’s IDS product has its own threshold, and you should know and understand that threshold before you purchase and implement the IDS.
In very high-traffic environments, multiple sensors should be in place to ensure all packets are investigated. If necessary to optimize network bandwidth and speed, different sensors can be set up to analyze each packet for different signatures. That way, the analysis load can be broken up over different points.
Intrusion Prevention Systems
An ounce of prevention does something good.
Response: Yeah, causes a single point of failure.
In the industry, there is constant frustration with the inability of existing products to stop the bad guys from accessing and manipulating corporate assets. This has created a market demand for vendors to get creative and come up with new, innovative technologies and new products for companies to purchase, implement, and still be frustrated with.
Switched Environments
NIDSs have a harder time working on a switched network, compared to traditional nonswitched environments, because data are transferred through independent virtual circuits and not broadcast, as in nonswitched environments. The IDS sensor acts as a sniffer and does not have access to all the traffic in these individual circuits. So, we have to take all the data on each individual virtual private connection, make a copy of them, and put the copies of the data on one port (spanning port) where the sensor is located. This allows the sensor to have access to all the data going back and forth on a switched network.

The next “big thing” in the IDS arena has been the intrusion prevention system (IPS).
The traditional IDS only detects that something bad may be taking place and sends an
alert. The goal of an IPS is to detect this activity and not allow the traffic to gain access
to the target in the first place, as shown in Figure 3-26. So, an IPS is a preventative and
proactive technology, whereas an IDS is a detective and after-the-fact technology.
IPS products can be host-based or network-based, just as IDS products can. IPS technology can be “content-based,” meaning that it makes decisions pertaining to what is malicious and what is not based upon protocol analysis or signature matching capabilities. An IPS technology can also use a rate-based metric, which focuses on the volume of traffic. The volume of network traffic increases in the case of a flood attack (denial of service) or when excessive system scans take place. IPS rate-based metrics can also be set to identify traffic flow anomalies, which could detect the “slow and low” stealth attack types that attempt to “stay under the radar.”
Honeypot
Hey, curious, ill-willed, and destructive attackers, look at this shiny new vulnerable computer.
A honeypot is a computer set up as a sacrificial lamb on the network. The system is not locked down and has open ports and services enabled. This is to entice a would-be attacker to this computer instead of attacking authentic production systems on a network. The honeypot contains no real company information, and thus will not be at risk if and when it is attacked.
Figure 3-25 Sensors must be placed in each network segment to be monitored by the IDS.

This enables the administrator to know when certain types of attacks are happening so he can fortify the environment and perhaps track down the attacker. The longer the attacker stays at the honeypot, the more information is disclosed about his or her techniques.
It is important to draw a line between enticement and entrapment when implementing a honeypot system. Legal and liability issues surround each. If the system only has open ports and services that an attacker might want to take advantage of, this would be an example of enticement. If the system has a web page indicating the user can download files, and once the user does this the administrator charges this user with trespassing, it would be entrapment. Entrapment is where the intruder is induced or tricked into committing a crime. Entrapment is illegal and cannot be used when charging an individual with hacking or unauthorized activity.
Figure 3-26 IDS vs. IPS architecture
Intrusion Responses
Most IDSs and IPSs are capable of several types of response to a triggered event. An IDS can send out a special signal to drop or kill the packet connections at both the source and destination. This effectively disconnects the communication and does not allow traffic to be transmitted. An IDS might block a user from accessing a resource on a host system, if the threshold is set to trigger this response. An IDS can send alerts of an event trigger to other hosts, IDS monitors, and administrators.

Network Sniffers
I think I smell a packet!
A packet or network sniffer is a general term for programs or devices able to examine traffic on a LAN segment. Traffic that is being transferred over a network medium is transmitted as electrical signals, encoded in binary representation. The sniffer must have protocol-analysis capability to recognize the different protocol values and properly interpret their meaning.
The sniffer must also have access to a network adapter that works in promiscuous mode and a driver that captures the data. This data can be overwhelming, so it must be properly filtered. The filtered data are stored in a buffer, and this information is displayed to a user and/or captured in logs. Some utilities have sniffer and packet-modification capabilities, which is how some types of spoofing and man-in-the-middle attacks are carried out.
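Protocol analysis of captured bytes might look like the following sketch, which unpacks a hand-built IPv4 header rather than a live capture (real sniffing requires a promiscuous-mode adapter and elevated privileges, which this example deliberately avoids):

```python
import struct
import socket

def parse_ipv4_header(raw: bytes):
    """Interpret the first 20 bytes a sniffer captured as an IPv4 header."""
    version_ihl, tos, total_len, ident, frag, ttl, proto, csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "protocol": proto,            # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built sample header (no live capture needed for the sketch)
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.5"), socket.inet_aton("10.0.0.9"))
print(parse_ipv4_header(hdr))
# -> {'version': 4, 'protocol': 6, 'src': '10.0.0.5', 'dst': '10.0.0.9'}
```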
Network sniffers are used by the people in the white hats (administrators and security professionals), usually to try to track down a recent problem with the network. But the guys in the black hats (attackers and crackers) can use them to learn about what type of data is passed over a specific network segment and to modify data in an unauthorized manner. Black hats usually use sniffers to obtain credentials as they pass over the network medium.
NOTE Sniffers are dangerous, very hard to detect, and their activities are difficult to audit.
NOTE A sniffer is just a tool that can capture network traffic. If it has the capability of understanding and interpreting individual protocols and their associated data, it is referred to as a protocol analyzer.
Threats to Access Control
Who wants to hurt us and how are they going to do it?
As a majority of security professionals know, there is more risk and a higher probability of an attacker causing mayhem from within an organization than from outside it. However, many people within organizations do not know this fact, because they only hear stories about the outside attackers who defaced a web server or circumvented a firewall to access confidential information.
An attacker from the outside can enter through remote access entry points, enter through firewalls and web servers, physically break in, carry out social engineering attacks, or exploit a partner communication path (extranet, vendor connection, and so on). An insider has legitimate reasons for using the systems and resources, but can also misuse his privileges and launch an actual attack. The danger of insiders is that they have already been given a wide range of access that a hacker would have to work to obtain; they probably have intimate knowledge of the environment; and, generally,

they are trusted. We have discussed many different types of access control mechanisms
that work to keep the outsiders outside and restrict insiders’ abilities to a minimum and
audit their actions. Now we will look at some specific attacks commonly carried out in
environments today by insiders or outsiders.
Dictionary Attack
Several programs can enable an attacker (or proactive administrator) to identify user credentials. This type of program is fed lists (dictionaries) of commonly used words or combinations of characters, and it then compares these values to captured passwords. In other words, the program hashes the dictionary words and compares the resulting message digests with the entries in the system password file, which also stores its passwords in a one-way hashed format. If the hashed values match, a password has just been uncovered. Once the right combination of characters is identified, the attacker can use this password to authenticate herself as a legitimate user. Because many systems have a threshold that dictates how many failed logon attempts are acceptable, this type of attack is usually run offline against a captured password file. The dictionary-attack program hashes each combination of characters and compares it to the hashed entries in the password file. If a match is found, the program has uncovered a password.
The dictionaries come with the password-cracking programs, and extra dictionaries can be found on several sites on the Internet.
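An offline dictionary attack against a captured hash file can be sketched as follows. This is illustrative only; it uses unsalted SHA-256 for brevity, whereas real password stores add salts and use dedicated password-hashing algorithms, which a real cracking tool must account for:

```python
import hashlib

# Hypothetical captured password file: username -> hex digest
password_file = {"alice": hashlib.sha256(b"sunshine").hexdigest()}

def dictionary_attack(password_file, wordlist):
    """Hash each dictionary word and compare the digest against the
    captured one-way hashes; a match uncovers the password offline,
    without tripping any failed-logon threshold."""
    cracked = {}
    for user, digest in password_file.items():
        for word in wordlist:
            if hashlib.sha256(word.encode()).hexdigest() == digest:
                cracked[user] = word
                break
    return cracked

print(dictionary_attack(password_file, ["letmein", "sunshine", "dragon"]))
# -> {'alice': 'sunshine'}
```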
NOTE Passwords should never be transmitted or stored in cleartext. Most operating systems and applications put passwords through hashing algorithms, which produce hash values, also referred to as message digest values.
Countermeasures
To properly protect an environment against dictionary and other password attacks, the
following practices should be followed:
• Do not allow passwords to be sent in cleartext.
• Encrypt the passwords with encryption algorithms or hashing functions.
• Employ one-time password tokens.
• Use hard-to-guess passwords.
• Rotate passwords frequently.
• Employ an IDS to detect suspicious behavior.
• Use dictionary-cracking tools to find weak passwords chosen by users.
• Use special characters, numbers, and upper- and lowercase letters within the
password.
• Protect password files.

Brute Force Attacks
I will try over and over until you are defeated.
Several types of brute force attacks can be implemented, but each continually tries different inputs to achieve a predefined goal. Brute force is defined as “trying every possible combination until the correct one is identified.” So in a brute force password attack, the software tool will see if the first letter is an “a” and continue through the alphabet until that single value is uncovered. Then the tool moves on to the second value, and so on.
The most effective way to uncover passwords is through a hybrid attack, which combines a dictionary attack and a brute force attack. If a dictionary tool has found that a user’s password starts with Dallas, then the brute force tool will try Dallas1, Dallas01, Dallasa1, and so on until a successful logon credential is uncovered. (A brute force attack is also known as an exhaustive attack.)
These attacks are also used in war dialing efforts, in which the attacker inserts a long list of phone numbers into a war dialing program in hopes of finding a modem that can be exploited to gain unauthorized access. The program dials many phone numbers and weeds out the numbers used for voice calls and fax machine services. The attacker usually ends up with a handful of numbers he can now try to exploit to gain access into a system or network.
So, a brute force attack perpetuates a specific activity with different input parameters until the goal is achieved.
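Generating hybrid-attack guesses from a dictionary hit might look like this sketch; the suffix length and character set are arbitrary choices for illustration:

```python
from itertools import product
import string

def hybrid_candidates(base_word, max_suffix_len=2):
    """Given a dictionary hit such as 'Dallas', append every short
    combination of letters and digits (Dallasa, Dallas1, Dallas01, ...)
    the way a hybrid attack extends a dictionary hit by brute force."""
    chars = string.ascii_lowercase + string.digits
    for length in range(1, max_suffix_len + 1):
        for suffix in product(chars, repeat=length):
            yield base_word + "".join(suffix)

candidates = list(hybrid_candidates("Dallas"))
print(len(candidates))  # 36 + 36*36 = 1332 guesses for suffixes up to 2 chars
print(candidates[:3])   # ['Dallasa', 'Dallasb', 'Dallasc']
```

The guess count grows exponentially with suffix length, which is why pure brute force is called an exhaustive attack.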
Countermeasures
To counter brute force attacks, including war dialing, auditing and monitoring should be in place to uncover patterns that could indicate such activity:
• Perform brute force attacks to find weaknesses and hanging modems.
• Make sure only necessary phone numbers are made public.
• Provide stringent access control methods that would make brute force attacks
less successful.
• Monitor and audit for such activity.
• Employ an IDS to watch for suspicious activity.
• Set lockout thresholds.
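The lockout-threshold countermeasure in the last bullet can be sketched as follows. This is a toy in-memory tracker, and the five-attempt threshold is an assumed policy value:

```python
from collections import defaultdict

LOCKOUT_THRESHOLD = 5  # assumed policy: lock after five failures

failed = defaultdict(int)
locked = set()

def record_failed_logon(user):
    """Count failures and lock the account at the threshold,
    blunting online brute force guessing."""
    failed[user] += 1
    if failed[user] >= LOCKOUT_THRESHOLD:
        locked.add(user)
    return user in locked

for _ in range(5):
    is_locked = record_failed_logon("bob")
print(is_locked)  # -> True: further guesses are refused
```

Note that this is exactly why attackers prefer to run dictionary and brute force attacks offline against a captured password file, where no lockout threshold applies.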
Spoofing at Logon
So, what are your credentials again?
An attacker can use a program that presents to the user a fake logon screen, which
often tricks the user into attempting to log on. The user is asked for a username and
password, which are stored for the attacker to access at a later time. The user does not
know this is not his usual logon screen because they look exactly the same. A fake error
message can appear, indicating that the user mistyped his credentials. At this point, the
fake logon program exits and hands control over to the operating system, which prompts
the user for a username and password. The user assumes he mistyped his information
and doesn’t give it a second thought, but an attacker now knows the user’s credentials.

Phishing and Pharming
Hello, this is your bank. Hand over your SSN, credit card number, and your shoe size.
Response: Okay, that sounds honest enough.
Phishing is a type of social engineering with the goal of obtaining personal information, credentials, credit card numbers, or financial data. The attackers lure, or fish, for sensitive data through various methods.
The term phishing was coined in 1996 when hackers started stealing America Online (AOL) passwords. The hackers would pose as AOL staff members and send messages to victims asking for their passwords in order to verify correct billing information or verify information about their AOL accounts. Once the password was provided, the hacker authenticated as that victim and used the victim’s e-mail account for criminal purposes, such as spamming, pornography, and so on.
Although phishing has been around since the 1990s, many people did not become fully aware of it until mid-2003, when these types of attacks spiked. Phishers created convincing e-mails requesting potential victims to click a link to update their bank account information.
Victims click these links and are presented with a form requesting bank account numbers, Social Security numbers, credentials, and other types of data that can be used in identity theft crimes. These types of phishing e-mail scams have increased dramatically in recent years, with some phishers masquerading as large banking companies, PayPal, eBay, Amazon.com, and other well-known Internet entities.
Phishers also create web sites that look very similar to legitimate sites and lure victims to them through e-mail messages and other web sites to gain the same type of information. Some sites require the victims to provide their Social Security numbers, date of birth, and mother’s maiden name for authentication purposes before they can update their account information.
The nefarious web sites not only have the look and feel of the legitimate web site, but attackers also provide URLs with domain names that look very similar to the legitimate site’s address. For example, www.amazon.com might become www.amzaon.com. Or they use a specially placed @ symbol. For example, www.msn.com@notmsn.com would actually take the victim to the web site notmsn.com and provide the username of www.msn.com to this web site. The username www.msn.com would not be a valid username for notmsn.com, so the victim would just be shown the home page of notmsn.com. But notmsn.com is a nefarious site created to look and feel just like www.msn.com. The victim feels comfortable that he is at a legitimate site and logs on with his credentials.
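The @-symbol trick works because of how URLs are parsed: everything before the @ in the authority portion is treated as a username, and the real host is what follows it. Python's standard urllib shows where such a URL really points:

```python
from urllib.parse import urlparse

u = urlparse("http://www.msn.com@notmsn.com/login")
print(u.username)  # -> www.msn.com  (just a username string)
print(u.hostname)  # -> notmsn.com   (where the browser actually goes)
```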
Some JavaScript commands are even designed to show the victim an incorrect web address. Let’s say Bob is a suspicious and vigilant kind of guy. Before he inputs his username and password to authenticate and gain access to his online bank account, he always checks the URL in the address bar of his browser. Even though he closely inspects it to make sure he is not getting duped, a JavaScript could be replacing the URL www.evilandwilltakeallyourmoney.com with www.citibank.com, so he thinks things are safe and life is good.

NOTE There have been fixes to the previously mentioned attack dealing with URLs, but it is important to know that attackers will continually come up with new ways of carrying out these attacks. Just knowing about phishing doesn’t mean you can properly detect or prevent it. As a security professional, you must keep up with the new and tricky strategies deployed by attackers.
Some attacks use pop-up forms when a victim is at a legitimate site. So if you were at your bank’s actual web site and a pop-up window appeared asking you for sensitive information, this probably wouldn’t worry you, since you were communicating with your actual bank’s web site. You may believe the window came from your bank’s web server, so you fill it out as instructed. Unfortunately, this pop-up window could be from another source entirely, and your data could be placed right in the attacker’s hands, not your bank’s.
With this personal information, phishers can create new accounts in the victim’s
name, gain authorized access to bank accounts, and make illegal credit card purchases
or cash advances.
As more people have become aware of these types of attacks and grown wary of clicking embedded links in e-mail messages, phishers have varied their attack methods. For instance, they began sending e-mails indicating that the user has won a prize or that there is a problem with a financial account. The e-mail instructs the person to call a number, where an automated voice asks the victim to type in his credit card number or Social Security number for authentication purposes.
As phishing attacks increased and more people became victims of fraud, financial institutions began implementing two-factor authentication for online transactions. To meet this need, some banks provided their customers with token devices that created one-time passwords. To counter this, phishers set up fake web sites that looked like the financial institution, duping victims into typing their one-time passwords. The web sites would then send these credentials to the actual bank web site, authenticate as the user, and gain access to the account.
A similar type of attack is called pharming, which redirects a victim to a seemingly
legitimate, yet fake, web site. In this type of attack, the attacker carries out something
called DNS poisoning, in which a DNS server resolves a host name into an incorrect IP
address. When you type www.logicalsecurity.com into the address bar of your web
Spear-phishing
When a phishing attack is crafted to trick a specific target rather than a large generic group of people, it is referred to as a spear-phishing attack. If someone knows about your specific likes, political motives, shopping habits, etc., the attacker can craft an attack that is directed only at you. For example, if an attacker sends you a spoofed e-mail that seems to have come from your mother with the subject line of “Emily’s Birthday Pictures” and an e-mail attachment, you will most likely think it came from your mother and open the file, which will then infect your system. These specialized attacks take more time for the hacker to craft because unique information has to be gathered about the target, but they are more successful because they are more convincing.

browser, your computer really has no idea what this name refers to. So an internal request is made to review your TCP/IP network settings, which contain the IP address of the DNS server your computer is supposed to use. Your system then sends a request to this DNS server basically asking, “Do you have the IP address for www.logicalsecurity.com?” The DNS server reviews its resource records and, if it has one with this information, it sends the IP address of the server that is hosting www.logicalsecurity.com to your computer. Your browser then shows the home page of the web site you requested.
Now, what if an attacker poisoned this DNS server so the resource record has the wrong information? When you type in www.logicalsecurity.com and your system sends a request to the DNS server, the DNS server will send your system the IP address it has recorded, not knowing it is incorrect. So instead of going to www.logicalsecurity.com, you are sent to www.bigbooty.com. This could make you happy or sad, depending upon your interests, but you are not at the site you requested.
So, let’s say the victim types in a web address of www.nicebank.com, as illustrated
in Figure 3-27. The victim’s system sends a request to a poisoned DNS server, which
points the victim to a different web site. This different web site looks and feels just like
the requested web site, so the user enters his username and password and may even be
presented with web pages that look legitimate.
The benefit of a pharming attack to the attacker is that it can affect a large number of victims without the need for sending out e-mails, and the victims usually fall for it more easily since they are requesting to go to the web site themselves.
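The effect of a poisoned resource record can be sketched with a toy resolver table. The host name follows the chapter's example; the IP addresses are reserved documentation values, not real servers:

```python
# Toy resolver table standing in for a DNS server's resource records.
resource_records = {"www.nicebank.com": "203.0.113.10"}  # legitimate A record

def resolve(name):
    return resource_records.get(name)

legit_ip = resolve("www.nicebank.com")

# Pharming: the attacker poisons the record so the same host name now
# resolves to a server the attacker controls.
resource_records["www.nicebank.com"] = "198.51.100.66"

print(resolve("www.nicebank.com") == legit_ip)  # -> False: victim redirected
```

The victim's browser shows the requested host name throughout, which is why pharming is so convincing.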
Countermeasures to phishing attacks include the following:
• Be skeptical of e-mails indicating you must make changes to your accounts,
or warnings stating an account will be terminated if you don’t perform some
online activity.
• Call the legitimate company to find out if this is a fraudulent message.
• Review the address bar to see if the domain name is correct.
• When submitting any type of financial information or credential data, make sure an SSL connection is set up, which is indicated by https:// in the address bar and a closed-padlock icon in the browser.
• Do not click an HTML link within an e-mail. Type the URL out manually
instead.
• Do not accept e-mail in HTML format.
Threat Modeling
In reality, most attacks that take place are attacks on some type of access control. This is because in most situations the bad guy wants access to something he is not supposed to have (Social Security numbers, financial data, sensitive information, etc.). What makes this very difficult for the security professional is that there are usually a hundred different ways the bad guy can get to this data, and each entry point has to be secured. But before each entry point can be secured and each attack vector addressed, they first have to be identified.

Assessing security issues can take place from two different views. If a vulnerability analysis is carried out, the organization is looking for all the holes that a bad guy could somehow exploit to enter. A vulnerability analysis could be carried out by scanning systems and identifying missing patches, misconfigured settings, orphaned user accounts, programming code mistakes, etc. It is like looking for all the cracks in a wall that need to be filled. Threat modeling is a structured approach to identifying potential threats that could exploit vulnerabilities. A threat modeling approach looks at who would most likely want to attack us and how they could successfully do so. Instead of being heads-down and focusing on the cracks in one wall, we are looking outward and trying to figure out all the ways our structure could be attacked (through windows, doors, bombs, imposters, etc.).
If you ran a vulnerability scanner on one system that stores sensitive data, you would probably come up with a list of things that should be fixed (directory permission settings, registry entries, missing patches, unnecessary services). Once you fix all of these things, you might feel good about yourself and assume the sensitive data are now safe.
But life is not that simple. What about the following items?
• How is this system connected to other systems?
• Are the sensitive data encrypted while in storage and transit?
• Who has access to this system?
• Can someone physically steal this system?
• Can someone insert a USB device and extract the data?
• What are the vectors that malware can be installed on the system?
Figure 3-27 Pharming has been a common attack over the last couple of years.

• Is this system protected in case of a disaster?
• Can an authorized user make unauthorized changes to this system or data?
• Could an authorized user be social engineered into allowing unauthorized
access?
• Are there any access channels that are not auditable?
• Could any authentication steps be bypassed?
So to follow our analogy, you could fill all the cracks in the one wall you are focusing on, and the bad guy could just enter the facility through the door you forgot to lock. You have to think about all the ways an asset could be attacked if you are really going to secure it.
Threat modeling is a process of identifying the threats that could negatively affect an asset and the attack vectors an attacker would use to achieve his goals. Figure 3-28 illustrates a threat modeling tree that steps through issues that need to be addressed if an organization is going to store sensitive data in a cloud environment.
If you think about who would want to attack you, you can then brainstorm on how
they could potentially accomplish their goal and put countermeasures in place to
thwart these types of attacks.
We will cover threat modeling more in depth in Chapter 10 because it is becoming a common approach to architecting software, but it is important to realize that security should be analyzed from two points of view. A vulnerability analysis looks within, and a threat modeling approach looks outward. Threat modeling works at a higher abstraction level, and vulnerability analysis works at a lower, detail-oriented level. Both need to be carried out and their results merged so that all risks can be understood and properly addressed. As an analogy, if I focus on making my body healthy by eating right, working out, and keeping my cholesterol down, I am focusing internally, which is great. But if I don’t pick up my head and look around my environment, I could be standing right in the middle of a street and get hit by a bus. Then my unclogged arteries and low body fat ratio really do not matter.
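A threat tree such as the one in Figure 3-28 can be represented as a simple data structure so the leaf conditions, each an attack vector needing a countermeasure, can be enumerated. The node labels below are abbreviated from the chapter's cloud-storage example:

```python
# A two-level slice of the threat tree (labels abbreviated for illustration)
threat_tree = {
    "Unauthorized access to sensitive data": {
        "Unauthorized access": ["Accidentally compromised",
                                "Deliberately compromised"],
        "Data intelligible": ["Data not encrypted",
                              "Encryption key uncovered"],
    }
}

def leaf_conditions(tree):
    """Walk the tree and collect the leaf conditions; each leaf is an
    attack vector that needs to be addressed by a countermeasure."""
    leaves = []
    for branches in tree.values():
        for children in branches.values():
            leaves.extend(children)
    return leaves

print(leaf_conditions(threat_tree))
```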
Identity Theft
I’m glad someone stole my identity. I’m tired of being me.
Identity theft refers to a situation where someone obtains key pieces of personal information, such as a driver’s license number, bank account number, credentials, or Social Security number, and then uses that information to impersonate someone else. Typically, identity thieves use the personal information to obtain credit, merchandise, or services in the name of the victim, or to create false credentials for the thief. This can result in such things as ruining the victim’s credit rating, generating false criminal records, and issuing arrest warrants for the wrong individuals. Identity theft is categorized in two ways: true name and account takeover. True-name identity theft means the thief uses personal information to open new accounts. The thief might open a new credit card account, establish cellular phone service, or open a new checking account in order to obtain blank checks. Account-takeover identity theft means the imposter uses personal information to gain access to the person’s existing accounts. Typically, the thief

CISSP All-in-One Exam Guide
276
Threat 1: Unauthorized access to sensitive data
  1.1 Unauthorized access
    1.1.1 Accidentally compromised
      1.1.1.1 Misconfiguration
      1.1.1.2 Cloud infrastructure enforcement failure
      1.1.1.3 H/W repair takes data out of data center
    1.1.2 Deliberately compromised
      1.1.2.1 Inside job (business)
      1.1.2.2 Inside job (data center)
      1.1.2.3 Data in transmission observed
  1.2 Data intelligible
    1.2.1 Data not encrypted
    1.2.2 Encryption key uncovered
Figure 3-28 Threat modeling

Chapter 3: Access Control
277
will change the mailing address on an account and run up a huge bill before the person whose identity has been stolen realizes there is a problem. The Internet has made it easier for identity thieves to use the information they’ve stolen because transactions can be made without any personal interaction.
Summary
Access controls are security features that are usually considered the first line of defense in asset protection. They are used to dictate how subjects access objects, and their main goal is to protect the objects from unauthorized access. These controls can be administrative, physical, or technical in nature and should be applied in a layered approach, ensuring that an intruder would have to compromise more than one countermeasure to access critical assets.
Access control defines how users should be identified, authenticated, and authorized. These issues are carried out differently in different access control models and technologies, and it is up to the organization to determine which best fits its business and security needs.
Access control needs to be integrated into the core of operating systems through the use of DAC, MAC, and RBAC models. It needs to be embedded into applications, network devices, and protocols, and enforced in the physical world through the use of security zones, network segmentation, locked doors, and security guards. Security is all about keeping the bad guys out, and unfortunately there are many different types of “doorways” they can exploit to get access to our most critical assets.
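As a minimal illustration of how subjects are mediated when accessing objects, the following DAC-style sketch binds an ACL to each object and checks every request against it. All names and data here are invented for this example; real systems enforce this inside the operating system’s reference monitor, not in application code:

```python
# Minimal DAC-style sketch: an ACL is bound to an object and lists which
# subjects may use it. Data and function names are illustrative only.

acls = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    "audit.log":    {"carol": {"read"}},
}

def access_allowed(subject, obj, operation):
    """Mediate an access request: allowed only if the object's ACL grants it."""
    return operation in acls.get(obj, {}).get(subject, set())

def capability_table(subject):
    """Derive the subject-bound view: every object/operation this subject holds."""
    return {obj: ops for obj, entries in acls.items()
            for s, ops in entries.items() if s == subject}

assert access_allowed("alice", "payroll.xlsx", "write")
assert not access_allowed("bob", "payroll.xlsx", "write")   # read-only entry
assert not access_allowed("mallory", "audit.log", "read")   # no entry: default deny
```

Note how the same data can be read two ways: the ACL is bound to the object, while the derived capability table is bound to the subject, which is exactly the distinction the Quick Tips draw.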
Quick Tips
• Access is a flow of information between a subject and an object.
• A subject is an active entity that requests access to an object, which is a passive
entity.
• A subject can be a user, program, or process.
• Some security mechanisms that provide confidentiality are encryption, logical
and physical access control, transmission protocols, database views, and
controlled traffic flow.
• Identity management solutions include directories, web access management,
password management, legacy single sign-on, account management, and
profile update.
• Password synchronization reduces the complexity of keeping up with different
passwords for different systems.
• Self-service password reset reduces help-desk call volumes by allowing users to
reset their own passwords.

• Assisted password reset reduces the resolution process for password issues for
the help-desk department.
• IdM directories contain all resource information, users’ attributes, authorization
profiles, roles, and possibly access control policies so other IdM applications
have one centralized resource from which to gather this information.
• An automated workflow component is common in account management
products that provide IdM solutions.
• User provisioning refers to the creation, maintenance, and deactivation of
user objects and attributes, as they exist in one or more systems, directories,
or applications.
• The HR database is usually considered the authoritative source for user identities because that is where an identity is first created and properly maintained.
• There are three main access control models: discretionary, mandatory, and
role-based.
• Discretionary access control (DAC) enables data owners to dictate what
subjects have access to the files and resources they own.
• The mandatory access control (MAC) model uses a security label system.
Users have clearances, and resources have security labels that contain data
classifications. MAC systems compare these two attributes to determine access
control capabilities.
• Role-based access control is based on the user’s role and responsibilities
(tasks) within the company.
• Three main types of restricted interfaces exist: menus and shells, database views, and physically constrained interfaces.
• Access control lists are bound to objects and indicate what subjects can
use them.
• A capability table is bound to a subject and lists what objects it can access.
• Access control can be administered in two main ways: centralized and
decentralized.
• Some examples of centralized administration access control technologies are
RADIUS, TACACS+, and Diameter.
• A decentralized administration example is a peer-to-peer working group.
• Examples of administrative controls are a security policy, personnel controls,
supervisory structure, security-awareness training, and testing.
• Examples of physical controls are network segregation, perimeter security, computer controls, work area separation, and cabling.

• Examples of technical controls are system access, network architecture,
network access, encryption and protocols, and auditing.
• For a subject to be able to access a resource, it must be identified,
authenticated, and authorized, and should be held accountable for its actions.
• Authentication can be accomplished by biometrics, a password, a passphrase,
a cognitive password, a one-time password, or a token.
• A Type I error in biometrics means the system rejected an authorized
individual, and a Type II error means an imposter was authenticated.
• A memory card cannot process information, but a smart card can through the
use of integrated circuits and processors.
• Least-privilege and need-to-know principles limit users’ rights to only what is
needed to perform tasks of their job.
• Single sign-on capabilities can be accomplished through Kerberos, SESAME,
domains, and thin clients.
• The Kerberos user receives a ticket granting ticket (TGT), which allows him to
request access to resources through the ticket granting service (TGS). The TGS
generates a new ticket with the session keys.
• Types of access control attacks include denial of service, spoofing, dictionary,
brute force, and war dialing.
• Keystroke monitoring is a type of auditing that tracks each keystroke made by
a user.
• Object reuse can unintentionally disclose information by assigning media to a
subject before it is properly erased.
• Just removing pointers to files (deleting file, formatting hard drive) is not
always enough protection for proper object reuse.
• Information can be obtained via electrical signals in airwaves. The ways to
combat this type of intrusion are TEMPEST, white noise, and control zones.
• User authentication is accomplished by what someone knows, is, or has.
• One-time password-generating token devices can use synchronous (time,
event) or asynchronous (challenge-based) methods.
• Strong authentication requires two of the three user authentication attributes
(what someone knows, is, or has).
• The following are weaknesses of Kerberos: the KDC is a single point of failure;
it is susceptible to password guessing; session and secret keys are locally stored;
KDC needs to always be available; and there must be management of secret keys.

• Phishing is a type of social engineering with the goal of obtaining personal
information, credentials, credit card numbers, or financial data.
• A race condition is possible when two or more processes use a shared resource and the access steps could take place out of sequence.
• Mutual authentication is when two entities must authenticate to each other
before sending data back and forth. Also referred to as two-way authentication.
• A directory service is a software component that stores, organizes, and
provides access to resources, which are listed in a directory (listing) of
resources. Individual resources are assigned names within a namespace.
• A cookie is data that are held permanently on a hard drive in the format of
a text file or held temporarily in memory. It can be used to store browsing
habits, authentication data, or protocol state information.
• A federated identity is a portable identity, and its associated entitlements, that
can be used across business boundaries without the need to synchronize or
consolidate directory information.
• Extensible Markup Language (XML) is a set of rules for encoding documents
in machine-readable form to allow for interoperability between various web-
based technologies.
• Service Provisioning Markup Language (SPML) is an XML-based framework,
being developed by OASIS, for exchanging user, resource, and service
provisioning information between cooperating organizations.
• eXtensible Access Control Markup Language (XACML) is a declarative access control policy language implemented in XML, along with a processing model that describes how to interpret security policies.
• A replay attack is a form of network attack in which a valid data transmission is maliciously or fraudulently repeated with the goal of obtaining unauthorized access.
• A clipping level is a threshold value. Once a threshold value is passed, the activity is considered to be an event that is logged, investigated, or both.
• A rainbow table is a set of precomputed hash values that represent password combinations. These are used in password attack processes and usually produce results more quickly than dictionary or brute force attacks.
• Cognitive passwords are fact- or opinion-based information used to verify an
individual’s identity.
• Smart cards can require physical interaction with a reader (contact) or no
physical interaction with the reader (contactless architectures). Two contactless
architectures are combi (one chip) and hybrid (two chips).

• A side channel attack is carried out by gathering data pertaining to how
something works and using that data to attack it or crack it, as in differential
power analysis or electromagnetic analysis.
• Authorization creep takes place when a user accumulates access rights and permissions over time beyond what the current job requires.
• SESAME is a single sign-on technology developed to address issues in
Kerberos. It is based upon public key cryptography (asymmetric) and uses
privileged attribute servers and certificates.
• Security information and event management implements data mining and
analysis functionality to be carried out on centralized logs for situational
awareness capabilities.
• Intrusion detection systems are either host or network based and provide
behavioral (statistical) or signature (knowledge) types of functionality.
• Phishing is a type of social engineering attack. If it is crafted for a specific
individual, it is called spear-phishing. If a DNS server is poisoned and points
users to a malicious website, this is referred to as pharming.
• A web portal is commonly made up of portlets, which are pluggable user
interface software components that present information and services from
other systems.
• The Service Provisioning Markup Language (SPML) allows for the automation
of user management (account creation, amendments, revocation) and access
entitlement configuration related to electronically published services across
multiple provisioning systems.
• The Security Assertion Markup Language (SAML) allows for the exchange of
authentication and authorization data to be shared between security domains.
• The Simple Object Access Protocol (SOAP) is a protocol specification for
exchanging structured information in the implementation of web services and
networked environments.
• Service oriented architecture (SOA) environments allow for a suite of
interoperable services to be used within multiple, separate systems from
several business domains.
• Radio-frequency identification (RFID) is a technology that provides data
communication through the use of radio waves.
• Threat modeling identifies potential threats and attack vectors. Vulnerability
analysis identifies weaknesses and lack of countermeasures.
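To make the rainbow table tip above concrete, the following sketch contrasts a dictionary attack (hashing each candidate at crack time) with a precomputed lookup table. A true rainbow table saves space with hash chains and reduction functions; this simplified version precomputes a full table, which shows the same core trade of precomputation time for lookup speed. The candidate list and helper names are invented for illustration:

```python
# Simplified contrast of a dictionary attack vs. a precomputed hash table.
# A real rainbow table uses hash chains/reduction functions to save space.
import hashlib

def h(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

candidates = ["password", "letmein", "qwerty", "s3cret"]

def dictionary_attack(target_hash):
    """Hash every candidate at crack time until one matches."""
    for pw in candidates:
        if h(pw) == target_hash:
            return pw
    return None

# Precomputed table: hashing is done once, ahead of time; cracking is a lookup.
table = {h(pw): pw for pw in candidates}

stolen = h("letmein")
assert dictionary_attack(stolen) == "letmein"
assert table[stolen] == "letmein"
```

Salting each password before hashing defeats this kind of precomputation, because the attacker would need a separate table for every salt value.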

Questions
Please remember that these questions are formatted and asked in a certain way for a reason. Remember that the CISSP exam is asking questions at a conceptual level. Questions may not always have the perfect answer, and the candidate is advised against always looking for the perfect answer. Instead, the candidate should look for the best answer in the list.
1. Which of the following statements correctly describes biometric methods?
A. They are the least expensive and provide the most protection.
B. They are the most expensive and provide the least protection.
C. They are the least expensive and provide the least protection.
D. They are the most expensive and provide the most protection.
2. Which of the following statements correctly describes passwords?
A. They are the least expensive and most secure.
B. They are the most expensive and least secure.
C. They are the least expensive and least secure.
D. They are the most expensive and most secure.
3. How is a challenge/response protocol utilized with token device
implementations?
A. This protocol is not used; cryptography is used.
B. An authentication service generates a challenge, and the smart token
generates a response based on the challenge.
C. The token challenges the user for a username and password.
D. The token challenges the user’s password against a database of stored
credentials.
4. Which access control method is considered user-directed?
A. Nondiscretionary
B. Mandatory
C. Identity-based
D. Discretionary
5. Which item is not part of a Kerberos authentication implementation?
A. Message authentication code
B. Ticket granting service
C. Authentication service
D. Users, programs, and services

6. If a company has a high turnover rate, which access control structure is best?
A. Role-based
B. Decentralized
C. Rule-based
D. Discretionary
7. The process of mutual authentication involves _______________.
A. A user authenticating to a system and the system authenticating to the user
B. A user authenticating to two systems at the same time
C. A user authenticating to a server and then to a process
D. A user authenticating, receiving a ticket, and then authenticating to a service
8. In discretionary access control security, who has delegation authority to grant
access to data?
A. User
B. Security officer
C. Security policy
D. Owner
9. Which could be considered a single point of failure within a single sign-on
implementation?
A. Authentication server
B. User’s workstation
C. Logon credentials
D. RADIUS
10. What role does biometrics play in access control?
A. Authorization
B. Authenticity
C. Authentication
D. Accountability
11. What determines if an organization is going to operate under a discretionary,
mandatory, or nondiscretionary access control model?
A. Administrator
B. Security policy
C. Culture
D. Security levels

12. Which of the following best describes what role-based access control offers
companies in reducing administrative burdens?
A. It allows entities closer to the resources to make decisions about who can
and cannot access resources.
B. It provides a centralized approach for access control, which frees up
department managers.
C. User membership in roles can be easily revoked and new ones established
as job assignments dictate.
D. It enforces enterprise-wide security policies, standards, and guidelines.
13. Which of the following is the best description of directories that are used in
identity management technology?
A. Most are hierarchical and follow the X.500 standard.
B. Most have a flat architecture and follow the X.400 standard.
C. Most have moved away from LDAP.
D. Many use LDAP.
14. Which of the following is not part of user provisioning?
A. Creation and deactivation of user accounts
B. Business process implementation
C. Maintenance and deactivation of user objects and attributes
D. Delegating user administration
15. What is the technology that allows a user to remember just one password?
A. Password generation
B. Password dictionaries
C. Password rainbow tables
D. Password synchronization
16. Which of the following is not considered an anomaly-based intrusion
protection system?
A. Statistical anomaly–based
B. Protocol anomaly–based
C. Temporal anomaly–based
D. Traffic anomaly–based
17. The next graphic covers which of the following:

A. Crossover error rate
B. Identity verification
C. Authorization rates
D. Authentication error rates
18. The diagram shown next explains which of the following concepts:
A. Crossover error rate.
B. Type III errors.
C. FAR equals FRR in systems that have a high crossover error rate.
D. Biometrics is a high acceptance technology.

19. The graphic shown here illustrates how which of the following works:
A. Rainbow tables
B. Dictionary attack
C. One-time password
D. Strong authentication
20. Which of the following has the correct definition mapping?
i. Brute force attacks: Performed with tools that cycle through many possible character, number, and symbol combinations to uncover a password.
ii. Dictionary attacks: Files of thousands of words are compared to the user’s password until a match is found.
iii. Social engineering: An attacker falsely convinces an individual that she has the necessary authorization to access specific resources.
iv. Rainbow table: An attacker uses a table that contains all possible passwords already in a hash format.
A. i, ii
B. i, ii, iv
C. i, ii, iii, iv
D. i, ii, iii
21. George is responsible for setting and tuning the thresholds for his company’s
behavior-based IDS. Which of the following outlines the possibilities of not
doing this activity properly?

A. If the threshold is set too low, nonintrusive activities are considered attacks
(false positives). If the threshold is set too high, then malicious activities
are not identified (false negatives).
B. If the threshold is set too low, nonintrusive activities are considered attacks
(false negatives). If the threshold is set too high, then malicious activities
are not identified (false positives).
C. If the threshold is set too high, nonintrusive activities are considered
attacks (false positives). If the threshold is set too low, then malicious
activities are not identified (false negatives).
D. If the threshold is set too high, nonintrusive activities are considered
attacks (false positives). If the threshold is set too high, then malicious
activities are not identified (false negatives).
Use the following scenario to answer Questions 22–24. Tom is a new security manager for a retail company, which currently has an identity management (IdM) system in place. The data within the various identity stores update more quickly than the current IdM software can keep up with, so some access decisions are made based upon obsolete information. While the IdM currently provides centralized access control of internal network assets, it is not tied into the web-based access control components that are embedded within the company’s partner portals. Tom also notices that help-desk technicians are spending too much time resetting passwords for internal employees.
22. Which of the following changes would be best for Tom’s team to implement?
A. Move from namespaces to distinguished names.
B. Move from meta-directories to virtual directories.
C. Move from RADIUS to TACACS+.
D. Move from a centralized to a decentralized control model.
23. Which of the following components should Tom make sure his team puts
into place?
A. Single sign-on module
B. LDAP directory service synchronization
C. Web access management
D. X.500 database
24. Tom has been told that he has to reduce staff from the help-desk team.
Which of the following technologies can help with the company’s help-desk
budgetary issues?
A. Self-service password support
B. RADIUS implementation
C. Reduction of authoritative IdM sources
D. Implement a role-based access control model

Use the following scenario to answer Questions 25–27. Lenny is a new security manager for
a retail company that is expanding its functionality to its partners and customers. The
company’s CEO wants to allow its partners’ customers to be able to purchase items
through their web stores as easily as possible. The CEO also wants the company’s part-
ners to be able to manage inventory across companies more easily. The CEO wants to
be able to understand the network traffic and activities in a holistic manner, and he
wants to know from Lenny what type of technology should be put into place to allow
for a more proactive approach to stopping malicious traffic if it enters the network. The
company is a high-profile entity constantly dealing with zero-day attacks.
25. Which of the following is the best identity management technology that Lenny
should consider implementing to accomplish some of the company’s needs?
A. LDAP directories for authoritative sources
B. Digital identity provisioning
C. Active Directory
D. Federated identity
26. Lenny has a meeting with the internal software developers who are responsible
for implementing the necessary functionality within the web-based system.
Which of the following best describes the two items that Lenny needs to be
prepared to discuss with this team?
A. Service Provisioning Markup Language and the eXtensible Access Control
Markup Language
B. Standard Generalized Markup Language and the Generalized Markup
Language
C. Extensible Markup Language and the HyperText Markup Language
D. Service Provisioning Markup Language and the Generalized Markup
Language
27. Pertaining to the CEO’s security concerns, what should Lenny suggest the
company put into place?
A. Security event management software, intrusion prevention system, and
behavior-based intrusion detection
B. Security information and event management software, intrusion detection
system, and signature-based protection
C. Intrusion prevention system, security event management software, and
malware protection
D. Intrusion prevention system, security event management software, and war
dialing protection
Use the following scenario to answer Questions 28–29. Robbie is the security administrator
of a company that needs to extend its remote access functionality. Employees travel around the world, but still need to be able to gain access to corporate assets such as databases, servers, and network-based devices. Also, while the company has had a VoIP

telephony solution in place for two years, it has not been integrated into a centralized access control solution. Currently the network administrators have to maintain access control separately for internal resources, external entities, and VoIP end systems. Robbie has also been asked to look into some suspicious e-mails that the CIO’s secretary has been receiving, and her boss has asked her to remove some old modems that are no longer being used for remote dial-in purposes.
28. Which of the following is the best remote access technology for this situation?
A. RADIUS
B. TACACS+
C. Diameter
D. Kerberos
29. What are the two main security concerns Robbie is most likely being asked to
identify and mitigate?
A. Social engineering and spear-phishing
B. War dialing and pharming
C. Spear-phishing and war dialing
D. Pharming and spear-phishing
Use the following scenario to answer Questions 30–32. Tanya is working with the company’s
internal software development team. Before a user of an application can access files lo-
cated on the company’s centralized server, the user must present a valid one-time pass-
word, which is generated through a challenge-response mechanism. The company needs
to tighten access control for these files and reduce the number of users who can access
each and every file. The company is looking to Tanya and her team for solutions to better
protect the data that have been classified and deemed critical to the company’s missions.
Tanya has also been asked to implement a single sign-on technology for all internal us-
ers, but she does not have the budget to implement a public key infrastructure.
30. Which of the following best describes what is currently in place?
A. Capability-based access system
B. Synchronous tokens that generate one-time passwords
C. RADIUS
D. Kerberos
31. Which of the following is one of the easiest and best items Tanya can look
into for proper data protection?
A. Implementation of mandatory access control
B. Implementation of access control lists
C. Implementation of digital signatures
D. Implementation of multilevel security

32. Which of the following is the best single sign-on technology for this situation?
A. SESAME
B. Kerberos
C. RADIUS
D. TACACS+
Use the following scenario to answer Questions 33–35. Harry is overseeing a team that has
to integrate various business services provided by different company departments into
one web portal for both internal employees and external partners. His company has a
diverse and heterogeneous environment with different types of systems providing cus-
tomer relationship management, inventory control, e-mail, and help-desk ticketing ca-
pabilities. His team needs to allow different users access to these different services in a
secure manner.
33. Which of the following best describes the type of environment Harry’s team
needs to set up?
A. RADIUS
B. Service oriented architecture
C. Public key infrastructure
D. Web services
34. Which of the following best describes the types of languages and/or protocols
that Harry needs to ensure are implemented?
A. Security Assertion Markup Language, Extensible Access Control Markup
Language, Service Provisioning Markup Language
B. Service Provisioning Markup Language, Simple Object Access Protocol, Extensible Access Control Markup Language
C. Extensible Access Control Markup Language, Security Assertion Markup
Language, Simple Object Access Protocol
D. Service Provisioning Markup Language, Security Association Markup
Language
35. The company’s partners need to integrate compatible authentication
functionality into their web portals to allow for interoperability across the
different company boundaries. Which of the following will deal with this
issue?
A. Service Provisioning Markup Language
B. Simple Object Access Protocol
C. Extensible Access Control Markup Language
D. Security Assertion Markup Language

Answers
1. D. Compared with the other available authentication mechanisms, biometric
methods provide the highest level of protection and are the most expensive.
2. C. Passwords provide the least amount of protection, but are the cheapest
because they do not require extra readers (as with smart cards and memory
cards), do not require devices (as do biometrics), and do not require a lot of
overhead in processing (as in cryptography). Passwords are the most common
type of authentication method used today.
3. B. An asynchronous token device is based on challenge/response mechanisms.
The authentication service sends the user a challenge value, which the user
enters into the token. The token encrypts or hashes this value, and the user
uses this as her one-time password.
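The challenge/response exchange described here can be sketched as follows. Real token devices use vendor-specific algorithms and key provisioning; in this sketch, HMAC-SHA256 and the shared key value are stand-ins chosen purely for illustration:

```python
# Hedged sketch of an asynchronous (challenge/response) token exchange.
# Algorithm choice (HMAC-SHA256) and key value are illustrative stand-ins.
import hmac, hashlib, secrets

SHARED_KEY = b"provisioned-into-token-and-server"  # illustrative value

def token_response(challenge: bytes) -> str:
    """What the token computes from the server's challenge (the one-time password)."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge: bytes, response: str) -> bool:
    """Server recomputes the expected response and compares in constant time."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)      # authentication service issues a fresh nonce
otp = token_response(challenge)          # user relays the challenge to the token
assert server_verify(challenge, otp)     # one-time password accepted
```

Because each challenge is a fresh nonce, a captured response is useless against a later challenge, which is what makes the password "one-time."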
4. D. The DAC model allows users, or data owners, the discretion of letting other
users access their resources. DAC is implemented by ACLs, which the data
owner can configure.
5. A. Message authentication code (MAC) is a cryptographic function and is
not a key component of Kerberos. Kerberos is made up of a KDC, a realm
of principals (users, services, applications, and devices), an authentication
service, tickets, and a ticket granting service.
6. A. It is easier on the administrator if she only has to create one role, assign
all of the necessary rights and permissions to that role, and plug a user into
that role when needed. Otherwise, she would need to assign and extract
permissions and rights on all systems as each individual came and left the
company.
7. A. Mutual authentication means it is happening in both directions. Instead
of just the user having to authenticate to the server, the server also must
authenticate to the user.
8. D. This question may seem a little confusing if you were stuck between user
and owner. Only the data owner can decide who can access the resources
she owns. She may be a user and she may not. A user is not necessarily the
owner of the resource. Only the actual owner of the resource can dictate what
subjects can actually access the resource.
9. A. In a single sign-on technology, all users are authenticating to one source. If
that source goes down, authentication requests cannot be processed.
10. C. Biometrics is a technology that validates an individual’s identity by reading
a physical attribute. In some cases, biometrics can be used for identification,
but that was not listed as an answer choice.
11. B. The security policy sets the tone for the whole security program. It dictates
the level of risk that management and the company are willing to accept. This
in turn dictates the type of controls and mechanisms to put in place to ensure
this level of risk is not exceeded.

12. C. An administrator does not need to revoke and reassign permissions to
individual users as they change jobs. Instead, the administrator assigns
permissions and rights to a role, and users are plugged into those roles.
13. A. Most enterprises have some type of directory that contains information
pertaining to the company’s network resources and users. Most directories
follow a hierarchical database format, based on the X.500 standard, and a
type of protocol, as in Lightweight Directory Access Protocol (LDAP), that
allows subjects and applications to interact with the directory. Applications
can request information about a particular user by making an LDAP request
to the directory, and users can request information about a specific resource
by using a similar request.
14. B. User provisioning refers to the creation, maintenance, and deactivation of
user objects and attributes as they exist in one or more systems, directories,
or applications, in response to business processes. User provisioning software
may include one or more of the following components: change propagation,
self-service workflow, consolidated user administration, delegated user
administration, and federated change control. User objects may represent
employees, contractors, vendors, partners, customers, or other recipients of
a service. Services may include electronic mail, access to a database, access
to a file server or mainframe, and so on.
15. D. Password synchronization technologies can allow a user to maintain just
one password across multiple systems. The product will synchronize the
password to other systems and applications, which happens transparently
to the user.
16. C. An anomaly-based IDS is a behavioral-based system that learns the “normal” activities of an environment. The three types are listed next:
• Statistical anomaly–based: Creates a profile of “normal” and compares activities to this profile
• Protocol anomaly–based: Identifies protocols used outside of their common bounds
• Traffic anomaly–based: Identifies unusual activity in network traffic
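A toy version of the statistical anomaly-based approach might look like the following, where a profile of “normal” is learned from historical counts and deviations beyond a tunable threshold are flagged. The data and the three-sigma cutoff are invented for illustration; real products build far richer profiles:

```python
# Toy statistical anomaly-based detector: learn a profile of "normal"
# activity, then flag observations far from that profile.
# Baseline data and the k=3 cutoff are invented for illustration.
import statistics

baseline = [102, 98, 110, 95, 105, 99, 101, 97]   # e.g. logins/hour, learned
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation, k=3.0):
    """Flag anything more than k standard deviations from the learned mean."""
    return abs(observation - mean) > k * stdev

assert not is_anomalous(104)   # within normal bounds
assert is_anomalous(450)       # burst of activity: flagged
```

Tuning `k` is exactly the threshold-setting exercise from Question 21: too low and normal variation raises false positives, too high and real attacks slip by as false negatives.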
17. B. These steps are taken to convert the biometric input for identity verification:
i. A software application identifies specific points of data as match points.
ii. An algorithm is used to process the match points and translate that
information into a numeric value.
iii. Authentication is approved or denied when the database value is
compared with the end user input entered into the scanner.
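The three steps above can be sketched as follows. The match-point extraction and the translation algorithm are invented stand-ins for a vendor's proprietary logic; only the overall flow (extract points, derive a numeric value, compare against the enrolled value) mirrors the description:

```python
def template_value(match_points):
    """Translate extracted match points (x, y, angle tuples) into a single
    numeric value -- a stand-in for the vendor's proprietary algorithm."""
    return sum(x * 31 + y * 17 + angle for x, y, angle in match_points)

def authenticate(stored_value, scanned_points, tolerance=50):
    """Approve or deny by comparing the enrolled value against the value
    computed from the live scan, allowing for sensor noise."""
    return abs(stored_value - template_value(scanned_points)) <= tolerance

enrolled = template_value([(10, 20, 45), (33, 7, 90), (5, 50, 12)])
# Live scan with slight sensor noise on one point:
print(authenticate(enrolled, [(10, 20, 45), (33, 7, 91), (5, 50, 12)]))  # True
# Impostor with entirely different match points:
print(authenticate(enrolled, [(1, 2, 3), (4, 5, 6), (7, 8, 9)]))         # False
```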
18. A. This rating is stated as a percentage and represents the point at which the
false rejection rate equals the false acceptance rate. This rating is the most
important measurement when determining a biometric system’s accuracy.

Chapter 3: Access Control
293
• False Reject Rate (FRR) (Type I error) Rejects an authorized individual
• False Acceptance Rate (FAR) (Type II error) Accepts an impostor
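The crossover (equal error) rate can be approximated by finding where the FRR and FAR curves meet as sensitivity varies. The rate values below are made up for illustration:

```python
def equal_error_rate(sensitivities, frr, far):
    """Find the sensitivity setting where FRR and FAR are closest --
    an approximation of the crossover (equal error) rate."""
    best = min(range(len(sensitivities)), key=lambda i: abs(frr[i] - far[i]))
    return sensitivities[best], (frr[best] + far[best]) / 2

# As sensitivity rises, false rejects (Type I) rise and false accepts (Type II) fall.
sens = [1, 2, 3, 4, 5]
frr  = [0.5, 1.0, 2.0, 4.0, 8.0]   # hypothetical percentages
far  = [9.0, 5.0, 2.0, 1.0, 0.5]   # hypothetical percentages

setting, eer = equal_error_rate(sens, frr, far)
print(setting, eer)   # 3 2.0 -- the curves cross at 2%, so the EER is 2
```

The lower this crossover percentage, the more accurate the biometric system.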
19. C. Different types of one-time passwords are used for authentication. This
graphic illustrates a synchronous token device, which synchronizes with the
authentication service by using time or a counter as the core piece of the
authentication process.
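A time-based synchronous token can be sketched as both sides independently deriving the same value from a shared secret and the current time step. This simplified version loosely follows the TOTP approach (RFC 6238) but is not a compliant implementation:

```python
import hashlib
import hmac
import struct
import time

def synchronous_otp(secret, step_seconds=30, at=None):
    """Time-synchronized one-time password sketch: the token device and the
    authentication service share `secret` and a clock, so both derive the
    same 6-digit value for the current time step."""
    counter = int((time.time() if at is None else at) // step_seconds)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    return "%06d" % (int.from_bytes(digest[:4], "big") % 1_000_000)

secret = b"shared-seed"
t = 1_700_000_000
# Device and server compute independently yet agree within the same time step:
print(synchronous_otp(secret, at=t) == synchronous_otp(secret, at=t + 5))  # True
# A later time step yields a fresh counter, and (almost certainly) a new password:
print(synchronous_otp(secret, at=t), synchronous_otp(secret, at=t + 60))
```

A counter-based token works the same way, except both sides increment a shared counter per use instead of deriving it from the clock.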
20. C. The list contains all the correct term-to-definition mappings.
21. C. If the threshold is set too low, nonintrusive activities are considered
attacks (false positives). If the threshold is set too high, then malicious
activities are not identified (false negatives).
22. B. A meta-directory within an IDM physically contains the identity
information within an identity store. It allows identity information to be
pulled from various locations and be stored in one local system (identity
store). The data within the identity store are updated through a replication
process, which may take place weekly, daily, or hourly depending upon
configuration. Virtual directories use pointers to where the identity data reside
on the original system; thus, no replication processes are necessary. Virtual
directories usually provide the most up-to-date identity information since
they point to the original source of the data.
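The difference can be sketched as a copied snapshot versus a pointer to the live source. The classes and data below are illustrative, not an actual IDM product API:

```python
class MetaDirectory:
    """Physically stores identity data pulled from source systems;
    accurate only as of the last replication run."""
    def __init__(self, sources):
        self.sources = sources
        self.store = {}                      # the local identity store

    def replicate(self):
        for source in self.sources:
            for user, attrs in source.items():
                self.store[user] = dict(attrs)   # snapshot, not a live reference

    def lookup(self, user):
        return self.store.get(user)

class VirtualDirectory:
    """Stores only pointers to the authoritative systems, so lookups
    always reflect the current source data -- no replication needed."""
    def __init__(self, sources):
        self.sources = sources

    def lookup(self, user):
        for source in self.sources:
            if user in source:
                return source[user]

hr_system = {"kathy": {"dept": "finance"}}

meta = MetaDirectory([hr_system])
meta.replicate()
virtual = VirtualDirectory([hr_system])

hr_system["kathy"]["dept"] = "audit"     # source data changes after replication
print(meta.lookup("kathy")["dept"])      # finance -- stale until the next replication cycle
print(virtual.lookup("kathy")["dept"])   # audit -- always points at the original source
```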
23. C. Web access management (WAM) is a component of most IDM products
that allows for identity management of web-based activities to be integrated
and managed centrally.
24. A. If help-desk staff is spending too much time with password resetting, then
a technology should be implemented to reduce the amount of time paid
staff is spending on this task. The more tasks that can be automated through
technology, the less of the budget that has to be spent on staff. The following
are password management functionalities that are included in most IDM
products:
•Password Synchronization Reduces the complexity of keeping up with
different passwords for different systems.
•Self-Service Password Reset Reduces help-desk call volumes by allowing
users to reset their own passwords.
•Assisted Password Reset Reduces the resolution process for password
issues for the help desk. This may include authentication with other types
of authentication mechanisms (biometrics, tokens).
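A self-service reset flow can be sketched as follows; the challenge-question scheme and hashing details are illustrative assumptions, not a specific product's design:

```python
import hashlib

class SelfServiceReset:
    """Self-service password reset sketch: the user answers a pre-registered
    challenge question instead of calling the help desk."""
    def __init__(self):
        self.answers = {}    # user -> hashed challenge answer
        self.passwords = {}  # user -> hashed password

    @staticmethod
    def _h(value):
        # Case-insensitive hash so "Rex" and "rex" match; illustrative only.
        return hashlib.sha256(value.lower().encode()).hexdigest()

    def enroll(self, user, answer, password):
        self.answers[user] = self._h(answer)
        self.passwords[user] = self._h(password)

    def reset(self, user, answer, new_password):
        if self.answers.get(user) != self._h(answer):
            return False                     # fall back to assisted reset
        self.passwords[user] = self._h(new_password)
        return True

portal = SelfServiceReset()
portal.enroll("dave", "Rex", "OldPass1")
print(portal.reset("dave", "rex", "NewPass2"))   # True -- no help-desk call needed
print(portal.reset("dave", "fido", "Hacked!"))   # False -- wrong challenge answer
```

Every successful self-service reset is one less paid help-desk interaction, which is exactly the budget argument the answer makes.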

CISSP All-in-One Exam Guide
294
25. D. Federated identity management allows the company and its partners to share
customer authentication information. When a customer authenticates to a
partner web site, that authentication information can be passed to the retail
company, so when the customer visits the retail company’s web site, she has
less profile information to submit, and the authentication steps she has to
go through during the purchase process could potentially be reduced. If the
companies have an established trust model and use the same or similar
federated identity management software and settings, this type of structure
and functionality is possible.
26. A. The Service Provisioning Markup Language (SPML) allows company
interfaces to pass service requests, and the receiving company provisions
(allows) access to these services. Both the sending and receiving companies
need to follow the XML standard, which allows this type of
interoperability to take place. When using the eXtensible Access Control
Markup Language (XACML), application security policies can be shared
with other applications to ensure that both are following the same security
rules. The developers need to integrate both of these languages to allow
their partners’ employees to interact with their inventory systems without
having to conduct a second authentication step. The use of these languages can
reduce the complexity of inventory control between the different companies.
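The provisioning exchange can be illustrated by constructing a simplified SPML-style XML request. The element names below are illustrative only and do not match the exact SPML schema; the point is that both parties exchange standard XML rather than proprietary formats:

```python
import xml.etree.ElementTree as ET

def provisioning_request(user_id, service):
    """Build a simplified SPML-style provisioning request. Element names are
    hypothetical stand-ins for the real SPML schema."""
    req = ET.Element("addRequest")
    subject = ET.SubElement(req, "subject")
    subject.set("id", user_id)
    ET.SubElement(req, "service").text = service
    return ET.tostring(req, encoding="unicode")

# The partner's interface sends this; the receiving company parses the
# standard XML and provisions access to the requested service.
xml_doc = provisioning_request("partner-emp-042", "inventory-system")
print(xml_doc)
```

Because both ends speak the same XML dialect, the receiving company can parse the request with any standard XML library and grant access without a proprietary integration.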
27. A. Security event management software allows for network traffic to be viewed
holistically by gathering log data centrally and analyzing them. The intrusion
prevention system allows for proactive measures to be put into place to help
in stopping malicious traffic from entering the network. Behavior-based
intrusion detection can identify new types of attack (zero day) compared to
signature-based intrusion detection.
28. C. The Diameter protocol extends the RADIUS protocol to allow for various
types of authentication to take place with a variety of different technologies
(PPP, VoIP, Ethernet, etc.). It has extensive flexibility and allows for the
centralized administration of access control.
29. C. Spear-phishing is a targeted social engineering attack, which is what the
CIO’s secretary is most likely experiencing. War dialing is a brute force attack
against devices that use phone numbers, such as modems. If the modems can be
removed, the risk of war dialing attacks decreases.
30. A. A capability-based access control system means that the subject (user)
has to present something, which outlines what it can access. The item can
be a ticket, token, or key. A capability is tied to the subject for access control
purposes. A synchronous token is not being used, because the scenario
specifically states that a challenge/response mechanism is being used, which
indicates an asynchronous token.

31. B. Systems that provide mandatory access control (MAC) and multilevel
security are very specialized, require extensive administration, are expensive,
and reduce user functionality. Implementing these types of systems is not
the easiest approach out of the list. Since there is no budget for a PKI, digital
signatures cannot be used because they require a PKI. In most environments
access control lists (ACLs) are in place and can be modified to provide tighter
access control. ACLs are bound to objects and outline what operations specific
subjects can carry out on them.
32. B. SESAME is a single sign-on technology that is based upon public key
cryptography; thus, it requires a PKI. Kerberos is based upon symmetric
cryptography; thus, it does not need a PKI. RADIUS and TACACS+ are remote
centralized access control protocols.
33. B. A service oriented architecture will allow Harry’s team to create a centralized
web portal and offer the various services needed by internal and external
entities.
34. C. The most appropriate languages and protocols for the purpose laid out
in the scenario are Extensible Access Control Markup Language, Security
Assertion Markup Language, and Simple Object Access Protocol. Harry’s group
is not necessarily overseeing account provisioning, so the Service Provisioning
Markup Language is not necessary, and there is no language called “Security
Association Markup Language.”
35. D. Security Assertion Markup Language allows authentication and
authorization data to be shared between security domains. It is one of the
most used approaches to allow for single sign-on capabilities within a web-
based environment.

CHAPTER 4
Security Architecture
and Design
This chapter presents the following:
• System architecture
• Computer hardware architecture
• Operating system architecture
• System security architecture
• Trusted computing base and security mechanisms
• Information security software models
• Assurance evaluation criteria and ratings
• Certification and accreditation processes
Software flaws account for a majority of the compromises organizations around the
world experience. The common saying in the security field is that a network has a “hard,
crunchy outer shell and a soft, chewy middle,” which sums it up pretty well. The secu-
rity industry has made amazing strides in its advancement of perimeter security devices
and technology (firewalls, intrusion detection systems [IDS], intrusion prevention sys-
tems [IPS], etc.), which provide the hard, crunchy outer shell. But the software that
carries out our critical processing still has a lot of vulnerabilities that are exploited on a
daily basis.
While software vendors do hold a lot of responsibility for the protection their prod-
ucts provide, nothing is as easy and as straightforward as it might seem. The computing
industry has been demanding extensive functionality, interoperability, portability, and
extensibility, and the vendors have been scrambling to provide these aspects of their
software and hardware products to their customers. It has only been in the last ten years
or so that the industry has been interested in or aware of the security requirements that
these products should also provide. Unfortunately, it is very difficult to develop software
that meets all of these demands: extensive functionality, interoperability, portability,
extensibility, and security. As the level of complexity within software increases, the abil-
ity to implement sound security decreases. While software developers can implement
more secure coding practices, one of the most critical aspects of secure software pertains
to its architecture. Security has to be baked into the fundamental core of the operating
systems that provide processing environments. Today’s operating systems were not ar-
chitected with security as their main focus. It is very difficult to retrofit security into
operating systems that are already deployed throughout the world and that contain
millions of lines of code. And it is almost impossible to rearchitect them in a manner
that allows them to continue to provide all of their current functionality and for it all
to take place securely.
The industry does have “trusted” systems, which provide a higher level of security
than the average Windows, Linux, UNIX, and Macintosh systems. These systems have
been built from the “ground up” with security as one of their main goals, so their archi-
tectures are different from the systems we are more used to. The trusted systems are not
considered general-purpose systems: they have specific functions, are expensive, are
more difficult to manage, and are commonly used in government and military environ-
ments. The hope is that we can find some type of middle ground, where our software
can still provide all the bells and whistles we are used to and do it securely. This can
only be done if more people understand how to build software securely from the begin-
ning, meaning at the architecture stage. The approach of integrating security at the ar-
chitecture level would get us much closer to having secure environments, compared to
the “patch and pray” approach many organizations deal with today.
Computer Security
Computer security can be a slippery term because it means different things to different
people. Many aspects of a system can be secured, and security can happen at various
levels and to varying degrees. As stated in previous chapters, information security con-
sists of the following main attributes:
•Availability Prevention of loss of, or loss of access to, data and resources
•Integrity Prevention of unauthorized modification of data and resources
•Confidentiality Prevention of unauthorized disclosure of data and resources
These main attributes branch off into more granular security attributes, such as
authenticity, accountability, nonrepudiation, and dependability. How does a company
know which of these it needs, to what degree they are needed, and whether the operat-
ing systems and applications they use actually provide these features and protection?
These questions get much more complex as one looks deeper into the questions and
products themselves. Organizations are not just concerned about e-mail messages be-
ing encrypted as they pass through the Internet. They are also concerned about the
confidential data stored in their databases, the security of their web farms that are con-
nected directly to the Internet, the integrity of data-entry values going into applications

that process business-oriented information, external attackers bringing down servers
and affecting productivity, viruses spreading, the internal consistency of data warehous-
es, mobile device security, and much more.
These issues not only affect productivity and profitability, but also raise legal and
liability issues. Companies, and the management that runs them, can be held account-
able if any one of the many issues previously mentioned goes wrong. So it is, or at least
it should be, very important for companies to know what security they need and how
to be properly assured that the protection is actually being provided by the products
they purchase.
Many of these security issues must be thought through before and during the design
and architectural phases for a product. Security is best if it is designed and built into the
foundation of operating systems and applications and not added as an afterthought.
Once security is integrated as an important part of the design, it has to be engineered,
implemented, tested, evaluated, and potentially certified and accredited. The security
that a product provides must be evaluated based upon the availability, integrity, and
confidentiality it claims to provide. What gets tricky is that organizations and individu-
als commonly do not fully understand what it actually takes for software to provide
these services in an effective manner. Of course a company wants a piece of software to
provide solid confidentiality, but does the person who is actually purchasing this soft-
ware product know the correct questions to ask and what to look for? Does this person
ask the vendor about cryptographic key protection, encryption algorithms, and what
software development lifecycle model the vendor followed? Does the purchaser know
to ask about hashing algorithms, message authentication codes, fault tolerance, and
built-in redundancy options? The answer is “not usually.” Not only do most people not
fully understand what has to be in place to properly provide availability, integrity, and
confidentiality, it is very difficult to decipher what a piece of software is and is not car-
rying out properly without the necessary knowledge base.
Computer security was much simpler eight to ten years ago because software and
network environments were not as complex as they are today. It was easier to call your-
self a security expert. With today’s software, much more is demanded of the security
expert, software vendor, and of the organizations purchasing and deploying software
products. Computer systems and the software that runs on them are very complex today,
and unless you are ready to pull up your “big-girl panties” and really understand how
this technology works, you will not be fully qualified to determine if the proper level of
security is truly in place.
This chapter covers system and software security from the ground up. It then goes
into how these systems are evaluated and rated by governments and other agencies, and
what these ratings actually mean. However, before we dive into these concepts, it is
important to understand what we mean by system-based architectures and the compo-
nents that make them up.
NOTE Enterprise architecture was covered in Chapter 2. This chapter
solely focuses on system architecture. Secure software development and
vulnerabilities are covered in Chapter 10.

System Architecture
In Chapter 2 we covered enterprise architecture frameworks and introduced their direct
relationship to system architecture. As explained in that chapter, an architecture is a tool
used to conceptually understand the structure and behavior of a complex entity through
different views. An architecture description is a formal description and representation of
a system, the components that make it up, the interactions and interdependencies be-
tween those components, and the relationship to the environment. An architecture
provides different views of the system, based upon the needs of the stakeholders of that
system.
Before digging into the meat of system architecture, we need to get our terminology
established. Although people use terms such as “design,” “architecture,” and “software
development” interchangeably, we need to be more disciplined if we are really going to
learn this stuff correctly.
An architecture is at the highest level when it comes to the overall process of system
development. It is the conceptual constructs that must be first understood before get-
ting to the design and development phases. It is at the architectural level that we are
answering questions such as “Why are we building this system?,” “Who is going to use
it and why?,” “How is it going to be used?,” “What environment will it work within?,”
“What type of security and protection is required?” and “What does it need to be able
to communicate with?” The answers to these types of questions outline the main goals
the system must achieve and they help us construct the system at an abstract level. This
abstract architecture provides the “big picture” goals, which are used to guide the fol-
lowing design and development phases.
In the system design phase we gather system requirement specifications and use
modeling languages to establish how the system will accomplish design goals, such as
required functionality, compatibility, fault tolerance, extensibility, security, usability,
and maintainability. The modeling language is commonly graphical so that we can vi-
sualize the system from a static structural view and a dynamic behavioral view. We can
understand what the components within the system need to accomplish individually
and how they work together to accomplish the larger established architectural goals.
It is at the design phase that we introduce security models such as Bell-LaPadula,
Biba, and Clark-Wilson, which are discussed later in this chapter. The models are used
to help construct the design of the system to meet the architectural goals.
Once the design of the system is defined, then we move into the development
phase. Individual programmers are assigned the pieces of the system they are respon-
sible for, and the coding of the software begins and the creation of hardware starts. (A
system is made up of both software and hardware components.)
One last term we need to make sure is understood is “system.” When most people
hear this term, they think of an individual computer, but a system can be an individual
computer, an application, a select set of subsystems, a set of computers, or a set of net-
works made up of computers and applications. A system can be simplistic, as in a sin-
gle-user operating system dedicated to a specific task, or as complex as an enterprise
network made up of heterogeneous multiuser systems and applications. So when we
look at system architectures, this could apply to very complex and distributed
environments or very focused subsystems. We need to make sure we understand the type of
system that needs to be developed at the architecture stage.
There are evolving standards that outline the specifications of system architectures.
First IEEE came up with a standard (Standard 1471) that was called IEEE Recommended
Practice for Architectural Description of Software-Intensive Systems. This was adopted by
ISO and published in 2007 as ISO/IEC 42010:2007. As of this writing, this ISO standard
is being updated and will be called ISO/IEC/IEEE 42010, Systems and software engineering—
Architecture description. The standard is evolving and being improved upon. The goal is
to internationally standardize how system architecture takes place instead of product
developers just “winging it” and coming up with their own proprietary approaches. A
disciplined approach to system architecture allows for better quality, interoperability,
extensibility, portability, and security.
ISO/IEC 42010:2007 follows the same terminology that was used in the formal
enterprise architecture frameworks we covered in Chapter 2:
•Architecture Fundamental organization of a system embodied in its
components, their relationships to each other and to the environment,
and the principles guiding its design and evolution.
•Architectural description (AD) Collection of document types to convey an
architecture in a formal manner.
•Stakeholder Individual, team, or organization (or classes thereof) with
interests in, or concerns relative to, a system.
•View Representation of a whole system from the perspective of a related set
of concerns.
•Viewpoint A specification of the conventions for constructing and using a view.
A template from which to develop individual views by establishing the purposes
and audience for a view and the techniques for its creation and analysis.
As an analogy, if I am going to build my own house I am first going to have to work
with an architect. He will ask me a bunch of questions to understand my overall “goals”
for the house, as in four bedrooms, three bathrooms, family room, game room, garage,
3,000 square feet, two-story ranch style, etc. Once he collects my goal statements, he
will create the different types of documentation (blueprint, specification documents)
that describe the architecture of the house in a formal manner (architecture descrip-
tion). The architect needs to make sure he meets several people’s (stakeholders) goals
for this house—not just mine. He needs to meet zoning requirements, construction
requirements, legal requirements, and my design requirements. Each stakeholder needs
to be presented with documentation and information (views) that map to their needs
and understanding of the house. One architecture schematic can be created for the
plumber, a different schematic can be created for the electrician, another one can be
created for the zoning officials, and one can be created for me. Each stakeholder needs
to have information about this house in terms that they understand and that map to
their specific concerns. If the architect gives me documentation about the electrical cur-
rent requirements and location of where electrical grounding will take place, that does
not help me. I need to see the view of the architecture that directly relates to my needs.

The same is true with a system. An architect needs to capture the goals that the sys-
tem is supposed to accomplish for each stakeholder. One stakeholder is concerned
about the functionality of the system, another one is concerned about the performance,
another is concerned about interoperability, and yet another stakeholder is concerned
about security. The architect then creates documentation that formally describes the
architecture of the system for each of these stakeholders that will best address their
concerns and viewpoint. Each stakeholder will review their documentation to ensure
that the architect has not missed anything. After the architecture is approved, the soft-
ware designers and developers are brought in to start building the system.
The relationship between these terms and concepts is illustrated in Figure 4-1.
The stakeholders for a system are the users, operators, maintainers, developers, and
suppliers. Each stakeholder has his own concern pertaining to the system, which can be
performance, functionality, security, maintainability, quality of service, usability, etc.
The system architecture needs to express system data pertaining to each concern of each
stakeholder, which is done through views. The views of the system can be logical, phys-
ical, structural, or behavioral.
Figure 4-1 Formal architecture terms and relationships

The creation and use of system architecture processes are evolving, becoming more
disciplined and standardized. In the past, system architectures were developed to meet
the identified stakeholders’ concerns (functionality, interoperability, performance), but
a new concern has come into the limelight—security. So new systems need to meet not
just the old, but also the new concerns the stakeholders have. Security goals have to be
defined before the architecture of a system is created, and specific security views of the
system need to be created to help guide the design and development phases. When we
hear about security being “bolted on,” that means security concerns are addressed at
the development (programming) phase and not the architecture phase. When we state
that security needs to be “baked in,” this means that security has to be integrated at the
architecture phase.
NOTE While a system architecture addresses many stakeholder concerns,
we will focus on the concern of security since information security is the crux
of the CISSP exam.
CAUTION It is common for people in technology not to take higher-level,
somewhat theoretical concepts such as architecture seriously, because they see
them as fluffy and impractical and cannot always relate these concepts to what
they see in their daily activities. While knowing how to configure a server is
important, it is actually more important for more people in the industry to
understand how to build that server securely in the first place. Make
sure to understand security from a high-level theoretical perspective down to a
practical, hands-on perspective. If no one focuses on how to properly carry
out secure system architecture, we will always be doomed to insecure
systems.
Computer Architecture
Put the processor over there by the plant, the memory by the window, and the secondary
storage upstairs.
Computer architecture encompasses all of the parts of a computer system that are
necessary for it to function, including the operating system, memory chips, logic cir-
cuits, storage devices, input and output devices, security components, buses, and net-
working interfaces. The interrelationships and internal working of all of these parts can
be quite complex, and making them work together in a secure fashion consists of com-
plicated methods and mechanisms. Thank goodness for the smart people who figured
this stuff out! Now it is up to us to learn how they did it and why.
The more you understand how these different pieces work and process data, the more
you will understand how vulnerabilities actually occur and how countermeasures work
to impede and hinder vulnerabilities from being introduced, found, and exploited.
NOTE This chapter interweaves the hardware and operating system
architectures and their components to show you how they work together.

The Central Processing Unit
The CPU seems complex. How does it work?
Response: Black magic. It uses eye of bat, tongue of goat, and some transistors.
The central processing unit (CPU) is the brain of a computer. In the most general
description possible, it fetches instructions from memory and executes them. Although
a CPU is a piece of hardware, it has its own instruction set that is necessary to carry out
its tasks. Each CPU type has a specific architecture and set of instructions that it can
carry out. The operating system must be designed to work within this CPU architecture.
This is why one operating system may work on a Pentium Pro processor but not on an
AMD processor. The operating system needs to know how to “speak the language” of
the processor, which is the processor’s instruction set.
The chips within the CPU cover only a couple of square inches, but contain mil-
lions of transistors. All operations within the CPU are performed by electrical signals at
different voltages in different combinations, and each transistor holds this voltage,
which represents 0’s and 1’s to the operating system. The CPU contains registers that
point to memory locations that contain the next instructions to be executed and that
enable the CPU to keep status information of the data that need to be processed. A
register is a temporary storage location. Accessing memory to get information on what
instructions and data must be executed is a much slower process than accessing a regis-
ter, which is a component of the CPU itself. So when the CPU is done with one task, it
asks the registers, “Okay, what do I have to do now?” And the registers hold the infor-
mation that tells the CPU what its next job is.
The actual execution of the instructions is done by the arithmetic logic unit (ALU).
The ALU performs mathematical functions and logical operations on data. The ALU
can be thought of as the brain of the CPU, and the CPU as the brain of the computer.
Software holds its instructions and data in memory. When an action needs to take
place on the data, the instructions and data memory addresses are passed to the CPU
registers, as shown in Figure 4-2. When the control unit indicates that the CPU can
process them, the instructions and data memory addresses are passed to the CPU. The

CPU sends out requests to fetch these instructions and data from the provided ad-
dresses and then actual processing, number crunching, and data manipulation take
place. The results are sent back to the requesting process’s memory address.
An operating system and applications are really just made up of lines and lines of
instructions. These instructions contain empty variables, which are populated at run
time. The empty variables hold the actual data. There is a difference between instruc-
tions and data. The instructions have been written to carry out some type of functional-
ity on the data. For example, let’s say you open a Calculator application. In reality, this
program is just lines of instructions that allow you to carry out addition, subtraction,
division, and other types of mathematical functions that will be executed on the data
you provide. So, you type in 3 + 5. The 3 and the 5 are the data values. Once you click
the = button, the Calculator program tells the CPU it needs to take the instructions on
how to carry out addition and apply these instructions to the two data values 3 and 5.
The ALU carries out this instruction and returns the result of 8 to the requesting pro-
gram. This is when you see the value 8 in the Calculator’s field. To users, it seems as
though the Calculator program is doing all of this on its own, but it is incapable of this.
It depends upon the CPU and other components of the system to carry out this type of
activity.
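The fetch-decode-execute cycle described above can be sketched with a toy CPU that runs the Calculator’s “3 + 5” example. The instruction format, single accumulator register, and two-operation ALU are drastic simplifications for illustration:

```python
class ToyCPU:
    """Minimal fetch-decode-execute loop. Instructions and data both live in
    memory; the program counter register tracks the next instruction, and
    the ALU performs the actual arithmetic."""
    def __init__(self, memory):
        self.memory = memory
        self.pc = 0            # program counter register
        self.accumulator = 0   # general register (the ALU's scratch pad)

    def alu(self, op, value):
        if op == "ADD":
            self.accumulator += value
        elif op == "SUB":
            self.accumulator -= value

    def run(self):
        while True:
            instruction = self.memory[self.pc]      # fetch
            self.pc += 1                            # point at the next instruction
            op, *operand = instruction              # decode
            if op == "HALT":
                return self.accumulator
            self.alu(op, self.memory[operand[0]])   # execute on data in memory

# The Calculator's "3 + 5": instructions (indices 0-2) and data (indices 4-5)
# share the same memory; the instructions reference the data by address.
memory = [("ADD", 4), ("ADD", 5), ("HALT",), None, 3, 5]
print(ToyCPU(memory).run())   # 8 -- the result returned to the requesting program
```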
The control unit manages and synchronizes the system while different applications’
code and operating system instructions are being executed. The control unit is the com-
ponent that fetches the code, interprets the code, and oversees the execution of the dif-
ferent instruction sets. It determines what application instructions get processed and in
what priority and time slice.

Figure 4-2 Instruction and data addresses are passed to the CPU for processing.

It controls when instructions are executed, and this execution enables applications to
process data. The control unit does not actually process the
data. It is like the traffic cop telling vehicles when to stop and start again, as illustrated
in Figure 4-3. The CPU’s time has to be sliced up into individual units and assigned to
processes. It is this time slicing that fools the applications and users into thinking the
system is actually carrying out several different functions at one time. While the operat-
ing system can carry out several different functions at one time (multitasking), in real-
ity, the CPU is executing the instructions in a serial fashion (one at a time).
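Time slicing can be sketched as a round-robin queue: the CPU executes one slice at a time, serially, yet every process makes steady progress, which is what creates the illusion of multitasking. The fixed one-unit slice is an illustrative assumption:

```python
from collections import deque

def round_robin(processes, time_slice):
    """Serial execution with time slicing: `processes` maps names to
    remaining work units; the returned schedule shows the order in which
    the CPU's time was handed out."""
    queue = deque(processes.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        schedule.append(name)                  # CPU executes this slice alone
        remaining -= time_slice
        if remaining > 0:
            queue.append((name, remaining))    # unfinished work rejoins the line

    return schedule

print(round_robin({"editor": 3, "browser": 2, "scan": 1}, time_slice=1))
# ['editor', 'browser', 'scan', 'editor', 'browser', 'editor']
```

Even though only one name appears at each step, the interleaving is fast enough in a real system that all three processes appear to run at once.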
A CPU has several different types of registers, containing information about the
instruction set and data that must be executed. General registers are used to hold vari-
ables and temporary results as the ALU works through its execution steps. The general
registers are like the ALU’s scratch pad, which it uses while working. Special registers
(dedicated registers) hold information such as the program counter, stack pointer, and
program status word (PSW). The program counter register contains the memory address
of the next instruction to be fetched. After that instruction is executed, the program
counter is updated with the memory address of the next instruction set to be processed.
It is similar to a boss-and-secretary relationship. The secretary keeps the boss on sched-
ule and points him to the necessary tasks he must carry out. This allows the boss to just
concentrate on carrying out the tasks instead of having to worry about the “busy work”
being done in the background.
Figure 4-3 The control unit works as a traffic cop, indicating when instructions are sent to
the processor.

Chapter 4: Security Architecture and Design
307
The program status word (PSW) holds different condition bits. One of the bits indi-
cates whether the CPU should be working in user mode (also called problem state) or
privileged mode (also called kernel or supervisor mode). The crux of this chapter is to
teach you how operating systems protect themselves. They need to protect themselves
from applications, software utilities, and user activities if they are going to provide a
stable and safe environment. One of these protection mechanisms is implemented
through the use of these different execution modes. When an application needs the
CPU to carry out its instructions, the CPU works in user mode. This mode has a lower
privilege level, and many of the CPU’s instructions and functions are not available to
the requesting application. The reason for the extra caution is that the developers of the
operating system and CPU do not know who developed the application or how it is
going to react, so the CPU works in a lower privilege mode when executing these types
of instructions. By analogy, if you are expecting visitors who are bringing their two-year-
old boy, you move all of the breakables that someone under three feet tall can reach.
No one is ever sure what a two-year-old toddler is going to do, but it usually has to do
with breaking something. An operating system and CPU are not sure what applications
are going to attempt, which is why this code is executed in a lower privilege and critical
resources are out of reach of the application’s code.
If the PSW has a bit value that indicates the instructions to be executed should be
carried out in privileged mode, this means a trusted process (an operating system pro-
cess) made the request and can have access to the functionality that is not available in
user mode. An example would be if the operating system needed to communicate with
a peripheral device. This is a privileged activity that applications cannot carry out. When
these types of instructions are passed to the CPU, the PSW is basically telling the CPU,
“The process that made this request is an all right guy. We can trust him. Go ahead and
carry out this task for him.”
Memory addresses of the instructions and data to be processed are held in registers
until needed by the CPU. The CPU is connected to an address bus, which is a hardwired
connection to the RAM chips in the system and the individual input/output (I/O) de-
vices. Memory is cut up into sections that have individual addresses associated with
them. I/O devices (CD-ROM, USB device, printers, and so on) are also allocated spe-
cific unique addresses. If the CPU needs to access some data, either from memory or
from an I/O device, it sends a fetch request on the address bus. The fetch request con-
tains the address of where the needed data are located. The circuitry associated with the
memory or I/O device recognizes the address the CPU sent down the address bus and
instructs the memory or device to read the requested data and put it on the data bus. So
the address bus is used by the CPU to indicate the location of the instructions to be
processed, and the memory or I/O device responds by sending the data that reside at
that memory location through the data bus. As an analogy, if I call you on the
telephone and tell you which book I need you to mail me, this would be like a CPU
sending a fetch request down the address bus. You locate the book I requested and send it to me
in the mail, which would be similar to how an I/O device finds the requested data and
puts it on the data bus for the CPU to receive.

This process is illustrated in Figure 4-4.
Once the CPU is done with its computation, it needs to return the results to the
requesting program’s memory. So, the CPU sends the requesting program’s address
down the address bus and sends the new results down the data bus with the command
write. These new data are then written to the requesting program’s memory space.
Following our earlier example, once the CPU adds 3 and 5 and sends the new resulting
data to the Calculator program, you see the result as 8.
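The full round trip just described (fetch over the address bus, compute in the ALU, write the result back) can be sketched as a toy model. Everything here, from the dict standing in for RAM to the addresses and function names, is illustrative rather than a real CPU interface:

```python
# Toy model of the fetch/execute cycle described above. The dict stands in
# for RAM: keys are "addresses" sent down the address bus, and values are
# the data returned on the data bus.
memory = {0x10: 3, 0x11: 5, 0x12: None}   # two operands and a result slot

def fetch(address):
    # The CPU puts an address on the address bus; memory answers on the data bus.
    return memory[address]

def alu_add(a, b):
    # The ALU carries out the arithmetic the instruction asked for.
    return a + b

def write(address, value):
    # The CPU sends the address, a write command, and the new data.
    memory[address] = value

# The Calculator example: fetch 3 and 5, add them, write the result back.
result = alu_add(fetch(0x10), fetch(0x11))
write(0x12, result)
print(memory[0x12])   # → 8
```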
The address and data buses can be 8, 16, 32, or 64 bits wide. Most systems today use
a 64-bit address bus, which means the system can have a large address space (2^64
addresses). Systems can also have a 64-bit data bus, which means the system can move
data in parallel
back and forth between memory, I/O devices, and the CPU of this size. (A 64-bit data
bus means the size of the chunks of data a CPU can request at a time is 64 bits.) But what
does this really mean and why does it matter? A two-lane highway can be a bottleneck if
a lot of vehicles need to travel over it. This is why highways are increased to four, six, and
eight lanes. As computers and software get more complex and performance demands
increase, we need to get more instructions and data to the CPU faster so it can do its
work on these items and get them back to the requesting program as fast as possible. So
we need fatter pipes (buses) to move more stuff from one place to another.
Figure 4-4 Address and data buses are separate and have specific functionality.

Multiprocessing
I have many brains, so I can work on many different tasks at once.
Some specialized computers have more than one CPU for increased performance.
An operating system must be developed specifically to be able to understand and work
with more than one processor. If the computer system is configured to work in symmetric
mode, this means the processors are handed work as needed, as shown with CPU 1 and
CPU 2 in Figure 4-5. It is like a load-balancing environment. When a process needs
instructions to be executed, a scheduler determines which processor is ready for more
work and sends it on. If a processor is going to be dedicated to a specific task or
application, all other software would run on a different processor. In Figure 4-5, CPU 4
is dedicated to one application and its threads, while CPU 3 is used by the operating
system. When a processor is dedicated, as in this example, the system is working in
asymmetric mode. This usually means the computer has some type of time-sensitive
application that needs its own personal processor. So, the system scheduler will send
instructions from the time-sensitive application to CPU 4 and send all the other
instructions (from the operating system and other applications) to CPU 3.

Memory Stacks
Each process has its own stack, which is a data structure in memory that the process
can read from and write to in a last in, first out (LIFO) fashion. Let’s say you and I
need to communicate through a stack. What I do is put all of the things I need to
say to you in a stack of papers. The first paper tells you how you can respond to me
when you need to, which is called a return pointer. The next paper has some
instructions I need you to carry out. The next piece of paper has the data you must
use when carrying out these instructions. So, I write down on individual pieces of
paper all that I need you to do for me and stack them up. When I am done, I tell
you to read my stack of papers. You take the first page off the stack and carry out
the request. Then you take the second page and carry out that request. You continue
to do this until you are at the bottom of the stack, which contains my return
pointer. You look at this return pointer (which is my memory address) to know
where to send the results of all the instructions I asked you to carry out. This is how
processes communicate to other processes and to the CPU. One process stacks up
the information it needs to communicate to the CPU. The CPU has to keep track of
where it is in the stack, which is the purpose of the stack pointer. Once the first
item on the stack is executed, the stack pointer moves down to tell the CPU where
the next piece of data is located.

NOTE The traditional way of explaining how a stack works is to use
the analogy of stacking up trays in a cafeteria. When people are done
eating, they place their trays on a stack of other trays, and when the
cafeteria employees need to get the trays for cleaning, they take the last
tray placed on top and work down the stack. This analogy explains
how a stack works in the mode of “last in, first out.” The process
being communicated to takes the last piece of data the requesting
process laid down from the top of the stack and works down the stack.
Figure 4-5 Symmetric mode and asymmetric mode of multiprocessing
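The last in, first out behavior described in the Memory Stacks sidebar can be sketched with a simple list-based stack. This is only an illustration; a real stack lives in the process’s memory space and is tracked by the stack pointer register:

```python
# Minimal LIFO stack sketch: the return pointer goes on first and comes
# off last; whatever was pushed most recently is popped first.
stack = []
stack.append("return pointer -> caller's address")  # pushed first, popped last
stack.append("instruction: add the two values")
stack.append("data: 3 and 5")                       # pushed last, popped first

popped = []
while stack:
    popped.append(stack.pop())   # pop() removes the most recently pushed item

print(popped[0])    # → data: 3 and 5
print(popped[-1])   # → return pointer -> caller's address
```

The reader works down the stack exactly like the cafeteria-tray analogy: the last item laid down is the first item taken off.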
Key Terms
•ISO/IEC 42010:2007 International standard that provides guidelines
on how to create and maintain system architectures.
•Central processing unit (CPU) A silicon component made up of
integrated chips with millions of transistors that carry out the execution
of instructions within a computer.

•Arithmetic logic unit (ALU) Component of the CPU that carries
out logic and mathematical functions as they are laid out in the
programming code being processed by the CPU.
•Register Small, temporary memory storage units integrated and used
by the CPU during its processing functions.
•Control unit Part of the CPU that oversees the collection of
instructions and data from memory and how they are passed to the
processing components of the CPU.
•General registers Temporary memory locations the CPU uses during
its process of executing instructions. The ALU’s “scratch pad” it uses
while carrying out logic and math functions.
•Special registers Temporary memory locations that hold critical
processing parameters. They hold values such as the program counter,
stack pointer, and program status word.
•Program counter Holds the memory address of the next
instruction the CPU needs to act upon.
•Stack Memory segment used by processes to communicate
instructions and data to each other.
•Program status word Condition variable that indicates to the CPU
what mode (kernel or user) instructions need to be carried out in.
•User mode (problem state) Protection mode that a CPU works
within when carrying out less trusted process instructions.
•Kernel mode (supervisory state, privilege mode) Mode that a CPU
works within when carrying out more trusted process instructions. The
process has access to more computer resources when working in kernel
versus user mode.
•Address bus Physical connections between processing components
and memory segments used to communicate the physical memory
addresses being used during processing procedures.
•Data bus Physical connections between processing components and
memory segments used to transmit data being used during processing
procedures.
•Symmetric mode multiprocessing When a computer has two or
more CPUs and each CPU is being used in a load-balancing method.
•Asymmetric mode multiprocessing When a computer has two or
more CPUs and one CPU is dedicated to a specific program while the
other CPUs carry out general processing procedures.

Operating System Components
An operating system provides an environment for applications and users to work with-
in. Every operating system is a complex beast, made up of various layers and modules
of functionality. It has the responsibility of managing the hardware components, mem-
ory management, I/O operations, file system, process management, and providing sys-
tem services. We next look at each of these responsibilities that every operating system
type carries out. However, you must realize that whole books are written on just these
individual topics, so the discussion here will only scratch the surface.
Process Management
Well, just look at all of these processes squirming around like little worms. We need some real
organization here!
Operating systems, software utilities, and applications, in reality, are just lines and
lines of instructions. They are static lines of code that are brought to life when they are
initialized and put into memory. Applications work as individual units, called processes,
and the operating system also has several different processes carrying out various types
of functionality. A process is the set of instructions that is actually running. A program
is not considered a process until it is loaded into memory and activated by the operat-
ing system. When a process is created, the operating system assigns resources to it, such
as a memory segment, CPU time slot (interrupt), access to system application program-
ming interfaces (APIs), and files to interact with. The collection of the instructions and
the assigned resources is referred to as a process. So the operating system gives a process
all the tools it needs and then loads the process into memory and it is off and running.
The operating system has many of its own processes, which are used to provide and
maintain the environment for applications and users to work within. Some examples
of the functionality that individual processes provide include displaying data onscreen,
spooling print jobs, and saving data to temporary files. Operating systems provide mul-
tiprogramming, which means that more than one program (or process) can be loaded
into memory at the same time. This is what allows you to run your antivirus software,
word processor, personal firewall, and e-mail client all at the same time. Each of these
applications runs as individual processes.
NOTE Many resources state that today’s operating systems provide
multiprogramming and multitasking. This is true, in that multiprogramming
just means more than one application can be loaded into memory at the
same time. But in reality, multiprogramming was replaced by multitasking,
which means more than one application can be in memory at the same
time and the operating system can deal with requests from these different
applications simultaneously. Multiprogramming is a legacy term.
Earlier operating systems wasted their most precious resource—CPU time. For ex-
ample, when a word processor would request to open a file on a floppy drive, the CPU
would send the request to the floppy drive and then wait for the floppy drive to initial-
ize, for the head to find the right track and sector, and finally for the floppy drive to
send the data via the data bus to the CPU for processing. To avoid this waste of CPU
time, multitasking was developed, which enabled the operating system to maintain dif-
ferent processes in their various execution states. Instead of sitting idle waiting for activ-
ity from one process, the CPU could execute instructions for other processes, thereby
speeding up the system as a whole.
NOTE If you are not old enough to remember floppy drives, they were like the
USB thumb drives we use today. They were just flatter, slower, and could
not hold as much data.
As an analogy, if you (CPU) put bread in a toaster (process) and just stand there
waiting for the toaster to finish its job, you are wasting time. On the other hand, if you
put bread in the toaster and then, while it’s toasting, fed the dog, made coffee, and
came up with a solution for world peace, you are being more productive and not wast-
ing time. You are multitasking.
Operating systems started out as cooperative and then evolved into preemptive
multitasking. Cooperative multitasking, used in Windows 3.x and early Macintosh sys-
tems, required the processes to voluntarily release resources they were using. This was
not necessarily a stable environment, because if a programmer did not write his code
properly to release a resource when his application was done using it, the resource
would be committed indefinitely to his application and thus be unavailable to other
processes. With preemptive multitasking, used in Windows 9x and later versions and in
Unix systems, the operating system controls how long a process can use a resource. The
system can suspend a process that is using the CPU and allow another process access to
it through the use of time sharing. So, in operating systems that used cooperative mul-
titasking, the processes had too much control over resource release, and when an ap-
plication hung, it usually affected all the other applications and sometimes the
operating system itself. Operating systems that use preemptive multitasking run the
show, and one application does not negatively affect another application as easily.
Different operating system types work within different process models. For example,
Unix and Linux systems allow their processes to create new child processes, which is
referred to as forking. Let’s say you are working within a shell of a Linux system.
That shell is the command interpreter and an interface that enables the user to interact
with the operating system. The shell runs as a process. When you type in a shell the
command cat file1 file2|grep stuff, you are telling the operating system to
concatenate (cat) the two files and then search (grep) for the lines that have the value
of “stuff” in them. When you press the ENTER key, the shell forks two child
processes—one for the cat command and one for the grep command. Each of these
child processes takes on the characteristics of the parent process, but has its own memory
space, stack, and program counter values.
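Forking can be demonstrated with Python’s os.fork, a rough POSIX-only sketch. The child inherits a copy of the parent’s memory, so a change the child makes is invisible to the parent, and the parent can wait on the child just as a shell waits on a command:

```python
import os

# POSIX-only sketch of forking. The child inherits a copy of the parent's
# memory space, so the assignment below is invisible to the parent.
value = "parent"

pid = os.fork()
if pid == 0:
    value = "child"      # changes only the child's copy of memory
    os._exit(7)          # exit with an arbitrary status the parent can read
else:
    _, status = os.waitpid(pid, 0)    # wait, as a shell waits on a command
    print(value)                       # → parent
    print(os.WEXITSTATUS(status))      # → 7
```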
A process can be in a running state (CPU is executing its instructions and data),
ready state (waiting to send instructions to the CPU), or blocked state (waiting for input
data, such as keystrokes, from a user). These different states are illustrated in Figure 4-6.
When a process is blocked, it is waiting for some type of data to be sent to it. In the
preceding example of typing the command cat file1 file2|grep stuff, the
grep process cannot actually carry out its functionality of searching until the first pro-
cess (cat) is done combining the two files. The grep process will put itself to sleep and
will be in the blocked state until the cat process is done and sends the grep process the
input it needs to work with.
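The pipeline itself can be reproduced with Python’s subprocess module, assuming a POSIX system that has cat and grep available. grep sleeps in the blocked state described above until cat sends it input:

```python
import subprocess, tempfile, os

# Two small files standing in for file1 and file2 (contents are illustrative).
d = tempfile.mkdtemp()
f1, f2 = os.path.join(d, "file1"), os.path.join(d, "file2")
with open(f1, "w") as f:
    f.write("stuff here\nnothing\n")
with open(f2, "w") as f:
    f.write("more stuff\n")

# cat file1 file2 | grep stuff: one child process per command, connected
# by a pipe. grep blocks until cat produces the data it needs.
cat = subprocess.Popen(["cat", f1, f2], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "stuff"], stdin=cat.stdout,
                        stdout=subprocess.PIPE)
cat.stdout.close()            # so grep sees end-of-file when cat exits
out = grep.communicate()[0].decode()
print(out)                    # the two lines containing "stuff"
```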
NOTE Not all operating systems create and work in the process hierarchy
like Unix and Linux systems. Windows systems do not fork new child
processes, but instead create new threads that work within the same context
of the parent process.
Is it really necessary to understand this stuff all the way down to the process level?
Well, this is where everything actually takes place. All software works in “units” of pro-
cesses. If you do not understand how processes work, you cannot understand how
software works. If you do not understand how software works, you cannot know if it is
working securely. So yes, you need to know this stuff at this level. Let’s keep going.
The operating system is responsible for creating new processes, assigning them re-
sources, synchronizing their communication, and making sure nothing insecure is tak-
ing place. The operating system keeps a process table, which has one entry per process.
The table contains each individual process’s state, stack pointer, memory allocation,
program counter, and status of open files in use. The reason the operating system docu-
ments all of this status information is that the CPU needs all of it loaded into its regis-
ters when it needs to interact with, for example, process 1. When process 1’s CPU time
slice is over, all of the current status information on process 1 is stored in the process
table so that when its time slice is open again, all of this status information can be put
back into the CPU registers. So, when it is process 2’s time with the CPU, its status in-
formation is transferred from the process table to the CPU registers, and transferred
back again when the time slice is over. These steps are shown in Figure 4-7.
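These bookkeeping steps can be sketched as a toy process table. The field names and addresses are illustrative only, not any real operating system’s layout:

```python
# Illustrative process table: one entry per process, holding the status
# information the CPU registers need on the process's next time slice.
process_table = {
    1: {"state": "ready", "program_counter": 0x0040, "stack_pointer": 0x7F00},
    2: {"state": "ready", "program_counter": 0x0100, "stack_pointer": 0x7E00},
}
cpu_registers = {}

def context_switch(old_pid, new_pid):
    # Save the outgoing process's register values back into the table...
    if old_pid is not None:
        process_table[old_pid].update(cpu_registers)
        process_table[old_pid]["state"] = "ready"
    # ...then load the incoming process's saved values into the CPU registers.
    cpu_registers.clear()
    cpu_registers.update({k: v for k, v in process_table[new_pid].items()
                          if k != "state"})
    process_table[new_pid]["state"] = "running"

context_switch(None, 1)   # process 1 gets its time slice
context_switch(1, 2)      # time slice over: save process 1, restore process 2
print(hex(cpu_registers["program_counter"]))   # → 0x100
```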
How does a process know when it can communicate with the CPU? This is taken care
of by using interrupts. An operating system fools us, and applications, into thinking it
and the CPU are carrying out all tasks (operating system, applications, memory, I/O,
and user activities) simultaneously. In fact, this is impossible. Most CPUs can do only
one thing at a time. So the system has hardware and software interrupts. When a
device needs to communicate with the CPU, it has to wait for its interrupt to be
called upon. The same thing happens in software. Each process has an interrupt
assigned to it. It is like pulling a number at a customer service department in a store.
You can’t go up to the counter until your number has been called out.

Figure 4-6 Processes enter and exit different states.
When a process is interacting with the CPU and an interrupt takes place (another
process has requested access to the CPU), the current process’s information is stored in
the process table, and the next process gets its time to interact with the CPU.
NOTE Some critical processes cannot afford to have their functionality
interrupted by another process. The operating system is responsible for
setting the priorities for the different processes. When one process needs to
interrupt another process, the operating system compares the priority levels
of the two processes to determine if this interruption should be allowed.
Figure 4-7 A process table contains process status data that the CPU requires.

There are two categories of interrupts: maskable and nonmaskable. A maskable
interrupt is assigned to an event that may not be overly important and the programmer
can indicate that if that interrupt calls, the program does not stop what it is doing. This
means the interrupt is ignored. Nonmaskable interrupts can never be overridden by an
application because the event that has this type of interrupt assigned to it is critical. As
an example, the reset button would be assigned a nonmaskable interrupt. This means
that when this button is pushed, the CPU carries out its instructions right away.
As an analogy, a boss can tell her administrative assistant she is not going to take any
calls unless the Pope or Elvis phones. This means all other people will be ignored or
masked (maskable interrupt), but the Pope and Elvis will not be ignored (nonmaskable
interrupt). This is probably a good policy. You should always accept calls from either the
Pope or Elvis. Just remember not to use any bad words when talking to the Pope.
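POSIX signals offer a close software parallel: a process can ask the operating system to ignore (mask) most signals, but SIGKILL is non-maskable by design, and the operating system refuses any attempt to override it. A hedged, POSIX-only sketch:

```python
import os, signal

# SIGUSR1 is maskable: the process tells the OS to ignore it, so delivery
# of the signal does not terminate the process.
signal.signal(signal.SIGUSR1, signal.SIG_IGN)
os.kill(os.getpid(), signal.SIGUSR1)     # delivered, but ignored; we survive
print("survived SIGUSR1")

# SIGKILL is non-maskable: the OS refuses to install a handler for it.
try:
    signal.signal(signal.SIGKILL, signal.SIG_IGN)
except OSError as e:
    print("cannot mask SIGKILL, errno", e.errno)
```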
The watchdog timer is an example of a critical process that must always do its thing.
This process will reset the system with a warm boot if the operating system hangs and
cannot recover itself. For example, if there is a memory management problem and the
operating system hangs, the watchdog timer will reset the system. This is one mecha-
nism that ensures the software provides more of a stable environment.
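A watchdog timer can be sketched with a timer thread. The callback here just records that it fired; in a real system it would trigger the warm boot. All names are illustrative:

```python
import threading, time

class Watchdog:
    """Toy watchdog: if pet() is not called within `timeout` seconds, the
    recovery action (standing in for a warm boot) fires."""
    def __init__(self, timeout, on_expire):
        self.timeout, self.on_expire = timeout, on_expire
        self._timer = None
        self.pet()                      # arm the timer

    def pet(self):
        # A healthy OS "pets" the watchdog periodically to prove it is alive.
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

fired = []
wd = Watchdog(0.2, lambda: fired.append("warm boot"))
time.sleep(0.5)                         # simulate a hung OS: no pets arrive
print(fired)                            # → ['warm boot']
```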
Thread Management
What are all of these hair-like things hanging off of my processes?
Response: Threads.
As described earlier, a process is a program in memory. More precisely, a process is
the program’s instructions and all the resources assigned to the process by the operating
system. It is just easier to group all of these instructions and resources together and
control them as one entity, which is a process. When a process needs to send something
to the CPU for processing, it generates a thread. A thread is made up of an individual
instruction set and the data that must be worked on by the CPU.

Most applications have several different functions. Word processing applications
can open files, save files, open other programs (such as an e-mail client), and print
documents. Each one of these functions requires a thread (instruction set) to be dy-
namically generated. So, for example, if Tom chooses to print his document, the word
processing process generates a thread that contains the instructions of how this docu-
ment should be printed (font, colors, text, margins, and so on). If he chooses to send a
document via e-mail through this program, another thread is created that tells the e-
mail client to open and what file needs to be sent. Threads are dynamically created and
destroyed as needed. Once Tom is done printing his document, the thread that was
generated for this functionality is broken down.
A program that has been developed to carry out several different tasks at one time
(display, print, interact with other programs) is capable of running several different
threads simultaneously. An application with this capability is referred to as a multi-
threaded application.
Each thread shares the same resources of the process that created it. So, all the
threads created by a word processing application work in the same memory space and
have access to all the same files and system resources. And how is this related to secu-
rity? Software security ultimately comes down to what threads and processes are doing.
If they are behaving properly, things work as planned and there are no issues to be
concerned about. But if a thread misbehaves and it is working in a privileged mode,
then it can carry out malicious activities that affect critical resources of the system.
Attackers commonly inject code into a running process to carry out some type of
compromise. Let’s think this through. When an operating system is preparing to load a
process into memory, it goes through a type of criteria checklist to make sure the
process is secure and will not negatively affect the system. Once the process passes this
security check, it is loaded into memory and is assigned a specific operation mode
(user or privileged). An attacker “injects” instructions into this running process, which
means the process is his vehicle for destruction. Since the process has already gone
through a security check before it was loaded into memory, it is trusted and has access
to system resources. If an attacker can inject malicious instructions into this process,
this trusted process carries out the attacker’s demands. These demands could be to col-
lect data as the user types it in on her keyboard, steal passwords, send out malware, etc.
If the process is running at a privileged mode, the attacker can carry out more damage
because more critical system resources are available to him through this running
process. When creating a product, a software developer needs to make sure that running
processes will not accept unqualified instructions and allow for these types of
compromises. Processes should only accept instructions from an approved entity, and
the instructions they accept should be validated before execution. It is like “stranger
danger” with children. We teach our children not to take candy from a stranger, and in
turn we need to make sure our software processes are not accepting improper
instructions from an unknown source.
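The resource sharing between threads can be seen in a short example: several threads created by one process write to the same list in the same memory space, which is also why one misbehaving thread can touch everything the process owns:

```python
import threading

# Threads created by one process share that process's memory. Here several
# worker threads append to the same list; nothing is copied, unlike fork.
shared = []
lock = threading.Lock()

def worker(task):
    # Each "function" of an application (print, save, e-mail) would be a
    # dynamically created thread like this one.
    with lock:                  # shared state needs synchronization
        shared.append(task)

threads = [threading.Thread(target=worker, args=(t,))
           for t in ("print", "save", "email")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))   # → ['email', 'print', 'save']
```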
Process Scheduling
Scheduling and synchronizing various processes and their activities is part of process
management, which is a responsibility of the operating system. Several components
need to be considered during the development of an operating system, which will

CISSP All-in-One Exam Guide
318
dictate how process scheduling will take place. A scheduling policy is created to govern
how threads will interact with other threads. Different operating systems can use differ-
ent schedulers, which are basically algorithms that control the timesharing of the CPU.
As stated earlier, the different processes are assigned different priority levels (interrupts)
that dictate which processes overrule other processes when CPU time allocation is re-
quired. The operating system creates and deletes processes as needed, and oversees
them changing state (ready, blocked, running). The operating system is also responsi-
ble for controlling deadlocks between processes attempting to use the same resources.
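A scheduler’s core idea, always dispatching the highest-priority ready process, can be sketched with a priority queue. This is a toy illustration, not any real operating system’s scheduling algorithm:

```python
import heapq

# Toy priority scheduler: lower number = higher priority. The scheduler
# always hands the CPU to the ready process with the best priority.
ready_queue = []
heapq.heappush(ready_queue, (0, "watchdog"))        # critical, runs first
heapq.heappush(ready_queue, (5, "word processor"))
heapq.heappush(ready_queue, (3, "antivirus scan"))

run_order = []
while ready_queue:
    priority, process = heapq.heappop(ready_queue)
    run_order.append(process)   # "dispatch" the process for its time slice

print(run_order)   # → ['watchdog', 'antivirus scan', 'word processor']
```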
If a process scheduler is not built properly, an attacker could manipulate it. The at-
tacker could ensure that certain processes do not get access to system resources (creat-
ing a denial of service attack) or that a malicious process has its privileges escalated
(allowing for extensive damage). An operating system needs to be built in a secure
manner to ensure that an attacker cannot slip in and take over control of the system’s
processes.
When a process makes a request for a resource (memory allocation, printer, second-
ary storage devices, disk space, and so on), the operating system creates certain data
structures and dedicates the necessary processes for the activity to be completed. Once
the action takes place (a document is printed, a file is saved, or data are retrieved from
the drive), the process needs to tear down these built structures and release the resourc-
es back to the resource pool so they are available for other processes. If this does not
happen properly, the system may run out of critical resources, such as memory. Attackers
have identified programming errors in operating systems that allow them to starve the
system of its own memory. This means the attackers exploit a software vulnerability
that ensures that processes do not properly release their memory resources. Memory is
continually committed and not released and the system is depleted of this resource
until it can no longer function. This is another example of a denial of service attack.
Another situation to be concerned about is a software deadlock. One example of a
deadlock situation is when process A commits resource 1 and needs to use resource 2
to properly complete its task, but process B has committed resource 2 and needs re-
source 1 to finish its job. Both processes are in deadlock because they do not have the
resources they need to finish the function they are trying to carry out. This situation
does not take place as often as it used to, as a result of better programming. Also, oper-
ating systems now have the intelligence to detect this activity and either release com-
mitted resources or control the allocation of resources so they are properly shared
between processes.
Operating systems have different methods of dealing with resource requests and
releases and solving deadlock situations. In some systems, if a requested resource is
unavailable for a certain period of time, the operating system kills the process that
is “holding on to” that resource. This action releases the resource from the process that
had committed it and restarts the process so it is “clean” and available for use by other
applications. Other operating systems might require a program to request all the re-
sources it needs before it actually starts executing instructions, or require a program to
release its currently committed resources before it may acquire more.

Process Activity
Process 1, go into your room and play with your toys. Process 2, go into your room and play with
your toys. No intermingling and no fighting!
Computers can run different applications and processes at the same time. The
processes have to share resources and play nice with each other to ensure a stable and
safe computing environment that maintains its integrity. Some memory, data files, and
variables are actually shared between different processes. It is critical that more than
one process does not attempt to read and write to these items at the same time. The
operating system is the master program that prevents this type of action from taking
place and ensures that programs do not corrupt each other's data held in memory. The
operating system works with the CPU to provide time slicing through the use of
interrupts to ensure that processes are provided with adequate access to the CPU. This
also makes certain that critical system functions are not negatively affected by rogue
applications.
Key Terms
• Process: A program loaded into memory within an operating system.
• Multiprogramming: Interleaved execution of more than one program (process) or
task by a single operating system.
• Multitasking: Simultaneous execution of more than one program (process) or task
by a single operating system.
• Cooperative multitasking: Multitasking scheduling scheme used by older operating
systems to allow for computer resource time slicing. Processes had too much control
over resources, which allowed programs or the system to "hang."
• Preemptive multitasking: Multitasking scheduling scheme in which the operating
system controls computer resource time slicing. Used in newer, more stable
operating systems.
• Process states (ready, running, blocked): Processes can be in various activity levels.
Ready = runnable and waiting to be scheduled on the CPU. Running = instructions
being executed by the CPU. Blocked = waiting for an event or input; the process is
"suspended."
• Interrupts: Values assigned to computer components (hardware and software) to
allow for efficient computer resource time slicing.
• Maskable interrupt: Interrupt value assigned to a noncritical operating system
activity.
• Nonmaskable interrupt: Interrupt value assigned to a critical operating system
activity.
• Thread: Instruction set generated by a process when it has a specific activity that
needs to be carried out by an operating system. When the activity is finished, the
thread is destroyed.
• Multithreading: The ability of an application to carry out multiple activities
simultaneously by generating different instruction sets (threads).
• Software deadlock: Two processes cannot complete their activities because each is
waiting for a resource the other holds.
To protect processes from each other, operating systems commonly have function-
ality that implements process isolation. Process isolation is necessary to ensure that
processes do not “step on each other’s toes,” communicate in an insecure manner, or
negatively affect each other’s productivity. Older operating systems did not enforce pro-
cess isolation as well as systems do today. This is why in earlier operating systems, when
one of your programs hung, all other programs, and sometimes the operating system
itself, hung. With process isolation, if one process hangs for some reason, it will not
affect the other software running. (Process isolation is required for preemptive multi-
tasking.) Different methods can be used to enforce process isolation:
• Encapsulation of objects
• Time multiplexing of shared resources
• Naming distinctions
• Virtual memory mapping
When a process is encapsulated, no other process understands or interacts with its
internal programming code. When process A needs to communicate with process B,
process A just needs to know how to communicate with process B’s interface. An inter-
face defines how communication must take place between two processes. As an analo-
gy, think back to how you had to communicate with your third-grade teacher. You had
to call her Mrs. So-and-So, say please and thank you, and speak respectfully to get
whatever it was you needed. The same thing is true for software components that need
to communicate with each other. They must know how to communicate properly with
each other’s interfaces. The interfaces dictate the type of requests a process will accept
and the type of output that will be provided. So, two processes can communicate with
each other, even if they are written in different programming languages, as long as they
know how to communicate with each other’s interface. Encapsulation provides data
hiding, which means that outside software components will not know how a process
works and will not be able to manipulate the process’s internal code. This is an integ-
rity mechanism and enforces modularity in programming code.
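A minimal sketch of encapsulation and data hiding, using a hypothetical component named `InventoryProcess`: outside code never touches the internal data directly, only the interface methods, which dictate what requests are accepted and what output is returned.

```python
class InventoryProcess:
    """Encapsulated component: internal state is hidden, and other
    software interacts with it only through the defined interface."""

    def __init__(self):
        self._items = {}   # internal data; not manipulated from outside

    # The interface: the only requests this component accepts.
    def add(self, name, qty):
        self._items[name] = self._items.get(name, 0) + qty

    def count(self, name):
        return self._items.get(name, 0)

inv = InventoryProcess()
inv.add("widget", 3)
print(inv.count("widget"))   # → 3
```

A caller written in any language that can reach this interface gets the same behavior, without ever knowing (or being able to corrupt) how the component stores its data internally.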
If a process is not isolated properly through encapsulation, this means its interface
is accepting potentially malicious instructions. The interface is like a membrane filter
that our cells within our bodies use. Our cells filter fluid and molecules that are at-
tempting to enter them. If some type of toxin slips by the filter, we can get sick because
the toxin has entered the worker bees of our bodies—cells. Processes are the worker
bees of our software. If they accept malicious instructions, our systems can get sick.
Time multiplexing was already discussed, although we did not use this term. Time
multiplexing is a technology that allows processes to use the same resources. As stated
earlier, a CPU must be shared among many processes. Although it seems as though all
applications are running (executing their instructions) simultaneously, the operating
system is splitting up time shares between each process. Multiplexing means there are
several data sources and the individual data pieces are piped into one communication
channel. In this instance, the operating system is coordinating the different requests
from the different processes and piping them through the one shared CPU. An operat-
ing system must provide proper time multiplexing (resource sharing) to ensure a stable
working environment exists for software and users.
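The time-slicing idea can be sketched with a simple round-robin simulation; the process names and time units here are hypothetical, and real schedulers are far more sophisticated:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate CPU time slicing: each process runs for at most
    `quantum` time units before the CPU is handed to the next one."""
    queue = deque(jobs.items())        # (name, remaining CPU time needed)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)             # this process gets the CPU slice
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))   # not done; back of the line
    return order

# Three hypothetical processes needing 3, 2, and 1 units of CPU time:
print(round_robin({"A": 3, "B": 2, "C": 1}, quantum=1))
# → ['A', 'B', 'C', 'A', 'B', 'A']
```

Even though only one process runs at any instant, interleaving the slices quickly enough gives the appearance that all three are executing simultaneously.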
NOTE Today’s CPUs have multiple cores, meaning that they have multiple
processors. This basically means that there are several smaller CPUs
(processors) integrated into one larger CPU. So in reality the different
processors on the CPU can execute instruction code simultaneously, making
the computer overall much faster. The operating system has to multiplex
process requests and “feed” them into the individual processors for
instruction execution.
While time multiplexing and multitasking are performance requirements of our
systems today and are truly better than sliced bread, they introduce a lot of complexity
to our systems. We are forcing our operating systems not only to do more things faster,
but to do all of these things simultaneously. As the complexity of our systems increases,
the potential of truly securing them decreases. There is an inverse relationship between
complexity and security: as one goes up, the other usually goes down. But this fact does
not necessarily predict doom and gloom; what it means is that software architecture
and development have to be done in a more disciplined manner.
Naming distinctions just means that the different processes have their own name or
identification value. Processes are usually assigned process identification (PID) values,
which the operating system and other processes use to call upon them. If each process
is isolated, that means each process has its own unique PID value. This is just another
way to enforce process isolation.
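You can see naming distinctions in practice with a few lines of Python; the child process here is just a throwaway interpreter spawned to report its own PID:

```python
import os
import subprocess
import sys

# Every process gets its own PID, which the operating system
# and other processes use to refer to it distinctly.
parent_pid = os.getpid()
child_pid = int(subprocess.run(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    capture_output=True, text=True).stdout)

print(parent_pid != child_pid)   # → True: two processes never share a PID
```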
Virtual address memory mapping means that the addresses software works with are
different from the physical addresses of memory. An application is written such that it
basically “thinks” it is the only program running
within an operating system. When an application needs memory to work with, it tells
the operating system’s memory manager how much memory it needs. The operating
system carves out that amount of memory and assigns it to the requesting application.
The application uses its own address scheme, which usually starts at 0, but in reality, the
application does not work in the physical address space it thinks it is working in. Rather,
it works in the address space the memory manager assigns to it. The physical memory
is the RAM chips in the system. The operating system chops up this memory and as-
signs portions of it to the requesting processes. Once the process is assigned its own
memory space, it can address this portion however it is written to do so. Virtual address
mapping allows the different processes to have their own memory space; the memory
manager ensures no processes improperly interact with another process’s memory. This
provides integrity and confidentiality for the individual processes and their data and an
overall stable processing environment for the operating system.
If an operating system has a flaw in the programming code that controls memory
mapping, an attacker could manipulate this function. Since everything within an oper-
ating system actually has to operate in memory to work, the ability to manipulate
memory addressing can be very dangerous.
Memory Management
To provide a safe and stable environment, an operating system must exercise proper
memory management—one of its most important tasks. After all, everything happens
in memory.
The goals of memory management are to
• Provide an abstraction level for programmers
• Maximize performance with the limited amount of memory available
• Protect the operating system and applications loaded into memory
Abstraction means that the details of something are hidden. Developers of applica-
tions do not know the amount or type of memory that will be available in each and
every system their code will be loaded on. If a developer had to be concerned with this
type of detail, then her application would be able to work only on the one system that
maps to all of her specifications. To allow for portability, the memory manager hides all
of the memory issues and just provides the application with a memory segment. The
application is able to run without having to know all the hairy details of the operating
system and hardware it is running on.
Every computer has a memory hierarchy. Certain small amounts of memory are very
fast and expensive (registers, cache), while larger amounts are slower and less expensive
(RAM, hard drive). The portion of the operating system that keeps track of how these
different types of memory are used is lovingly called the memory manager. Its jobs are
to allocate and deallocate different memory segments, enforce access control to ensure
processes are interacting only with their own memory segments, and swap memory
contents from RAM to the hard drive.
The memory manager has five basic responsibilities:
Relocation
• Swap contents from RAM to the hard drive as needed (explained later in the
“Virtual Memory” section of this chapter)
• Provide pointers for applications if their instructions and memory segment
have been moved to a different location in main memory
Protection
• Limit processes to interact only with the memory segments assigned to them
• Provide access control to memory segments
Sharing
• Use complex controls to ensure integrity and confidentiality when processes
need to use the same shared memory segments
• Allow many users with different levels of access to interact with the same
application running in one memory segment
Logical organization
• Segment all memory types and provide an addressing scheme for each at an
abstraction level
• Allow for the sharing of specific software modules, such as dynamic link
library (DLL) procedures
Physical organization
• Segment the physical memory space for application and operating system
processes
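Two of these responsibilities, allocation and protection, can be sketched in a toy class; the bookkeeping here is invented for illustration and is nothing like a real memory manager's implementation:

```python
class MemoryManager:
    """Toy sketch of allocation plus protection: each process gets its
    own segment, and access outside that segment is rejected."""

    def __init__(self, size):
        self.size = size
        self.next_free = 0
        self.segments = {}   # pid -> (base, limit)

    def allocate(self, pid, amount):
        # Carve out a contiguous segment for the requesting process.
        base = self.next_free
        limit = base + amount - 1
        if limit >= self.size:
            raise MemoryError("out of physical memory")
        self.segments[pid] = (base, limit)
        self.next_free = limit + 1
        return base, limit

    def check_access(self, pid, addr):
        # Protection: a process may reference only addresses that fall
        # inside the segment assigned to it (its base/limit bounds).
        base, limit = self.segments[pid]
        return base <= addr <= limit

mm = MemoryManager(1000)
mm.allocate(1, 100)              # process 1 gets addresses 0..99
mm.allocate(2, 100)              # process 2 gets addresses 100..199
print(mm.check_access(1, 50))    # → True: inside its own segment
print(mm.check_access(1, 150))   # → False: that is process 2's memory
```

The `check_access` test is the software analog of the base and limit register comparison the CPU performs, described in the next section.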
NOTE A dynamic link library (DLL) is a set of functions that applications
can call upon to carry out different types of procedures. For example, the
Windows operating system has a crypt32.dll that is used by the operating
system and applications for cryptographic functions. Windows has a set of
DLLs, which is just a library of functions to be called upon and crypt32.dll
is just one example.
How can an operating system make sure a process only interacts with its memory
segment? When a process creates a thread, because it needs some instructions and data
processed, the CPU uses two registers. A base register contains the beginning address
that was assigned to the process, and a limit register contains the ending address, as il-
lustrated in Figure 4-8. The thread contains an address of where the instruction and
data reside that need to be processed. The CPU compares this address to the base and
limit registers to make sure the thread is not trying to access a memory segment outside
of its bounds. So, the base register makes it impossible for a thread to reference a mem-
ory address below its allocated memory segment, and the limit register makes it impos-
sible for a thread to reference a memory address above this segment.
If an operating system has a memory manager that does not enforce the memory
limits properly, an attacker can manipulate its functionality and use it against the sys-
tem. There have been several instances over the years where attackers would do just this
and bypass these types of controls. Architects and developers of operating systems have
to think through these types of weaknesses and attack types to ensure that the system
properly protects itself.
Figure 4-8
Base and limit
registers are used
to contain a process
in its own memory
segment.
Memory Protection Issues
• Every address reference is validated for protection.
• Two or more processes can share access to the same segment with
potentially different access rights.
• Different instruction and data types can be assigned different levels of
protection.
• Processes cannot generate an unpermitted address or gain access to an
unpermitted segment.
All of these issues make it more difficult for memory management to be
carried out properly in a constantly changing and complex system.
Memory Types
Memory management is critical, but what types of memory actually have to be managed?
As stated previously, the operating system instructions, applications, and data are
held in memory, but so are the basic input/output system (BIOS), device controller
instructions, and firmware. They do not all reside in the same memory location or even
the same type of memory. The different types of memory, what they are used for, and
how each is accessed can get a bit confusing because the CPU deals with several differ-
ent types for different reasons.
The following sections outline the different types of memory that can be used with-
in computer systems.
Random Access Memory
Random access memory (RAM) is a temporary storage facility where data and program
instructions can be held and altered. It is used for read/write activities by the
operating system and applications. It is described as volatile because if
the computer’s power supply is terminated, then all information within this type of
memory is lost.
RAM is an integrated circuit made up of millions of transistors and capacitors. The
capacitor is where the actual charge is stored, which represents a 1 or 0 to the system.
The transistor acts like a gate or a switch. A capacitor that is storing a binary value of 1
has several electrons stored in it, which have a negative charge, whereas a capacitor that
is storing a 0 value is empty. When the operating system writes over a 1 bit with a 0 bit,
in reality, it is just emptying out the electrons from that specific capacitor.
One problem is that these capacitors cannot keep their charge for long. Therefore, a
memory controller has to “recharge” the values in the capacitors, which just means it
continually reads and writes the same values to the capacitors. If the memory controller
does not “refresh” the value of 1, the capacitor will start losing its electrons and become
a 0 or a corrupted value. This explains how dynamic RAM (DRAM) works. The data be-
ing held in the RAM memory cells must be continually and dynamically refreshed so
your bits do not magically disappear. This activity of constantly refreshing takes time,
which is why DRAM is slower than static RAM.
NOTE When we are dealing with memory activities, we use a time metric
of nanoseconds (ns), which is a billionth of a second. So if you look at your
RAM chip and it states 70 ns, this means it takes 70 nanoseconds to read and
refresh each memory cell.
Static RAM (SRAM) does not require this continuous-refreshing nonsense; it uses a
different technology, by holding bits in its memory cells without the use of capacitors,
but it does require more transistors than DRAM. Since SRAM does not need to be re-
freshed, it is faster than DRAM, but because SRAM requires more transistors, it takes up
more space on the RAM chip. Manufacturers cannot fit as many SRAM memory cells on
a memory chip as they can DRAM memory cells, which is why SRAM is more expensive.
So, DRAM is cheaper and slower, and SRAM is more expensive and faster. It always
seems to go that way. SRAM has been used in cache, and DRAM is commonly used in
RAM chips.
Because life is not confusing enough, we have many other types of RAM. The main
reason for the continual evolution of RAM types is that it directly affects the speed of
the computer itself. Many people mistakenly think that just because you have a fast
processor, your computer will be fast. However, memory type and size and bus sizes are
also critical components. Think of memory as pieces of paper used by the system to
hold instructions. If the system had small pieces of papers (small amount of memory)
to read and write from, it would spend most of its time looking for these pieces and
lining them up properly. When a computer spends more time moving data from one
small portion of memory to another than actually processing the data, it is referred
to as thrashing. This causes the system to crawl in speed and your frustration level to
increase.
The size of the data bus also makes a difference in system speed. You can think of a
data bus as a highway that connects different portions of the computer. If a ton of data
must go from memory to the CPU and can only travel over a 4-lane highway, compared
to a 64-lane highway, there will be delays in processing.
Increased addressing space also increases system performance. A system that uses a
64-bit addressing scheme can put more instructions and data on a data bus at one time
compared to a system that uses a 32-bit addressing scheme. So a larger addressing
scheme allows more stuff to be moved around and processed and a larger bus size pro-
vides the highway to move this stuff around quickly and efficiently.
So the processor, memory type and amount, memory addressing, and bus speeds
are critical components to system performance.
The following are additional types of RAM you should be familiar with:
• Synchronous DRAM (SDRAM): Synchronizes itself with the system’s CPU by
coordinating signal input and output on the RAM chip with the CPU clock, so the
timing of the CPU and the timing of the memory activities are synchronized. This
increases the speed of transmitting and executing data.
• Extended data out DRAM (EDO DRAM): Faster than DRAM because, whereas
DRAM can access only one block of data at a time, EDO DRAM can capture the
next block of data while the first block is being sent to the CPU for processing.
It has a type of “look ahead” feature that speeds up memory access.
• Burst EDO DRAM (BEDO DRAM): Works like (and builds upon) EDO DRAM in
that it can transmit data to the CPU as it carries out a read operation, but it can
send more data at once (burst). It reads and sends up to four memory addresses
in a small number of clock cycles.
• Double data rate SDRAM (DDR SDRAM): Carries out read operations on both the
rising and falling edges of a clock pulse. Instead of carrying out one operation per
clock cycle, it carries out two and thus can deliver twice the throughput of SDRAM
at the same clock rate. Basically, it doubles the speed of memory activities compared
to SDRAM. Pretty groovy.
NOTE These different RAM types require different controller chips to
interface with them; therefore, the motherboards that these memory types
are used on often are very specific in nature.
Well, that’s enough about RAM for now. Let’s look at other types of memory that are
used in basically every computer in the world.
Read-Only Memory
Read-only memory (ROM) is a nonvolatile memory type, meaning that when a com-
puter’s power is turned off, the data are still held within the memory chips. When data
are written into ROM memory chips, the data cannot be altered. Individual ROM chips
are manufactured with the stored program or routines designed into them. The software
that is stored within ROM is called firmware.
Programmable read-only memory (PROM) is a form of ROM that can be modified
after it has been manufactured. PROM can be programmed only one time because the
voltage that is used to write bits into the memory cells actually burns out the fuses that
connect the individual memory cells. The instructions are “burned into” PROM using
a specialized PROM programmer device.
Erasable programmable read-only memory (EPROM) can be erased, modified, and
upgraded. EPROM holds data that can be electrically erased or written to. To erase the
data on the memory chip, you need your handy-dandy ultraviolet (UV) light device
that provides just the right level of energy. The EPROM chip has a quartz window,
which is where you point the UV light. Although playing with UV light devices can be
fun for the whole family, we have moved on to another type of ROM technology that
does not require this type of activity.
To erase an EPROM chip, you must remove the chip from the computer and wave
your magic UV wand, which erases all of the data on the chip—not just portions of it.
So someone invented electrically erasable programmable read-only memory (EEPROM),
and we all put our UV light wands away for good.
EEPROM is similar to EPROM, but its data storage can be erased and modified elec-
trically by onboard programming circuitry and signals. This activity erases only one
byte at a time, which is slow. And because we are an impatient society, yet another
technology was developed that is very similar, but works more quickly.
Hardware Segmentation
Systems of a higher trust level may need to implement hardware segmentation of
the memory used by different processes. This means memory is separated physi-
cally instead of just logically. This adds another layer of protection to ensure that
a lower-privileged process does not access and modify a higher-level process’s
memory space.
Flash memory is a special type of memory that is used in digital cameras, BIOS
chips, memory cards, and video game consoles. It is a solid-state technology, meaning
it does not have moving parts and is used more as a type of hard drive than memory.
Flash memory basically moves around different levels of voltages to indicate that a
1 or 0 must be held in a specific address. It acts as a ROM technology rather than a RAM
technology. (For example, you do not lose pictures stored on your memory stick in your
digital camera just because your camera loses power. RAM is volatile and ROM is non-
volatile.) When Flash memory needs to be erased and turned back to its original state,
a program initiates the internal circuits to apply an electric field. The erasing function
takes place in blocks or on the entire chip instead of erasing one byte at a time.
Flash memory is used as a small disk drive in most implementations. Its benefits
over a regular hard drive are that it is smaller, faster, and lighter. So let’s deploy Flash
memory everywhere and replace our hard drives! Maybe one day. Today it is relatively
expensive compared to regular hard drives.
Cache Memory
I am going to need this later, so I will just stick it into cache for now.
Cache memory is a type of memory used for high-speed writing and reading activi-
ties. When the system assumes (through its programmatic logic) that it will need to
access specific information many times throughout its processing activities, it will store
the information in cache memory so it is easily and quickly accessible. Data in cache
can be accessed much more quickly than data stored in other memory types. Therefore,
any information needed by the CPU very quickly, and very often, is usually stored in
cache memory, thereby improving the overall speed of the computer system.
An analogy is how the brain stores information it uses often. If one of Marge’s pri-
mary functions at her job is to order parts, which requires telling vendors the compa-
ny’s address, Marge stores this address information in a portion of her brain from which
she can easily and quickly access it. This information is held in a type of cache. If Marge
was asked to recall her third-grade teacher’s name, this information would not neces-
sarily be held in cache memory, but in a more long-term storage facility within her
noggin. The long-term storage within her brain is comparable to a system’s hard drive.
It takes more time to track down and return information from a hard drive than from
specialized cache memory.
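The same idea shows up in software caching. As a rough analogy only (this is an application-level cache, not hardware cache memory), here is a sketch using Python's functools.lru_cache, with a hypothetical slow lookup standing in for a trip to long-term storage:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def lookup_vendor_address(part_number):
    # Stand-in for a slow fetch from long-term storage
    # (e.g., a disk read or database query).
    time.sleep(0.01)
    return f"address-for-{part_number}"

lookup_vendor_address(42)   # slow: first access, result goes into the cache
lookup_vendor_address(42)   # fast: served straight from the cache
print(lookup_vendor_address.cache_info().hits)   # → 1
```

Like Marge's often-used company address, frequently requested values are kept where they can be returned quickly instead of being tracked down from slower storage every time.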
NOTE Different motherboards have different types of cache. Level 1 (L1) is
faster than Level 2 (L2), and L2 is faster than L3. Some processors and device
controllers have cache memory built into them. L1 and L2 are usually built
into the processors and the controllers themselves.
Memory Mapping
Okay, here is your memory, here is my memory, and here is Bob’s memory. No one use each
other’s memory!
Because there are different types of memory holding different types of data, a com-
puter system does not want to let every user, process, and application access all types of
memory anytime they want to. Access to memory needs to be controlled to ensure data
do not get corrupted and that sensitive information is not available to unauthorized
processes. This type of control takes place through memory mapping and addressing.
The CPU is one of the most trusted components within a system, and can access
memory directly. It uses physical addresses instead of pointers (logical addresses) to
memory segments. The CPU has physical wires connecting it to the memory chips
within the computer. Because physical wires connect the two types of components,
physical addresses are used to represent the intersection between the wires and the
transistors on a memory chip. Software does not use physical addresses; instead, it em-
ploys logical memory addresses. Accessing memory indirectly provides an access con-
trol layer between the software and the memory, which is done for protection and
efficiency. Figure 4-9 illustrates how the CPU can access memory directly using physical
addresses and how software must use memory indirectly through a memory mapper.
Let’s look at an analogy. You would like to talk to Mr. Marshall about possibly buy-
ing some acreage in Iowa. You don’t know Mr. Marshall personally, and you do not
want to give out your physical address and have him show up at your doorstep. Instead,
you would like to use a more abstract and controlled way of communicating, so you
give Mr. Marshall your phone number so you can talk to him about the land and deter-
mine whether you want to meet him in person. The same type of thing happens in
computers. When a computer runs software, it does not want to expose itself
unnecessarily to software written by good and bad programmers alike. Operating
systems enable software to access memory indirectly by using index tables and
pointers, instead of giving them the right to access the memory directly. This is one
way the computer system protects itself. If an operating system has a programming
flaw that allows an attacker to directly access memory through physical addresses,
there is no memory manager involved to control how memory is being used.
Figure 4-9 The CPU and applications access memory differently.
When a program attempts to access memory, its access rights are verified and then
instructions and commands are carried out in a way to ensure that badly written code
does not affect other programs or the system itself. Applications, and their processes,
can only access the memory allocated to them, as shown in Figure 4-10. This type of
memory architecture provides protection and efficiency.
The physical memory addresses that the CPU uses are called absolute addresses. The
indexed memory addresses that software uses are referred to as logical addresses. And
relative addresses are based on a known address with an offset value applied. As ex-
plained previously, an application does not “know” it is sharing memory with other
applications. When the program needs a memory segment to work with, it tells the
memory manager how much memory it needs. The memory manager allocates this
much physical memory, which could have the physical addressing of 34,000 to 39,000,
for example. But the application is not written to call upon addresses in this numbering
scheme. It is most likely developed to call upon addresses starting with 0 and extending
to, let’s say, 5000. So the memory manager allows the application to use its own ad-
dressing scheme—the logical addresses. When the application makes a call to one of
these “phantom” logical addresses, the memory manager must map this address to the
actual physical address. (It’s like two people using their own naming scheme. When
Bob asks Diane for a ball, Diane knows he really means a stapler. Don’t judge Bob and
Diane; it works for them.)
Figure 4-10 Applications, and the processes they use, access their own memory segments only.
The mapping process is illustrated in Figure 4-11. When a thread indicates the in-
struction needs to be processed, it provides a logical address. The memory manager
maps the logical address to the physical address, so the CPU knows where the instruc-
tion is located. The thread will actually be using a relative address, because the application
uses the address space of 0 to 5000. When the thread indicates it needs the instruction
at the memory address 3400 to be executed, the memory manager has to work from its
mapping of logical address 0 to the actual physical address and then figure out the
physical address for the logical address 3400. So the logical address 3400 is relative to
the starting address 0.
Figure 4-11 The CPU uses absolute addresses, and software uses logical addresses.
As an analogy, if I know you use a different number system than everyone else in the
world, and you tell me that you need 14 cookies, I would need to know where to start
in your number scheme to figure out how many cookies to really give you. So, if you
inform me that in “your world” your numbering scheme starts at 5, I would map 5 to
0 and know that the offset is a value of 5. So when you tell me you want 14 cookies (the
relative number), I take the offset value into consideration. I know that you start at the
value 5, so I map your logical address of 14 to the physical number of 9. (But I would
not give you nine cookies, because you made me work too hard to figure all of this out.
I will just eat them myself.)
So the application is working in its “own world” using its “own addresses,” and the
memory manager has to map these values to reality, which means the absolute address
values.
Memory management is complex, and whenever there is complexity, there are most
likely vulnerabilities that can be exploited by attackers. It is very easy for people to com-
plain about software vendors and how they do not produce software that provides the
necessary level of security, but hopefully you are gaining more insight into the actual
complexity that is involved with these tasks.
Buffer Overflows
My cup runneth over and so does my buffer.
Today, many people know the term “buffer overflow” and the basic definition, but
it is important for security professionals to understand what is going on beneath the
covers.
A buffer overflow takes place when too much data are accepted as input to a specific
process. A buffer is an allocated segment of memory. A buffer can be overflowed arbi-
trarily with too much data, but for it to be of any use to an attacker, the code inserted
into the buffer must be of a specific length, followed up by commands the attacker
wants executed. So, the purpose of a buffer overflow may be either to make a mess, by
shoving arbitrary data into various memory segments, or to accomplish a specific task,
by pushing into the memory segment a carefully crafted set of data that will accomplish
a specific task. This task could be to open a command shell with administrative privi-
lege or execute malicious code.
Let’s take a deeper look at how this is accomplished. Software may be written to
accept data from a user, website, database, or another application. The accepted data
needs something to happen to it, because it has been inserted for some type of ma-
nipulation or calculation, or to be used as a parameter to be passed to a procedure. A
procedure is code that can carry out a specific type of function on the data and return
the result to the requesting software, as shown in Figure 4-12.
When a programmer writes a piece of software that will accept data, this data and
its associated instructions will be stored in the buffers that make up a stack. The buffers
need to be the right size to accept the inputted data. So if the input is supposed to be
one character, the buffer should be one byte in size. If a programmer does not ensure
that only one byte of data is being inserted into the software, then someone can input
several characters at once and thus overflow that specific buffer.

NOTE You can think of a buffer as a small bucket to hold water (data). We
have several of these small buckets stacked on top of one another (memory
stack), and if too much water is poured into the top bucket, it spills over into
the buckets below it (buffer overflow) and overwrites the instructions and
data on the memory stack.
If you are interacting with an application that calculates mortgage rates, you have to
put in the parameters that need to be calculated—years of loan, percentage of interest
rate, and amount of loan. These parameters are passed into empty variables and put in
a linear construct (memory stack), which acts like a queue for the procedure to pull
from when it carries out this calculation. The first thing your mortgage rate application
lays down on the stack is its return pointer. This is a pointer to the requesting applica-
tion’s memory address that tells the procedure to return control to the requesting ap-
plication after the procedure has worked through all the values on the stack. The
Figure 4-12 A memory stack has individual buffers to hold instructions and data.
mortgage rate application then places on top of the return pointer the rest of the data
you have input and sends a request to the procedure to carry out the necessary calcula-
tion, as illustrated in Figure 4-12. The procedure takes the data off the stack starting
at the top, so they are first in, last out (FILO). The procedure carries out its functions on
all the data and returns the result and control back to the requesting mortgage rate
application once it hits the return pointer in the stack.
So the stack is just a segment in memory that allows for communication between
the requesting application and the procedure or subroutine. The potential for problems
comes into play when the requesting application does not carry out proper bounds
checking to ensure the inputted data are of an acceptable length. Look at the following
C code to see how this could happen:
#include <stdio.h>
#include <string.h>   /* for strcpy() */

int main(int argc, char **argv)
{
    char buf1[5] = "1111";
    char buf2[7] = "222222";
    strcpy(buf2, "3333333333"); /* writes ten characters plus a NULL into a 7-byte buffer */
    printf("%s\n", buf2);
    printf("%s\n", buf1);
    return 0;
}
CAUTION You do not need to know C programming for the CISSP exam.
We are digging deep into this topic because buffer overflows are so common
and have caused grave security breaches over the years. For the CISSP exam,
you just need to understand the overall concept of a buffer overflow.
Here, we are setting up a buffer (buf1) to hold four characters and a NULL value,
and a second buffer (buf2) to hold six characters and a NULL value. (The NULL values
indicate the buffer’s end place in memory.) If we viewed these buffers, we would see the
following:
Buf2
\0 2 2 2 2 2 2
Buf1
\0 1 1 1 1
The application then accepts ten 3s into buf2, which can only hold six characters.
So the six variables in buf2 are filled and then the four variables in buf1 are filled,
overwriting the original contents of buf1. This took place because the strcpy com-
mand did not make sure the buffer was large enough to hold that many values. So now
if we looked at the two buffers, we would see the following:
Buf2
\0 3 3 3 3 3 3
Buf1
\0 3 3 3 3

But what gets even more interesting is when the actual return pointer is written
over, as shown in Figure 4-13. In a carefully crafted buffer overflow attack, the stack is
filled properly so the return pointer can be overwritten and control is given to the mali-
cious instructions that have been loaded onto the stack instead of back to the request-
ing application. This allows the malicious instructions to be executed in the security
context of the requesting application. If this application is running in a privileged
mode, the attacker has more permissions and rights to carry out more damage.
The attacker must know the size of the buffer to overwrite and must know the ad-
dresses that have been assigned to the stack. Without knowing these addresses, she
could not lay down a new return pointer to her malicious code. The attacker must also
write this dangerous payload to be small enough so it can be passed as input from one
procedure to the next.
Windows’ core is written in the C programming language and has layers and layers
of object-oriented code on top of it. When a procedure needs to call upon the operating
system to carry out some type of task, it calls upon a system service via an application
program interface (API) call. The API works like a doorway to the operating system’s
functionality.
Figure 4-13 A buffer overflow attack

The C programming language is susceptible to buffer overflow attacks because it
allows for direct pointer manipulations to take place. Specific commands can provide
access to low-level memory addresses without carrying out bounds checking. The C
functions that do perform the necessary boundary checking include strncpy(),
strncat(), snprintf(), and vsnprintf().
NOTE An operating system must be written to work with specific CPU
architectures. These architectures dictate system memory addressing,
protection mechanisms, and modes of execution, and work with specific
instruction sets. This means a buffer overflow attack that works on an Intel chip
will not necessarily work on an AMD or a SPARC processor. These different
processors set up the memory address of the stacks differently, so the attacker
may have to craft a different buffer overflow code for different platforms.
Buffer overflows are in the source code of various applications and operating sys-
tems. They have been around since programmers started developing software. This
means it is very difficult for a user to identify and fix them. When a buffer overflow is
identified, the vendor usually sends out a patch, so keeping systems current on updates,
hotfixes, and patches is usually the best countermeasure. Some products installed on
systems can also watch for input values that might result in buffer overflows, but the
best countermeasure is proper programming. This means use bounds checking. If an
input value is only supposed to be nine characters, then the application should only
accept nine characters and no more. Some languages are more susceptible to buffer
overflows than others, so programmers should understand these issues, use the right
languages for the right purposes, and carry out code review to identify buffer overflow
vulnerabilities.
Memory Protection Techniques
Since your whole operating system and all your applications are loaded and run
in memory, this is where the attackers can really do their damage. Vendors of dif-
ferent operating systems (Windows, Unix, Linux, Macintosh, etc.) have imple-
mented various types of protection methods integrated into their memory man-
ager processes. For example, Windows Vista was the first version of Windows to
implement address space layout randomization (ASLR), which was first imple-
mented in OpenBSD.
If an attacker wants to maliciously interact with a process, he needs to know
what memory address to send his attack inputs to. If the operating system changed
these addresses continuously, which is what ASLR accomplishes, this would
greatly reduce the potential success of his attack. You can’t mess with something
if you don’t know where it is.
Many of the main operating systems use some form of data execution prevention (DEP), which can be implemented via hardware (CPU) or software (operating system). The actual implementations of DEP vary, but the main goal is to help ensure that executable code does not function within memory segments that could be dangerous. It is similar to not allowing someone suspicious in your house. You don't know if this person is really going to do something malicious, but just to make sure, you will not allow him to be in a position where he could bring harm to you or your household. DEP can mark certain memory locations as "off limits" with the goal of reducing the "playing field" for hackers and malware.

Memory Leaks
Oh great, the memory leaked all over me. Does someone have a mop?
As stated earlier, when an application makes a request for a memory segment to
work within, it is allocated a specific memory amount by the operating system. When
the application is done with the memory, it is supposed to tell the operating system to
release the memory so it is available to other applications. This is only fair. But some
applications are written poorly and do not indicate to the system that this memory is
no longer in use. If this happens enough times, the operating system could become
“starved” for memory, which would drastically affect the system’s performance.
When a memory leak is identified in the hacker world, this opens the door to new
denial-of-service (DoS) attacks. For example, when it was uncovered that a Unix appli-
cation and a specific version of a Telnet protocol contained memory leaks, hackers
amplified the problem. They continually sent Telnet requests to systems with these
vulnerabilities. The systems would allocate resources for these network requests, which
in turn would cause more and more memory to be allocated and not returned. Eventu-
ally the systems would run out of memory and freeze.
NOTE Memory leaks can take place in operating systems, applications, and
software drivers.
Two main countermeasures can protect against memory leaks: developing better
code that releases memory properly, and using a garbage collector. A garbage collector
is software that runs an algorithm to identify unused committed memory and then tells
the operating system to mark that memory as “available.” Different types of garbage
collectors work with different operating systems and programming languages.
Virtual Memory
My RAM is overflowing! Can I use some of your hard drive space?
Response: No, I don’t like you.
Secondary storage is considered nonvolatile storage media and includes such things
as the computer’s hard drive, USB drives, and CD-ROMs. When RAM and secondary
storage are combined, the result is virtual memory. The system uses hard drive space to
extend its RAM memory space. Swap space is the reserved hard drive space used to ex-
tend RAM capabilities. Windows systems use the pagefile.sys file to reserve this space.
When a system fills up its volatile memory space, it writes data from memory onto the
hard drive. When a program requests access to this data, it is brought from the hard
drive back into memory in specific units, called pages. This process is called virtual
memory paging. Accessing data kept in pages on the hard drive takes more time than
accessing data kept in RAM memory because physical disk read/write access must take
place. Internal control blocks, maintained by the operating system, keep track of what
page frames are residing in RAM and what is available “offline,” ready to be called into
RAM for execution or processing, if needed. The payoff is that it seems as though the
system can hold an incredible amount of information and program instructions in
memory, as shown in Figure 4-14.
A security issue with using virtual swap space is that when the system is shut down,
or processes that were using the swap space are terminated, the pointers to the pages are
reset to “available” even though the actual data written to disk are still physically there.
These data could conceivably be compromised and captured. On various operating
systems, there are routines to wipe the swap spaces after a process is done with it, before
Figure 4-14 Combining RAM and secondary storage to create virtual memory
it is used again. The routines should also erase this data before a system shutdown, at
which time the operating system would no longer be able to maintain any control over
what happens on the hard drive surface.
NOTE If a program, file, or data are encrypted and saved on the hard drive,
they will be decrypted when used by the controlling program. While these
unencrypted data are sitting in RAM, the system could write out the data to
the swap space on the hard drive, in their unencrypted state. Attackers have
figured out how to gain access to this space in unauthorized manners.
Key Terms
• Process isolation Protection mechanism provided by operating systems that can be implemented as encapsulation, time multiplexing of shared resources, naming distinctions, and virtual memory mapping.
• Dynamic link libraries (DLLs) A set of subroutines that are shared by different applications and operating system processes.
• Base registers Beginning of address space assigned to a process. Used to ensure a process does not make a request outside its assigned memory boundaries.
• Limit registers Ending of address space assigned to a process. Used to ensure a process does not make a request outside its assigned memory boundaries.
• RAM Memory sticks that are plugged into a computer's motherboard and work as volatile memory space for an operating system.
• ROM Nonvolatile memory that is used on motherboards for BIOS functionality and various device controllers to allow for operating system-to-device communication. Sometimes used for off-loading graphic rendering or cryptographic functionality.
• Hardware segmentation Physically mapping software to individual memory segments.
• Cache memory Fast and expensive memory type that is used by a CPU to increase read and write operations.
• Absolute addresses Hardware addresses used by the CPU.
• Logical addresses Indirect addressing used by processes within an operating system. The memory manager carries out logical-to-absolute address mapping.
• Stack Memory construct that is made up of individually addressable buffers. Process-to-process communication takes place through the use of stacks.
• Buffer overflow Too much data is put into the buffers that make up a stack. Common attack vector used by hackers to run malicious code on a target system.
• Address space layout randomization (ASLR) Memory protection mechanism used by some operating systems. The addresses used by components of a process are randomized so that it is harder for an attacker to exploit specific memory vulnerabilities.
• Data execution prevention (DEP) Memory protection mechanism used by some operating systems. Memory segments may be marked as nonexecutable so that they cannot be misused by malicious software.
• Garbage collector Tool that marks unused memory segments as usable to ensure that an operating system does not run out of memory.
• Virtual memory Combination of main memory (RAM) and secondary memory within an operating system.

Input/Output Device Management
Some things come in, some things go out.
Response: We took a vote and would like you to go out.
We have covered a lot of operating system responsibilities up to now, and we are not
stopping yet. An operating system also has to control all input/output devices. It sends
commands to them, accepts their interrupts when they need to communicate with the
CPU, and provides an interface between the devices and the applications.
I/O devices are usually considered block or character devices. A block device works
with data in fixed-size blocks, each block with its own unique address. A disk drive is
an example of a block device. A character device, such as a printer, network interface
card, or mouse, works with streams of characters, without using any fixed sizes. This
type of data is not addressable.
When a user chooses to print a document, open a stored file on a hard drive, or save
files to a USB drive, these requests go from the application the user is working in, through
the operating system, and to the device requested. The operating system uses a device
driver to communicate with a device controller, which may be a circuit card that fits into
an expansion slot on the motherboard. The controller is an electrical component with its
own software that provides a communication path that enables the device and operating
system to exchange data. The operating system sends commands to the device controller’s
registers and the controller then writes data to the peripheral device or extracts data to be
processed by the CPU, depending on the given commands. If the command is to extract
data from the hard drive, the controller takes the bits and puts them into the necessary
block size and carries out a checksum activity to verify the integrity of the data. If the in-
tegrity is successfully verified, the data are put into memory for the CPU to interact with.
Operating systems need to access and release devices and computer resources prop-
erly. Different operating systems handle accessing devices and resources differently. For
example, Windows 2000 is considered a more stable and safer data processing environ-
ment than Windows 9x because applications in Windows 2000 cannot make direct re-
quests to hardware devices. Windows 2000 and later versions have a much more
controlled method of accessing devices than Windows 9x. This method helps protect
the system from badly written code that does not properly request and release resourc-
es. Such a level of protection helps ensure the resources’ integrity and availability.
Interrupts
Excuse me, can I talk now?
Response: Please wait until we call your number.
When an I/O device has completed whatever task was asked of it, it needs to inform
the CPU that the necessary data are now in memory for processing. The device’s con-
troller sends a signal down a bus, which is detected by the interrupt controller. (This is
what it means to use an interrupt. The device signals the interrupt controller and is
basically saying, “I am done and need attention now.”) If the CPU is busy and the de-
vice’s interrupt is not a higher priority than whatever job is being processed, then the
device has to wait. The interrupt controller sends a message to the CPU, indicating what
device needs attention. The operating system has a table (called the interrupt vector) of
all the I/O devices connected to it. The CPU compares the received number with the
values within the interrupt vector so it knows which I/O device needs its services. The
table has the memory addresses of the different I/O devices. So when the CPU under-
stands that the hard drive needs attention, it looks in the table to find the correct mem-
ory address. This is the new program counter value, which is the initial address of where
the CPU should start reading from.
One of the main goals of the operating system software that controls I/O activity is
to be device independent. This means a developer can write an application to read
(open a file) or write (save a file) to any device (USB drive, hard drive, CD-ROM drive).
This level of abstraction frees application developers from having to write different
procedures to interact with the various I/O devices. If a developer had to write an indi-
vidual procedure of how to write to a CD-ROM drive, and how to write to a USB drive,
how to write to a hard disk, and so on, each time a new type of I/O device was devel-
oped, all of the applications would have to be patched or upgraded.
Operating systems can carry out software I/O procedures in various ways. We will
look at the following methods:
• Programmed I/O
• Interrupt-driven I/O
• I/O using DMA
• Premapped I/O
• Fully mapped I/O
Programmed I/O If an operating system is using programmed I/O, this means
the CPU sends data to an I/O device and polls the device to see if it is ready to accept
more data. If the device is not ready to accept more data, the CPU wastes time by waiting
for the device to become ready. For example, the CPU would send a byte of data (a char-
acter) to the printer and then ask the printer if it is ready for another byte. The CPU sends
the text to be printed one byte at a time. This is a very slow way of working and wastes
precious CPU time. So the smart people figured out a better way: interrupt-driven I/O.

Interrupt-Driven I/O If an operating system is using interrupt-driven I/O, this
means the CPU sends a character over to the printer and then goes and works on an-
other process’s request. When the printer is done printing the first character, it sends an
interrupt to the CPU. The CPU stops what it is doing, sends another character to the
printer, and moves to another job. This process (send character—go do something
else—interrupt—send another character) continues until the whole text is printed. Al-
though the CPU is not waiting for each byte to be printed, this method does waste a lot
of time dealing with all the interrupts. So we excused those smart people and brought
in some new smarter people, who came up with I/O using DMA.
I/O Using DMA Direct memory access (DMA) is a way of transferring data between
I/O devices and the system’s memory without using the CPU. This speeds up data trans-
fer rates significantly. When used in I/O activities, the DMA controller feeds the charac-
ters to the printer without bothering the CPU. This method is sometimes referred to as
unmapped I/O.
Premapped I/O Premapped I/O and fully mapped I/O (described next) do not
pertain to performance, as do the earlier methods, but provide two approaches that can
directly affect security. In a premapped I/O system, the CPU sends the physical memory
address of the requesting process to the I/O device, and the I/O device is trusted enough
to interact with the contents of memory directly, so the CPU does not control the inter-
actions between the I/O device and memory. The operating system trusts the device to
behave properly. Scary.
Fully Mapped I/O Under fully mapped I/O, the operating system does not trust
the I/O device. The physical address is not given to the I/O device. Instead, the device
works purely with logical addresses and works on behalf (under the security context) of
the requesting process, so the operating system does not trust the device to interact with
memory directly. The operating system does not trust the process or device and it acts
as the broker to control how they communicate with each other.
CPU Architecture
If I am corrupted, very bad things can happen.
Response: Then you need to go into ring 0.
An operating system and a CPU have to be compatible and share a similar architecture to work together. While an operating system is software and a CPU is hardware, they work so closely together when a computer is running that the delineation between them gets blurred. An operating system has to be able to "fit into" a CPU like a hand in a glove. Once a hand is inside of a glove, they both move together as a single entity.
An operating system and a CPU must be able to communicate through an instruction set. You may have heard of x86, which is a family of instruction sets. An instruction set is the language an operating system must speak to properly communicate with a CPU. As an analogy, if you want me to carry out some tasks for you, you will have to tell
me the instructions in a manner that I understand.

The microarchitecture contains the things that make up the physical CPU (registers,
logic gates, ALU, cache, etc.). The CPU knows mechanically how to use all of these
parts; it just needs to know what the operating system wants it to do. A chef knows how
to use all of his pots, pans, spices, and ingredients, but he needs an order from the
menu so he knows how to use all of these properly to achieve the requested outcome.
Similarly, the CPU has a “menu” of operations the operating system can “order” from,
which is the instruction set. The operating system puts in its order (render graphics on
screen, print to printer, encrypt data, etc.), and the CPU carries out the request and
provides the result.
NOTE The most common instruction set in use today (x86) can be used
within different microarchitectures (Intel, AMD, etc.) and with different
operating systems (Windows, Macintosh, Linux, etc.).
Along with sharing this same language (instruction set), the operating system and
CPU have to work within the same ring architecture. Let’s approach this from the top
and work our way down. If an operating system is going to be stable, it must be able to
protect itself from its users and their applications. This requires the capability to distin-
guish between operations performed on behalf of the operating system itself and op-
erations performed on behalf of the users or applications. This can be complex, because
the operating system software may be accessing memory segments, sending instruc-
tions to the CPU for processing, accessing secondary storage devices, communicating
with peripheral devices, dealing with networking requests, and more at the same time.
Each user application (e-mail client, antimalware program, web browser, word proces-
sor, personal firewall, and so on) may also be attempting the same types of activities at
the same time. The operating system must keep track of all of these events and ensure
none of them put the system at risk.
The operating system has several protection mechanisms to ensure processes do not
negatively affect each other or the critical components of the system itself. One has al-
ready been mentioned: memory protection. Another security mechanism the system
uses is a ring-based architecture.
The architecture of the CPU dictates how many rings are available for an operating
system to use. As shown in Figure 4-15, the rings act as containers and barriers. They are containers in that they provide an execution environment for processes to be able to carry
out their functions, and barriers in that the different processes are “walled off” from
each other based upon the trust the operating system has in them.
Let’s say that I build a facility based upon this type of ring structure. My crown jew-
els are stored in the center of the facility (ring 0), so I am not going to allow just anyone
in this section of my building. Only the people I really, really trust. I will allow the
people I kind of trust in the next level of my facility (ring 1). If I don’t trust you at all,
you are going into ring 3 so that you are as far from my crown jewels as possible. This
is how the ring structure of a CPU works. Ring 0 is for the most trusted components of
the operating system itself. This is because processes that are allowed to work in ring 0
can access very critical components in the system. Ring 0 is where the operating sys-
tem’s kernel (most trusted and powerful processes) works. Less trusted processes, as in
operating system utilities, can work in ring 1, and the least trusted processes (applica-
tions) work in the farthest ring, ring 3. This layered approach provides a self-protection
mechanism for the operating system.
Operating system components that operate in ring 0 have the most access to mem-
ory locations, peripheral devices, system drivers, and sensitive configuration parame-
ters. Because this ring provides much more dangerous access to critical resources, it is
the most protected. Applications usually operate in ring 3, which limits the type of
memory, peripheral device, and driver access activity and is controlled through the op-
Figure 4-15 More trusted processes operate within lower-numbered rings.

erating system services and system calls. The type of commands and instructions sent to
the CPU from applications in the outer rings are more restrictive in nature. If an appli-
cation tries to send instructions to the CPU that fall outside its permission level, the
CPU treats this violation as an exception and may show a general protection fault or
exception error and attempt to shut down the offending application.
These protection rings provide an intermediate layer between processes, and are
used for access control when one process tries to access another process or interact with
system resources. The ring number determines the access level a process has—the lower
the ring number, the greater the amount of privilege given to the process running with-
in that ring. A process in ring 3 cannot directly access a process in ring 1, but processes
in ring 1 can directly access processes in ring 3. Entities cannot directly communicate with objects in more privileged (lower-numbered) rings.
If we go back to our facility analogy, people in ring 0 can go and talk to any of the
other people in the different areas (rings) of the facility. I trust them and I will let them
do what they need to do. But if people in ring 3 of my facility want to talk to people in
ring 2, I cannot allow this to happen in an unprotected manner. I don’t trust these
people and do not know what they will do. Someone from ring 3 might try to punch
someone from ring 2 in the face and then everyone will be unhappy. So if someone in
ring 3 needs to communicate to someone in ring 2, he has to write down his message
on a piece of paper and give it to the guard. The guard will review it and hand it to the
person in ring 2 if it is safe and acceptable.
In an operating system, the less trusted processes that are working in ring 3 send
their communication requests to an API provided by the operating system specifically
for this purpose (guard). The communication request is passed to the more trusted
process in ring 2 in a controlled and safe manner.
Application Programming Interface (API)
An API is the doorway to a protocol, operating service, process, or DLL. When one
piece of software needs to send information to another piece of software, it must
format its communication request in a way that the receiving software under-
stands. An application may send a request to an operating system’s cryptographic
DLL, which will in turn carry out the requested cryptographic functionality for
the application.
We will cover APIs in more depth in Chapter 10, but for now understand that
it is a type of guard that provides access control between the trusted and non-
trusted processes within an operating system. If an application (nontrusted) pro-
cess needs to send a message to the operating system’s network protocol stack, it
will send the information to the operating system’s networking service. The ap-
plication sends the request in a format that will be accepted by the service’s API.
APIs must be properly written by the operating system developers to ensure dan-
gerous data cannot pass through this communication channel. If suspicious data
gets past an API, the service could be compromised and execute code in a privi-
leged context.
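The guard role of an API can be sketched as follows (all names here are invented for illustration; this is not any real operating system's API):

```python
class GuardError(Exception):
    """Raised when a request fails validation at the API boundary."""

def _privileged_send(packet):
    # Stand-in for a trusted networking service running in a lower ring.
    return len(packet)

def send_packet_api(payload):
    """The 'guard': validate the untrusted request before it crosses
    the trust boundary into the privileged service."""
    if not isinstance(payload, bytes):
        raise GuardError("payload must be raw bytes")
    if len(payload) > 1500:
        raise GuardError("payload too large")
    return _privileged_send(payload)
```

A well-formed request such as `send_packet_api(b"hello")` is passed through, while a malformed one raises `GuardError` before it can ever touch the privileged routine—which is exactly the filtering a properly written API is supposed to perform.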

CISSP All-in-One Exam Guide
346
CPU Operation Modes
As stated earlier, the CPU provides the ring structure architecture and the operating
system assigns its processes to the different rings. When a process is placed in ring 0, its
activities are carried out in kernel mode, which means it can access the most critical
resources in a nonrestrictive manner. The process is assigned a status level by the oper-
ating system (stored as PSW) and when it needs to interact with the CPU, the CPU
checks its status to know what it can and cannot allow the process to do. If the process
has the status of user mode, the CPU will limit the process’s access to system resources
and restrict the functions it can carry out on these resources.
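A simplified model of the CPU consulting the process status word before acting looks like this (the opcode names are illustrative only, not a real instruction set):

```python
KERNEL_MODE, USER_MODE = 0, 1

class GeneralProtectionFault(Exception):
    pass

# Illustrative privileged opcodes; chosen for the sketch, not taken
# from any particular CPU's documentation.
PRIVILEGED = {"HLT", "LGDT", "OUT"}

def execute(instruction, psw_mode):
    """The CPU checks the process's status (PSW) before executing."""
    if instruction in PRIVILEGED and psw_mode != KERNEL_MODE:
        # Violation: raise an exception; the OS may kill the process.
        raise GeneralProtectionFault(instruction + " attempted in user mode")
    return "executed " + instruction

assert execute("ADD", USER_MODE) == "executed ADD"    # allowed anywhere
assert execute("HLT", KERNEL_MODE) == "executed HLT"  # allowed in ring 0
```

Calling `execute("HLT", USER_MODE)` raises the fault, which is the toy-model equivalent of the general protection fault described above.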
Attackers have found many ways around this protection scheme and have tricked
operating systems into loading their malicious code into ring 0, which is very danger-
ous. Attackers have fooled operating systems by creating their malicious code to mimic
system-based DLLs, loadable kernel modules, or other critical files. The operating sys-
tem then loads the malicious code into ring 0 and it runs in kernel mode. At this point
the code could carry out almost any activity within the operating system in an unpro-
tected manner. The malicious code can install key loggers, sniffers, code injection tools,
and Trojaned files. The code could delete files on the hard drive, install backdoors, or
send sensitive data to the attacker’s computer using the compromised system’s network
protocol stack.
NOTE The actual ring numbers available in a CPU architecture are dictated
by the CPU itself. Some processors provide four rings and some provide eight
or more. The operating systems do not have to use each available ring in the
architecture; for example, Windows commonly uses only rings 0 and 3 and
does not use ring 1 or 2. The vendor of the CPU determines the number of
available rings, and the vendor of the operating system determines how it will
use these rings.
Process Domain
The term domain just means a collection of resources. A process has a collection
of resources assigned to it when it is loaded into memory (run time), as in mem-
ory addresses, files it can interact with, system services available to it, peripheral
devices, etc. The higher the ring level that the process executes within, the larger
the domain of resources that is available to it.
It is the responsibility of the operating system to provide a safe execution
domain for the different processes it serves. This means that when a process is
carrying out its activities, the operating system provides a safe, predictable, and
stable environment. The execution domain is a combination of where the pro-
cess can carry out its functions (memory segment), the tools available to it, and
the boundaries involved to keep it in a safe and confined area.

Chapter 4: Security Architecture and Design
347
Operating System Architectures
We started this chapter by looking at system architecture approaches. Remember that a
system is made up of all the necessary pieces for computation: hardware, firmware, and
software components. The chapter moved into the architecture of a CPU, which just
looked at the processor. Now we will look at operating system architectures, which deal
specifically with the software components of a system.
Operating system architectures have gone through quite an evolutionary process
based upon industry functionality and security needs. The architecture is the framework
that dictates how the pieces and parts of the operating system interact with each other
and provide the functionality that the applications and users require of it. This section
looks at the monolithic, layered, microkernel, and hybrid microkernel architectures.
While operating systems are very complex, some main differences in the architec-
tural approaches have come down to what is running in kernel mode and what is not.
In a monolithic architecture, all of the operating system processes work in kernel mode,
as illustrated in Figure 4-16. The services provided by the operating system (memory
management, I/O management, process scheduling, file management, etc.) are avail-
able to applications through system calls.
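That system-call doorway can be sketched as a dispatch table (a simplified Python model; real kernels use trap instructions and numbered syscall tables):

```python
# Toy kernel-mode services. In a monolithic design, all of these
# would run in ring 0.
def sys_read(fd):
    return "read from fd " + str(fd)

def sys_write(fd, data):
    return "wrote " + str(len(data)) + " bytes to fd " + str(fd)

# The syscall table maps call numbers to kernel routines.
SYSCALL_TABLE = {0: sys_read, 1: sys_write}

def syscall(number, *args):
    """The single controlled doorway from applications into the kernel."""
    handler = SYSCALL_TABLE.get(number)
    if handler is None:
        raise OSError("invalid system call number")
    return handler(*args)
```

An application never calls `sys_write` directly; it asks for service by number through `syscall`, and an unknown number is rejected at the boundary.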
Earlier operating systems, such as MS-DOS, were based upon a monolithic design.
The whole operating system acted as one software layer between the user applications
and the hardware level. There are several problems with this approach: complexity,
portability, extensibility, and security. Since the functionality of the code is spread
throughout the system, it is hard to test and debug. If there is a flaw in a software com-
ponent it is difficult to localize and easily fix. Many pieces of this spaghetti bowl of
code had to be modified just to address one issue.
This type of operating system is also hard to port from one hardware platform to another because the hardware interfaces are implemented throughout the software. If the operating system has to work on new hardware, extensive rewriting of the code is required. Too many components interact directly with the hardware, which increases complexity.
Figure 4-16 Monolithic operating system architecture

Since the monolithic system is not modular in nature, it is difficult to add and sub-
tract functionality. As we will see in this section, later operating systems became more
modular in nature to allow for functionality to be added as needed. And since all the
code ran in a privileged state (kernel mode), user mistakes could cause drastic effects
and malicious activities could take place more easily. Too much code was running in
kernel mode, and it was too disorganized.
NOTE MS-DOS was a very basic and simplistic operating system that used
a rudimentary user interface, which was just a simple command interpreter
(shell). Early versions of Windows (3.x) just added a user-friendly graphical
user interface (GUI) on top of MS-DOS. Windows NT and 2000 broke
away from the MS-DOS model and moved toward a kernel-based
architecture. A kernel is made up of all the critical processes within an
operating system. A kernel approach centralized and modularized critical
operating system functionality.
In the next generation of operating system architecture, system architects added
more organization to the system. The layered operating system architecture separates
system functionality into hierarchical layers. For example, a system that followed a layered architecture was, strangely enough, called THE (Technische Hogeschool Eindhoven) multiprogramming system. THE had five layers of functionality. Layer 0
controlled access to the processor and provided multiprogramming functionality; layer
1 carried out memory management; layer 2 provided interprocess communication;
layer 3 dealt with I/O devices; and layer 4 was where the applications resided. The pro-
cesses at the different layers each had interfaces to be used by processes in layers below
and above them.
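The discipline of THE, in which each layer is built only on the layer directly beneath it, can be sketched as a chain of calls (a toy Python model; the layer responsibilities follow the description above):

```python
# Layer 0: processor allocation and scheduling (most privileged)
def layer0_schedule(task):
    return "scheduled:" + task

# Layer 1: memory management, implemented on top of layer 0
def layer1_memory(task):
    return layer0_schedule(task)

# Layer 2: interprocess communication, built on layer 1
def layer2_ipc(task):
    return layer1_memory(task)

# Layer 3: I/O device handling, built on layer 2
def layer3_io(task):
    return layer2_ipc(task)

# Layer 4: applications; they never skip down to layer 0 directly
def layer4_app(task):
    return layer3_io(task)
```

A request from `layer4_app("write")` passes through every intermediate layer before it reaches the scheduler; no layer is skipped.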
This layered approach, illustrated in Figure 4-17, had the full operating system still
working in kernel mode (ring 0). The main difference between the monolithic approach
and this layered approach is that the functionality within the operating system was laid
out in distinctive layers that called upon each other.
Figure 4-17 Layered operating system architecture

In the monolithic architecture, software modules communicate with each other in an ad hoc manner. In the layered architecture, module communication takes place in an organized, hierarchical structure: routines in one layer provide services only to the layer directly above them and call only upon the layer directly below, so no layer is skipped.
Layered operating systems provide data hiding, which means that instructions and
data (packaged up as procedures) at the various layers do not have direct access to the
instructions and data at any other layers. Each procedure at each layer has access only
to its own data and a set of functions that it requires to carry out its own tasks. If a pro-
cedure can access more procedures than it really needs, this opens the door for more
successful compromises. For example, if an attacker is able to compromise and gain
control of one procedure and this procedure has direct access to all other procedures,
the attacker could compromise a more privileged procedure and carry out more devas-
tating activities.
A monolithic operating system provides only one layer of security. In a layered sys-
tem, each layer should provide its own security and access control. If one layer contains
the necessary security mechanisms to make security decisions for all the other layers,
then that one layer knows too much about (and has access to) too many objects at the
different layers. This directly violates the data-hiding concept. Modularizing software
and its code increases the assurance level of the system, because if one module is com-
promised, it does not mean all other modules are now vulnerable.
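Data hiding is the same idea enforced in everyday code: a module exposes a narrow interface and keeps its internal data and procedures out of reach. A minimal sketch (keeping in mind that Python enforces hiding only by the underscore naming convention, not by hardware):

```python
class MemoryLayer:
    """Exposes a narrow interface; its internal data stays hidden."""

    def __init__(self):
        self._pages = {}          # internal data, not part of the interface

    def store(self, addr, value):
        """One of the few sanctioned entry points."""
        self._pages[addr] = value

    def load(self, addr):
        """The other sanctioned entry point."""
        return self._pages.get(addr, 0)

    def _evict_all(self):
        # Internal procedure; other modules are not meant to call it.
        self._pages.clear()
```

Other layers interact only through `store` and `load`; compromising a caller of this class does not automatically expose `_pages` or `_evict_all`, which is the assurance benefit modularization is meant to provide.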
Since this layered approach provides more modularity, it allows for functionality to be added to and subtracted from the operating system more easily. (You experience this type of modularity when you load new kernel modules into Linux-based systems or DLLs in Windows.) The layered approach also introduced the idea of adding an abstraction level to the lower portion of the operating system. This abstraction level allows the operating system to be more portable from one hardware platform to the next. (In Windows environments you know this invention as the Hardware Abstraction Layer, or HAL.) Examples of layered operating systems are THE, VAX/VMS, Multics, and Unix (although THE and Multics are no longer in use).
The downfalls of this layered approach are performance, complexity, and security. If several layers of execution have to take place for even simple operating system activities, there can be a performance hit. The security issues stemmed mainly from so much code still running in kernel mode. The more processes there are running in a privileged state, the higher the likelihood of high-impact compromises. The attack surface of the operating system overall needed to be reduced.
As the evolution of operating system development marched forward, the system
architects reduced the number of required processes that made up the kernel (critical
operating system components) and some operating system types moved from a mono-
lithic model to a microkernel model. The microkernel is a smaller subset of critical
kernel processes, which focus mainly on memory management and interprocess communication, as shown in Figure 4-18. Other operating system components, such as protocols, device drivers, and file systems, are not included in the microkernel and work in user mode. The goal was to limit the processes that run in kernel mode so
that the overall system is more secure, complexity is reduced, and portability of the
operating system is increased.

Operating system vendors found that having just a stripped-down microkernel
working in kernel mode had a lot of performance issues because processing required so
many mode transitions. A mode transition takes place every time a CPU has to move
between executing instructions for processes that work in kernel mode versus user
mode. As an analogy, let’s say that I have to set up a different office environment for my
two employees when they come to the office to work. There is only one office with one
desk, one computer, and one file cabinet (just like the computer only has one CPU).
Before Sam gets to the office I have to put out the papers for his accounts, fill the file
cabinet with files relating to his tasks, configure the workstation with his user profile,
and make sure his coffee cup is available. When Sam leaves and before Vicky gets to the
office, I have to change out all the papers, files, user profile, and coffee cup. My respon-
sibility is to provide the different employees with the right environment so that they
can get right down to work when they arrive at the office, but constantly changing out
all the items is time consuming. In essence, this is what a CPU has to do when an inter-
rupt takes place and a process from a different mode (kernel or user) needs its instruc-
tions executed. The current process information has to be stored and saved so the CPU
can come back and complete the original process’s requests. The new process informa-
tion (memory addresses, program counter value, PSW, etc.) has to be moved into the
CPU registers. Once this is completed, then the CPU can start executing the process’s
instruction set. This back and forth has to happen because it is a multitasking system
that is sharing one resource—the CPU.
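What changes hands during such a transition can be sketched as saving one process's state and loading another's (a toy Python model of a context switch; the field names mirror the items listed above):

```python
def context_switch(cpu, saved_contexts, next_pid):
    """Save the running process's state, then load the next process's state."""
    saved_contexts[cpu["pid"]] = {
        "pc": cpu["pc"], "psw": cpu["psw"], "regs": cpu["regs"],
    }
    ctx = saved_contexts.pop(next_pid)
    return {"pid": next_pid, "pc": ctx["pc"],
            "psw": ctx["psw"], "regs": ctx["regs"]}

# Process 7 (kernel mode) is running; process 9's context was saved earlier.
saved = {9: {"pc": 0x2000, "psw": "user", "regs": [0, 0]}}
cpu = {"pid": 7, "pc": 0x1000, "psw": "kernel", "regs": [1, 2]}
cpu = context_switch(cpu, saved, 9)   # the CPU now runs process 9
```

Every one of these save-and-restore cycles costs time, which is why so many mode transitions in a pure microkernel design caused the performance problems described above.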
So the industry went from a bloated kernel (whole operating system) to a small
kernel (microkernel), but the performance hit was too great. There had to be a compro-
mise between the two, which is referred to as the hybrid microkernel architecture.
Figure 4-18 Microkernel architecture

In a hybrid microkernel architecture, the microkernel still exists and carries out mainly interprocess communication and memory management responsibilities. All of the other operating system services work in a client/server model. The operating system services are the servers, and the application processes are the clients. When a user's application needs the operating system to carry out some type of functionality for it (file system interaction, peripheral device communication, network access, etc.), it makes a request to the specific API of the system's server service. This operating system service carries out the activity for the application and returns the result when finished. The
separation of a microkernel and the other operating system services within a hybrid
microkernel architecture is illustrated in Figure 4-19, which is the basic structure of a
Windows environment. The services that run outside the microkernel are collectively
referred to as the executive services.
Figure 4-19 Windows hybrid microkernel architecture

The basic core definitions of the different architecture types are as follows:
• Monolithic All operating system processes run in kernel mode.
• Layered All operating system processes run in a hierarchical model in kernel mode.
• Microkernel Core operating system processes run in kernel mode and the remaining ones run in user mode.
• Hybrid microkernel All operating system processes run in kernel mode. Core processes run within a microkernel and others run in a client/server model.
The main architectures that are used in systems today are illustrated in Figure 4-20.
Cause for Confusion
If you continue your studies in operating system architecture, you will undoubt-
edly run into some of the confusion and controversy surrounding these families
of architectures. The intricacies and complexities of these arguments are out of
scope for the CISSP exam, but a little insight is worth noting.
Today, the terms monolithic operating system and monolithic kernel are used
interchangeably, which invites confusion. The industry started with monolithic
operating systems, as in MS-DOS, which did not clearly separate out kernel and
nonkernel processes. As operating systems advanced, the kernel components be-
came more organized, isolated, protected, and focused. The hardware-facing
code became more virtualized to allow for portability, and the code became more
modular so functionality components (loadable kernel modules, DLLs) could be
loaded and unloaded as needed. So while a Unix system today follows a mono-
lithic kernel model, it does not mean that it is as rudimentary as MS-DOS, which
was a monolithic operating system. The core definition of monolithic stayed the
same, which just means the whole operating system runs in kernel mode, but
operating systems that fall under this umbrella term advanced over time.
Different operating systems (BSD, Solaris, Linux, Windows, Macintosh, etc.) have different flavors and versions, and while some cleanly fit into the classic architecture buckets, some do not because the vendors have had to develop their systems to meet their specific customer demands. Some operating systems became leaner and were stripped of functionality so that they could work in embedded systems, real-time systems, or dedicated devices (firewalls, VPN concentrators), and some became more bloated to provide extensive functionality (Windows, Linux).
Operating systems moved from cooperative multitasking to preemptive multitasking; memory management improved; some changed file system types (FAT to NTFS); I/O management matured; networking components were added; and allowances were made for distributed computing and multiprocessing. So in reality, we cannot think that architectural advancement had only to do with what code ran in kernel mode and what did not; rather, these design families are ways for us to segment operating system advancements at a macro level.
You do not need to know which architecture type each specific operating system follows for the CISSP exam, just the architecture types themselves. Remember that the CISSP exam is a vendor-neutral and high-level exam.

Figure 4-20 Major operating system kernel architectures

Operating system architecture is critical when it comes to the security of a system
overall. Systems can be patched, but this is only a Band-Aid approach. Security should
be baked in from the beginning and then thought through in every step of the develop-
ment life cycle.
Key Terms
• Interrupt Software or hardware signal that indicates that system resources (i.e., CPU) are needed for instruction processing.
• Instruction set Set of operations and commands that can be implemented by a particular processor (CPU).
• Microarchitecture Specific design of a microprocessor, which includes physical components (registers, logic gates, ALU, cache, etc.) that support a specific instruction set.
• Application programming interface Software interface that enables process-to-process interaction. Common way to provide access to standard routines to a set of software programs.
• Monolithic operating system architecture All of the code of the operating system working in kernel mode in an ad hoc and nonmodularized manner.
• Layered operating system architecture Architecture that separates system functionality into hierarchical layers.
• Data hiding Use of segregation in design decisions to protect software components from negatively interacting with each other. Commonly enforced through strict interfaces.
• Microkernel architecture Reduced amount of code running in kernel mode carrying out critical operating system functionality. Only the absolutely necessary code runs in kernel mode, and the remaining operating system code runs in user mode.
• Hybrid microkernel architecture Combination of monolithic and microkernel architectures. The microkernel carries out critical operating system functionality, and the remaining functionality is carried out in a client/server model within kernel mode.
• Mode transition When the CPU has to change from processing code in user mode to kernel mode. This is a protection measure, but it causes a performance hit.

Virtual Machines
I would like my own simulated environment so I can have my own world.
If you have been into computers for a while, you might remember computer games
that did not have the complex, lifelike graphics of today’s games. Pong and Asteroids
were what we had to play with when we were younger. In those simpler times, the
games were 16-bit and were written to work in a 16-bit MS-DOS environment. When
our Windows operating systems moved from 16-bit to 32-bit, the 32-bit operating sys-
tems were written to be backward compatible, so someone could still load and play a
16-bit game in an environment that the game did not understand. The continuation of
this little life pleasure was available to users because the operating systems created vir-
tual environments for the games to run in.
When a 16-bit application needs to interact with the operating system, it has been
developed to make system calls and interact with the computer’s memory in a way that
would only work within a 16-bit operating system—not a 32-bit system. So, the virtual
environment simulates a 16-bit operating system, and when the application makes a
request, the operating system converts the 16-bit request into a 32-bit request (this is
called thunking) and reacts to the request appropriately. When the system sends a reply
to this request, it changes the 32-bit reply into a 16-bit reply so the application under-
stands it.
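At its core, thunking is width conversion at the boundary. Using Python's struct module, widening a 16-bit value to the 32-bit form a newer interface expects, and narrowing the reply back, looks roughly like this (a sketch of the idea, not Windows' actual thunking layer):

```python
import struct

def widen_16_to_32(raw16):
    """Convert a 16-bit little-endian request into its 32-bit form."""
    (value,) = struct.unpack("<H", raw16)   # 16-bit request from the app
    return struct.pack("<I", value)         # 32-bit form for the OS

def narrow_32_to_16(raw32):
    """Convert a 32-bit little-endian reply back to 16 bits."""
    (value,) = struct.unpack("<I", raw32)   # 32-bit reply from the OS
    return struct.pack("<H", value & 0xFFFF)  # truncate back to 16 bits
```

The application keeps speaking its native 16-bit dialect on one side of the boundary, while the operating system sees only well-formed 32-bit requests on the other.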
Today, virtual environments are much more advanced. Basic virtualization enables a single piece of hardware to run multiple operating system environments simultaneously, greatly enhancing processing power utilization, among other benefits. Creating virtual instances of operating systems, applications, and storage devices is known as virtualization.
In today’s jargon, a virtual instance of an operating system is known as a virtual
machine. A virtual machine is commonly referred to as a guest that is executed in the
host environment. Virtualization allows a single host environment to execute multiple
guests at once, with multiple virtual machines dynamically pooling resources from a
common physical system. Computer resources such as RAM, processors, and storage
are emulated through the host environment. The virtual machines do not directly ac-
cess these resources; instead, they communicate with a hypervisor within the host envi-
ronment, which is responsible for managing system resources. The hypervisor is the
central program that controls the execution of the various guest operating systems and
provides the abstraction level between the guest and host environments, as shown in
Figure 4-21.
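The hypervisor's mediating role can be sketched as follows (a toy Python model; real hypervisors manage CPU scheduling, device emulation, and much more than a single RAM pool):

```python
class Hypervisor:
    """Guests request resources from the hypervisor; they never
    touch the physical hardware directly."""

    def __init__(self, total_ram_mb):
        self.free_ram = total_ram_mb   # the shared physical pool
        self.guests = {}

    def create_vm(self, name, ram_mb):
        """Grant a guest RAM from the pool, if the host can satisfy it."""
        if ram_mb > self.free_ram:
            return False               # request exceeds remaining capacity
        self.free_ram -= ram_mb
        self.guests[name] = ram_mb
        return True
```

For example, a host with 4096MB can grant one guest 2048MB, after which a second request for 4096MB is refused; every grant goes through the hypervisor's bookkeeping rather than the guests carving up memory themselves.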
What this means is that you can have one computer running several different operating systems at one time. For example, you can run a system with Windows 2000, Linux, Unix, and Windows 2008 on one computer. Think of a house that has different rooms. Each operating system gets its own room, but each shares the same resources that

the house provides—a foundation, electricity, water, roof, and so on. An operating sys-
tem that is “living” in a specific room does not need to know about or interact with
another operating system in another room to take advantage of the resources provided
by the house. The same concept happens in a computer: Each operating system shares
the resources provided by the physical system (as in memory, processor, buses, and so
on). They “live” and work in their own “rooms,” which are the guest virtual machines.
The physical computer itself is the host.
Why do this? One reason is that it is cheaper than having a full physical system for
each and every operating system. If they can all live on one system and share the same
physical resources, your costs are reduced immensely. This is the same reason people
get roommates. The rent can be split among different people, and all can share the
same house and resources. Another reason to use virtualization is security. Providing
software their own “clean” environments to work within reduces the possibility of
them negatively interacting with each other.
The following useful list, taken from www.kernelthread.com/publications/virtualization, pertains to the different reasons for using virtualization in various environments. It was written years ago, but is still very applicable to today's needs and the CISSP exam.
• Virtual machines can be used to consolidate the workloads of several under-utilized servers to fewer machines, perhaps a single machine (server consolidation). Related benefits are savings on hardware, environmental costs, management, and administration of the server infrastructure.
• The need to run legacy applications is served well by virtual machines. A legacy application might simply not run on newer hardware and/or operating systems. Even if it does, it may under-utilize the server, so it makes sense to consolidate several applications. This may be difficult without virtualization because such applications are usually not written to coexist within a single execution environment.
• Virtual machines can be used to provide secure, isolated sandboxes for running
untrusted applications. You could even create such an execution environment
Figure 4-21 The hypervisor controls virtual machine instances.

dynamically—on the fly—as you download something from the Internet and run
it. Virtualization is an important concept in building secure computing platforms.
• Virtual machines can be used to create operating systems, or execution environments
with resource limits, and given the right schedulers, resource guarantees. Partitioning
usually goes hand-in-hand with quality of service in the creation of QoS-enabled
operating systems.
• Virtual machines can provide the illusion of hardware, or hardware configuration
that you do not have (such as SCSI devices or multiple processors). Virtualization
can also be used to simulate networks of independent computers.
• Virtual machines can be used to run multiple operating systems simultaneously:
different versions, or even entirely different systems, which can be on hot standby.
Some such systems may be hard or impossible to run on newer real hardware.
• Virtual machines allow for powerful debugging and performance monitoring. You can put such tools in the virtual machine monitor, for example. Operating systems can be debugged without losing productivity, or setting up more complicated debugging scenarios.
• Virtual machines can isolate what they run, so they provide fault and error containment. You can inject faults proactively into software to study its subsequent behavior.
• Virtual machines are great tools for research and academic experiments. Since they provide isolation, they are safer to work with. They encapsulate the entire state of a running system: you can save the state, examine it, modify it, reload it, and so on.
• Virtualization can make tasks such as system migration, backup, and recovery easier and more manageable.
• Virtualization on commodity hardware has been popular in co-located hosting. Many of the above benefits make such hosting secure, cost-effective, and appealing in general.
System Security Architecture
Up to this point we have looked at system architectures, CPU architectures, and operat-
ing system architectures. Remember that a system architecture has several views to it,
depending upon the stakeholder’s individual concerns. Since our main concern is secu-
rity, we are going to approach system architecture from a security point of view and drill
down into the core components that are part of most computing systems today. But
first we need to understand how the goals for the individual system security architec-
tures are defined.
Security Policy
In life we set goals for ourselves, our teams, companies, and families to meet. Setting a
goal defines the desired end state. We might define a goal for our company to make $2
million by the end of the year. We might define a goal of obtaining three government
contracts for our company within the next six months. A goal could be that we lose 30

pounds in 12 months or save enough money for our child to be able to go off to college
when she turns 18 years old. The point is that we have to define a desired end state and
from there we can lay out a structured plan on how to accomplish those goals, punctu-
ated with specific action items and a defined timeline.
It is not usually helpful to have vague goal statements, as in “save money” or “lose
weight” or “become successful.” Our goals need to be specific, or how do we know
when we accomplish them? This is also true in computer security. If your boss gave you
a piece of paper that had a simple goal written on it, “Build a secure system,” what does
this mean? What is the definition of a “system”? What is the definition of “secure”? Just
draw a picture of a pony on this piece of paper and give it back to him and go to lunch.
You don’t have time for such silliness.
When you get back from lunch, your boss hands you the same paper with the
following:
• Discretionary access control–based operating system
• Provides role-based access control functionality
• Capability of protecting data classified at “public” and “confidential” levels
• Does not allow unauthorized access to sensitive data or critical system
functions
• Enforces least privilege and separation of duties
• Provides auditing capabilities
• Implements trusted paths and trusted shells for sensitive processing activities
• Enforces identification, authentication, and authorization of trusted subjects
• Implements a capability-based authentication methodology
• Does not contain covert channels
• Enforces integrity rules on critical files
Now you have more direction on what it is that he wants you to accomplish and
you can work with him to form the overall security goals for the system you will be
designing and developing. All of these goals need to be captured and outlined in a se-
curity policy.
Security starts at the policy level. Security policies are high-level directives that provide the foundational goals for a system overall, and for the components that make it up, from a security perspective. A security policy is a strategic tool that dictates how sensitive information and resources are to be managed and protected. A security policy expresses exactly what
and resources are to be managed and protected. A security policy expresses exactly what
the security level should be by setting the goals of what the security mechanisms are
supposed to accomplish. This is an important element that has a major role in defining
the architecture and design of the system. The security policy is a foundation for the
specifications of a system and provides the baseline for evaluating a system after it is
built. The evaluation is carried out to make sure that the goals that were laid out in the
security policy were accomplished.

NOTE Chapter 2 examined security policies in-depth, but those policies
were directed toward organizations, not technical systems. The security
policies being addressed here are for operating systems, devices, and
applications. The different policy types are similar in nature but have different
targets: an organization as opposed to an individual computer system.
In Chapter 3 we went through discretionary access control (DAC) and mandatory
access control (MAC) models. A DAC-based operating system will have a much less strict security policy than a MAC-based system. The security mechanisms within these very different types of systems will vary, but the systems
commonly overlap in the main structures of their security architecture. We will explore
these structures in the following sections.
Security Architecture Requirements
In the 1970s computer systems were moving from single user, stand-alone, centralized
and closed systems to multiuser systems that had multiprogramming functionality and
networking capabilities. The U.S. government needed to ensure that all of the systems it was purchasing and implementing properly protected its classified information. The government had data at various classification levels (secret, top secret) and users with different clearance levels (secret, top secret). It needed a way to instruct vendors on how to build computer systems that met its security needs, and in turn a way to test the products those vendors developed against those same security needs.
In 1972, the U.S. government released a report (Computer Security Technology
Planning Study) that outlined basic and foundational security requirements of com-
puter systems that it would deem acceptable for purchase and deployment. These re-
quirements were further defined and built upon, which resulted in the Trusted
Computer System Evaluation Criteria, which we will cover in more detail at the end of
this chapter. These requirements shaped the security architecture of almost all of the
systems in use today. Some of the core tenets of these requirements were trusted com-
puting base, security perimeter, reference monitor, and the security kernel.
Trusted Computing Base
The trusted computing base (TCB) is a collection of all the hardware, software, and firm-
ware components within a system that provide some type of security and enforce the
system’s security policy. The TCB does not address only operating system components,
because a computer system is not made up of only an operating system. Hardware, soft-
ware components, and firmware components can affect the system in a negative or
positive manner, and each has a responsibility to support and enforce the security pol-
icy of that particular system. Some components and mechanisms have direct responsi-
bilities in supporting the security policy, such as firmware that will not let a user boot a
computer from a USB drive, or the memory manager that will not let processes over-
write other processes’ data. Then there are components that do not enforce the security
policy but must behave properly and not violate the trust of a system. Examples of the
ways in which a component could violate the system’s security policy include an ap-
plication that is allowed to make a direct call to a piece of hardware instead of using the
proper system calls through the operating system, a process that is allowed to read data
outside of its approved memory space, or a piece of software that does not properly
release resources after use.
The operating system’s kernel is made up of hardware, software, and firmware, so in
a sense the kernel is the TCB. But the TCB can include other components, such as
trusted commands, programs, and configuration files that can directly interact with the
kernel. For example, when installing a Unix system, the administrator can choose to
install the TCB configuration during the setup procedure. If the TCB is enabled, then
the system has a trusted path, a trusted shell, and system integrity–checking capabili-
ties. A trusted path is a communication channel between the user, or program, and the
TCB. The TCB provides protection resources to ensure this channel cannot be compro-
mised in any way. A trusted shell means that someone who is working in that shell
(command interpreter) cannot “bust out of it” and other processes cannot “bust into it.”
Every operating system has specific components that would cause the system grave
danger if they were compromised. The components that make up the TCB provide extra
layers of protection around these mechanisms to help ensure they are not compro-
mised, so the system will always run in a safe and predictable manner. While the TCB
components can provide extra layers of protection for sensitive processes, they them-
selves have to be developed securely. The BIOS function should have a password protec-
tion capability and be tamperproof. The subsystem within a Windows operating system
that generates access tokens should not be able to be hijacked and be used to produce
fake tokens for malicious processes. Before a process can interact with a system configu-
ration file, it must be authenticated by the security kernel. Device drivers should not be
able to be modified in an unauthorized manner. Basically, any piece of a system that
could be used to compromise the system or put it into an unstable condition is consid-
ered to be part of the TCB and it must be developed and controlled very securely.
You can think of a TCB as a building. You want the building to be strong and safe,
so there are certain components that absolutely have to be built and installed properly.
The right types of construction nails need to be used, not the flimsy ones we use at
home to hold up pictures of our grandparents. The beams in the walls need to be made
out of steel and properly placed. The concrete in the foundation needs to be made of
the right concentration of gravel and water. The windows need to be shatterproof. The
electrical wiring needs to be of proper grade and grounded.
An operating system also has critical pieces that absolutely have to be built and
installed properly. The memory manager has to be tamperproof and properly protect
shared memory spaces. When working in kernel mode, the CPU must have all logic
gates in the proper place. Operating system APIs must only accept secure service re-
quests. Access control lists on objects cannot be modified in an unauthorized manner.
Auditing must take place and the audit trails cannot be modified in an unauthor-
ized manner. Interprocess communication must take place in an approved and con-
trolled manner.
The processes within the TCB are the components that protect the system overall. So
the developers of the operating system must make sure these processes have their own
execution domain. This means they reside in ring 0, their instructions are executed in
privileged state, and no less trusted processes can directly interact with them. The devel-
opers need to ensure the operating system maintains an isolated execution domain, so
their processes cannot be compromised or tampered with. The resources that the TCB
processes use must also be isolated, so tight access control can be provided and all ac-
cess requests and operations can be properly audited. So basically, the operating system
ensures that all the non-TCB processes and TCB processes interact in a secure manner.
When a system goes through an evaluation process, part of the process is to identify
the architecture, security services, and assurance mechanisms that make up the TCB. Dur-
ing the evaluation process, the tests must show how the TCB is protected from accidental
or intentional tampering and compromising activity. For systems to achieve a higher trust
level rating, they must meet well-defined TCB requirements, and the details of their op-
erational states, developing stages, testing procedures, and documentation will be re-
viewed with more granularity than systems attempting to achieve a lower trust rating.
By using specific security criteria, trust can be built into a system, evaluated, and
certified. This approach can provide a measurement system for customers to use when
comparing one product to another. It also gives vendors guidelines on what expectations
are put upon their systems and provides a common assurance rating metric so when one
group talks about a C2 rating, everyone else understands what that term means.
The Orange Book is one of these evaluation criteria. It defines a trusted system as
hardware and software that utilize measures to protect classified data for a range of us-
ers without violating access rights and the security policy. It looks at all protection
mechanisms within a system that enforce the security policy and provide an environ-
ment that will behave in a manner expected of it. This means each layer of the system
must trust the underlying layer to perform the expected functions, provide the expected
level of protection, and operate in an expected manner under many different situations.
When the operating system makes calls to hardware, it anticipates that data will be re-
turned in a specific data format and behave in a consistent and predictable manner.
Applications that run on top of the operating system expect to be able to make certain
system calls, receive the required data in return, and operate in a reliable and depend-
able environment. Users expect the hardware, operating system, and applications to
perform in particular fashions and provide a certain level of functionality. For all of
these actions to behave in such predictable manners, the requirements of a system must
be addressed in the planning stages of development, not afterward.
Security Perimeter
Now, whom do we trust?
Response: Anyone inside the security perimeter.
As stated previously, not every process and resource falls within the TCB, so some
of these components fall outside of an imaginary boundary referred to as the security
perimeter. A security perimeter is a boundary that divides the trusted from the untrust-
ed. For the system to stay in a secure and trusted state, precise communication standards
must be developed to ensure that when a component within the TCB needs to com-
municate with a component outside the TCB, the communication cannot expose the
system to unexpected security compromises. This type of communication is handled
and controlled through interfaces.
For example, a resource that is within the boundary of the security perimeter is
considered to be a part of the TCB and must not allow less trusted components access
to critical system resources in an insecure manner. The processes within the TCB must
also be careful about the commands and information they accept from less trusted re-
sources. These limitations and restrictions are built into the interfaces that permit this
type of communication to take place and are the mechanisms that enforce the security
perimeter. Communication between trusted components and untrusted components
needs to be controlled to ensure that the system stays stable and safe.
Remember when we covered CPU architectures, we went through the various rings a
CPU provides. The operating system places its software components within those rings.
The most trusted components go inside ring 0, and the less trusted components go into the other rings. Strict and controlled communication has to be put into place
to make sure a less trusted component does not compromise a more trusted compo-
nent. This control happens through APIs. The APIs are like bouncers at bars. The bounc-
ers only allow individuals who are safe into the bar environment and keep the others
out. This is the same idea of a security perimeter. Strict interfaces need to be put into
place to control the communication between the items within and outside the TCB.
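The bouncer idea can be sketched in a few lines of code. The following is a hypothetical illustration, not any real operating system's API; the function names and the set of allowed operations are invented for this example. The point is only that a TCB-side interface admits the requests it was designed to accept and turns everything else away at the boundary:

```python
# Hypothetical sketch of an interface enforcing the security perimeter:
# less trusted callers may only reach the trusted component through this
# validate-and-forward entry point, never directly.

ALLOWED_OPERATIONS = {"read_config", "get_status"}

def _trusted_service(operation: str) -> str:
    # Trusted component inside the security perimeter.
    return "result of " + operation

def tcb_interface(operation: str) -> str:
    """The only sanctioned entry point across the security perimeter."""
    if not isinstance(operation, str):
        raise TypeError("operation must be a string")
    if operation not in ALLOWED_OPERATIONS:
        # The bouncer turns away anything the interface was not built for.
        raise PermissionError("operation not permitted: " + operation)
    return _trusted_service(operation)
```

An untrusted process that calls `tcb_interface("format_disk")` is rejected at the boundary, so the trusted component never even sees the request.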
NOTE The TCB and security perimeter are not physical entities, but
conceptual constructs used by system architects and developers to delineate
between trusted and untrusted components and how they communicate.
Reference Monitor
Up to this point we have a CPU that provides a ringed structure and an operating sys-
tem that places its components in the different rings based upon the trust level of each
component. We have a defined security policy, which outlines the level of security we
want our system to provide. We have chosen the mechanisms that will enforce the se-
curity policy (TCB) and implemented security perimeters (interfaces) to make sure
these mechanisms communicate securely. Now we need to develop and implement a
mechanism that ensures that the subjects that access objects within the operating sys-
tem have been given the necessary permissions to do so. This means we need to devel-
op and implement a reference monitor.
The reference monitor is an abstract machine that mediates all access subjects have
to objects, both to ensure that the subjects have the necessary access rights and to pro-
tect the objects from unauthorized access and destructive modification. For a system to
achieve a higher level of trust, it must require subjects (programs, users, processes) to
be fully authorized prior to accessing an object (file, program, resource). A subject must
not be allowed to use a requested resource until the subject has proven it has been
granted access privileges to use the requested object. The reference monitor is an access
control concept, not an actual physical component, which is why it is normally referred
to as the “reference monitor concept” or an “abstract machine.”
A reference monitor defines the design requirements a reference validation mecha-
nism must meet so that it can properly enforce the specifications of a system-based
access control policy. As discussed in Chapter 3, access control is made up of rules,
which specify what subjects (processes, programs, users, etc.) can communicate with
which objects (files, processes, peripheral devices, etc.) and what operations can be
performed (read, write, execute, etc.). If you think about it, almost everything that takes
place within an operating system is made up of subject-to-object communication and
it has to be tightly controlled, or the whole system could be put at risk. If the access
rules of the reference monitor are not properly enforced, a process could potentially
misuse an object, which could result in corruption or compromise.
The reference monitor provides direction on how all access control decisions should
be made and controlled in a centralized and concerted manner within a system. Instead of hav-
ing distributed components carrying out subject-to-object access decisions individually
and independently, all access decisions should be made by a core-trusted, tamperproof
component of the operating system that works within the system’s kernel, which is the
role of the security kernel.
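As a rough sketch, and not a depiction of any real kernel's internals, the centralized mediation idea might look like the following. The subjects, objects, and rule table are invented for illustration; what matters is that every access attempt flows through one check, and anything not explicitly granted is denied:

```python
# Toy sketch of the reference monitor concept: every subject-to-object
# access request goes through one central, tamperproof check; no other
# component in the system grants access on its own.

ACCESS_RULES = {
    # (subject, object): set of permitted operations
    ("backup_process", "/etc/passwd"): {"read"},
    ("editor", "/home/alice/notes.txt"): {"read", "write"},
}

def reference_monitor(subject: str, obj: str, operation: str) -> bool:
    """Mediate one access attempt; the default answer is deny."""
    return operation in ACCESS_RULES.get((subject, obj), set())
```

Here `reference_monitor("backup_process", "/etc/passwd", "read")` is allowed, but a "write" attempt by the same subject, or any access by a subject with no rule at all, is denied by default.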
Security Kernel
The security kernel is made up of hardware, software, and firmware components that
fall within the TCB, and it implements and enforces the reference monitor concept. The
security kernel mediates all access and functions between subjects and objects. The se-
curity kernel is the core of the TCB and is the most commonly used approach to build-
ing trusted computing systems. The security kernel has three main requirements:
• It must provide isolation for the processes carrying out the reference monitor
concept, and the processes must be tamperproof.
• It must be invoked for every access attempt and must be impossible to
circumvent. Thus, the security kernel must be implemented in a complete
and foolproof way.
• It must be small enough to be tested and verified in a complete and
comprehensive manner.
These are the requirements of the reference monitor; therefore, they are the require-
ments of the components that provide and enforce the reference monitor concept—the
security kernel.
These issues work in the abstract but are implemented in the physical world of
hardware devices and software code. The assurance that the components are enforcing
the abstract idea of the reference monitor is proved through testing and evaluations.
NOTE The reference monitor is a concept in which an abstract machine
mediates all access to objects by subjects. The security kernel is the hardware,
firmware, and software of a TCB that implements this concept. The TCB is
the totality of protection mechanisms within a computer system that work
together to enforce a security policy. It contains the security kernel and all
other security protection mechanisms.
The following is a quick analogy to show you the relationship between the pro-
cesses that make up the security kernel, the security kernel itself, and the reference
monitor concept. Individuals (processes) make up a society (security kernel). For a so-
ciety to have a certain standard of living, its members must interact in specific ways,
which is why we have laws. The laws represent the reference monitor, which enforces
proper activity. Each individual is expected to stay within the bounds of the laws and
act in specific ways so society as a whole is not adversely affected and the standard of
living is not threatened. The components within a system must stay within the bounds
of the reference monitor’s laws so they will not adversely affect other components and
threaten the security of the system.
For a system to provide an acceptable level of trust, it must be based on an architec-
ture that provides the capabilities to protect itself from untrusted processes, intentional
or accidental compromises, and attacks at different layers of the system. A majority of
the trust ratings obtained through formal evaluations require a defined subset of sub-
jects and objects, explicit domains, and the isolation of processes so their access can be
controlled and the activities performed on them can be audited.
Let’s regroup. We know that a system’s trust is defined by how it enforces its own
security policy. When a system is tested against specific criteria, a rating is assigned to the
system and this rating is used by customers, vendors, and the computing society as a
whole. The criteria will determine if the security policy is being properly supported and
enforced. The security policy lays out the rules and practices pertaining to how a system
will manage, protect, and allow access to sensitive resources. The reference monitor is a
concept that says all subjects must have proper authorization to access objects, and this
concept is implemented by the security kernel. The security kernel comprises all the re-
sources that supervise system activity in accordance with the system’s security policy and
is part of the operating system that controls access to system resources. For the security
kernel to work correctly, the individual processes must be isolated from each other and
domains must be defined to dictate which objects are available to which subjects.
NOTE Security policies that prevent information from flowing from a high
security level to a lower security level are called multilevel security policies.
These types of policies permit a subject to access an object only if the
subject’s security level is higher than or equal to the object’s classification.
As previously stated, many of the concepts covered in the previous sections are ab-
stract ideas that will be manifested in physical hardware components, firmware, soft-
ware code, and activities through designing, building, and implementing a system.
Operating systems implement access rights, permissions, access tokens, mandatory integrity levels, access control lists, access control entries, memory protection, sandbox-
es, virtualization, and more to meet the requirements of these abstract concepts.
Security Models
An important concept in the design and analysis of secure systems is the security mod-
el, because it incorporates the security policy that should be enforced in the system. A
model is a symbolic representation of a policy. It maps the desires of the policymakers
into a set of rules that a computer system must follow.
Key Terms
• Virtualization  Creation of a simulated environment (hardware platform, operating system, storage, etc.) that allows for central control and scalability.
• Hypervisor  Central program used to manage virtual machines (guests) within a simulated environment (host).
• Security policy  Strategic tool used to dictate how sensitive information and resources are to be managed and protected.
• Trusted computing base  A collection of all the hardware, software, and firmware components within a system that provide security and enforce the system’s security policy.
• Trusted path  Trustworthy software channel that is used for communication between two processes and that cannot be circumvented.
• Security perimeter  Mechanism used to delineate between the components within and outside of the trusted computing base.
• Reference monitor  Concept that defines a set of design requirements for a reference validation mechanism (security kernel), which enforces an access control policy over subjects’ (processes, users) ability to perform operations (read, write, execute) on objects (files, resources) on a system.
• Security kernel  Hardware, software, and firmware components that fall within the TCB and implement and enforce the reference monitor concept.
• Multilevel security policy  Outlines how a system can simultaneously process information at different classifications for users with different clearance levels.

The reason this chapter has repeatedly mentioned the security policy and its importance is that it is an abstract term that represents the objectives and goals a system must meet and accomplish to be deemed secure and acceptable. How do we get from an
abstract security policy to the point at which an administrator is able to uncheck a box
on the GUI to disallow David from accessing configuration files on his system? There
are many complex steps in between that take place during the system’s design and
development.
A security model maps the abstract goals of the policy to information system terms
by specifying explicit data structures and techniques necessary to enforce the security
policy. A security model is usually represented in mathematics and analytical ideas,
which are mapped to system specifications and then developed by programmers
through programming code. So we have a policy that encompasses security goals, such
as “each subject must be authenticated and authorized before accessing an object.” The
security model takes this requirement and provides the necessary mathematical formu-
las, relationships, and logic structure to be followed to accomplish this goal. From
there, specifications are developed per operating system type (Unix, Windows, Macin-
tosh, and so on), and individual vendors can decide how they are going to implement
mechanisms that meet these necessary specifications.
So in a very general and simplistic example, if a security policy states that subjects
need to be authorized to access objects, the security model would provide the mathe-
matical relationships and formulas explaining how x can access y only through the
outlined specific methods. Specifications are then developed to provide a bridge to
what this means in a computing environment and how it maps to components and
mechanisms that need to be coded and developed. The developers then write the pro-
gram code to produce the mechanisms that provide a way for a system to use ACLs and
give administrators some degree of control. This mechanism presents the network ad-
ministrator with a GUI that enables the administrator to choose (via check boxes, for
example) which subjects can access what objects and to be able to set this configuration
within the operating system. This is a rudimentary example, because security models
can be very complex, but it is used to demonstrate the relationship between the secu-
rity policy and the security model.
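To make the end of that chain concrete, here is a minimal, hypothetical sketch of what such a mechanism boils down to underneath the GUI. The object and subject names are invented for this example; unchecking the box for David is, at bottom, a small edit to an access control list that every authorization check consults:

```python
# Hypothetical sketch of the mechanism at the end of the policy-to-code
# chain: the abstract goal ("subjects must be authorized to access
# objects") has become a concrete ACL that an administrator edits,
# e.g., via a GUI checkbox.

acl = {"config_files": {"david", "admin"}}   # object -> subjects allowed

def is_authorized(subject: str, obj: str) -> bool:
    """Authorization check every access request consults."""
    return subject in acl.get(obj, set())

def revoke(obj: str, subject: str) -> None:
    """What unchecking the GUI box for a subject does under the hood."""
    acl[obj].discard(subject)
```

After `revoke("config_files", "david")`, David's access requests fail the `is_authorized` check while other subjects on the list are unaffected.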
A security policy outlines goals without regard to how they will be accomplished. A
model is a framework that gives the policy form and solves security access problems for
particular situations. Several security models have been developed to enforce security
policies. The following sections provide overviews of each model.
Relationship Between a Security Policy and a Security Model
If someone tells you to live a healthy and responsible life, this is a very broad,
vague, and abstract notion. So when you ask this person how this is accom-
plished, they outline the things you should and should not do (do not harm
others, do not lie, eat your vegetables, and brush your teeth). The security policy
provides the abstract goals, and the security model provides the do’s and don’ts
necessary to fulfill these goals.
State Machine Models
No matter what state I am in, I am always safe.
In state machine models, the state of a system is used to verify its security, which means that all current permissions and all current instances of subjects accessing objects must be captured. Maintaining the state of a system deals with each subject’s association with objects. If subjects can access objects only by means that are consistent with the security policy, the system is secure. A state of a system is a snapshot of a system at one moment in time. Many activities can alter this state; these are referred to as state transitions. The developers of an operating system that will implement
the state machine model need to look at all the different state transitions that are pos-
sible and assess whether a system that starts up in a secure state can be put into an in-
secure state by any of these events. If all of the activities that are allowed to happen in
the system do not compromise the system and put it into an insecure state, then the
system executes a secure state machine model.
The state machine model is used to describe the behavior of a system to different
inputs. It provides mathematical constructs that represent sets (subjects and objects)
and sequences. When an object accepts an input, this modifies a state variable. A sim-
plistic example of a state variable is (Name, Value), as shown in Figure 4-22. This vari-
able is part of the operating system’s instruction set. When this variable is called upon
to be used, it can be populated with (Color, Red) from the input of a user or program.
Let’s say the user enters a different value, so now the variable is (Color, Blue). This is a
simplistic example of a state transition. Some state transitions are this simple, but com-
plexity comes in when the system must decide if this transition should be allowed. To
allow this transition, the object’s security attributes and the access rights of the subject
must be reviewed and allowed by the operating system.
Developers who implement the state machine model must identify all the initial
states (default variable values) and outline how these values can be changed (inputs
that will be accepted) so that the various possible final states (resulting values) still ensure that the system is safe. How these values can be changed is often implemented through condition statements: “if condition then update.”
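The “if condition then update” pattern can be sketched as follows. The state variable, its initial value, and the set of allowed values are invented for this example; the point is that the machine starts in a defined secure state and rejects any transition that would take it outside the allowed set:

```python
# Minimal sketch of a secure state machine: state variables begin in a
# defined secure state, and only transitions that pass a condition check
# are applied, so the machine never enters an undefined state.

ALLOWED_COLORS = {"Red", "Blue", "Green"}   # the permitted final states

state = {"Color": "Red"}                    # defined initial (secure) state

def transition(variable: str, new_value: str) -> bool:
    """Apply a state transition only if the condition holds."""
    if variable == "Color" and new_value in ALLOWED_COLORS:
        state[variable] = new_value         # if condition then update
        return True
    return False                            # reject; the state is unchanged
```

A rejected input, such as `transition("Color", "Plaid")`, leaves the state exactly as it was, which is the “fail in a secure state” behavior discussed below.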
Formal Models
Using models in software development has not become as popular as once imag-
ined, primarily because vendors are under pressure to get products to market as
soon as possible. Using formal models takes more time during the architectural
phase of development, extra time that many vendors feel they cannot afford.
Formal models are definitely used in the development of systems that cannot al-
low errors or security breaches, such as air traffic control systems, spacecraft soft-
ware, railway signaling systems, military classified systems, and medical control
systems. This does not mean that these models, or portions of them, are not used
in industry products, but rather that industry vendors do not always follow these
models in the purely formal and mathematical way all the time.
A system that has employed a state machine model will be in a secure state in each
and every instance of its existence. It will boot up into a secure state, execute commands
and transactions securely, allow subjects to access resources only in secure states, and
shut down and fail in a secure state. Failing in a secure state is extremely important. It
is imperative that if anything unsafe takes place, the system must be able to “save itself”
and not make itself vulnerable. When an operating system displays an error message to
the user or reboots or freezes, it is executing a safety measure. The operating system has
experienced something that is deemed illegal and it cannot take care of the situation
itself, so to make sure it does not stay in this insecure state, it reacts in one of these
fashions. Thus, if an application or system freezes on you, know that it is simply the
system trying to protect itself and your data.
Figure 4-22 A simplistic example of a state change
Several points should be considered when developing a product that uses a state machine model. Initially, the developer must define what and where the state variables are. In a computer environment, all data variables could independently be considered state variables, and an inappropriate change to one could conceivably change or corrupt the system or another process’s activities. Next, the developer must define a secure state for each state variable. The next step is to define and identify the allowable state transition functions. These functions will describe the allowable changes that can be
made to the state variables.
After the state transition functions are defined, they must be tested to verify that the
overall machine state will not be compromised and that these transition functions will
keep the integrity of the system (computer, data, program, or process) intact at all times.
Bell-LaPadula Model
I don’t want anyone to know my secrets.
Response: We need Mr. Bell and Mr. LaPadula in here then.
In the 1970s, the U.S. military used time-sharing mainframe systems and was con-
cerned about the security of these systems and leakage of classified information. The
Bell-LaPadula model was developed to address these concerns. It was the first mathe-
matical model of a multilevel security policy used to define the concept of a secure state
machine and modes of access, and outlined rules of access. Its development was funded
by the U.S. government to provide a framework for computer systems that would be
used to store and process sensitive information. The model’s main goal was to prevent
secret information from being accessed in an unauthorized manner.
A system that employs the Bell-LaPadula model is called a multilevel security system
because users with different clearances use the system, and the system processes data at
different classification levels. The level at which information is classified determines
the handling procedures that should be used. The Bell-LaPadula model is a state ma-
chine model that enforces the confidentiality aspects of access control. A matrix and
security levels are used to determine if subjects can access different objects. The sub-
ject’s clearance is compared to the object’s classification and then specific rules are ap-
plied to control how subject-to-object interactions can take place.
This model uses subjects, objects, access operations (read, write, and read/write),
and security levels. Subjects and objects can reside at different security levels and will
have relationships and rules dictating the acceptable activities between them. This
model, when properly implemented and enforced, has been mathematically proven to preserve a secure state. It is also considered to be an infor-
mation-flow security model, which means that information does not flow in an inse-
cure manner.
The Bell-LaPadula model is a subject-to-object model. An example would be how
you (subject) could read a data element (object) from a specific database and write data
into that database. The Bell-LaPadula model focuses on ensuring that subjects are prop-
erly authenticated—by having the necessary security clearance, need to know, and for-
mal access approval—before accessing an object.
Three main rules are used and enforced in the Bell-LaPadula model: the simple se-
curity rule, the *-property (star property) rule, and the strong star property rule. The
simple security rule states that a subject at a given security level cannot read data that
reside at a higher security level. For example, if Bob is given the security clearance of
secret, this rule states he cannot read data classified as top secret. If the organization
wanted Bob to be able to read top-secret data, it would have given him that clearance
in the first place.

CISSP All-in-One Exam Guide
370
The *-property rule (star property rule) states that a subject in a given security level
cannot write information to a lower security level. The simple security rule is referred to
as the “no read up” rule, and the *-property rule is referred to as the “no write down”
rule. The third rule, the strong star property rule, states that a subject that has read and
write capabilities can only perform those functions at the same security level; nothing
higher and nothing lower. So, for a subject to be able to read and write to an object, the
clearance and classification must be equal.
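These three rules can be sketched as simple comparisons over an ordered set of security levels. This is an illustrative sketch only; the level names and function names are examples chosen here, not part of the model's formal definition:

```python
# Illustrative sketch of the three Bell-LaPadula rules.
# The level ordering and function names are hypothetical examples.
LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance, object_classification):
    # Simple security rule: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance, object_classification):
    # *-property rule: no write down.
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

def can_read_write(subject_clearance, object_classification):
    # Strong star property rule: read/write only at the same level.
    return LEVELS[subject_clearance] == LEVELS[object_classification]

# Bob holds a secret clearance: he cannot read top-secret data,
# and he cannot write down to a confidential object.
assert not can_read("secret", "top secret")
assert can_read("secret", "confidential")
assert not can_write("secret", "confidential")
assert can_read_write("secret", "secret")
```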
These three rules indicate what states the system can go into. Remember that a state
is the values of the variables in the software at a snapshot in time. If a subject has per-
formed a read operation on an object at a lower security level, the subject now has a
variable that is populated with the data that was read, or copied, into its variable. If
a subject has written to an object at a higher security level, the subject has modified a
variable within that object’s domain.
NOTE In access control terms, the word dominate means to be higher than
or equal to. So if you see a statement such as “A subject can only perform a
read operation if the access class of the subject dominates the access class
of an object,” this just means the subject must have a clearance that is higher
than or equal to the object. In the Bell-LaPadula model, this is referred to as
the dominance relation, which is the relationship of the subject’s clearance to
the object’s classification.
The state of a system changes as different operations take place. The Bell-LaPadula
model defines a secure state, meaning a secure computing environment and the al-
lowed actions, which are security-preserving operations. This means the model pro-
vides a secure state and only permits operations that will keep the system within a
secure state and not let it enter into an insecure state. So if 100 people access 2,000
objects in a day using this one system, this system is put through a lot of work and sev-
eral complex activities must take place. However, at the end of the day, the system is just
as secure as it was at the beginning of the day. This is the definition of the Basic Security
Theorem used in computer science, which states that if a system initializes in a secure
state and all allowed state transitions are secure, then every subsequent state will be
secure no matter what inputs occur.
NOTE The tranquility principle, which is also used in this model, means that
subjects’ and objects’ security levels cannot change in a manner that violates
the security policy.
An important thing to note is that the Bell-LaPadula model was developed to make
sure secrets stay secret; thus, it provides and addresses confidentiality only. This model
does not address the integrity of the data the system maintains—only who can and can-
not access the data and what operations can be carried out.

Chapter 4: Security Architecture and Design
371
NOTE Ensuring that information does not flow from a higher security
level to a lower level is referred to as controlling unauthorized downgrading of
information, which would take place through a “write down” operation. An
actual compromise occurs if and when a user at a lower security level reads
this data.
So what does this mean and why does it matter? Chapter 3 discussed mandatory
access control (MAC) systems versus discretionary access control (DAC) systems. All
MAC systems are based on the Bell-LaPadula model, because it allows for multilevel
security to be integrated into the code. Subjects and objects are assigned labels. The
subject’s label contains its clearance label (top secret, secret, or confidential) and the
object’s label contains its classification label (top secret, secret, or confidential). When
a subject attempts to access an object, the system compares the subject’s clearance label
and the object’s classification label and looks at a matrix to see if this is a legal and se-
cure activity. In our scenario, it is a perfectly fine activity, and the subject is given access
to the object. Now, if the subject’s clearance label is top secret and the object’s classifica-
tion label is secret, the subject cannot write to this object, because of the *-property
rule, which makes sure that subjects cannot accidentally or intentionally share confi-
dential information by writing to an object at a lower security level. As an example,
suppose that a busy and clumsy general (who has top-secret clearance) in the Army
opens up a briefing letter (which has a secret classification) that will go to all clerks at
all bases around the world. He attempts to write that the United States is attacking
Cuba. The Bell-LaPadula model will come into action and not permit this general to
write this information to this type of file because his clearance is higher than that of the
memo.
Likewise, if a nosey military staff clerk tried to read a memo that was available only
to generals and above, the Bell-LaPadula model would stop this activity. The clerk’s
clearance is lower than that of the object (the memo), and this violates the simple se-
curity rule of the model. It is all about keeping secrets secret.
NOTE It is important that MAC operating systems and MAC databases
follow these rules. In Chapter 10, we will look at how databases can follow
these rules by the use of polyinstantiation.
CAUTION You may run into the Bell-LaPadula rule called Discretionary
Security Property (ds-property), which is another property of this model.
This rule is based on named subjects and objects. It dictates that specific
permissions allow a subject to pass on permissions at its own discretion.
These permissions are stored in an access matrix. This just means that
mandatory and discretionary access control mechanisms can be implemented
in one operating system.

Biba Model
The Biba model was developed after the Bell-LaPadula model. It is a state machine
model similar to the Bell-LaPadula model. Biba addresses the integrity of data within
applications. The Bell-LaPadula model uses a lattice of security levels (top secret, secret,
sensitive, and so on). These security levels were developed mainly to ensure that sensi-
tive data were only available to authorized individuals. The Biba model is not con-
cerned with security levels and confidentiality, so it does not base access decisions upon
this type of lattice. Instead, the Biba model uses a lattice of integrity levels.
If implemented and enforced properly, the Biba model prevents data from any in-
tegrity level from flowing to a higher integrity level. Biba has three main rules to provide
this type of protection:
• *-integrity axiom A subject cannot write data to an object at a higher
integrity level (referred to as “no write up”).
• Simple integrity axiom A subject cannot read data from a lower integrity
level (referred to as “no read down”).
• Invocation property A subject cannot request service from (invoke) a subject
at a higher integrity level.
The name “simple integrity axiom” might sound a little goofy, but this rule protects
the data at a higher integrity level from being corrupted by data at a lower integrity
level. This is all about trusting the source of the information. Another way to look at it
is that trusted data are “clean” data, and untrusted data (from a lower integrity level) are
“dirty” data. Dirty data should not be mixed with clean data, because that could ruin
the integrity of the clean data.
The simple integrity axiom applies not only to users creating the data, but also to
processes. A process of lower integrity should not be writing to trusted data of a higher
integrity level. The areas of the different integrity levels are compartmentalized within
the application that is based on the Biba model.
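The three Biba rules can also be sketched as comparisons over an ordered set of integrity levels. The level names and function names here are illustrative assumptions, not the model's formal notation:

```python
# Illustrative sketch of the Biba integrity rules.
# The integrity levels and function names are hypothetical examples.
INTEGRITY = {"untrusted": 1, "user": 2, "system": 3}

def can_write(subject_level, object_level):
    # *-integrity axiom: no write up.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

def can_read(subject_level, object_level):
    # Simple integrity axiom: no read down.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_invoke(subject_level, target_level):
    # Invocation property: no invoking subjects of higher integrity.
    return INTEGRITY[subject_level] >= INTEGRITY[target_level]

# A "dirty" untrusted process cannot write to or invoke clean,
# higher-integrity components, and a clean process does not read dirty data.
assert not can_write("untrusted", "system")
assert not can_read("system", "untrusted")
assert not can_invoke("untrusted", "system")
```

Note that each comparison runs in the opposite direction from the Bell-LaPadula rules, which is why the two models are so easy to confuse.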
An analogy would be if you were writing an article for The New York Times about the
security trends over the last year, the amount of money businesses lost, and the cost/
benefit ratio of implementing firewalls, IDS, and vulnerability scanners. You do not
want to get your data and numbers from any old website without knowing how those
figures were calculated and the sources of the information. Your article (data at a
higher integrity level) can be compromised if mixed with unfounded information from
a bad source (data at a lower integrity level).

Rules to Know
The main rules of the Bell-LaPadula model are:
• Simple security rule A subject cannot read data within an object that
resides at a higher security level (the “no read up” rule).
• *-property rule A subject cannot write to an object at a lower security
level (the “no write down” rule).
• Strong star property rule For a subject to be able to read and write to an
object, the subject’s clearance and the object’s classification must be equal.
When you are first learning about the Bell-LaPadula and Biba models, they may
seem similar and the reasons for their differences may be somewhat confusing. The
Bell-LaPadula model was written for the U.S. government, and the government is very
paranoid about leakage of its secret information. In its model, a user cannot write to a
lower level because that user might let out some secrets. Similarly, a user at a lower
level cannot read anything at a higher level because that user might learn some secrets.
However, not everyone is so worried about confidentiality and has such big important
secrets to protect. The commercial industry is concerned about the integrity of its data.
An accounting firm might be more worried about keeping its numbers straight and
making sure decimal points are not dropped or extra zeroes are not added in a process
carried out by an application. The accounting firm is more concerned about the integ-
rity of these data and is usually under little threat of someone trying to steal these
numbers, so the firm would use software that employs the Biba model. Of course, the
accounting firm does not look for the name Biba on the back of a product or make sure
it is in the design of its application. Which model to use is something that was decided
upon and implemented when the application was being designed. The assurance rat-
ings are what consumers use to determine if a system is right for them. So, even if the
accountants are using an application that employs the Biba model, they would not
necessarily know (and we’re not going to tell them).
As mentioned earlier, the invocation property in the Biba model states that a subject
cannot invoke (call upon) a subject at a higher integrity level. Well, how is this different
from the other two Biba rules? The *-integrity axiom (no write up) dictates how subjects
can modify objects. The simple integrity axiom (no read down) dictates how subjects can
read objects. The invocation property dictates how one subject can communicate with
and initialize other subjects at run time. An example of a subject invoking another sub-
ject is when a process sends a request to a procedure to carry out some type of task.
Subjects are only allowed to invoke tools at a lower integrity level. With the invocation
property, the system is making sure a dirty subject cannot invoke a clean tool to con-
taminate a clean object.
Bell-LaPadula vs. Biba
The Bell-LaPadula model is used to provide confidentiality. The Biba model is used
to provide integrity. The Bell-LaPadula and Biba models are information flow
models because they are most concerned about data flowing from one level to an-
other. Bell-LaPadula uses security levels, and Biba uses integrity levels. It is impor-
tant for CISSP test takers to know the rules of Biba and Bell-LaPadula. Their rules
sound similar: simple and * rules—one writing one way and one reading another
way. A tip for how to remember them is that if the word “simple” is used, the rule
is talking about reading. If the rule uses * or “star,” it is talking about writing. So
now you just need to remember the reading and writing directions per model.

Clark-Wilson Model
The Clark-Wilson model was developed after Biba and takes some different approaches
to protecting the integrity of information. This model uses the following elements:
• Users Active agents
• Transformation procedures (TPs) Programmed abstract operations, such as
read, write, and modify
• Constrained data items (CDIs) Can be manipulated only by TPs
• Unconstrained data items (UDIs) Can be manipulated by users via
primitive read and write operations
• Integrity verification procedures (IVPs) Check the consistency of CDIs with
external reality
Although this list may look overwhelming, it is really quite straightforward. When
an application uses the Clark-Wilson model, it separates data into one subset that
needs to be highly protected, which is referred to as a constrained data item (CDI), and
another subset that does not require a high level of protection, which is called an un-
constrained data item (UDI). Users cannot modify critical data (CDI) directly. Instead,
the subject (user) must be authenticated to a piece of software, and the software proce-
dures (TPs) will carry out the operations on behalf of the user. For example, when Kathy
needs to update information held within her company’s database, she will not be al-
lowed to do so without a piece of software controlling these activities. First, Kathy must
authenticate to a program, which is acting as a front end for the database, and then the
program will control what Kathy can and cannot do to the information in the database.
This is referred to as access triple: subject (user), program (TP), and object (CDI). A user
cannot modify CDI without using a TP.
So, Kathy is going to input data, which is supposed to overwrite some original data
in the database. The software (TP) has to make sure this type of activity is secure and
will carry out the write procedures for Kathy. Kathy (and any type of subject) is not
trusted enough to manipulate objects directly.
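The access triple can be sketched as a TP that authenticates the user before touching the CDI. Everything here (the names, the authorization set, the checks themselves) is a hypothetical illustration, not part of the model's formal definition:

```python
# Minimal sketch of the Clark-Wilson access triple: a user never touches
# a CDI directly; an authenticated TP modifies it on the user's behalf.
# Names and the authentication check are illustrative assumptions.
cdi = {"balance": 2000}          # constrained data item
authorized_users = {"Kathy"}     # users bound to this TP

def tp_deposit(user, amount):
    # The TP authenticates the user and enforces its integrity rules
    # before it modifies the CDI.
    if user not in authorized_users:
        raise PermissionError("user not authorized for this TP")
    if amount <= 0:
        raise ValueError("deposit must be positive")
    cdi["balance"] += amount

# Kathy's update goes through the TP, never straight to the CDI.
tp_deposit("Kathy", 50)
assert cdi["balance"] == 2050
```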
The CDI must have its integrity protected by the TPs. The UDI does not require such
a high level of protection. For example, if Kathy did her banking online, the data on her
bank’s servers and databases would be split into UDI and CDI categories. The CDI cat-
egory would contain her banking account information, which needs to be highly pro-
tected. The UDI data could be her customer profile, which she can update as needed.
TPs would not be required when Kathy needed to update her UDI information.
In some cases, a system may need to change UDI data into CDI data. For example,
when Kathy updates her customer profile via the website to show her new correct ad-
dress, this information will need to be moved into the banking software that is respon-
sible for mailing out bank account information. The bank would not want Kathy to
interact directly with that banking software, so a piece of software (TP) is responsible
for copying that data and updating this customer’s mailing address. At this stage, the TP
is changing the state of the UDI data to CDI. These concepts are shown in Figure 4-23.

Remember that this is an integrity model, so it must have something that ensures that
specific integrity rules are being carried out. This is the job of the IVP. The IVP ensures
that all critical data (CDI) manipulation follows the application’s defined integrity rules.
What usually turns people’s minds into spaghetti when they are first learning about
models is that models are theoretical and abstract. Thus, when they ask the common
question, “What are these defined integrity rules that the IVP must comply with?” they
are told, “Whatever the vendor chooses them to be.”
A model is made up of constructs, mathematical formulas, and other PhD kinds of
stuff. The model provides the framework that can be used to build a certain character-
istic into software (confidentiality, integrity). So the model does not stipulate what
specific integrity rules the IVP must enforce; it just provides the framework, and the
vendor defines the integrity rules that best fit its product’s requirements. The vendor
implements integrity rules that its customer base needs the most. So if a vendor is de-
veloping an application for a financial institution, the UDI could be customer profiles
that they are allowed to update and the CDI could be the bank account information,
usually held on a centralized database. The UDI data do not need to be as highly pro-
tected and can be located on the same system or another system. A user can have access
to UDI data without the use of a TP, but when the user needs to access a CDI, they must
use a TP. So the vendor who develops the product will determine what type of data is
considered UDI and what type of data is CDI and develop the TPs to control and or-
chestrate how the software enforces the integrity of the CDI values.
In a banking application, the IVP would ensure that the CDI represents the correct
value. For example, if Kathy has $2,000 in her account and then deposits $50, the CDI for
her account should now have a value of $2,050. The IVP ensures the consistency of the
data. So after Kathy carries out this transaction and the IVP validates the integrity of
the CDI (new bank account value is correct), then the CDI is considered to be in a consis-
tent state. TPs are the only components allowed to modify the state of the CDIs. In our
example, TPs would be software procedures that carry out deposit, withdrawal, and trans-
fer functionalities. Using TPs to modify CDIs is referred to as a well-formed transaction.
Figure 4-23 Subjects cannot modify CDI without using TP.

A well-formed transaction is a series of operations that are carried out to transfer the
data from one consistent state to another. If Kathy transfers money from her checking
account to her savings account, this transaction is made up of two operations: subtract
money from one account and add it to a different account. By making sure the new
values in her checking and savings accounts are accurate and their integrity is intact, the
IVP maintains internal and external consistency. The Clark-Wilson model also outlines
how to incorporate separation of duties into the architecture of an application. If we
follow our same example of banking software, if a customer needs to withdraw over
$10,000, the application may require a supervisor to log in and authenticate this trans-
action. This is a countermeasure against potential fraudulent activities. The model pro-
vides the rules that the developers must follow to properly implement and enforce
separation of duties through software procedures.
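A well-formed transaction paired with an IVP check might be sketched as follows; the account names and the simple "total balance is preserved" consistency rule are assumptions chosen for illustration:

```python
# Sketch of a well-formed transaction with an IVP consistency check.
# The accounts and the consistency rule are illustrative assumptions.
accounts = {"checking": 500, "savings": 100}  # CDIs
TOTAL = sum(accounts.values())                # external reality to check against

def tp_transfer(src, dst, amount):
    # Two operations carried out together as one well-formed transaction.
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount

def ivp():
    # IVP: verify the CDIs are still in a consistent state.
    return sum(accounts.values()) == TOTAL

tp_transfer("checking", "savings", 200)
assert accounts == {"checking": 300, "savings": 300}
assert ivp()   # the transaction moved the CDIs to a new consistent state
```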
Goals of Integrity Models
The following are the three main goals of integrity models:
• Prevent unauthorized users from making modifications
• Prevent authorized users from making improper modifications (separation
of duties)
• Maintain internal and external consistency (well-formed transaction)

Clark-Wilson addresses each of these goals in its model. Biba only addresses the
first goal.
Internal and external consistency is provided by the IVP, which ensures that what is
stored in the system as CDI properly maps to the input value that modified its state. So
if Kathy has $2,500 in her account and she withdraws $2,000, the resulting value in the
CDI is $500.
To summarize, the Clark-Wilson model enforces the three goals of integrity by us-
ing access triple (subject, software [TP], object), separation of duties, and auditing. This
model enforces integrity by using well-formed transactions (through access triple) and
separation of duties.
NOTE Many people find these security models confusing because they do
not interact with them directly. A software architect chooses the model that
will provide the rules he needs to follow to implement a certain type of
security (confidentiality, integrity). A building architect will follow the model
(framework) that will provide the rules he needs to follow to implement a
certain type of building (office, home, bridge).
Information Flow Model
Now, which way is the information flowing in this system?
Response: Not to you.
The Bell-LaPadula model focuses on preventing information from flowing from a
high security level to a low security level. The Biba model focuses on preventing infor-
mation from flowing from a low integrity level to a high integrity level. Both of these
models were built upon the information flow model. Information flow models can deal
with any kind of information flow, not only from one security (or integrity) level to
another.
In the information flow model, data are thought of as being held in individual and
discrete compartments. In the Bell-LaPadula model, these compartments are based on
security levels. Remember that MAC systems (which you learned about in Chapter 3)
are based on the Bell-LaPadula model. MAC systems use labels on each subject and
object. The subject’s label indicates the subject’s clearance and need to know. The ob-
ject’s label indicates the object’s classification and categories. If you are in the Army and
have a top-secret clearance, this does not mean you can access all of the Army’s top-se-
cret information. Information is compartmentalized based on two factors—classification
and need to know. Your clearance has to dominate the object’s classification and your
security profile must contain one of the categories listed in the object’s label, which
enforces need to know. So Bell-LaPadula is an information flow model that ensures
that information cannot flow from one compartment to another in a way that threatens
the confidentiality of the data. Biba compartmentalizes data based on integrity levels.
It is a model that controls information flow in a way that is intended to protect the
integrity of the most trusted information.
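This label comparison can be sketched as a dominance check combined with a category intersection; the levels and categories shown are illustrative examples, not real labels:

```python
# Sketch of a MAC label check: the subject's clearance must dominate the
# object's classification AND the labels must share a category (need to
# know). Levels and categories are illustrative examples.
LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def may_access(subj_clearance, subj_categories, obj_class, obj_categories):
    dominates = LEVELS[subj_clearance] >= LEVELS[obj_class]
    need_to_know = bool(set(subj_categories) & set(obj_categories))
    return dominates and need_to_know

# A top-secret clearance alone is not enough without a matching category.
assert not may_access("top secret", {"logistics"}, "top secret", {"weapons"})
assert may_access("top secret", {"logistics"}, "secret", {"logistics"})
```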

How can information flow within a system? The answer is in many ways. Subjects
can access files. Processes can access memory segments. When data are moved from the
hard drive’s swap space into memory, information flows. Data are moved into and out
of registers on a CPU. Data are moved into different cache memory storage devices.
Data are written to the hard drive, thumb drive, CD-ROM drive, and so on. Properly
controlling all of these ways of how information flows can be a very complex task. This
is why the information flow model exists—to help architects and developers make sure
their software does not allow information to flow in a way that can put the system or
data in danger. One way that the information flow model provides this type of protec-
tion is by ensuring that covert channels do not exist in the code.
Covert Channels
I have my decoder ring, cape, and pirate’s hat on. I will communicate to my spy buddies with
this tribal drum and a whistle.
A covert channel is a way for an entity to receive information in an unauthorized
manner. It is an information flow that is not controlled by a security mechanism. This
type of information path was not developed for communication; thus, the system does
not properly protect this path, because the developers never envisioned information
being passed in this way. Receiving information in this manner clearly violates the sys-
tem’s security policy.
The channel to transfer this unauthorized data is the result of one of the following
conditions:
• Improper oversight in the development of the product
• Improper implementation of access controls within the software
• Existence of a shared resource between the two entities that is not properly
controlled
Covert channels are of two types: storage and timing. In a covert storage channel,
processes are able to communicate through some type of storage space on the system.
For example, System A is infected with a Trojan horse that has installed software that will
be able to communicate to another process in a limited way. System A has a very sensi-
tive file (File 2) that is of great interest to a particular attacker. The software the Trojan
horse installed is able to read this file, and it needs to send the contents of the file to the
attacker, which can only happen one bit at a time. The intrusive software is going to
communicate to the attacker by locking a specific file (File 3). When the attacker at-
tempts to access File 3 and finds it has a software lock enabled on it, the attacker inter-
prets this to mean the first bit in the sensitive file is a 1. The second time the attacker
attempts to access File 3, it is not locked. The attacker interprets this value to be 0. This
continues until all of the data in the sensitive file are sent to the attacker. In this example,
the software the Trojan horse installed is the messenger. It can access the sensitive data
and it uses another file that is on the hard drive to send signals to the attacker.
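The file-locking scheme just described can be reduced to a few lines; this toy simulation replaces the real file lock with a shared flag and ignores timing and synchronization:

```python
# Toy simulation of the covert storage channel described above: the
# Trojan's software signals each bit of the sensitive file by locking
# or unlocking a shared file, and the attacker reads the bits back by
# probing the lock. Purely illustrative.
shared_lock = {"locked": False}    # stands in for the lock on File 3

def trojan_signal(bit):
    shared_lock["locked"] = (bit == 1)   # lock for a 1, unlock for a 0

def attacker_probe():
    return 1 if shared_lock["locked"] else 0

secret_bits = [1, 0, 1, 1]         # contents of the sensitive file
received = []
for bit in secret_bits:            # one probe per agreed time slot
    trojan_signal(bit)
    received.append(attacker_probe())
assert received == secret_bits     # the file leaks one bit at a time
```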

Another way that a covert storage channel attack can take place is through file cre-
ation. A system has been compromised and has software that can create and delete files
within a specific directory and has read access to a sensitive file. When the intrusive
software sees that the first bit of the data within the sensitive file is 1, it will create a file
named Temp in a specific directory. The attacker will try to create (or upload) a file with
the exact same name, and the attacker will receive a message indicating there is already
a file with that name in that directory. The attacker will know this means the first bit in
the sensitive file is a 1. The attacker tries to create the same file again, and when the
system allows this, it means the intrusive software on the system deleted that file, which
means the second bit is a 0.
Information flow models produce rules on how to ensure that covert channels do
not exist. But there are many ways information flows within a system, so identifying
and rooting out covert channels is usually more difficult than one would think at first
glance.
NOTE An overt channel is an intended path of communication. Processes
should be communicating through overt channels, not covert channels.
In a covert timing channel, one process relays information to another by modulat-
ing its use of system resources. The two processes that are communicating to each other
are using the same shared resource. So in our example, Process A is a piece of nefarious
software that was installed via a Trojan horse. In a multitasked system, each process is
offered access to interact with the CPU. When this function is offered to Process A, it
rejects it—which indicates a 1 to the attacker. The next time Process A is offered access
to the CPU, it uses it, which indicates a 0 to the attacker. Think of this as a type of Morse
code, but using some type of system resource.
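This accept/reject signaling can also be reduced to a toy simulation; the functions below are illustrative stand-ins for the real scheduler interaction and ignore actual timing:

```python
# Toy simulation of the covert timing channel: Process A leaks one bit
# per scheduling slot by rejecting (1) or using (0) its offered CPU turn.
# Purely illustrative; real attacks work against an actual scheduler.
def process_a_uses_cpu(bit):
    # Process A declines its CPU slot to signal a 1, uses it for a 0.
    return bit == 0

def attacker_observe(used):
    # The attacker watches whether the slot was consumed.
    return 0 if used else 1

bits = [1, 1, 0, 1]
leaked = [attacker_observe(process_a_uses_cpu(b)) for b in bits]
assert leaked == bits   # Morse-code-style signaling via a shared resource
```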
Other Types of Covert Channels
Although we are looking at covert channels within programming code, covert
channels can be used in the outside world as well. Let’s say you are going to at-
tend one of my lectures. Before the lecture begins, you and I agree on a way of
communicating that no one else in the audience will understand. I tell you that
if I twiddle a pen between my fingers in my right hand, that means there will be
a quiz at the end of class. If I twiddle a pen between my fingers in my left hand,
there will be no quiz. It is a covert channel, because this is not a normal way of
communicating and it is secretive. (In this scenario, I would twiddle the pen in
both hands to confuse you and make you stay after class to take the quiz all by
yourself. Shame on you for wanting to be forewarned about a quiz!)

Countermeasures Because all operating systems have some type of covert chan-
nel, it is not always feasible to get rid of them all. The number of acceptable covert
channels usually depends on the assurance rating of a system. A system that has a Com-
mon Criteria rating of EAL 6 has fewer covert channels than a system with an EAL rating
of 3, because an EAL 6 rating represents a higher level of assurance that the system
provides its claimed degree of protection. There is not much a user
can do to counteract these channels; instead, the channels must be addressed when the
system is constructed and developed.
NOTE In the Orange Book, covert channels in operating systems are not
addressed until security level B2 and above because these are the systems
that would be holding data sensitive enough for others to go through all the
necessary trouble to access data in this fashion.
Noninterference Model
Stop touching me. Stop touching me. You are interfering with me!
Multilevel security properties can be expressed in many ways, one being noninter-
ference. This concept is implemented to ensure any actions that take place at a higher
security level do not affect, or interfere with, actions that take place at a lower level. This
type of model does not concern itself with the flow of data, but rather with what a sub-
ject knows about the state of the system. So if an entity at a higher security level per-
forms an action, it cannot change the state for the entity at the lower level.
If a lower-level entity was aware of a certain activity that took place by an entity at a
higher level and the state of the system changed for this lower-level entity, the entity
might be able to deduce too much information about the activities of the higher state,
which in turn is a way of leaking information. Users at a lower security level should not
be aware of the commands executed by users at a higher level and should not be af-
fected by those commands in any way.
Let’s say that Tom and Kathy are both working on a multilevel mainframe at the
same time. Tom has the security clearance of secret and Kathy has the security clearance
of top secret. Since this is a central mainframe, the terminal Tom is working at has the
context of secret, and Kathy is working at her own terminal, which has a context of top
secret. This model states that nothing Kathy does at her terminal should directly or in-
directly affect Tom’s domain (available resources and working environment). So what-
ever commands she executes or whichever resources she interacts with should not affect
Tom’s experience of working with the mainframe in any way. This sounds simple
enough, until you actually understand what this model is really saying.
It seems very logical and straightforward that when Kathy executes a command, it
should not affect Tom’s computing environment. But the real intent of this model is to
address covert channels and inference attacks. The model looks at the shared resources
that the different users of a system will use and tries to identify how information can be
passed from a process working at a higher security clearance to a process working at a
lower security clearance. Since Tom and Kathy are working on the same system at the

Chapter 4: Security Architecture and Design
381
same time, they will most likely have to share some type of resources. So the model is
made up of rules to ensure that Kathy cannot pass data to Tom through covert storage or
timing channels.
The other security breach this model addresses is the inference attack. An inference
attack occurs when someone has access to some type of information and can infer (or
guess) something that he does not have the clearance level or authority to know. For
example, let’s say Tom is working on a file that contains information about supplies
that are being sent to Russia. He closes out of that file and one hour later attempts to
open the same file. During this time, this file’s classification has been elevated to top
secret, so when Tom attempts to access it, he is denied. Tom can infer that some type of
top-secret mission is getting ready to take place with Russia. He does not have clearance
to know this; thus, it would be an inference attack, or “leaking information.” (Inference
attacks are further explained in Chapter 10.)
Lattice Model
A lattice is a mathematical construct that is built upon the notion of a group. The most
common definition of the lattice model is “a structure consisting of a finite partially
ordered set together with least upper and greatest lower bound operators on the set.”
Two things are wrong with this type of explanation. First, “a structure consisting of
a finite partially ordered set together with least upper and greatest lower bound opera-
tors on the set” can only be understood by someone who understands the model in the
first place. This is similar to the common definition of metadata: “data about data.”
Only after you really understand what metadata are does this definition make any sense
to you. So this definition of lattice model is not overly helpful.
The problem with the mathematical explanation is that it is in weird alien writings
that only people who obtain their master’s or PhD degrees in mathematics can
understand. This model needs to be explained in everyday language so we mere
mortals can understand it. So let’s give it a try.
The MAC model was explained in Chapter 3 and then built upon in this chapter. In
this model, the subjects and objects have labels. Each subject’s label contains the clear-
ance and need-to-know categories that this subject can access. Suppose Kathy’s security
clearance is top secret and she has been formally granted access to the compartments
named Iraq and Korea, based on her need-to-know. So Kathy’s security label states the
following: TS {Iraq, Korea}. Table 4-1 shows the different files on the system in this
scenario. The system is based on the MAC model, which means the operating system is
making access decisions based on security label contents.
Table 4-1 Security Access Control Elements

Kathy’s Security Label    Top Secret {Iraq, Korea}
File B’s Security Label   Secret {Iraq}
File C’s Security Label   Top Secret {Iraq, Korea}
File D’s Security Label   Secret {Iraq, Korea, Iran}

CISSP All-in-One Exam Guide
382
Kathy attempts to access File B; since her clearance is greater than File B’s classifica-
tion, she can read this file but not write to it. This is where the “partially ordered set to-
gether with least upper and greatest lower bound operators on the set” comes into play. A
set is a subject (Kathy) and an object (file). It is a partially ordered set because all of the
access controls are not completely equal. The system has to decide between read, write,
full control, modify, and all the other types of access permissions used in this operating
system. So, “partially ordered” means the system has to apply the most restrictive access
controls to this set, and “least upper bound” means the system looks at one access con-
trol’s statement (Kathy can read the file) and the other access control’s statement (Kathy
cannot write to the file) and takes the least upper bound value. Since no write is more
restrictive than read, Kathy’s least upper bound access to this file is read and her great-
est lower bound is no write. Figure 4-24 illustrates the bounds of access. This is just
a confusing way of saying, “The most that Kathy can do with this file is read it. The least
she can do is not write to it.”
Let’s figure out the least upper bound and greatest lower bound access levels for
Kathy and File C. Kathy’s clearance equals File C’s classification. Under the
Bell-LaPadula model, this is when the strong star property would kick in. (Remember that the strong
star property states that a subject can read and write to an object of the same security
level.) So the least upper bound is write and the greatest lower bound is read.
If we look at File D’s security label, we see it has a category that Kathy does not have
in her security label, which is Iran. This means Kathy does not have the necessary need-
to-know to be able to access this file. Kathy’s least upper bound and greatest lower
bound access permission is no access.
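The three outcomes for Kathy can be sketched in code. This is my own minimal illustration, not part of the model’s formal definition: a label is a (clearance level, categories) pair, dominance requires a greater-or-equal level and a superset of the categories, and the bounds follow the read/write outcomes described above. The LEVELS ordering is an illustrative assumption.

```python
# A minimal sketch (my own, not the formal lattice model) of the access
# bounds described in the text, using the labels from Table 4-1.

LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}  # assumed ordering

def dominates(subject, obj):
    """True if the subject's label dominates the object's label:
    its level is >= and its categories are a superset."""
    s_level, s_cats = subject
    o_level, o_cats = obj
    return LEVELS[s_level] >= LEVELS[o_level] and o_cats <= s_cats

def bounds_of_access(subject, obj):
    """Return (least upper bound, greatest lower bound): the most and
    the least a subject may do with the object, per the text."""
    if not dominates(subject, obj):
        return ("no access", "no access")
    if subject == obj:                   # equal labels: strong star property
        return ("write", "read")
    return ("read", "no write")          # subject strictly dominates

kathy  = ("Top Secret", {"Iraq", "Korea"})
file_b = ("Secret", {"Iraq"})
file_c = ("Top Secret", {"Iraq", "Korea"})
file_d = ("Secret", {"Iraq", "Korea", "Iran"})

print(bounds_of_access(kathy, file_b))   # ('read', 'no write')
print(bounds_of_access(kathy, file_c))   # ('write', 'read')
print(bounds_of_access(kathy, file_d))   # ('no access', 'no access')
```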
So why does this model state things in a very confusing way when in reality it de-
scribes pretty straightforward concepts? First, I am describing this model in the most
simplistic and basic terms possible so you can get the basic meaning of the purpose of
the model. These seemingly straightforward concepts build in complexity when you
think about all the subject-to-object communications that go on within an operating
system during any one second.

Figure 4-24 Bounds of access through the lattice model

Also, this is a formal model, which means it can be proven mathematically to provide
a specific level of protection if all of its rules are followed properly. Learning these
models is similar to learning the basics of chemistry. A
student first learns about the components of an atom (protons, neutrons, and elec-
trons) and how these elements interact with each other. This is the easy piece. Then the
student gets into organic chemistry and has to understand how these components work
together in complex organic systems (weak and strong attractions, osmosis, and ioniza-
tion). The student then goes to quantum physics to learn that the individual elements
of an atom actually have several different subatomic particles (quarks, leptons, and
mesons). In this book, you are just learning the basic components of the models. Much
more complexity lies under the covers.
Brewer and Nash Model
A wall separates our stuff so you can’t touch my stuff.
Response: Your stuff is green and smells funny. I don’t want to touch it.
The Brewer and Nash model, also called the Chinese Wall model, was created to pro-
vide access controls that can change dynamically depending upon a user’s previous
actions. The main goal of the model is to protect against conflicts of interest by users’
access attempts. For example, if a large marketing company provides marketing promo-
tions and materials for two banks, an employee working on a project for Bank A should
not look at the information the marketing company has on its other bank customer,
Bank B. Such action could create a conflict of interest because the banks are competi-
tors. If the marketing company’s project manager for the Bank A project could view
information on Bank B’s new marketing campaign, he may try to trump its promotion
to please his more direct customer. The marketing company would get a bad reputation
if it allowed its internal employees to behave so irresponsibly. This marketing company
could implement a product that tracks the different marketing representatives’ access
activities and disallows certain access requests that would present this type of conflict
of interest. In Figure 4-25, we see that when a representative accesses Bank A’s informa-
tion, the system automatically makes Bank B’s information off limits. If the representa-
tive accessed Bank B’s data, Bank A’s information would be off limits. These access
controls change dynamically depending upon the user’s authorizations, activities, and
previous access requests.
The Chinese Wall model is also based on an information flow model. No informa-
tion can flow between subjects and objects in a way that would result in a conflict of
interest. The model states that a subject can write to an object if, and only if, the subject
cannot read another object that is in a different dataset. So if we stay with our example,
the project manager could not write to any objects within the Bank A dataset if he cur-
rently has read access to any objects in the Bank B dataset.
This is only one example of how this model can be used. Other industries will
have their own possible conflicts of interest. If you were Martha Stewart’s stockbroker,
you should not be able to read a dataset that indicates a stock’s price is getting ready
to go down and be able to write to Martha’s account indicating she should sell the
stock she has.
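The access history tracking described above can be sketched as follows. This is a hypothetical illustration of the Chinese Wall idea, not a real product; the class, method, and dataset names are my own. Once a user reads data belonging to one company in a conflict-of-interest class, the other companies in that class become off limits.

```python
# A sketch (my own illustration) of Brewer and Nash dynamic access
# controls: access history determines what becomes off limits.

CONFLICT_CLASSES = [{"Bank A", "Bank B"}]   # competing datasets

class ChineseWall:
    def __init__(self):
        self.history = {}                   # user -> set of datasets read

    def can_read(self, user, dataset):
        read = self.history.get(user, set())
        for cls in CONFLICT_CLASSES:
            if dataset in cls and (read & cls) - {dataset}:
                return False                # conflicts with an earlier access
        return True

    def read(self, user, dataset):
        if not self.can_read(user, dataset):
            raise PermissionError(f"{user} may not access {dataset}")
        self.history.setdefault(user, set()).add(dataset)

wall = ChineseWall()
wall.read("pm", "Bank A")              # allowed; Bank B is now off limits
print(wall.can_read("pm", "Bank B"))   # False
print(wall.can_read("pm", "Bank A"))   # True (same dataset again is fine)
```

If the project manager had read Bank B first instead, Bank A would have become the off-limits dataset; the controls change with the order of access, not a static policy.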

Graham-Denning Model
Remember that these are all models, so they are not very specific in nature. Each indi-
vidual vendor must decide how it is going to actually meet the rules outlined in the
chosen model. Bell-LaPadula and Biba do not define how the security and integrity
levels are defined and modified, nor do they provide a way to delegate or transfer access
rights. The Graham-Denning model addresses some of these issues and defines a set of
basic rights in terms of commands that a specific subject can execute on an object. This
model has eight primitive protection rights, or rules of how these types of functional-
ities should take place securely, which are outlined next:
• How to securely create an object
• How to securely create a subject
• How to securely delete an object
• How to securely delete a subject
• How to securely provide the read access right
• How to securely provide the grant access right
• How to securely provide the delete access right
• How to securely provide transfer access rights
These things may sound insignificant, but when you’re building a secure system,
they are critical. If a software developer does not integrate these functionalities in a
secure manner, they can be compromised by an attacker and the whole system can be
at risk.
Figure 4-25 The Chinese Wall model provides dynamic access controls.
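A rough sketch of how a system might administer an access matrix through a few of these primitive operations follows. The class and method names are my own inventions, and real systems enforce many more rules; this only shows the owner-mediated create/grant/delete flavor of the Graham-Denning model.

```python
# A hypothetical sketch (not from the book) of an access matrix managed
# through Graham-Denning-style primitive operations.

class AccessMatrix:
    def __init__(self):
        self.rights = {}                       # (subject, object) -> set of rights

    def create_object(self, owner, obj):
        self.rights[(owner, obj)] = {"owner"}  # creator becomes the owner

    def grant_right(self, granter, subject, obj, right):
        if "owner" not in self.rights.get((granter, obj), set()):
            raise PermissionError("only the owner may grant rights")
        self.rights.setdefault((subject, obj), set()).add(right)

    def delete_object(self, subject, obj):
        if "owner" not in self.rights.get((subject, obj), set()):
            raise PermissionError("only the owner may delete the object")
        for key in [k for k in self.rights if k[1] == obj]:
            del self.rights[key]               # remove every right on the object

m = AccessMatrix()
m.create_object("alice", "file1")
m.grant_right("alice", "bob", "file1", "read")
print(m.rights[("bob", "file1")])   # {'read'}
```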

Harrison-Ruzzo-Ullman Model
The Harrison-Ruzzo-Ullman (HRU) model deals with access rights of subjects and the
integrity of those rights. A subject can carry out only a finite set of operations on an
object. Since security loves simplicity, it is easier for a system to allow or disallow au-
thorization of operations if one command is restricted to a single operation. For ex-
ample, if a subject sent command X, which only required the operation of Y, this is
pretty straightforward and allows the system to allow or disallow this operation to take
place. But if a subject sent a command M that, to be fulfilled, required operations N,
B, W, and P to be carried out, there is much more complexity for the system to work
through in deciding whether this command should be authorized. The integrity of the
access rights also needs to be ensured, so in this example, if any one operation cannot
be processed properly, the whole command fails. So while it is easy to dictate that
subject A can only read object B, it is not always so easy to ensure that each and every
function supports this high-level statement. The HRU model is used by software
designers to ensure that no unforeseen vulnerability is introduced and that the stated
access control goals are achieved.
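The all-or-nothing behavior can be sketched like this. It is my own illustration of the idea, not the HRU formalism itself: a command is a list of primitive operations applied to a copy of the access matrix and committed only if every operation succeeds, so a failing operation leaves the matrix untouched.

```python
# A sketch (my own) of an HRU-style command: a sequence of primitive
# operations that is authorized and applied as a single unit.

import copy

def run_command(matrix, operations):
    """Apply all operations or none (all-or-nothing)."""
    trial = copy.deepcopy(matrix)
    for op in operations:
        op(trial)              # any operation may raise, aborting the command
    matrix.clear()
    matrix.update(trial)       # commit only if every operation succeeded

def enter_right(subject, obj, right):
    def op(m):
        m.setdefault((subject, obj), set()).add(right)
    return op

def require_right(subject, obj, right):
    def op(m):
        if right not in m.get((subject, obj), set()):
            raise PermissionError(f"{subject} lacks {right} on {obj}")
    return op

matrix = {("alice", "file1"): {"own"}}
run_command(matrix, [require_right("alice", "file1", "own"),
                     enter_right("bob", "file1", "read")])
print(matrix[("bob", "file1")])           # {'read'}

try:    # a command that fails partway changes nothing
    run_command(matrix, [require_right("mallory", "file1", "own"),
                         enter_right("mallory", "file1", "write")])
except PermissionError:
    pass
print(("mallory", "file1") in matrix)     # False: nothing was committed
```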
Security Models Recap
All of these different models can get your head spinning. Most people are not
familiar with all of them, which can make it all even harder to absorb. The fol-
lowing are the core concepts of the different models:
• Bell-LaPadula model This is the first mathematical model of a
multilevel security policy that defines the concept of a secure state and
necessary modes of access. It ensures that information only flows in a
manner that does not violate the system policy and is confidentiality
focused.
  • The simple security rule A subject cannot read data at a higher
    security level (no read up).
  • The *-property rule A subject cannot write to an object at a lower
    security level (no write down).
  • The strong star property rule A subject can perform read and write
    functions only to the objects at its same security level.
• Biba model A formal state transition model that describes a set of
access control rules designed to ensure data integrity.
  • The simple integrity axiom A subject cannot read data at a lower
    integrity level (no read down).
  • The *-integrity axiom A subject cannot modify an object in a
    higher integrity level (no write up).

• Clark-Wilson model This integrity model is implemented to protect
the integrity of data and to ensure that properly formatted transactions
take place. It addresses all three goals of integrity:
  • Subjects can access objects only through authorized programs
    (access triple).
  • Separation of duties is enforced.
  • Auditing is required.
• Information flow model This is a model in which information is
restricted in its flow to only go to and from entities in a way that does
not negate or violate the security policy.
• Noninterference model This formal multilevel security model states
that commands and activities performed at one security level should
not be seen by, or affect, subjects or objects at a different security level.
• Brewer and Nash model This model allows for dynamically changing
access controls that protect against conflicts of interest. Also known as
the Chinese Wall model.
• Graham-Denning model This model shows how subjects and objects
should be created and deleted. It also addresses how to assign specific
access rights.
• Harrison-Ruzzo-Ullman model This model shows how a finite set of
procedures can be available to edit the access rights of a subject.

Security Modes of Operation
A multilevel security system can operate in different modes depending on the
sensitivity of the data being processed, the clearance level of the users, and what those
users are authorized to do. The mode of operation describes the security conditions
under which the system actually functions.
These modes are used in MAC systems, which hold one or more classifications of
data. Several things come into play when determining the mode the operating system
should be working in:
• The types of users who will be directly or indirectly connecting to the system
• The type of data (classification levels, compartments, and categories)
processed on the system
• The clearance levels, need-to-know, and formal access approvals the users
will have
The following sections describe the different security modes that multilevel
operating systems can be developed and configured to work in.

Dedicated Security Mode
Our system only holds secret data and we can all access it.
A system is operating in a dedicated security mode if all users have a clearance for,
and a formal need-to-know about, all data processed within the system. All users have
been given formal access approval for all information on the system and have signed
nondisclosure agreements (NDAs) pertaining to this information. The system can han-
dle a single classification level of information.
Many military systems have been designed to handle only one level of security,
which works in dedicated security mode. This requires everyone who uses the system to
have the highest level of clearance required by any and all data on the system. If a sys-
tem holds top-secret data, only users with that clearance can use the system. Other
military systems work with multiple security levels, which is done by compartmental-
izing the data. These types of systems can support users with high and low clearances
simultaneously.
System High-Security Mode
Our system only holds secret data, but only some of us can access all of it.
A system is operating in system high-security mode when all users have a security
clearance to access the information but not necessarily a need-to-know for all the infor-
mation processed on the system. So, unlike in the dedicated security mode, in which all
users have a need-to-know pertaining to all data on the system, in system high-security
mode, all users have a need-to-know pertaining to some of the data.
This mode also requires all users to have the highest level of clearance required by
any and all data on the system. However, even though a user has the necessary security
clearance to access an object, the user may still be restricted if he does not have a need-
to-know pertaining to that specific object.
Compartmented Security Mode
Our system has various classifications of data, and each individual has the clearance to access
all of the data, but not necessarily the need to know.
A system is operating in compartmented security mode when all users have the clear-
ance to access all the information processed by the system in a system high-security
configuration, but might not have the need-to-know and formal access approval. This
means that if the system is holding secret and top-secret data, all users must have at
least a top-secret clearance to gain access to this system. This is how compartmented
and multilevel security modes are different. Both modes require the user to have a valid
need-to-know, NDA, and formal approval, but compartmented security mode requires
the user to have a clearance that dominates (above or equal to) any and all data on the
system, whereas multilevel security mode just requires the user to have clearance to ac-
cess the data she will be working with.
In compartmented security mode, users are restricted from accessing some informa-
tion because they do not need to access it to perform the functions of their jobs and

they have not been given formal approval to access it. This would be enforced by having
security labels on all objects that reflect the sensitivity (classification level, classification
category, and handling procedures) of the information. In this mode, users can access
a compartment of data only, enforced by mandatory access controls.
The objective is to ensure that the minimum possible number of people learn of
information at each level. Compartments are categories of data with a limited number
of subjects cleared to access data at each level. Compartmented mode workstations
(CMWs) enable users to process multiple compartments of data at the same time, if
they have the necessary clearance.
Multilevel Security Mode
Our system has various classifications of data, and each individual has the clearance and need-
to-know to access only individual pieces of data.
A system is operating in multilevel security mode when it permits two or more clas-
sification levels of information to be processed at the same time when not all of the
users have the clearance or formal approval to access all the information being pro-
cessed by the system. So all users must have formal approval, NDA, need-to-know, and
the necessary clearance to access the data that they need to carry out their jobs. In this
mode, the user cannot access all of the data on the system, only what she is cleared to
access.
The Bell-LaPadula model is an example of a multilevel security model because it
handles multiple information classifications at a number of different security levels
within one system simultaneously.
Guards
Software and hardware guards allow the exchange of data between trusted (high assur-
ance) and less trusted (low assurance) systems and environments. Let’s say you are
working on a MAC system (working in dedicated security mode of secret) and you need
the system to communicate with a MAC database (working in multilevel security mode,
which goes up to top secret). These two systems provide different levels of protection.
If a system with lower assurance could directly communicate with a system of higher
assurance, then security vulnerabilities and compromises could be introduced. So, a
software guard can be implemented, which is really just a front-end product that allows
interconnectivity between systems working at different security levels. (The various
types of guards available can carry out filtering, processing requests, data blocking, and
data sanitization.) Or a hardware guard can be implemented, which is a system with
two NICs connecting the two systems that need to communicate. The guard provides a
level of strict access control between different systems.
The guard accepts requests from the system of lower assurance, reviews the request
to make sure it is allowed, and then submits the request to the system of higher assur-
ance. The goal is to ensure that information does not flow from a high security level to
a low security level in an unauthorized manner.
Guards can be used to connect different MAC systems working in different security
modes and to connect different networks working at different security levels. In many
cases, the less trusted system can send messages to the more trusted system but can only

receive acknowledgments in return. This is common when e-mail messages need to go
from less trusted systems to more trusted classified systems.
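A guard’s review-and-forward behavior might be sketched as follows. This is a deliberately simplified illustration, not a real guard product; the allowed action, the sanitization step, and the function names are all my own assumptions. The low side gets back only an acknowledgment or a rejection, never data from the high side.

```python
# A simplified sketch (my own) of a guard between a low-assurance
# network and a high-assurance system.

ALLOWED_ACTIONS = {"submit_message"}        # assumed policy: one permitted action

def guard(request):
    """Review a request from the low side before passing it up."""
    if request.get("action") not in ALLOWED_ACTIONS:
        return {"status": "rejected"}       # blocked by the guard
    sanitized = {"action": request["action"],
                 "body": request.get("body", "")[:1024]}  # crude sanitization
    high_side_deliver(sanitized)
    return {"status": "ack"}                # only an acknowledgment flows back

def high_side_deliver(message):
    pass    # stand-in for the trusted system's message handler

print(guard({"action": "submit_message", "body": "report"}))  # {'status': 'ack'}
print(guard({"action": "read_mailbox"}))                      # {'status': 'rejected'}
```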
Security Modes Recap
Many times it is easier to understand these different modes when they are laid
out in a clear and simplistic format (see also Table 4-2). Pay attention to the
words in italics because they emphasize the differences among the various modes.
Dedicated Security Mode All users must have . . .
• Proper clearance for all information on the system
• Formal access approval for all information on the system
• A signed NDA for all information on the system
• A valid need-to-know for all information on the system
• All users can access all data.
System High-Security Mode All users must have . . .
• Proper clearance for all information on the system
• Formal access approval for all information on the system
• A signed NDA for all information on the system
• A valid need-to-know for some information on the system
• All users can access some data, based on their need-to-know.
Compartmented Security Mode All users must have . . .
• Proper clearance for the highest level of data classification on the
system
• Formal access approval for some information on the system
• A signed NDA for all information they will access on the system
• A valid need-to-know for some of the information on the system
• All users can access some data, based on their need-to-know and formal
access approval.
Multilevel Security Mode All users must have . . .
• Proper clearance for some of the information on the system
• Formal access approval for some of the information on the system
• A signed NDA for all information on the system
• A valid need-to-know for some of the information on the system
• All users can access some data, based on their need-to-know, clearance,
and formal access approval.
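The summary above can be encoded as a small lookup table, which makes the pattern across the four modes easy to see: moving down the list, each mode relaxes one more requirement from ALL to SOME. This is just an illustrative restatement of the recap, not an access control implementation.

```python
# An illustrative restatement (my own) of the security mode requirements:
# what every user of a system in each mode must hold, relative to the
# information on the system.

MODE_REQUIREMENTS = {
    # mode:          (NDA,   clearance, formal approval, need-to-know)
    "dedicated":     ("ALL", "ALL",     "ALL",           "ALL"),
    "system high":   ("ALL", "ALL",     "ALL",           "SOME"),
    "compartmented": ("ALL", "ALL",     "SOME",          "SOME"),
    "multilevel":    ("ALL", "SOME",    "SOME",          "SOME"),
}

def requirements(mode):
    nda, clearance, approval, ntk = MODE_REQUIREMENTS[mode]
    return (f"signed NDA for {nda}, clearance for {clearance}, "
            f"formal access approval for {approval}, and need-to-know "
            f"for {ntk} information on the system")

print(requirements("compartmented"))
```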

Trust and Assurance
I trust that you will act properly; thus, I have a high level of assurance in you.
Response: You are such a fool.
As discussed earlier in the section “Trusted Computing Base,” no system is really
secure because, with enough resources, attackers can compromise almost any system in
one way or another; however, systems can provide levels of trust. The trust level tells the
customer how much protection he can expect out of this system and the assurance that
the system will act in a correct and predictable manner in each and every computing
situation.
The TCB comprises all the protection mechanisms within a system (software, hard-
ware, firmware). All of these mechanisms need to work in an orchestrated way to en-
force all the requirements of a security policy for a specific system. When evaluated,
these mechanisms are tested, their designs are inspected, and their supporting docu-
mentation is reviewed and evaluated. How the system is developed, maintained, and
even delivered to the customer are all under review when the trust for a system is being
gauged. All of these different components are put through an evaluation process and
assigned an assurance rating, which represents the level of trust and assurance the test-
ing team has in the product. Customers then use this rating to determine which system
best fits their security needs.
Assurance and trust are similar in nature, but slightly different with regard to prod-
uct ratings. In a trusted system, all protection mechanisms work together to process
sensitive data for many types of uses, and will provide the necessary level of protection
per classification level. Assurance looks at the same issues but in more depth and detail.
Systems that provide higher levels of assurance have been tested extensively and have
had their designs thoroughly inspected, their development stages reviewed, and their
technical specifications and test plans evaluated. You can buy a car and you can trust
it, but you have a much deeper sense of assurance of that trust if you know how the
car was built, what it was built with, who built it, what tests it was put through, and
how it performed in many different situations.

Table 4-2 A Summary of the Different Security Modes

                              Signed      Proper         Formal access  A valid need-
                              NDA for     clearance for  approval for   to-know for
Dedicated security mode       ALL         ALL            ALL            ALL
System high-security mode     ALL         ALL            ALL            SOME
Compartmented security mode   ALL         ALL            SOME           SOME
Multilevel security mode      ALL         SOME           SOME           SOME

(ALL/SOME = all/some information on the system.)
In the Trusted Computer System Evaluation Criteria (TCSEC), commonly known as
the Orange Book (addressed shortly), the lower assurance level ratings look at a sys-
tem’s protection mechanisms and testing results to produce an assurance rating, but the
higher assurance level ratings look more at the system design, specifications, develop-
ment procedures, supporting documentation, and testing results. The protection mech-
anisms in the higher assurance level systems may not necessarily be much different
from those in the lower assurance level systems, but the way they were designed and
built is under much more scrutiny. With this extra scrutiny comes higher levels of assur-
ance of the trust that can be put into a system.
Systems Evaluation Methods
An assurance evaluation examines the security-relevant parts of a system, meaning the
TCB, access control mechanisms, reference monitor, kernel, and protection mecha-
nisms. The relationship and interaction between these components are also evaluated.
There are different methods of evaluating and assigning assurance levels to systems.
Two reasons explain why more than one type of assurance evaluation process exists:
methods and ideologies have evolved over time, and various parts of the world look at
computer security differently and rate some aspects of security differently. Each method
will be explained and compared.
Why Put a Product Through Evaluation?
Submitting a product to be evaluated against the Orange Book, Information Technol-
ogy Security Evaluation Criteria, or Common Criteria is no walk in the park for a ven-
dor. In fact, it is a really painful and long process, and no one wakes up in the morning
thinking, “Yippee! I have to complete all of the paperwork that the National Computer
Security Center requires so my product can be evaluated!” So, before we go through
these different criteria, let’s look at why anyone would even put themselves through this
process.
If you were going shopping to buy a firewall, how would you know what level of
protection each provides and which is the best product for your environment? You
could listen to the vendor’s marketing hype and believe the salesperson who informs
you that a particular product will solve all of your life problems in one week. Or you
could listen to the advice of an independent third party who has fully tested the prod-
uct and does not have any bias toward the product. If you choose the second option,
then you join a world of people who work within the realm of assurance ratings in one
form or another.
In the United States, the National Computer Security Center (NCSC) was an orga-
nization within the National Security Agency (NSA) that was responsible for evaluating
computer systems and products. It had a group, called the Trusted Product Evaluation
Program (TPEP), that oversaw the testing by approved evaluation entities of commer-
cial products against a specific set of criteria.

So, a vendor created a product and submitted it to an approved evaluation entity
that was compliant with the TPEP guidelines. The evaluation entity had groups of tes-
ters who would follow a set of criteria to test the vendor’s product. Once the testing was
over, the product was assigned an assurance rating. So, instead of having to trust the
marketing hype of the financially motivated vendor, you as a consumer can take the
word of an objective third-party entity that fully tested the product.
This evaluation process is very time-consuming and expensive for the vendor. Not
every vendor puts its product through this process, because of the expense and delayed
date to get it to market. Typically, a vendor would put its product through this process
if its main customer base will be making purchasing decisions based on assurance rat-
ings. In the United States, the Department of Defense is the largest customer, so major
vendors put their main products through this process with the hope that the Depart-
ment of Defense (and others) will purchase their products.
NOTE The Trusted Product Evaluation Program (TPEP) has evolved over
time. TPEP was in operation from 1983 to 1998 and worked within the NSA.
Then came the Trusted Technology Assessment Program, a commercial
evaluation process that lasted until 2000. The Common Criteria evaluation
framework replaced the processes that were taking place in these two
programs in 2001. Products are no longer evaluated within the NSA, but
are tested through an international organization.
The Orange Book
The U.S. Department of Defense developed the Trusted Computer System Evaluation
Criteria (TCSEC), which was used to evaluate operating systems, applications, and dif-
ferent products. These evaluation criteria are published in a book with an orange cover,
which is called, appropriately, the Orange Book. (We like to keep things simple in
security.) Customers used the assurance rating that the criteria presented as a metric
when comparing different products. The criteria also provided direction for
manufacturers so they knew what specifications to build to, and provided a one-stop
evaluation process so customers did not need to have individual components within
the systems evaluated.
The Orange Book was used to evaluate whether a product contained the security
properties the vendor claimed it did and whether the product was appropriate for a
specific application or function. The Orange Book was used to review the functionality,
effectiveness, and assurance of a product during its evaluation, and it used classes that
were devised to address typical patterns of security requirements.
TCSEC provides a classification system that is divided into hierarchical divisions of
assurance levels:
A. Verified protection
B. Mandatory protection

C. Discretionary protection
D. Minimal security
Classification A represents the highest level of assurance, and D represents the low-
est level of assurance.
Each division can have one or more numbered classes with a corresponding set of
requirements that must be met for a system to achieve that particular rating. The classes
with higher numbers offer a greater degree of trust and assurance. So B2 would offer
more assurance than B1, and C2 would offer more assurance than C1.
The criteria break down into seven different areas:
• Security policy The policy must be explicit and well defined and enforced by the mechanisms within the system.
• Identification Individual subjects must be uniquely identified.
• Labels Access control labels must be associated properly with objects.
• Documentation Documentation must be provided, including test, design, and specification documents, user guides, and manuals.
• Accountability Audit data must be captured and protected to enforce accountability.
• Life-cycle assurance Software, hardware, and firmware must be able to be tested individually to ensure that each enforces the security policy in an effective manner throughout their lifetimes.
• Continuous protection The security mechanisms and the system as a whole must perform predictably and acceptably in different situations continuously.
These categories are evaluated independently, but the rating assigned at the end
does not specify these different objectives individually. The rating is a sum total of
these items.
Each division and class incorporates the requirements of the ones below it. This
means that C2 must meet its criteria requirements and all of C1’s requirements, and B3
has its requirements to fulfill along with those of C1, C2, B1, and B2. Each division or
class ups the ante on security requirements and is expected to fulfill the requirements
of all the classes and divisions below it.
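Because each division and class subsumes everything below it, the TCSEC ratings form a strict total order. The following Python sketch is illustrative only; the function names are our own invention, not part of the standard:

```python
# Illustrative sketch: TCSEC ratings modeled as a total order,
# from lowest to highest assurance.
TCSEC_ORDER = ["D", "C1", "C2", "B1", "B2", "B3", "A1"]

def offers_more_assurance(rating_a: str, rating_b: str) -> bool:
    """Return True if rating_a sits higher in the TCSEC hierarchy than rating_b."""
    return TCSEC_ORDER.index(rating_a) > TCSEC_ORDER.index(rating_b)

def meets_requirement(product_rating: str, required_rating: str) -> bool:
    """Because each class incorporates all lower classes' requirements,
    a product rated at or above the required class satisfies it."""
    return TCSEC_ORDER.index(product_rating) >= TCSEC_ORDER.index(required_rating)

print(offers_more_assurance("B2", "B1"))  # True
print(meets_requirement("B3", "C2"))      # True: B3 includes all C2 requirements
```

This is why a consumer checking the EPL only needs to confirm that a product's rating is at or above the level their environment calls for.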
So, when a vendor submitted a product for evaluation, it submitted it to the National Computer Security Center (NCSC). The group that oversaw the evaluation process was called the Trusted Product Evaluation Program (TPEP). Successfully evaluated products were placed on the Evaluated Products List (EPL) with their corresponding rating. When consumers were interested in certain products and systems, they could check the appropriate EPL to find out their assigned assurance levels.

CISSP All-in-One Exam Guide
394
Division D: Minimal Protection
There is only one class in Division D. It is reserved for systems that have been evaluated
but fail to meet the criteria and requirements of the higher divisions.
Division C: Discretionary Protection
The C rating category has two individual assurance ratings within it, which are de-
scribed next. The higher the number of the assurance rating, the greater the protection.
C1: Discretionary Security Protection Discretionary access control is based
on individuals and/or groups. It requires a separation of users and information, and
identification and authentication of individual entities. Some type of access control is
necessary so users can ensure their data will not be accessed and corrupted by others.
The system architecture must supply a protected execution domain so privileged sys-
tem processes are not adversely affected by lower-privileged processes. There must be
specific ways of validating the system’s operational integrity. The documentation re-
quirements include design documentation, which shows that the system was built to
include protection mechanisms, test documentation (test plan and results), a facility
manual (so companies know how to install and configure the system correctly), and
user manuals.
The type of environment that would require this rating is one in which users are
processing information at the same sensitivity level; thus, strict access control and au-
diting measures are not required. It would be a trusted environment with low security
concerns.
Isn’t the Orange Book Dead?
We have moved from the Orange Book to the Common Criteria in the industry,
so a common question is, “Why do I have to study this Orange Book stuff?” The
Orange Book was the first evaluation criteria and was used for 20 years. Many of
the basic terms and concepts that have carried through originated in the Orange
Book. And we still have several products with these ratings that eventually will go
through the Common Criteria evaluation process.
The CISSP exam is steadily moving from the Orange Book to the Common Criteria, but don't count the Orange Book out yet.
As a follow-on observation, many people are new to the security field. It is a
booming market, which means a flood of not-so-experienced people will be
jumping in and attempting to charge forward without a real foundation of knowl-
edge. To some readers, this book will just be a nice refresher and something that
ties already known concepts together. To other readers, many of these concepts
are new and more challenging. If a lot of this stuff is new to you—you are new to
the industry. That is okay, but knowing how we got where we are today is very
beneficial because it gives you a deeper understanding instead of just memorizing for an exam.

Chapter 4: Security Architecture and Design
395
C2: Controlled Access Protection Users need to be identified individually to
provide more precise access control and auditing functionality. Logical access control
mechanisms are used to enforce authentication and the uniqueness of each individu-
al’s identification. Security-relevant events are audited, and these records must be pro-
tected from unauthorized modification. The architecture must provide resource, or ob-
ject, isolation so proper protection can be applied to the resource and any actions taken
upon it can be properly audited. The object reuse concept must also be invoked, mean-
ing that any medium holding data must not contain any remnants of information after
it is released for another subject to use. If a subject uses a segment of memory, that
memory space must not hold any information after the subject is done using it. The
same is true for storage media, objects being populated, and temporary files being cre-
ated—all data must be efficiently erased once the subject is done with that medium.
This class requires a more granular method of providing access control. The system
must enforce strict logon procedures and provide decision-making capabilities when
subjects request access to objects. A C2 system cannot guarantee it will not be compro-
mised, but it supplies a level of protection that would make attempts to compromise it
harder to accomplish.
The type of environment that would require systems with a C2 rating is one in
which users are trusted but a certain level of accountability is required. C2, overall, is
seen as the most reasonable class for commercial applications, but the level of protec-
tion is still relatively weak.
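The object reuse requirement described for C2 can be illustrated with a small sketch. The `BufferPool` class below is hypothetical; real C2 systems enforce this inside the operating system, but the idea of clearing a resource before handing it to the next subject is the same:

```python
# Hypothetical sketch of the C2 "object reuse" requirement: a resource
# manager that zeroizes a memory buffer before reassigning it, so the
# next subject sees no remnants of the previous subject's data.
class BufferPool:
    def __init__(self, buffer_count: int, buffer_size: int):
        self._free = [bytearray(buffer_size) for _ in range(buffer_count)]

    def acquire(self) -> bytearray:
        return self._free.pop()

    def release(self, buf: bytearray) -> None:
        buf[:] = bytes(len(buf))  # wipe remnants before the buffer is reused
        self._free.append(buf)

pool = BufferPool(buffer_count=1, buffer_size=8)
buf = pool.acquire()
buf[:5] = b"top-s"          # subject writes sensitive data
pool.release(buf)           # object reuse: buffer is wiped on release
buf2 = pool.acquire()
print(bytes(buf2))          # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```

The same principle applies to storage media and temporary files: the medium, not just the file system entry, must be cleared before another subject can use it.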
Division B: Mandatory Protection
Mandatory access control is enforced by the use of security labels. The architecture is
based on the Bell-LaPadula security model, and evidence of reference monitor enforce-
ment must be available.
B1: Labeled Security Each data object must contain a classification label and
each subject must have a clearance label. When a subject attempts to access an object,
the system must compare the subject’s and object’s security labels to ensure the re-
quested actions are acceptable. Data leaving the system must also contain an accurate
security label. The security policy is based on an informal statement, and the design
specifications are reviewed and verified.
This security rating is intended for environments that require systems to handle
classified data.
NOTE Security labels are not required until security rating B; thus, C2 does
not require security labels but B1 does.
B2: Structured Protection The security policy is clearly defined and document-
ed, and the system design and implementation are subjected to more thorough review
and testing procedures. This class requires more stringent authentication mechanisms
and well-defined interfaces among layers. Subjects and devices require labels, and the
system must not allow covert channels. A trusted path for logon and authentication
processes must be in place, which means the subject communicates directly with the
application or operating system, and no trapdoors exist. There is no way to circumvent
or compromise this communication channel. Operator and administration functions
are separated within the system to provide more trusted and protected operational
functionality. Distinct address spaces must be provided to isolate processes, and a co-
vert channel analysis is conducted. This class adds assurance by adding requirements to
the design of the system.
The type of environment that would require B2 systems is one that processes sensi-
tive data that require a higher degree of security. This type of environment would re-
quire systems that are relatively resistant to penetration and compromise.
B3: Security Domains In this class, more granularity is provided in each protec-
tion mechanism, and the programming code that is not necessary to support the secu-
rity policy is excluded. The design and implementation should not provide too much
complexity, because as the complexity of a system increases, so must the skill level of
the individuals who need to test, maintain, and configure it; thus, the overall security
can be threatened. The reference monitor components must be small enough to test
properly and be tamperproof. The security administrator role is clearly defined, and the
system must be able to recover from failures without its security level being compro-
mised. When the system starts up and loads its operating system and components, it
must be done in an initial secure state to ensure that any weakness of the system cannot
be taken advantage of in this slice of time.
The type of environment that requires B3 systems is a highly secured environment
that processes very sensitive information. It requires systems that are highly resistant to
penetration.
Division A: Verified Protection
Formal methods are used to ensure that all subjects and objects are controlled with the
necessary discretionary and mandatory access controls. The design, development, im-
plementation, and documentation are looked at in a formal and detailed way. The
security mechanisms between B3 and A1 are not very different, but the way the system was designed and developed is evaluated through a much more structured and stringent procedure.
A1: Verified Design The architecture and protection features are not much differ-
ent from systems that achieve a B3 rating, but the assurance of an A1 system is higher
than a B3 system because of the formality in the way the A1 system was designed, the
way the specifications were developed, and the level of detail in the verification tech-
niques. Formal techniques are used to prove the equivalence between the TCB specifications and the security policy model. A more stringent change configuration management process is put in place with the development of an A1 system, and the overall design can be verified. In
many cases, even the way in which the system is delivered to the customer is under
scrutiny to ensure there is no way of compromising the system before it reaches its
destination.

The type of environment that would require A1 systems is the most secure of se-
cured environments. This type of environment deals with top-secret information and
cannot adequately trust anyone using the systems without strict authentication, restric-
tions, and auditing.
NOTE TCSEC addresses confidentiality, but not integrity. Functionality of
the security mechanisms and the assurance of those mechanisms are not
evaluated separately, but rather are combined and rated as a whole.
The Orange Book and the Rainbow Series
Why are there so many colors in the rainbow?
Response: Because there are so many product types that need to be evaluated.
The Orange Book mainly addresses government and military requirements and ex-
pectations for their computer systems. Many people within the security field have
pointed out several deficiencies in the Orange Book, particularly when it is being ap-
plied to systems that are to be used in commercial areas instead of government organi-
zations. The following list summarizes a majority of the troubling issues that security
practitioners have expressed about the Orange Book:
• It looks specifically at the operating system and not at other issues like
networking, databases, and so on.
• It focuses mainly on one attribute of security—confidentiality—and not on
integrity and availability.
• It works with government classifications and not the protection classifications
commercial industries use.
• It has a relatively small number of ratings, which means many different
aspects of security are not evaluated and rated independently.
The Orange Book places great emphasis on controlling which users can access a
system and virtually ignores controlling what those users do with the information once
they are authorized. Authorized users can, and usually do, cause more damage to data
than outside attackers. Commercial organizations have expressed more concern about
the integrity of their data, whereas military organizations stress that their top concern
is confidentiality. Because of these different goals, the Orange Book is a better evalua-
tion tool for government and military systems.
Because the Orange Book focuses on the operating system, many other areas of se-
curity were left out. The Orange Book provides a broad framework for building and
evaluating trusted systems, but it leaves many questions about topics other than operat-
ing systems unanswered. So, more books were written to extend the coverage of the
Orange Book into other areas of security. These books provide detailed information
and interpretations of certain Orange Book requirements and describe the evaluation
processes. These books are collectively called the Rainbow Series because the cover of
each is a different color.

The Red Book
The Orange Book addresses single-system security, but networks are a combination of
systems, and each network needs to be secure without having to fully trust each and
every system connected to it. The Trusted Network Interpretation (TNI), also called the
Red Book because of the color of its cover, addresses security evaluation topics for net-
works and network components. It addresses isolated local area networks and wide
area internetwork systems.
Like the Orange Book, the Red Book does not supply specific details about how to
implement security mechanisms. Instead, it provides a framework for securing different
types of networks. A network has a security policy, architecture, and design, as does an
operating system. Subjects accessing objects on the network need to be controlled,
monitored, and audited. In a network, the subject could be a workstation and an object
could be a network service on a server.
The Red Book rates confidentiality of data and operations that happen within a
network and the network products. Data and labels need to be protected from unau-
thorized modification, and the integrity of information as it is transferred needs to be
ensured. The source and destination mechanisms used for messages are evaluated and
tested to ensure modification is not allowed.
Encryption and protocols are components that provide a lot of the security within
a network, and the Red Book measures their functionality, strength, and assurance.
The following is a brief overview of the security items addressed in the Red Book:
• Communication integrity
  • Authentication Protects against masquerading and playback attacks. Mechanisms include digital signatures, encryption, timestamps, and passwords.
  • Message integrity Protects the protocol header, routing information, and packet payload from being modified. Mechanisms include message authentication and encryption.
  • Nonrepudiation Ensures that a sender cannot deny sending a message. Mechanisms include encryption, digital signatures, and notarization.
• Denial-of-service prevention
  • Continuity of operations Ensures that the network is available even if attacked. Mechanisms include fault-tolerant and redundant systems and the capability to reconfigure network parameters in case of an emergency.
  • Network management Monitors network performance and identifies attacks and failures. Mechanisms include components that enable network administrators to monitor and restrict resource access.
• Compromise protection
  • Data confidentiality Protects data from being accessed in an unauthorized manner during transmission. Mechanisms include access controls, encryption, and physical protection of cables.
  • Traffic flow confidentiality Ensures that unauthorized entities are not aware of routing information or frequency of communication via traffic analysis. Mechanisms include padding messages, sending noise, or sending false messages.
  • Selective routing Routes messages in a way to avoid specific threats. Mechanisms include network configuration and routing tables.
Assurance is derived by comparing how things actually work to a theory of how
things should work. Assurance is also derived by testing configurations in many differ-
ent scenarios, evaluating engineering practices, and validating and verifying security
claims.
TCSEC was introduced in 1985 and retired in December 2000. It was the first me-
thodical and logical set of standards developed to secure computer systems. It was
greatly influential to several countries that based their evaluation standards on the TC-
SEC guidelines. TCSEC was finally replaced with the Common Criteria.
Information Technology Security Evaluation Criteria
The Information Technology Security Evaluation Criteria (ITSEC) was the first attempt
at establishing a single standard for evaluating security attributes of computer systems
and products by many European countries. The United States looked to the Orange
Book and Rainbow Series, and Europe employed ITSEC to evaluate and rate computer
systems. (Today, everyone is migrating to the Common Criteria, explained in the next
section.)
ITSEC evaluates two main attributes of a system’s protection mechanisms: function-
ality and assurance. When the functionality of a system’s protection mechanisms is
being evaluated, the services that are provided to the subjects (access control mecha-
nisms, auditing, authentication, and so on) are examined and measured. Protection
mechanism functionality can be very diverse in nature because systems are developed
differently just to provide different functionality to users. Nonetheless, when function-
ality is evaluated, it is tested to see if the system’s protection mechanisms deliver what
its vendor says they deliver. Assurance, on the other hand, is the degree of confidence
in the protection mechanisms, and their effectiveness and capability to perform consis-
tently. Assurance is generally tested by examining development practices, documenta-
tion, configuration management, and testing mechanisms.
It is possible for two systems’ protection mechanisms to provide the same type of
functionalities and have very different assurance levels. This is because the underlying
mechanisms providing the functionality can be developed, engineered, and imple-
mented differently. System A and System B may have protection mechanisms that pro-
vide the same type of functionality for authentication, in which case both products
would get the same rating for functionality. But System A’s developers could have been
sloppy and careless when developing their authentication mechanism, in which case
their product would receive a lower assurance rating. ITSEC actually separates these two attributes (functionality and assurance) and rates them separately, whereas TCSEC
clumps them together and assigns them one rating (D through A1).
The following list shows the different types of functionalities and assurance items
tested during an evaluation:
• Security functional requirements
  • Identification and authentication
  • Audit
  • Resource utilization
  • Trusted paths/channels
  • User data protection
  • Security management
  • Product access
  • Communications
  • Privacy
  • Protection of the product's security functions
  • Cryptographic support
• Security assurance requirements
  • Guidance documents and manuals
  • Configuration management
  • Vulnerability assessment
  • Delivery and operation
  • Life-cycle support
  • Assurance maintenance
  • Development
  • Testing
Consider again our example of two systems that provide the same functionality
(pertaining to the protection mechanisms) but have very different assurance levels. Us-
ing the TCSEC approach, the difference in assurance levels will be hard to distinguish
because the functionality and assurance level are rated together. Under the ITSEC ap-
proach, the functionality is rated separately from the assurance, so the difference in
assurance levels will be more noticeable. In the ITSEC criteria, classes F1 to F10 rate the
functionality of the security mechanisms, whereas E0 to E6 rate the assurance of those
mechanisms.

So a difference between ITSEC and TCSEC is that TCSEC bundles functionality and
assurance into one rating, whereas ITSEC evaluates these two attributes separately. The
other differences are that ITSEC was developed to provide more flexibility than TCSEC,
and ITSEC addresses integrity, availability, and confidentiality, whereas TCSEC address-
es only confidentiality. ITSEC also addresses networked systems, whereas TCSEC deals
with stand-alone systems.
Table 4-3 is a general mapping of the two evaluation schemes to show you their
relationship to each other.
As you can see, a majority of the ITSEC ratings can be mapped to the Orange Book
ratings, but then ITSEC took it a step further and added F6 through F10 for specific
needs consumers might have that the Orange Book does not address.
ITSEC provides criteria for operating systems and other products, each of which it refers to as a target of evaluation (TOE). So if you are reading literature discussing the ITSEC rating of a product and it states the TOE has a rating of F1 and E5, you know the TOE is the product that was evaluated and that it has a low functionality rating and a high assurance rating.
The ratings pertain to assurance, which is the correctness and effectiveness of
the security mechanism and functionality. Functionality is viewed in terms of the sys-
tem’s security objectives, security functions, and security mechanisms. The following
are some examples of the functionalities that are tested: identification and authentica-
tion, access control, accountability, auditing, object reuse, accuracy, reliability of ser-
vice, and data exchange.
ITSEC TCSEC
E0 = D
F1 + E1 = C1
F2 + E2 = C2
F3 + E3 = B1
F4 + E4 = B2
F5 + E5 = B3
F5 + E6 = A1
F6 = Systems that provide high integrity
F7 = Systems that provide high availability
F8 = Systems that provide high data integrity during communication
F9 = Systems that provide high confidentiality (like cryptographic devices)
F10 = Networks with high demands on confidentiality and integrity
Table 4-3 ITSEC and TCSEC Mapping
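Table 4-3 can be expressed as a simple lookup. The sketch below is illustrative only; the `map_itsec` function and its return values are our own naming, not part of either standard:

```python
# Illustrative sketch of Table 4-3: given an ITSEC functionality class and
# assurance level, return the roughly equivalent TCSEC rating (if any).
ITSEC_TO_TCSEC = {
    (None, "E0"): "D",
    ("F1", "E1"): "C1",
    ("F2", "E2"): "C2",
    ("F3", "E3"): "B1",
    ("F4", "E4"): "B2",
    ("F5", "E5"): "B3",
    ("F5", "E6"): "A1",
}

def map_itsec(functionality, assurance):
    """F6-F10 address needs (integrity, availability, and so on) that the
    Orange Book never rated, so those pairs have no TCSEC equivalent."""
    return ITSEC_TO_TCSEC.get((functionality, assurance), "no TCSEC equivalent")

print(map_itsec("F2", "E2"))  # C2
print(map_itsec("F7", "E4"))  # no TCSEC equivalent
```

Note that the mapping is only approximate; it shows the relationship between the two schemes rather than a formal equivalence.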

Common Criteria
“TCSEC is too hard, ITSEC is too soft, but the Common Criteria is just right,” said the baby bear.
The Orange Book and the Rainbow Series provide evaluation schemes that are too
rigid and narrowly defined for the business world. ITSEC attempted to provide a more
flexible approach by separating the functionality and assurance attributes and consider-
ing the evaluation of entire systems. However, this flexibility added complexity because
evaluators could mix and match functionality and assurance ratings, which resulted in
too many classifications to keep straight. Because we are a species that continues to try to get it right, the next attempt at effective and usable evaluation criteria was the Common Criteria.
In 1990, the International Organization for Standardization (ISO) identified the
need for international standard evaluation criteria to be used globally. The Common
Criteria project started in 1993 when several organizations came together to combine
and align existing and emerging evaluation criteria (TCSEC, ITSEC, Canadian Trusted
Computer Product Evaluation Criteria [CTCPEC], and the Federal Criteria). The Com-
mon Criteria was developed through a collaboration among national security stan-
dards organizations within the United States, Canada, France, Germany, the United
Kingdom, and the Netherlands.
The benefit of having a globally recognized and accepted set of criteria is that it
helps consumers by reducing the complexity of the ratings and eliminating the need to
understand the definition and meaning of different ratings within various evaluation
schemes. This also helps vendors, because now they can build to one specific set of re-
quirements if they want to sell their products internationally, instead of having to meet
several different ratings with varying rules and requirements.
The Orange Book evaluated all systems by how they compared to the Bell-LaPadu-
la model. The Common Criteria provides more flexibility by evaluating a product
against a protection profile, which is structured to address a real-world security need.
So while the Orange Book says, “Everyone march in this direction in this form using
this path,” the Common Criteria asks, “Okay, what are the threats we are facing today
and what are the best ways of battling them?”
Under the Common Criteria model, an evaluation is carried out on a product and it is assigned an Evaluation Assurance Level (EAL). The testing becomes more thorough and detail-oriented as the assurance level increases. The Common Criteria has seven assurance levels, ranging from EAL1, where functionality testing takes place, to EAL7, where thorough testing is performed and the system design is verified. The different EAL packages are listed next:
• EAL1 Functionally tested
• EAL2 Structurally tested
• EAL3 Methodically tested and checked
• EAL4 Methodically designed, tested, and reviewed
• EAL5 Semiformally designed and tested
• EAL6 Semiformally verified design and tested
• EAL7 Formally verified design and tested
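Because the EAL packages form an ordered scale, checking a product against a procurement requirement reduces to a numeric comparison. A minimal sketch (the names here are illustrative, not official):

```python
# Illustrative sketch: the seven Common Criteria EAL packages as an
# ordered mapping, so a product's EAL can be compared against a
# required level.
EALS = {
    1: "Functionally tested",
    2: "Structurally tested",
    3: "Methodically tested and checked",
    4: "Methodically designed, tested, and reviewed",
    5: "Semiformally designed and tested",
    6: "Semiformally verified design and tested",
    7: "Formally verified design and tested",
}

def satisfies(product_eal: int, required_eal: int) -> bool:
    """Higher EALs reflect more rigorous evaluation, so the scale is ordered."""
    return product_eal >= required_eal

print(EALS[4])          # Methodically designed, tested, and reviewed
print(satisfies(5, 4))  # True
```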

NOTE When a system is “formally verified,” this means it is based on a
model that can be mathematically proven.
The Common Criteria uses protection profiles in its evaluation process. This is a
mechanism used to describe a real-world need for a product that is not currently on the
market. The protection profile contains the set of security requirements, their meaning
and reasoning, and the corresponding EAL rating that the intended product will re-
quire. The protection profile describes the environmental assumptions, the objectives,
and the functional and assurance level expectations. Each relevant threat is listed along
with how it is to be controlled by specific objectives. The protection profile also justifies
the assurance level and requirements for the strength of each protection mechanism.
The protection profile provides a means for a consumer, or others, to identify spe-
cific security needs; this is the security problem to be conquered. If someone identifies
a security need that is not currently being addressed by any current product, that person
can write a protection profile describing the product that would be a solution for this
real-world problem. The protection profile goes on to provide the necessary goals and
protection mechanisms to achieve the required level of security, as well as a list of
things that could go wrong during this type of system development. This list is used by
the engineers who develop the system, and then by the evaluators to make sure the
engineers dotted every i and crossed every t.
The Common Criteria was developed to stick to evaluation classes but also to retain
some degree of flexibility. Protection profiles were developed to describe the function-
ality, assurance, description, and rationale of the product requirements.
Like other evaluation criteria before it, the Common Criteria works to answer two basic questions about products being evaluated: what do its security mechanisms do (functionality), and how sure are you of that (assurance)? This system sets up a framework that enables consumers to clearly specify their security issues and problems, developers to specify their security solution to those problems, and evaluators to unequivocally determine what the product actually accomplishes.
A protection profile contains the following five sections:
• Descriptive elements Provides the name of the profile and a description of the security problem to be solved.
• Rationale Justifies the profile and gives a more detailed description of the real-world problem to be solved. The environment, usage assumptions, and threats are illustrated, along with guidance on the security policies that can be supported by products and systems that conform to this profile.
• Functional requirements Establishes a protection boundary, meaning the threats or compromises within this boundary must be countered. The product or system must enforce the boundary established in this section.
• Development assurance requirements Identifies the specific requirements the product or system must meet during the development phases, from design to implementation.
• Evaluation assurance requirements Establishes the type and intensity of the evaluation.
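The five sections above can be sketched as a simple data structure. The field names below mirror the section names but are our own invention, not an official schema, and the example values are hypothetical:

```python
from dataclasses import dataclass, field

# Illustrative sketch: the five sections of a Common Criteria protection
# profile expressed as a data structure. Field names and example values
# are hypothetical, not part of any official schema.
@dataclass
class ProtectionProfile:
    descriptive_elements: str          # profile name and security problem description
    rationale: str                     # real-world problem, environment, threats
    functional_requirements: list = field(default_factory=list)
    development_assurance_requirements: list = field(default_factory=list)
    evaluation_assurance_requirements: str = "EAL4"   # type/intensity of evaluation

pp = ProtectionProfile(
    descriptive_elements="Hypothetical firewall protection profile",
    rationale="Screen untrusted network traffic at an enterprise boundary",
    functional_requirements=["identification and authentication", "audit"],
    development_assurance_requirements=["configuration management", "testing"],
)
print(pp.evaluation_assurance_requirements)  # EAL4
```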

The evaluation process is just one leg of determining the functionality and assurance
of a product. Once a product achieves a specific rating, it only applies to that particular
version and only to certain configurations of that product. So if a company buys a fire-
wall product because it has a high assurance rating, the company has no guarantee the
next version of that software will have that rating. The next version will need to go
through its own evaluation review. If this same company buys the firewall product and
installs it with configurations that are not recommended, the level of security the com-
pany was hoping to achieve can easily go down the drain. So, all of this rating stuff is a
formalized method of reviewing a system being evaluated in a lab. When the product is
implemented in a real environment, factors other than its rating need to be addressed
and assessed to ensure it is properly protecting resources and the environment.
NOTE When a product is assigned an assurance rating, this means it has the
potential of providing this level of protection. The customer has to properly
configure the product to actually obtain this level of security. The vendor
should provide the necessary configuration documentation, and it is up to
the customer to keep the product properly configured at all times.
ISO/IEC 15408 is the international standard that is used as the basis for the evaluation of security properties of products under the CC framework. It actually has three main parts:
• ISO/IEC 15408-1 Introduction and general evaluation model
• ISO/IEC 15408-2 Security functional components
• ISO/IEC 15408-3 Security assurance components
ISO/IEC 15408-1 lays out the general concepts and principles of the CC evaluation model. This part defines terms, establishes the core concept of the target of evaluation (TOE), and describes the evaluation context and the intended audience. It provides the key concepts for protection profiles (PPs), security requirements, and guidelines for the security target.
ISO/IEC 15408-2 defines the security functional requirements that will be assessed
during the evaluation. It contains a catalog of predefined security functional compo-
nents that maps to most security needs. These requirements are organized in a hierar-
chical structure of classes, families, and components. It also provides guidance on the
specification of customized security requirements if no predefined security functional
component exists.
ISO/IEC 15408-3 defines the assurance requirements, which are also organized in a
hierarchy of classes, families, and components. This part outlines the evaluation assur-
ance levels, which is a scale for measuring assurance of TOEs, and it provides the criteria
for evaluation of protection profiles and security targets.
So product vendors follow these standards when building products that they will
put through the CC evaluation process and the product evaluators follow these stan-
dards when carrying out the evaluation processes.
•Protection profile Description of a needed security solution.
•Target of evaluation Product proposed to provide a needed security
solution.
•Security target Vendor’s written explanation of the security functionality
and assurance mechanisms that meet the needed security solution—in
other words, “This is what our product does and how it does it.”
•Security functional requirements Individual security functions which
must be provided by a product.
•Security assurance requirements Measures taken during
development and evaluation of the product to assure compliance with
the claimed security functionality.
•Packages—EALs Functional and assurance requirements are bundled
into packages for reuse. This component describes what must be met to
achieve specific EAL ratings.

CISSP All-in-One Exam Guide
406
Certification vs. Accreditation
We have gone through the different types of evaluation criteria that a system can be ap-
praised against to receive a specific rating. This is a very formalized process, following
which the evaluated system or product will be placed on an evaluated products list (EPL) indicating what rating it achieved. Consumers can check this listing and compare the different products and systems to see how they rank against each other in the protection they provide. However,
once a consumer buys this product and sets it up in their environment, security is not
guaranteed. Security is made up of system administration, physical security, installa-
tion, configuration mechanisms within the environment, and continuous monitoring.
To fairly say a system is secure, all of these items must be taken into account. The rating
is just one piece in the puzzle of security.
Certification
How did you certify this product?
Response: It came in a very pretty box. Let’s keep it.
Certification is the comprehensive technical evaluation of the security components
and their compliance for the purpose of accreditation. A certification process may use
safeguard evaluation, risk analysis, verification, testing, and auditing techniques to as-
sess the appropriateness of a specific system. For example, suppose Dan is the security
officer for a company that just purchased new systems to be used to process its confiden-
tial data. He wants to know if these systems are appropriate for these tasks and if they are
going to provide the necessary level of protection. He also wants to make sure they are
compatible with his current environment, do not reduce productivity, and do not open
doors to new threats—basically, he wants to know if these are the right products for his
company. He could pay a company that specializes in these matters to perform the necessary procedures to certify the systems, or the certification can be carried out internally. The evaluation
team will perform tests on the software configurations, hardware, firmware, design, im-
plementation, system procedures, and physical and communication controls.
The goal of a certification process is to ensure that a system, product, or network is
right for the customer’s purposes. Customers will rely upon a product for slightly differ-
ent reasons, and environments will have various threat levels. So a particular product is
not necessarily the best fit for every single customer out there. (Of course, vendors will try
to convince you otherwise.) The product has to provide the right functionality and secu-
rity for the individual customer, which is the whole purpose of a certification process.
The certification process and corresponding documentation will indicate the good,
the bad, and the ugly about the product and how it works within the given environ-
ment. Dan will take these results and present them to his management for the accredi-
tation process.
Accreditation
Accreditation is the formal acceptance of the adequacy of a system’s overall security and
functionality by management. The certification information is presented to manage-
ment, or the responsible body, and it is up to management to ask questions, review the
reports and findings, and decide whether to accept the product and whether any corrective
action needs to take place. Once satisfied with the system’s overall security as presented,
management makes a formal accreditation statement. By doing this, management is
stating it understands the level of protection the system will provide in its current envi-
ronment and understands the security risks associated with installing and maintaining
this system.
NOTE Certification is a technical review that assesses the security
mechanisms and evaluates their effectiveness. Accreditation is management’s
official acceptance of the information in the certification process findings.
Because software, systems, and environments continually change and evolve, the
certification and accreditation should also continue to take place. Any major addition
of software, changes to the system, or modification of the environment should initiate
a new certification and accreditation cycle.
No More Pencil Whipping
Many organizations are taking the accreditation process more seriously than they
did in the past. Unfortunately, sometimes when a certification process is com-
pleted and the documentation is sent to management for review and approval,
management members just blindly sign the necessary documentation without
really understanding what they are signing. Accreditation means management is
accepting the risk that is associated with allowing this new product to be intro-
duced into the organization’s environment. When large security compromises
take place, the buck stops at the individual who signed off on the offending item.
So as these management members are being held more accountable for what
they sign off on, and as more regulations make executives personally responsible
for security, the pencil whipping of accreditation papers is decreasing.
Certification and accreditation (C&A) really came into focus within the Unit-
ed States when the Federal Information Security Management Act of 2002 (FIS-
MA) was passed as federal law. The act requires each federal agency to develop an
agency-wide program to ensure the security of their information and information
systems. It requires an annual review of the agency’s security program and the
results are reported to the Office of Management and Budget (OMB). OMB then
sends this information to the U.S. Congress to illustrate the individual agencies’
compliance levels.
C&A is a core component of FISMA compliance, but the manual process of reviewing each and every system is laborious, time-consuming, and error-prone.
FISMA requirements are now moving to continuous monitoring, which means
that systems have to be continuously scanned and monitored instead of having
one C&A process carried out per system every couple of years.
Open vs. Closed Systems
Computer systems can be developed to integrate easily with other systems and products
(open systems) or can be developed to be more proprietary in nature and work with
only a subset of other systems and products (closed systems). The following sections
describe the difference between these approaches.
Open Systems
I want to be able to work and play well with others.
Response: But no one wants to play with you.
Systems described as open are built upon standards, protocols, and interfaces that
have published specifications. This type of architecture provides interoperability be-
tween products created by different vendors. This interoperability is provided by all the
vendors involved who follow specific standards and provide interfaces that enable each
system to easily communicate with other systems and allow add-ons to hook into the
system easily.
A majority of the systems in use today are open systems. The reason an administrator
can have Windows XP, Windows 2008, Macintosh, and Unix computers communicating
easily on the same network is because these platforms are open. If a software vendor
creates a closed system, it is restricting its potential sales to proprietary environments.
NOTE In Chapter 10, we will look at the standards that support
interoperability, including CORBA, DCOM, J2EE, and more.
Closed Systems
I only want to play with you and him.
Response: Just play with him.
Systems referred to as closed use an architecture that does not follow industry stan-
dards. Interoperability and standard interfaces are not employed to enable easy com-
munication between different types of systems and add-on features. Closed systems are
proprietary, meaning the system can only communicate with like systems.
A closed architecture can potentially provide more security to the system because it may operate in a more secluded environment than open systems do. Because a closed system is proprietary, there are not as many predefined tools to thwart its security mechanisms and not as many people who understand its design, language, and security weaknesses well enough to exploit them. But just relying upon something being proprietary as its security control is practicing “security through obscurity.” Attackers can
find flaws in proprietary systems or open systems, so each type should be built securely
and maintained securely.
A majority of the systems today are built with open architecture to enable them to
work with other types of systems, easily share information, and take advantage of the
functionality that third-party add-ons bring.

A Few Threats to Review
Now that we have talked about how everything is supposed to work, let’s take a quick
look at some of the things that can go wrong when designing a system.
Software almost always has bugs and vulnerabilities. The rich functionality de-
manded by users brings about deep complexity, which usually opens the doors to prob-
lems in the computer world. Also, vulnerabilities are always around because attackers
continually find ways of using system operations and functionality in a negative and
destructive way. Just like there will always be cops and robbers, there will always be at-
tackers and security professionals. It is a game of trying to outwit each other and seeing
who will put the necessary effort into winning the game.
NOTE Carnegie Mellon University estimates there are 5 to 15 bugs in every
1,000 lines of code. Windows 2008 has 40–60 million lines of code.
Maintenance Hooks
In the programming world, maintenance hooks are a type of back door. They are instruc-
tions within software that only the developer knows about and can invoke, and which
give the developer easy access to the code. They allow the developer to view and edit the
code without having to go through regular access controls. During the development
phase of the software, these can be very useful, but if they are not removed before the
software goes into production, they can cause major security issues.
An application that has a maintenance hook enables the developer to execute com-
mands by using a specific sequence of keystrokes. Once this is done successfully, the
developer can be inside the application looking directly at the code or configuration
files. She might do this to watch problem areas within the code, check variable popula-
tion, export more code into the program, or fix problems she sees taking place. Al-
though this sounds nice and healthy, if an attacker finds out about this maintenance
hook, he can take more sinister actions. So all maintenance hooks need to be removed
from software before it goes into production.
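A hypothetical sketch of such a hook may make the risk concrete. All names here are invented for illustration (no real product is implied): a special input sequence skips the normal access-control path entirely.

```python
# A maintenance hook sketch: a hidden "magic" input sequence, known only
# to the developer, that bypasses the regular authentication check.
MAGIC_SEQUENCE = "##debug##"  # hypothetical developer-only keystroke sequence

def handle_input(user_input, authenticated=False):
    # The hook: this branch runs before any access control is applied.
    if user_input == MAGIC_SEQUENCE:
        return enter_debug_console()  # full access, no auth check at all
    if not authenticated:
        return "access denied"
    return process_normally(user_input)

def enter_debug_console():
    # Direct view of code and configuration, as described above.
    return "debug console: direct access to code and configuration"

def process_normally(user_input):
    return f"processed: {user_input}"
```

Any attacker who learns the sequence gets exactly the same access as the developer, which is why such hooks must be stripped before the software ships.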
NOTE Many would think that since security is more on the minds of people today, maintenance hooks would be a thing of the past. This is not true. Developers still use maintenance hooks, because of a lack of understanding of or concern about security issues, and many maintenance hooks still reside in older software that organizations are using.

Countermeasures
Because maintenance hooks are usually inserted by programmers, they are the ones
who usually have to take them out before the programs go into production. Code re-
views and unit and quality assurance testing should always be on the lookout for back
doors in case the programmer overlooked extracting them. Because maintenance hooks
are within the code of an application or system, there is not much a user can do to pre-
vent their presence, but when a vendor finds out a back door exists in its product, it
usually develops and releases a patch to reduce this vulnerability. Because most vendors
sell their software without including the associated source code, it may be very difficult
for companies who have purchased software to identify back doors. The following lists
some preventive measures against back doors:
• Use a host intrusion detection system to watch for any attackers using back
doors into the system.
• Use file system encryption to protect sensitive information.
• Implement auditing to detect any type of back door use.
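The auditing countermeasure in the list above can be sketched as a wrapper that records every invocation of a sensitive entry point, so calls arriving through an unexpected path stand out in the audit trail. The function names are invented for illustration.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(func):
    # Record every invocation of a sensitive entry point so that calls
    # made through a back door (rather than the normal workflow) are
    # visible to whoever reviews the audit log.
    @wraps(func)
    def wrapper(*args, **kwargs):
        audit_log.info("call to %s args=%r kwargs=%r",
                       func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

@audited
def open_config_editor(user):
    # Hypothetical privileged operation, used only to demonstrate the wrapper.
    return f"config editor opened for {user}"
```

Auditing does not prevent back-door use, but it makes the use detectable after the fact, which is the point of the third countermeasure above.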
Time-of-Check/Time-of-Use Attacks
Specific attacks can take advantage of the way a system processes requests and performs
tasks. A time-of-check/time-of-use (TOC/TOU) attack deals with the sequence of steps a
system uses to complete a task. This type of attack takes advantage of the dependency
on the timing of events that take place in a multitasking operating system.
As stated previously, operating systems and applications are, in reality, just lines and
lines of instructions. An operating system must carry out instruction 1, then instruction
2, then instruction 3, and so on. This is how it is written. If an attacker can get in be-
tween instructions 2 and 3 and manipulate something, she can control the result of
these activities.
An example of a TOC/TOU attack is if process 1 validates the authorization of a user
to open a noncritical text file and process 2 carries out the open command. If the at-
tacker can change out this noncritical text file with a password file while process 1 is
carrying out its task, she has just obtained access to this critical file. (It is a flaw within
the code that allows this type of compromise to take place.)
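A minimal sketch of this check-then-use gap, assuming POSIX-style file semantics. The attacker callback and file names are invented; a real attack would exploit the timing window between two separate OS calls rather than a convenient hook.

```python
import os
import tempfile

def read_if_allowed(path, during_window=None):
    # Time of check: process 1 validates access to the file at 'path'.
    if not os.access(path, os.R_OK):
        raise PermissionError(path)
    # --- the TOC/TOU window: the file can be swapped out right here ---
    if during_window:
        during_window()
    # Time of use: process 2 opens whatever 'path' refers to *now*.
    with open(path) as f:
        return f.read()

workdir = tempfile.mkdtemp()
noncritical = os.path.join(workdir, "notes.txt")
with open(noncritical, "w") as fh:
    fh.write("harmless text")

def attacker():
    # Simulates replacing the noncritical file with sensitive content
    # after the check has already passed.
    with open(noncritical, "w") as fh:
        fh.write("secret password data")

print(read_if_allowed(noncritical, attacker))  # prints "secret password data"
```

The check validated one file, but the use read another: exactly the flaw the paragraph above describes.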
NOTE This type of attack is also referred to as an asynchronous attack.
Asynchronous describes a process in which the timing of each step may
vary. The attacker gets in between these steps and modifies something. Race
conditions are also considered TOC/TOU attacks by some in the industry.
A race condition is when two different processes need to carry out their tasks on one
resource. The processes need to follow the correct sequence. Process 1 needs to carry out
its work before process 2 accesses the same resource and carries out its tasks. If process
2 goes before process 1, the outcome could be very different. If an attacker can manipu-
late the processes so process 2 does its task first, she can control the outcome of the
processing procedure. Let’s say process 1’s instructions are to add 3 to a value and pro-
cess 2’s instructions are to divide by 15. If process 2 carries out its tasks before process
1, the outcome would be different. So if an attacker can make process 2 do its work
before process 1, she can control the result.
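The add-3/divide-by-15 example can be written out directly. This is an illustrative sketch of the order dependence, not an exploit:

```python
def process_1(x):
    # Process 1's instruction: add 3 to the value.
    return x + 3

def process_2(x):
    # Process 2's instruction: divide the value by 15.
    return x / 15

value = 30
in_order = process_2(process_1(value))      # correct sequence: (30 + 3) / 15
out_of_order = process_1(process_2(value))  # attacker's sequence: 30 / 15 + 3
# The two results differ, so whoever controls the ordering controls the outcome.
```

Because the two orderings produce different results, an attacker who can force process 2 to run first controls the outcome of the computation.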
Looking at this issue from a security perspective, there are several types of race con-
dition attacks that are quite concerning. If a system splits up the authentication and
authorization steps, an attacker could be authorized before she is even authenticated.
For example, in the normal sequence, process 1 verifies the authentication before al-
lowing a user access to a resource, and process 2 authorizes the user to access the re-
source. If the attacker makes process 2 carry out its tasks before process 1, she can access
a resource without the system making sure she has been authenticated properly.
So although the terms “race condition” and “TOC/TOU attack” are sometimes used
interchangeably, in reality, they are two different things. A race condition is an attack in
which an attacker makes processes execute out of sequence to control the result. A
TOC/TOU attack is when an attacker jumps in between two tasks and modifies some-
thing to control the result.
Countermeasures
It would take a dedicated attacker with great precision to perform these types of attacks,
but it is possible and has been done. To protect against race condition attacks, it is best
to not split up critical tasks that can have their sequence altered. This means the system
should use atomic operations where only one system call is used to check authentica-
tion and then grant access in one task. This would not give the processor the opportu-
nity to switch to another process in between two tasks. Unfortunately, using these types
of atomic operations is not always possible.
To avoid TOC/TOU attacks, it is best if the operating system can apply software
locks to the items it will use when it is carrying out its “checking” tasks. So if a user re-
quests access to a file, while the system is validating this user’s authorization, it should
put a software lock on the file being requested. This ensures the file cannot be deleted
and replaced with another file. Applying locks can be carried out easily on files, but it
is more challenging to apply locks to database components and table entries to provide
this type of protection.
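The "check and use in one step" idea can be sketched in code: instead of checking a path and opening it later, open the file once and make every decision on the already-opened descriptor, leaving no window in which the path can be re-pointed. This is a sketch assuming POSIX-style semantics; the helper name is invented for illustration.

```python
import os
import stat

def open_checked(path):
    # Open first, so the object we validate is the object we use.
    # O_NOFOLLOW (where the platform provides it) refuses to follow a
    # symlink that an attacker may have swapped in at 'path'.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    try:
        info = os.fstat(fd)                  # inspects the opened file itself,
        if not stat.S_ISREG(info.st_mode):   # not whatever 'path' names now
            raise PermissionError(f"{path} is not a regular file")
        return os.fdopen(fd)                 # file object owns the descriptor
    except Exception:
        os.close(fd)
        raise
```

Because `fstat` examines the descriptor rather than the path, a swap of the path between "check" and "use" has no effect: both steps see the same underlying file.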
Key Terms
• Assurance evaluation criteria “Checklist” and process of examining the security-relevant parts of a system (TCB, reference monitor, security kernel) and assigning the system an assurance rating.
• Trusted Computer System Evaluation Criteria (TCSEC) (aka Orange Book) U.S. DoD standard used to assess the effectiveness of the security controls built into a system. Replaced by the Common Criteria.
• Information Technology Security Evaluation Criteria (ITSEC) European standard used to assess the effectiveness of the security controls built into a system.
• Common Criteria International standard used to assess the effectiveness of the security controls built into a system from functional and assurance perspectives.

• Certification Technical evaluation of the security components and their compliance to a predefined security policy for the purpose of accreditation.
• Accreditation Formal acceptance of the adequacy of a system’s overall security by management.
• Open system Designs are built upon accepted standards to allow for interoperability.
• Closed system Designs are built upon proprietary procedures, which inhibit interoperability capabilities.
• Maintenance hooks Code within software that provides a back door entry capability.
• Time-of-check/time-of-use (TOC/TOU) attack Attacker manipulates the “condition check” step and the “use” step within software to allow for unauthorized activity.
• Race condition Two or more processes attempt to carry out their activity on one resource at the same time. Unexpected behavior can result if the sequence of execution does not take place in the proper order.

Summary
The architecture of a computer system is very important and comprises many topics. The system has to ensure that memory is properly segregated and protected, ensure that only authorized subjects access objects, ensure that untrusted processes cannot perform activities that would put other processes at risk, control the flow of information, and define a domain of resources for each subject. It also must ensure that if the computer experiences any type of disruption, it will not result in an insecure state. Many of these issues are dealt with in the system’s security policy, and the security model is built to support the requirements of this policy.

Once the security policy, model, and architecture have been developed, the computer operating system, or product, must be built, tested, evaluated, and rated. An evaluation is done by comparing the system to predefined criteria. The rating assigned to the system depends upon how it fulfills the requirements of the criteria. Customers use this rating to understand what they are really buying and how much they can trust this new product. Once the customer buys the product, it must be tested within their own environment to make sure it meets their company’s needs, which takes place through certification and accreditation processes.

Quick Tips
• System architecture is a formal tool used to design computer systems in a
manner that ensures each of the stakeholders’ concerns is addressed.
• A system’s architecture is made up of different views, which are representations
of system components and their relationships. Each view addresses a different
aspect of the system (functionality, performance, interoperability, security).
• ISO/IEC 42010:2007 is an international standard that outlines how system
architecture frameworks and their description languages are to be used.
• A CPU contains a control unit, which controls the timing of the execution of
instructions and data, and an ALU, which performs mathematical functions
and logical operations.
• Memory managers use various memory protection mechanisms, as in
base (beginning) and limit (ending) addressing, address space layout
randomization, and data execution prevention.
• Operating systems use absolute (hardware addresses), logical (indexed
addresses), and relative address (indexed addresses, including offsets)
memory schemes.
• Buffer overflow vulnerabilities are best addressed by implementing bounds
checking.
• A garbage collector is a software tool that releases unused memory segments
to help prevent “memory starvation.”
• Different processor families work within different microarchitectures to
execute specific instruction sets.
• Early operating systems were considered “monolithic” because all of the
code worked within one layer and ran in kernel mode, and components
communicated in an ad hoc manner.
• Operating systems can work within the following architectures: monolithic
kernel, microkernel, or hybrid kernel.
• Mode transition is when a CPU has to switch from executing one process’s
instructions running in user mode to another process’s instructions running
in kernel mode.
• CPUs provide a ringed architecture, which operating systems run within.
The more trusted processes run in the lower-numbered rings and have access
to all or most of the system resources. Nontrusted processes run in higher-
numbered rings and have access to a smaller amount of resources.
• Operating system processes are executed in privileged or supervisor mode, and
applications are executed in user mode, also known as “problem state.”
• Virtual storage combines RAM and secondary storage so the system seems to
have a larger bank of memory.
• The more complex a security mechanism is, the less assurance it can usually provide.
• The trusted computing base (TCB) is a collection of system components that
enforce the security policy directly and protect the system. These components
are within the security perimeter.
• Components that make up the TCB are hardware, software, and firmware that
provide some type of security protection.
• A security perimeter is an imaginary boundary that has trusted components
within it (those that make up the TCB) and untrusted components outside it.
• The reference monitor concept is an abstract machine that ensures all subjects
have the necessary access rights before accessing objects. Therefore, it mediates
all access to objects by subjects.
• The security kernel is the mechanism that actually enforces the rules of the
reference monitor concept.
• The security kernel must isolate processes carrying out the reference monitor
concept, must be tamperproof, must be invoked for each access attempt, and
must be small enough to be properly tested.
• Processes need to be isolated, which can be done through segmented memory
addressing, encapsulation of objects, time multiplexing of shared resources,
naming distinctions, and virtual mapping.
• The level of security a system provides depends upon how well it enforces its
security policy.
• A multilevel security system processes data at different classifications (security
levels), and users with different clearances (security levels) can use the system.
• Data hiding occurs when processes work at different layers and have layers of
access control between them. Processes need to know how to communicate
only with each other’s interfaces.
• A security model maps the abstract goals of a security policy to computer
system terms and concepts. It gives the security policy structure and provides
a framework for the system.
• A closed system is often proprietary to the manufacturer or vendor, whereas
the open system allows for more interoperability.
• The Bell-LaPadula model deals only with confidentiality, while the Biba and
Clark-Wilson models deal only with integrity.
• A state machine model deals with the different states a system can enter. If a system starts in a secure state and all state transitions take place securely, the system will never end up in an insecure state, even when it shuts down or fails.
• A lattice model provides an upper bound and a lower bound of authorized
access for subjects.
• An information flow security model does not permit data to flow to an object
in an insecure manner.
• The Bell-LaPadula model has a simple security rule, which means a subject
cannot read data from a higher level (no read up). The *-property rule means
a subject cannot write to an object at a lower level (no write down). The
strong star property rule dictates that a subject can read and write to objects
at its own security level.
• The Biba model does not let subjects write to objects at a higher integrity level
(no write up), and it does not let subjects read data at a lower integrity level (no
read down). This is done to protect the integrity of the data.
• The Bell-LaPadula model is used mainly in military and government-oriented
systems. The Biba and Clark-Wilson models are used in the commercial sector.
• The Clark-Wilson model dictates that subjects can only access objects through
applications. This model also illustrates how to provide functionality for
separation of duties and requires auditing tasks within software.
• If a system is working in a dedicated security mode, it only deals with one
level of data classification, and all users must have this level of clearance to
be able to use the system.
• Trust means that a system uses all of its protection mechanisms properly
to process sensitive data for many types of users. Assurance is the level of
confidence you have in this trust and that the protection mechanisms will behave properly and predictably in all circumstances.
• The Orange Book, also called Trusted Computer System Evaluation Criteria
(TCSEC), was developed to evaluate systems built to be used mainly by the
government. Its use was expanded to evaluate other types of products.
• The Orange Book deals mainly with stand-alone systems, so a range of books
were written to cover many other topics in security. These books are called the
Rainbow Series.
• ITSEC evaluates the assurance and functionality of a system’s protection
mechanisms separately, whereas TCSEC combines the two into one rating.
• The Common Criteria was developed to provide globally recognized
evaluation criteria and is in use today. It combines sections of TCSEC,
ITSEC, CTCPEC, and the Federal Criteria.
• The Common Criteria uses protection profiles, security targets, and ratings
(EAL1 to EAL7) to provide assurance ratings for targets of evaluations (TOE).
• Certification is the technical evaluation of a system or product and its security
components. Accreditation is management’s formal approval and acceptance
of the security provided by a system.
• ISO/IEC 15408 is the international standard that is used as the basis for the
evaluation of security properties of products under the CC framework.
• A covert channel is an unintended communication path that transfers data in
a way that violates the security policy. There are two types: timing and storage
covert channels.
• A covert timing channel enables a process to relay information to another
process by modulating its use of system resources.
• A covert storage channel enables a process to write data to a storage medium
so another process can read it.
• A maintenance hook is developed to let a programmer into the application
quickly for maintenance. This should be removed before the application goes
into production, or it can cause a serious security risk.
• Process isolation ensures that multiple processes can run concurrently and
the processes will not interfere with each other or affect each other’s memory
segments.
• TOC/TOU stands for time-of-check/time-of-use. This is a class of
asynchronous attacks.
• The Biba model addresses the first goal of integrity, which is to prevent
unauthorized users from making modifications.
• The Clark-Wilson model addresses all three integrity goals: prevent
unauthorized users from making modifications, prevent authorized users
from making improper modifications, and maintain internal and external
consistency.
Questions
Please remember that these questions are formatted and asked in a certain way for a
reason. Keep in mind that the CISSP exam is asking questions at a conceptual level.
Questions may not always have the perfect answer, and the candidate is advised against
always looking for the perfect answer. Instead, the candidate should look for the best
answer in the list.
1. What is the final step in authorizing a system for use in an environment?
A. Certification
B. Security evaluation and rating
C. Accreditation
D. Verification
2. What feature enables code to be executed without the usual security checks?
A. Temporal isolation
B. Maintenance hook
C. Race conditions
D. Process multiplexing
3. If a component fails, a system should be designed to do which of the
following?
A. Change to a protected execution domain
B. Change to a problem state
C. Change to a more secure state
D. Release all data held in volatile memory
4. Which is the first level of the Orange Book that requires classification labeling
of data?
A. B3
B. B2
C. B1
D. C2
5. The Information Technology Security Evaluation Criteria was developed for
which of the following?
A. International use
B. U.S. use
C. European use
D. Global use
6. A guard is commonly used with a classified system. What is the main purpose
of implementing and using a guard?
A. To ensure that less trusted systems only receive acknowledgments and not
messages
B. To ensure proper information flow
C. To ensure that less trusted and more trusted systems have open
architectures and interoperability
D. To allow multilevel and dedicated mode systems to communicate
7. The trusted computing base (TCB) contains which of the following?
A. All trusted processes and software components
B. All trusted security policies and implementation mechanisms
C. All trusted software and design mechanisms
D. All trusted software and hardware components

CISSP All-in-One Exam Guide
418
8. What is the imaginary boundary that separates components that maintain
security from components that are not security related?
A. Reference monitor
B. Security kernel
C. Security perimeter
D. Security policy
9. Which model deals only with confidentiality?
A. Bell-LaPadula
B. Clark-Wilson
C. Biba
D. Reference monitor
10. What is the best description of a security kernel from a security point of view?
A. Reference monitor
B. Resource manager
C. Memory mapper
D. Security perimeter
11. In secure computing systems, why is there a logical form of separation used
between processes?
A. Processes are contained within their own security domains so each does
not make unauthorized accesses to other processes or their resources.
B. Processes are contained within their own security perimeter so they can
only access protection levels above them.
C. Processes are contained within their own security perimeter so they can
only access protection levels equal to them.
D. The separation is hardware and not logical in nature.
12. What type of attack is taking place when a higher-level subject writes data to a
storage area and a lower-level subject reads it?
A. TOC/TOU
B. Covert storage attack
C. Covert timing attack
D. Buffer overflow
13. What type of rating is used within the Common Criteria framework?
A. PP
B. EPL
C. EAL
D. A–D
14. Which best describes the *-integrity axiom?
A. No write up in the Biba model
B. No read down in the Biba model
C. No write down in the Bell-LaPadula model
D. No read up in the Bell-LaPadula model
15. Which best describes the simple security rule?
A. No write up in the Biba model
B. No read down in the Biba model
C. No write down in the Bell-LaPadula model
D. No read up in the Bell-LaPadula model
16. Which of the following was the first mathematical model of a multilevel
security policy used to define the concepts of a security state and mode of
access, and to outline rules of access?
A. Biba
B. Bell-LaPadula
C. Clark-Wilson
D. State machine
17. Which of the following is a true statement pertaining to memory addressing?
A. The CPU uses absolute addresses. Applications use logical addresses.
Relative addresses are based on a known address and an offset value.
B. The CPU uses logical addresses. Applications use absolute addresses.
Relative addresses are based on a known address and an offset value.
C. The CPU uses absolute addresses. Applications use relative addresses.
Logical addresses are based on a known address and an offset value.
D. The CPU uses absolute addresses. Applications use logical addresses.
Absolute addresses are based on a known address and an offset value.
18. Pete is a new security manager at a financial institution that develops its
own internal software for specific proprietary functionality. The financial
institution has several locations distributed throughout the world and has
bought several individual companies over the last ten years, each with its own
heterogeneous environment. Since each purchased company had its own
unique environment, it has been difficult to develop and deploy internally
developed software in an effective manner that meets all the necessary
business unit requirements. Which of the following best describes a standard
that Pete should ensure the software development team starts to implement
so that various business needs can be met?
A. ISO/IEC 42010:2007
B. Common Criteria
C. ISO/IEC 43010:2007
D. ISO/IEC 15408
19. Which of the following is an incorrect description pertaining to the common
components that make up computer systems?
i. General registers are commonly used to hold temporary processing data,
while special registers are used to hold process characteristic data as in
condition bits.
ii. A processor sends a memory address and a “read” request down an address
bus and a memory address and “write” request down an I/O bus.
iii. Process-to-process communication commonly takes place through memory
stacks, which are made up of individually addressed buffer locations.
iv. A CPU uses a stack return pointer to keep track of the next instruction sets it
needs to process.
A. i
B. i, ii
C. ii, iii
D. ii, iv
20. Mark is a security administrator who is responsible for purchasing new
computer systems for a co-location facility his company is starting up.
The company has several time-sensitive applications that require extensive
processing capabilities. The co-location facility is not as large as the main
facility, so it can only fit a smaller number of computers, which still must
carry the same processing load as the systems in the main building. Which of
the following best describes the most important aspects of the products Mark
needs to purchase for these purposes?
A. Systems must provide symmetric multiprocessing capabilities and
virtualized environments.
B. Systems must provide asymmetric multiprocessing capabilities and
virtualized environments.
C. Systems must provide multiprogramming multiprocessing capabilities and
virtualized environments.
D. Systems must provide multiprogramming multiprocessing capabilities and
symmetric multiprocessing environments.
Use the following scenario to answer Questions 21–23. Tom is a new security manager who
is responsible for reviewing the current software that the company has developed inter-
nally. He finds that some of the software is outdated, which causes performance and
functionality issues. During his testing procedures he sees that when one program stops
functioning, it negatively affects other programs on the same system. He also finds out
that as systems run over a period of a month, they start to perform more slowly, but by
rebooting the systems this issue goes away. He also notices that the identification, au-
thentication, and authorization steps built into one software package are carried out by
individual and distinct software procedures.
21. Which of the following best describes a characteristic of the software that may
be causing issues?
A. Cooperative multitasking
B. Preemptive multitasking
C. Maskable interrupt use
D. Nonmaskable interrupt use
22. Which of the following best describes why rebooting helps with system
performance in the situation described in this scenario?
A. Software is not using cache memory properly.
B. Software is carrying out too many mode transitions.
C. Software is working in ring 0.
D. Software is not releasing unused memory.
23. What security issue is Tom most likely concerned with in this situation?
A. Time of check/time of use
B. Maintenance hooks
C. Input validation errors
D. Unauthorized loaded kernel modules
Use the following scenario to answer Questions 24–27. Sarah’s team must build a new oper-
ating system for her company’s internal functionality requirements. The system must be
able to process data at different classification levels and allow users of different clearances to be able to interact with only the data that maps to their profile. She is told that
the system must provide data hiding, and her boss suggests that her team implement a
hybrid microkernel design. Sarah knows that the resulting system must be able to achieve
a rating of EAL 6 once it goes through the Common Criteria evaluation process.
24. Which of the following is a required characteristic of the system Sarah’s team
must build?
A. Multilevel security
B. Dedicated mode capability
C. Simple security rule
D. Clark-Wilson constructs
25. Which of the following reasons best describes her boss’s suggestion on the
kernel design of the new system?
A. Hardware layer abstraction for portability capability
B. Layered functionality structure
C. Reduced mode transition requirements
D. Central location of all critical operating system processes
26. Which of the following is a characteristic that this new system will need to
implement?
A. Multiprogramming
B. Simple integrity axiom
C. Mandatory access control
D. Formal verification
27. Which of the following best describes one of the system requirements
outlined in this scenario and how it should be implemented?
A. Data hiding should be implemented through memory deallocation.
B. Data hiding should be implemented through properly developed
interfaces.
C. Data hiding should be implemented through a monolithic architecture.
D. Data hiding should be implemented through multiprogramming.
Use the following scenario to answer Questions 28–30. Steve has found out that the soft-
ware product that his team submitted for evaluation did not achieve the actual rating
they were hoping for. He was confused about this issue since the software passed the
necessary certification and accreditation processes before being deployed. Steve was
told that the system allows for unauthorized device drivers to be loaded and that there
was a key sequence that could be used to bypass the software access control protection
mechanisms. Some feedback Steve received from the product testers is that it should
implement address space layout randomization and data execution protection.
28. Which of the following best describes Steve’s confusion?
A. Certification must happen first before the evaluation process can begin.
B. Accreditation is the acceptance from management, which must take place
before the evaluation process.
C. Evaluation, certification, and accreditation are carried out by different
groups with different purposes.
D. Evaluation requirements include certification and accreditation
components.
29. Which of the following best describes an item the software development team
needs to address to ensure that drivers cannot be loaded in an unauthorized
manner?
A. Improved security kernel processes
B. Improved security perimeter processes
C. Improved application programming interface processes
D. Improved garbage collection processes
30. Which of the following best describes some of the issues that the evaluation
testers most likely ran into while testing the submitted product?
A. Non-protected ROM sections
B. Vulnerabilities that allowed malicious code to execute in protected
memory sections
C. Lack of a predefined and implemented trusted computing base
D. Lack of a predefined and implemented security kernel
31. John has been told that one of the applications installed on a web server
within the DMZ accepts any length of information that a customer using
a web browser inputs into the form the web server provides to collect new
customer data. Which of the following describes an issue that John should be
aware of pertaining to this type of issue?
A. Application is written in the C programming language.
B. Application is not carrying out enforcement of the trusted computing base.
C. Application is running in ring 3 of a ring-based architecture.
D. Application is not interacting with the memory manager properly.
Answers
1. C. Certification is a technical review of a product, and accreditation is
management’s formal approval of the findings of the certification process.
This question asked you which step was the final step in authorizing a system
before it is used in an environment, and that is what accreditation is all about.
2. B. Maintenance hooks get around the system’s or application’s security and
access control checks by allowing whoever knows the key sequence to
access the application and most likely its code. Maintenance hooks should be
removed from any code before it gets into production.
3. C. The state machine model dictates that a system should start up securely,
carry out secure state transitions, and even fail securely. This means that if the
system encounters something it deems unsafe, it should change to a more
secure state for self-preservation and protection.
4. C. These assurance ratings are from the Orange Book. B levels on up require
security labels be used, but the question asks which is the first level to require
this. B1 comes before B2 and B3, so it is the correct answer.
5. C. In ITSEC, the I does not stand for international; it stands for information.
This set of criteria was developed to be used by European countries to evaluate
and rate their products.
6. B. The guard accepts requests from the less trusted entity, reviews the request
to make sure it is allowed, and then submits the request on behalf of the less
trusted system. The goal is to ensure that information does not flow from a
high security level to a low security level in an unauthorized manner.
7. D. The TCB contains and controls all protection mechanisms within the
system, whether they are software, hardware, or firmware.
8. C. The security perimeter is a boundary between items that are within the TCB
and items that are outside the TCB. It is just a mark of delineation between
these two groups of items.
9. A. The Bell-LaPadula model was developed for the U.S. government with
the main goal of keeping sensitive data unreachable to those who were not
authorized to access and view it. This was the first mathematical model of a
multilevel security policy used to define the concepts of a security state and
mode of access and to outline rules of access. The Biba and Clark-Wilson
models do not deal with confidentiality, but with integrity instead.
10. A. The security kernel is a portion of the operating system’s kernel and
enforces the rules outlined in the reference monitor. It is the enforcer of the
rules and is invoked each time a subject makes a request to access an object.
11. A. Processes are assigned their own variables, system resources, and memory
segments, which make up their domain. This is done so they do not corrupt
each other’s data or processing activities.
12. B. A covert channel is being used when something is using a resource for
communication purposes, and that is not the reason this resource was created.
A process can write to some type of shared media or storage place that
another process will be able to access. The first process writes to this media,
and the second process reads it. This action goes against the security policy of
the system.
13. C. The Common Criteria uses a different assurance rating system than the
previously used criteria. It has packages of specifications that must be met for a
product to obtain the corresponding rating. These ratings and packages are called
Evaluation Assurance Levels (EALs). Once a product achieves any type of rating,
customers can view this information on an Evaluated Products List (EPL).
14. A. The *-integrity axiom (or star integrity axiom) indicates that a subject of a
lower integrity level cannot write to an object of a higher integrity level. This
rule is put into place to protect the integrity of the data that resides at the
higher level.
15. D. The simple security rule is implemented to ensure that any subject at a
lower security level cannot view data that resides at a higher level. The reason
this type of rule is put into place is to protect the confidentiality of the data
that resides at the higher level. This rule is used in the Bell-LaPadula model.
Remember that if you see “simple” in a rule, it pertains to reading, while * or
“star” pertains to writing.
16. B. This is a formal definition of the Bell-LaPadula model, which was
created and implemented to protect confidential government and military
information.
17. A. The physical memory addresses that the CPU uses are called absolute
addresses. The indexed memory addresses that software uses are referred to as
logical addresses. A relative address is based on a known address with an offset
value applied to it.
18. A. ISO/IEC 42010:2007 is an international standard that outlines
specifications for system architecture frameworks and architecture languages.
It allows for systems to be developed in a manner that addresses all of the
stakeholder’s concerns.
19. D. A processor sends a memory address and a “read” request down an
address bus. The system reads data from that memory address and puts the
requested data on the data bus. A CPU uses a program counter to keep track
of the memory addresses containing the instruction sets it needs to process
in sequence. A stack pointer is a component used within memory stack
communication processes. An I/O bus is used by a peripheral device.
20. B. When systems provide asymmetric multiprocessing, this means multiple
CPUs can be used for processing. Asymmetric indicates the capability of
assigning specific applications to one CPU so that they do not have to share
computing capabilities with other competing processes, which increases
performance. Since a smaller number of computers can fit in the new
location, virtualization should be deployed to allow for several different
systems to share the same physical computer platforms.
21. A. Cooperative multitasking means that a developer of an application has to
properly code his software to release system resources when the application
is finished using them, or the other software running on the system could be
negatively affected. In this type of situation an application could be poorly
coded and not release system resources, which would negatively affect other
software running on the system. In a preemptive multitasking environment,
the operating system would have more control of system resource allocation
and provide more protection for these types of situations.
22. D. When software is poorly written, it could be allocating memory and not
properly releasing it. This can affect the performance of the whole system,
since all software processes have to share a limited supply of memory. When
a system is rebooted, the memory allocation constructs are reset.
23. A. A time-of-check/time-of-use attack takes place when an attacker is able to
change an important parameter while the software is carrying out a sequence
of steps. If an attacker could manipulate the authentication steps, she could
potentially gain access to resources in an unauthorized manner before being
properly identified and authenticated.
24. A. A multilevel security system allows for data at different classification levels
to be processed and allows users with different clearance levels to interact
with the system securely.
25. C. A hybrid microkernel architecture means that all kernel processes work
within kernel mode, which reduces the number of mode transitions. The
reduction of mode transitions reduces performance issues because the CPU
does not have to change from user mode to kernel mode as many times
during its operation.
26. C. Since the new system must achieve a rating of EAL 6, it must implement
mandatory access control capabilities. This is an access control model that
allows users with different clearances to be able to interact with a system that
processes data of different classification levels in a secure manner. The rating
of EAL 6 requires semiformally verified design and testing, whereas EAL 7
requires formally verified design and testing.
27. B. Data hiding means that certain functionality and/or data is “hidden,” or
not available to specific processes. For processes to be able to interact with
other processes and system services, they need to be developed with the
necessary interfaces that restrict communication flows between processes.
Data hiding is a protection mechanism that segregates trusted and untrusted
processes from each other through the use of strict software interface design.
28. C. Evaluation, certification, and accreditation are carried out by different
groups with different purposes. Evaluations are carried out by qualified
third parties who use specific evaluation criteria (Orange Book, ITSEC,
Common Criteria) to assign an assurance rating to a tested product. A
certification process is a technical review commonly carried out internally to
an organization, and accreditation is management’s formal acceptance that is
carried out after the certification process. A system can be certified internally
by a company and not pass an evaluation testing process because they are
completely different things.
29. A. If device drivers can be loaded improperly, then either the access control
rules outlined within the reference monitor need to be improved upon
or the current rules need to be better enforced through the security kernel
processes. Only authorized subjects should be able to install sensitive software
components that run within ring 0 of a system.
30. B. If testers suggested to the team that address space layout randomization
and data execution protection should be integrated, this is most likely because
the system allows for malicious code to easily execute in memory sections
that would be dangerous to the system. These are both memory protection
approaches.
31. A. The C language is susceptible to buffer overflow attacks because it allows
for direct pointer manipulations to take place. Specific commands can provide
access to low-level memory addresses without carrying out bounds checking.

CHAPTER 5
Physical and Environmental Security
This chapter presents the following:
• Administrative, technical, and physical controls
• Facility location, construction, and management
• Physical security risks, threats, and countermeasures
• Electric power issues and countermeasures
• Fire prevention, detection, and suppression
• Intrusion detection systems
Security is very important to organizations and their infrastructures, and physical security is no exception. Hacking is not the only way information and its related systems
can be compromised. Physical security encompasses a different set of threats, vulnera-
bilities, and risks than the other types of security we’ve addressed so far. Physical secu-
rity mechanisms include site design and layout, environmental components, emergen-
cy response readiness, training, access control, intrusion detection, and power and fire
protection. Physical security mechanisms protect people, data, equipment, systems,
facilities, and a long list of company assets.
Introduction to Physical Security
The physical security of computers and their resources in the 1960s and 1970s was not
as challenging as it is today because computers were mostly mainframes that were
locked away in server rooms, and only a handful of people knew what to do with them
anyway. Today, a computer sits on almost every desk in every company, and access to
devices and resources is spread throughout the environment. Companies have several
wiring closets and server rooms, and remote and mobile users take computers and re-
sources out of the facility. Properly protecting these computer systems, networks, facili-
ties, and employees has become an overwhelming task to many companies.
Theft, fraud, sabotage, vandalism, and accidents are raising costs for many compa-
nies because environments are becoming more complex and dynamic. Security and
complexity are at the opposite ends of the spectrum. As environments and technology
become more complex, more vulnerabilities are introduced that allow for compro-
mises to take place. Most companies have had memory or processors stolen from work-
stations, while some have had computers and laptops taken. Even worse, many
companies have been victims of more dangerous crimes, such as robbery at gunpoint,
a shooting rampage by a disgruntled employee, anthrax, bombs, and terrorist activities.
Many companies may have implemented security guards, closed-circuit TV (CCTV) sur-
veillance, intrusion detection systems (IDSs), and requirements for employees to main-
tain a higher level of awareness of security risks. These are only some of the items that
fall within the physical security boundaries. If any of these does not provide the neces-
sary protection level, it could be the weak link that causes potentially dangerous secu-
rity breaches.
Most people in the information security field do not think as much about physical
security as they do about information and computer security and the associated hackers,
ports, viruses, and technology-oriented security countermeasures. But information se-
curity without proper physical security could be a waste of time.
Even people within the physical security market do not always have a holistic view
of physical security. There are so many components and variables to understand that
people have to specialize in specific fields, such as secure facility construction, risk
assessment and analysis, secure data center implementation, fire protection, IDS and CCTV
implementation, personnel emergency response and training, legal and regulatory as-
pects of physical security, and so on. Each has its own focus and skill set, but for an
organization to have a solid physical security program, all of these areas must be under-
stood and addressed.
Just as most software is built with functionality as the number-one goal, with secu-
rity somewhere farther down the priority list, many facilities and physical environ-
ments are built with functionality and aesthetics in mind, with not as much concern for
providing levels of protection. Many thefts and deaths could be prevented if all organi-
zations were to implement physical security in an organized, mature, and holistic man-
ner. Most people are not aware of many of the crimes that happen every day. Many
people also are not aware of all the civil lawsuits that stem from organizations not
practicing due diligence and due care pertaining to physical security. The following is a
short list of some examples of things companies are sued for pertaining to improper
physical security implementation and maintenance:
• An apartment complex does not respond to a report of a broken lock on a
sliding glass door, and subsequently a woman who lives in that apartment
is raped by an intruder.
• Bushes are growing too close to an ATM, allowing criminals to hide behind
them and attack individuals as they withdraw money from their accounts.
• A portion of an underground garage is unlit, which allows an attacker to sit
and wait for an employee who works late.
• A gas station’s outside restroom has a broken lock, which allows an attacker
to enter after a female customer and kill her.
• A convenience store hangs too many advertising signs and posters on the
exterior windows, prompting thieves to choose this store because the signs hide
any crimes taking place inside the store from people driving or walking by.
• Backup tapes containing sensitive information are lost during the process of
moving from an on-site to an off-site facility.
• A laptop containing Social Security numbers and individuals’ financial
information is stolen from an employee’s car.
• A malicious camera is installed at an ATM station, which allows a hacker to
view and capture people’s ATM PIN values.
• Bollards are not implemented in high foot-traffic areas outside of a retail
store, and a driver accidentally swerves off the road and injures some
pedestrians.
• A company builds an office building that does not follow fire codes. A fire
takes place and some people are trapped and cannot escape the fire.
Many examples like this take place every day. These crimes and issues might make
it to our local news outlets, but they may be overshadowed by larger news events or
simply be too numerous to report in national newspapers or on network news programs.
It is important for security professionals to evaluate security from the standpoint of a
potential criminal, and to detect and remedy any points of vulnerability that could be
exploited. A security professional needs to regard security as a holistic process, and as
such it must be viewed from all angles and approaches. Danger can come from anywhere
and take any number of shapes, forms, and levels of severity.
Physical security has a different set of vulnerabilities, threats, and countermeasures
from that of computer and information security. The set for physical security has more
to do with physical destruction, intruders, environmental issues, theft, and vandalism.
When security professionals look at information security, they think about how some-
one can enter an environment in an unauthorized manner through a port, wireless
access point, or software exploitation. When security professionals look at physical secu-
rity, they are concerned with how people can physically enter an environment and
cause an array of damages.
The threats that an organization faces fall into these broad categories:
• Natural environmental threats Floods, earthquakes, storms and tornadoes,
fires, extreme temperature conditions, and so forth
• Supply system threats Power distribution outages, communications
interruptions, and interruption of other resources such as water, gas, air
filtration, and so on
• Manmade threats Unauthorized access (both internal and external),
explosions, damage by disgruntled employees, employee errors and accidents,
vandalism, fraud, theft, and others
• Politically motivated threats Strikes, riots, civil disobedience, terrorist
attacks, bombings, and so forth
In all situations, the primary consideration, above all else, is that nothing should
impede life safety goals. When we discuss life safety, protecting human life is the first
priority. Good planning helps balance life safety concerns and other security measures.
For example, barring a door to prevent unauthorized physical intrusion might prevent
individuals from being able to escape in the event of a fire. Life safety goals should al-
ways take precedence over all other types of goals; thus, this door might allow insiders
to exit through it after pushing an emergency bar, but not allow external entities in.
A physical security program should comprise safety and security mechanisms. Safety
deals with the protection of life and assets against fire, natural disasters, and devastating
accidents. Security addresses vandalism, theft, and attacks by individuals. Many times an
overlap occurs between the two, but both types of threat categories must be understood
and properly planned for. This chapter addresses both safety and security mechanisms
that every security professional should be aware of.
Physical security must be implemented based on a layered defense model, which
means that physical controls should work together in a tiered architecture. The concept
is that if one layer fails, other layers will protect the valuable asset. Layers would be
implemented moving from the perimeter toward the asset. For example, you would
have a fence, then your facility walls, then an access control card device, then a guard,
then an IDS, and then locked computer cases and safes. This series of layers will protect
the company’s most sensitive assets, which would be placed in the innermost control
zone of the environment. So if the bad guy were able to climb over your fence and out-
smart the security guard, he would still have to circumvent several layers of controls
before getting to your precious resources and systems.
Security needs to protect all the assets of the organization and enhance productivity
by providing a secure and predictable environment. Good security enables employees
to focus on their tasks at hand and encourages attackers to move on to an easier target.
This is the hope, anyway. Keeping in mind the AIC security triad that has been pre-
sented in previous chapters, we look at physical security that can affect the availability of
company resources, the integrity of the assets and environment, and the confidentiality
of the data and business processes.
The Planning Process
Okay, so what are we doing and why?
Response: We have no idea.
A designer, or team of designers, needs to be identified to create or improve upon
an organization’s current physical security program. The team must work with management to define the objectives of the program, design the program, and develop performance-based metrics and evaluation processes to ensure the objectives are continually being met.
The objectives of the physical security program depend upon the level of protection
required for the various assets and the company as a whole. And this required level of
protection, in turn, depends upon the organization’s acceptable risk level. This accept-
able risk level should be derived from the laws and regulations with which the organi-
zation must comply and from the threat profile of the organization overall. This
requires identifying who and what could damage business assets, identifying the types
of attacks and crimes that could take place, and understanding the business impact of
these threats. The type of physical countermeasures required and their adequacy or in-
adequacy need to be measured against the organization’s threat profile. A financial in-
stitution has a much different threat profile, and thus a much different acceptable risk
level, when compared to a grocery store. The threat profile of a hospital is different
from the threat profile of a military base or a government agency. The team must un-
derstand the types of adversaries it must consider, the capabilities of these adversaries,
and the resources and tactics these individuals would use. (Review Chapter 2 for a dis-
cussion of acceptable risk-level concepts.)
Physical security is a combination of people, processes, procedures, technology, and
equipment to protect resources. The design of a solid physical security program should
be methodical and should weigh the objectives of the program and the available re-
sources. Although every organization is different, the approach to constructing and
maintaining a physical security program is the same. The organization must first define
the vulnerabilities, threats, threat agents, and targets.
NOTE Remember that a vulnerability is a weakness and a threat is the
potential that someone will identify this weakness and use it against you. The
threat agent is the person or mechanism that actually exploits this identified
vulnerability.
Threats can be grouped into categories such as internal and external threats. Inter-
nal threats may include faulty technology, fire hazards, or employees who aim to dam-
age the company in some way. Employees have intimate knowledge of the company’s
facilities and assets, which is usually required to perform tasks and responsibilities—
but this makes it easier for the insider to carry out damaging activity without being
noticed. Unfortunately, a large threat to companies can be their own security guards,
which is usually not realized until it is too late. These people have keys and access codes
to all portions of a facility and usually work during employee off-hours. This gives the
guards ample windows of opportunity to carry out their crimes. It is critical for a com-
pany to carry out a background investigation, or to pay a company to perform this
service, before hiring a security guard. If you hire a wolf to guard the chicken coop,
things can get ugly.
External threats come in many different forms as well. Government buildings are usu-
ally chosen targets for some types of political revenge. If a company performs abortions
or conducts animal research, then activists are usually a large and constant threat. And, of
course, banks and armored cars are tempting targets for organized crime members.
A threat that is even trickier to protect against is collusion, in which two or more
people work together to carry out fraudulent activity. Many criminal cases have uncov-
ered insiders working with outsiders to defraud or damage a company. The types of
controls for this type of activity are procedural protection mechanisms, which were
described at length in Chapter 2. This may include separation of duties, preemploy-
ment background checks, rotations of duties, and supervision.
As with any type of security, most attention and awareness surrounds the exciting
and headline-grabbing tidbits about large crimes being carried out and criminals being
captured. In information security, most people are aware of viruses and hackers, but not
of the components that make up a corporate security program. The same is true for
physical security. Many people talk about current robberies, murders, and other crimi-
nal activity at the water cooler, but do not pay attention to the necessary framework that
should be erected and maintained to reduce these types of activities. An organization’s
physical security program should address the following goals:
• Crime and disruption prevention through deterrence: fences, security guards, warning signs, and so forth
• Reduction of damage through the use of delaying mechanisms: layers of defenses that slow down the adversary, such as locks, security personnel, and barriers
• Crime or disruption detection: smoke detectors, motion detectors, CCTV, and so forth
• Incident assessment: response of security guards to detected incidents and determination of damage level
• Response procedures: fire suppression mechanisms, emergency response processes, law enforcement notification, and consultation with outside security professionals
So, an organization should try to prevent crimes and disruptions from taking place,
but must also plan to deal with them when they do happen. A criminal should be de-
layed in her activities by having to penetrate several layers of controls before gaining
access to a resource. All types of crimes and disruptions should be able to be detected
through components that make up the physical security program. Once an intrusion is
discovered, a security guard should be called upon to assess the situation. The security
guard must then know how to properly respond to a large range of potentially danger-
ous activities. The emergency response activities could be carried out by the organiza-
tion’s internal security team or by outside experts.
This all sounds straightforward enough, until the team responsible for developing the physical security program looks at all the possible threats, the finite budget the team has to work with, and the complexity of choosing the right combination of countermeasures and ensuring that they all work together in a way that leaves no gaps in protection. All of these components must be understood in depth before the design of a physical security program can begin.
As with all security programs, it is possible to determine how beneficial and effec-
tive your physical security program is only if it is monitored through a performance-
based approach. This means you should devise measurements and metrics to gauge the
effectiveness of your countermeasures. This enables management to make informed
business decisions when investing in the protection of the organization’s physical secu-
rity. The goal is to increase the performance of the physical security program and de-
crease the risk to the company in a cost-effective manner. You should establish a
baseline of performance and thereafter continually evaluate performance to make sure
that the company’s protection objectives are being met. The following list provides
some examples of possible performance metrics:
• Number of successful crimes
• Number of successful disruptions
• Number of unsuccessful crimes
• Number of unsuccessful disruptions
• Time between detection, assessment, and recovery steps
• Business impact of disruptions
• Number of false-positive detection alerts
• Time it took for a criminal to defeat a control
• Time it took to restore the operational environment
• Financial loss of a successful crime
• Financial loss of a successful disruption
Capturing and monitoring these types of metrics enables the organization to iden-
tify deficiencies, evaluate improvement measures, and perform cost/benefit analyses.
NOTE Metrics are becoming more important in all domains of security
because it is important that an organization allocates the necessary controls
and countermeasures to mitigate risks in a cost-beneficial manner. You can’t
manage what you can’t measure.
The physical security team needs to carry out a risk analysis, which will identify the
organization’s vulnerabilities, threats, and business impacts. The team should present
these findings to management and work with them to define an acceptable risk level for
the physical security program. From there, the team must develop baselines (minimum
levels of security) and metrics in order to evaluate and determine if the baselines are
being met by the implemented countermeasures. Once the team identifies and imple-
ments the countermeasures, the performance of these countermeasures should be con-
tinually evaluated and expressed in the previously created metrics. These performance
values are compared to the set baselines. If the baselines are continually maintained,
then the security program is successful, because the company’s acceptable risk level is
not being exceeded. This is illustrated in Figure 5-1.
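This feedback loop of measurements against baselines can be sketched in a few lines of code. The metric names and baseline values below are invented for illustration; an actual program would derive its baselines from the acceptable risk level, as described above.

```python
# Hypothetical sketch of the baseline-comparison step: measured
# countermeasure performance is checked against the baselines that
# were derived from the acceptable risk level.

# Baselines: worst acceptable value for each metric (lower is better).
baselines = {
    "successful_crimes_per_year": 2,
    "false_positive_alerts_per_month": 25,
    "hours_to_restore_operations": 8,
}

def find_deficiencies(measured, baselines):
    """Return {metric: (measured, baseline)} for every metric whose
    measured value exceeds its baseline, i.e., where the company's
    acceptable risk level is being exceeded."""
    return {
        metric: (value, baselines[metric])
        for metric, value in measured.items()
        if value > baselines[metric]
    }

# Example measurements captured over one evaluation period.
measured = {
    "successful_crimes_per_year": 1,
    "false_positive_alerts_per_month": 40,  # exceeds its baseline
    "hours_to_restore_operations": 6,
}

deficiencies = find_deficiencies(measured, baselines)
print(deficiencies)  # {'false_positive_alerts_per_month': (40, 25)}
```

Here only the false-positive metric exceeds its baseline, so that is where the team would evaluate improvement measures and perform a cost/benefit analysis.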
So, before an effective physical security program can be rolled out, the following
steps must be taken:
• Identify a team of internal employees and/or external consultants who will
build the physical security program through the following steps.
• Carry out a risk analysis to identify the vulnerabilities and threats and to
calculate the business impact of each threat.
• Identify regulatory and legal requirements that the organization must meet
and maintain.
• Work with management to define an acceptable risk level for the physical
security program.
• Derive the required performance baselines from the acceptable risk level.
• Create countermeasure performance metrics.
• Develop criteria from the results of the analysis, outlining the level of
protection and performance required for the following categories of the
security program:
• Deterrence
• Delaying
• Detection
• Assessment
• Response
• Identify and implement countermeasures for each program category.
• Continuously evaluate countermeasures against the set baselines to ensure the acceptable risk level is not exceeded.

Figure 5-1 Relationships of risk, baselines, and countermeasures
Once these steps have taken place, the team is ready to move forward with its actual design phase. The design will incorporate the controls required for each category
of the program: deterrence, delaying, detection, assessment, and response. We will dig
deeper into these categories and their corresponding controls later in the chapter in the
section “Designing a Physical Security Program.”
One of the most commonly used approaches in physical security program develop-
ment is described in the following section.
Crime Prevention Through Environmental Design
This place is so nice and pretty and welcoming. No one would want to carry out crimes here.
Crime Prevention Through Environmental Design (CPTED) is a discipline that out-
lines how the proper design of a physical environment can reduce crime by directly
affecting human behavior. It provides guidance in loss and crime prevention through
proper facility construction and environmental components and procedures.
CPTED concepts were developed in the 1960s. They have been expanded upon and
have matured as our environments and crime types have evolved. CPTED has been
used not just to develop corporate physical security programs, but also for large-scale
activities such as development of neighborhoods, towns, and cities. It addresses land-
scaping, entrances, facility and neighborhood layouts, lighting, road placement, and
traffic circulation patterns. It looks at microenvironments, such as offices and restrooms, and macroenvironments, like campuses and cities. The crux of CPTED is that the physical environment can be manipulated to create behavioral effects that will reduce crime and the fear of crime. It looks at the components that make up the relationship between humans and their environment. This encompasses the physical, social, and psychological needs of the users of different types of environments and the predictable behaviors of these users and offenders.

Legal Requirements

In physical security there are some regulatory and high-level legal requirements that must be met, but many of them contain only high-level statements, such as "protect personnel" or "implement lifesaving controls." It is up to the organization to figure out how to actually meet these requirements in a practical manner. In the United States there is a lot of case law pertaining to physical security requirements, which is built upon precedent. This means that lawsuits concerning specific physical security incidents have resulted in judgments on liability. For example, no law dictates that you must put up a yellow sign indicating that a floor is wet. Many years ago someone somewhere slipped on a wet floor and sued the company, and the judge ruled that the company was negligent and liable for the person's injuries. Now many companies' procedures require that after a floor is mopped or there is a spill, a yellow sign be put in place so no one will fall and sue the company. It is hard to anticipate and cover all of these issues, since there is no specific checklist to follow. This is why it is a good idea to consult with a physical security expert when developing a physical security program.
CPTED provides guidelines on items some of us might not consider. For example,
hedges and planters around a facility should be no taller than 2.5 feet, so they
cannot be used to gain access to a window. A data center should be located at the center
of a facility, so the facility’s walls will absorb any damages from external forces, instead
of the data center itself. Street furnishings (benches and tables) encourage people to sit
and watch what is going on around them, which discourages criminal activity. A corpo-
ration’s landscape should not include wooded areas or other places where intruders can
hide. Ensure that CCTV cameras are mounted in full view, so criminals know their ac-
tivities will be captured, and other people know the environment is well monitored
and thus safer.
CPTED and target hardening are two different approaches. Target hardening focuses
on denying access through physical and artificial barriers (alarms, locks, fences, and so
on). Traditional target hardening can lead to restrictions on the use, enjoyment, and
aesthetics of an environment. Sure, we can implement hierarchies of fences, locks, and
intimidating signs and barriers—but how pretty would that be? If your environment is
a prison, this look might be just what you need. But if your environment is an office
building, you’re not looking for Fort Knox décor. Nevertheless, you still must provide
the necessary levels of protection, but your protection mechanisms should be more
subtle and unobtrusive.
Let’s say your organization’s team needs to protect a side door at your facility. The
traditional target-hardening approach would be to put locks, alarms, and cameras on
the door; install an access control mechanism, such as a proximity reader; and instruct
security guards to monitor this door. The CPTED approach would be to ensure there is
no sidewalk leading to this door from the front of the building if you don’t want cus-
tomers using it. The CPTED approach would also ensure no tall trees or bushes block
the ability to view someone using this door. Barriers such as trees and bushes may make
intruders feel more comfortable in attempting to break in through a secluded door.
The best approach is usually to build an environment from a CPTED approach and
then apply the target-hardening components on top of the design where needed.
If a parking garage were developed using the CPTED approach, the stair towers and
elevators within the garage might have glass windows instead of metal walls, so people
feel safer, and potential criminals will not carry out crimes in this more visible environ-
ment. Pedestrian walkways would be created such that people could look out across the
rows of cars and see any suspicious activities. The different rows for cars to park in
would be separated by low walls and structural pillars, instead of solid walls, to allow
pedestrians to view activities within the garage. The goal is to not provide any hidden
areas where criminals can carry out their crimes and to provide an open-viewed area so
if a criminal does attempt something malicious, there is a higher likelihood of some-
one seeing it.
CPTED provides three main strategies to bring together the physical environment
and social behavior to increase overall protection: natural access control, natural sur-
veillance, and natural territorial reinforcement.
Natural Access Control
I want to go into the building from the side, but I would have to step on these flowers. I better
go around to the front.
Natural access control is the guidance of people entering and leaving a space by the
placement of doors, fences, lighting, and even landscaping. For example, an office
building may have external bollards with lights in them, as shown in Figure 5-2. These
bollards actually carry out different safety and security services. The bollards themselves
protect the facility from physical destruction by preventing people from driving their
cars into the building. The light emitted helps ensure that criminals do not have a dark
place to hide. And the lights and bollard placement guide people along the sidewalk to
the entrance, instead of using signs or railings. As shown in Figure 5-2, the landscape,
sidewalks, lighted bollards, and clear sight lines are used as natural access controls.
They work together to give individuals a feeling of being in a safe environment and
help dissuade criminals by working as deterrents.
Similarities in Approaches
The risk analysis steps for the development of a physical security program are similar to the steps outlined in Chapter 2 for the development of an organizational security program and the steps outlined in Chapter 8 for a business impact analysis. Each of these processes (development of an information security program, a physical security program, or a business continuity plan) accomplishes similar goals, but with a different focus. Each process requires a team to carry out a risk analysis to determine the company's threats and risks. An information security program looks at the internal and external threats to resources and data through business processes and technological means. Business continuity looks at how natural disasters and disruptions could damage the organization, while physical security looks at internal and external physical threats to the company's resources.
Each requires a solid risk analysis process. Review Chapter 2 to understand
the core components of every risk analysis.
NOTE Bollards are short posts commonly used to prevent vehicular access
and to protect a building or people walking on a sidewalk from vehicles. They
can also be used to direct foot traffic.
Clear lines of sight and transparency can be used to discourage potential offenders,
because of the absence of places to hide or carry out criminal activities.
The CPTED model shows how security zones can be created. An environment’s space
should be divided into zones with different security levels, depending upon who needs
to be in that zone and the associated risk. The zones can be labeled as controlled, re-
stricted, public, or sensitive. This is conceptually similar to information classification,
as described in Chapter 2. In a data classification program, different classifications are
created, along with data handling procedures and the level of protection that each clas-
sification requires. The same is true of physical zones. Each zone should have a specific
protection level required of it, which will help dictate the types of controls that should
be put into place.
Figure 5-2 Sidewalks, lights, and landscaping can be used for protection.
Access control should be in place to control and restrict individuals from going
from one security zone to the next. Access control should also be in place for all facility
entrances and exits. The security program development team needs to consider other
ways in which intruders can gain access to buildings, such as by climbing adjacent trees
to access skylights, upper-story windows, and balconies. The following controls are
commonly used for access controls within different organizations:
• Limit the number of entry points.
• Force all guests to go to a front desk and sign in before entering the environment.
• Reduce the number of entry points even further after hours or during the
weekend, when not as many employees are around.
• Implement sidewalks and landscaping to guide the public to a main entrance.
• Implement a back driveway for suppliers and deliveries, which is not easily
accessible to the public.
• Provide lighting along the pathways the public should follow to enter a building, to help encourage the use of a single entry point.
• Implement sidewalks and grassy areas to guide vehicle traffic to only enter and
exit through specific locations.
• Provide parking in the front of the building (not the back or sides) so people
will be directed to enter the intended entrance.
These types of access controls are used all of the time, and we usually do not think
about them. They are built into the natural environment to manipulate us into doing
what the owner of the facility wants us to do. When you are walking on a sidewalk that leads to an office front door and there are pretty flowers on both sides of the sidewalk, know that they are put there because people tend not to step off a sidewalk and crush pretty flowers; the planting keeps people on the path. Subtle and sneaky, but these control mechanisms work.
More obvious access barriers can be natural elements (cliffs, rivers, hills), existing manmade elements (railroad tracks, highways), or artificial forms designed specifically to impede movement (fences, closed streets). These can be used in tandem or separately to provide the necessary level of access control.
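The security zones described earlier map naturally onto code. The following minimal sketch uses the zone labels mentioned above (controlled, restricted, public, sensitive); the numeric levels and the badge-clearance model are assumptions for illustration, not a prescribed scheme.

```python
# Illustrative model: security zones ordered by required protection
# level, mirroring a data classification scheme. Levels are invented.
ZONE_LEVELS = {
    "public": 0,      # lobby, visitor areas
    "controlled": 1,  # general office space
    "restricted": 2,  # wiring closets, server rooms
    "sensitive": 3,   # most sensitive assets, innermost control zone
}

def may_enter(badge_clearance, zone):
    """A badge may enter any zone at or below its clearance level;
    moving from one zone to the next higher one requires a higher
    clearance, which is how zone-to-zone access is restricted."""
    return ZONE_LEVELS[badge_clearance] >= ZONE_LEVELS[zone]

print(may_enter("controlled", "public"))     # True
print(may_enter("controlled", "sensitive"))  # False
```

As with data classification, each zone's level dictates the types of controls placed at its boundary: a transition into a higher zone triggers stronger access control mechanisms.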
Natural Surveillance
Please sit on this bench and just watch people walking by. You are cheaper than hiring a secu-
rity guard.
Surveillance can also take place through organized means (security guards), me-
chanical means (CCTV), and natural strategies (straight lines of sight, low landscaping,
raised entrances). The goal of natural surveillance is to make criminals feel uncomfort-
able by providing many ways observers could potentially see them and to make all
other people feel safe and comfortable by providing an open and well-designed envi-
ronment.
Natural surveillance is the use and placement of physical environmental features,
personnel walkways, and activity areas in ways that maximize visibility. Figure 5-3 il-
lustrates a stairway in a parking garage designed to be open and allow easy observation.
Next time you are walking down a street and see a bench next to a building or you
see a bench in a park, know that the city has not allocated funds for these benches just
in case your legs get tired. These benches are strategically placed so that people will sit
and watch other people. This is a very good surveillance system. The people who are
watching others do not realize that they are actually protecting the area, but many
criminals will identify them and not feel as confident in carrying out some type of
malicious deed.
Walkways and bicycle paths are commonly installed so that there will be a steady
flow of pedestrians who could identify malicious activity. Buildings might have large
windows that overlook sidewalks and parking lots for the same reason. Shorter fences
might be installed so people can see what is taking place on both sides of the fence.
Certain high-risk areas have more lighting than what is necessary so that people from a
distance can see what is going on. These high-risk areas could be stairs, parking areas,
bus stops, laundry rooms, children’s play areas, dumpsters, and recycling stations.
These constructs help people protect people without even knowing it.
Natural Territorial Reinforcement
This is my neighborhood and I will protect it.
The third CPTED strategy is natural territorial reinforcement, which creates physical
designs that emphasize or extend the company’s physical sphere of influence so legiti-
mate users feel a sense of ownership of that space. Territorial reinforcement can be
implemented through the use of walls, fences, landscaping, light fixtures, flags, clearly
marked addresses, and decorative sidewalks. The goal of territorial reinforcement is to
create a sense of a dedicated community. Companies implement these elements so
employees feel proud of their environment and have a sense of belonging, which they
will defend if required to do so. These elements are also implemented to give potential
offenders the impression that they do not belong there, that their activities are at risk of
being observed, and that their illegal activities will not be tolerated or ignored.
In towns and cities there could be areas for people to walk their dogs, picnic tables
for people to use, restrooms, parks, and locations for people to play sports (baseball,
soccer). All of these give the local people a feeling of being in a collective neighborhood and a homey feeling. This helps people identify who belongs there and who does not, and what is normal behavior and what is not. If people feel as though they are in their own neighborhood, they will be more empowered to challenge something suspicious and protect the local area.

Figure 5-3 Open areas reduce the likelihood of criminal activity.
CPTED also encourages activity support, which is planned activities for the areas to
be protected. These activities are designed to get people to work together to increase the
overall awareness of acceptable and unacceptable activities in the area. The activities
could be neighborhood watch groups, company barbeques, block parties, or civic meet-
ings. This strategy is sometimes the reason for particular placement of basketball courts,
soccer fields, or baseball fields in open parks. The increased activity will hopefully keep
the bad guys from milling around doing things the community does not welcome.
Most corporate environments use a mix of the CPTED and target-hardening ap-
proaches. CPTED deals mainly with the construction of the facility, its internal and
external designs, and exterior components such as landscaping and lighting. If the en-
vironment is built based on CPTED, then the target hardening is like icing on the cake.
The target-hardening approach applies more granular protection mechanisms, such as
locks and motion detectors. The rest of the chapter looks at physical controls that can
be used in both models.
Designing a Physical Security Program
Our security guards should wear pink uniforms and throw water balloons at intruders.
If a team is organized to assess the protection level of an existing facility, it needs to
investigate the following:
• Construction materials of walls and ceilings
• Power distribution systems
• Communication paths and types (copper, telephone, fiber)
• Surrounding hazardous materials
• Exterior components:
• Topography
• Proximity to airports, highways, railroads
• Potential electromagnetic interference from surrounding devices
• Climate
• Soil
• Existing fences, detection sensors, cameras, barriers
• Operational activities that depend upon physical resources
• Vehicle activity
• Neighbors
To properly obtain this information, the team should do physical surveys and inter-
view various employees. All of this collected data will help the team to evaluate the
current controls, identify weaknesses, and ensure operational productivity is not nega-
tively affected by implementing new controls.
Although there are usually written policies and procedures on what should be taking
place pertaining to physical security, policies and reality do not always match up. It is
important for the team to observe how the facility is used, note daily activities that
could introduce vulnerabilities, and determine how the facility is protected. This infor-
mation should be documented and compared to the information within the written
policy and procedures. In most cases, existing gaps must be addressed and fixed. Just
writing out a policy helps no one if it is not actually followed.
Every organization must comply with various regulations, whether they be safety
and health regulations; fire codes; state and local building codes; Departments of De-
fense, Energy, or Labor requirements; or some other agency’s regulations. The organiza-
tion may also have to comply with requirements of the Occupational Safety and Health
Administration (OSHA) and the Environmental Protection Agency (EPA), if it is oper-
ating in the United States, or with the requirements of equivalent organizations within
another country. The physical security program development team must understand all
the regulations the organization must comply with and how to reach compliance
through physical security and safety procedures.
Legal issues must be understood and properly addressed as well. These issues may
include access availability for the disabled, liability issues, the failure to protect assets
and people, excessive force used by security guards, and so on. This long laundry list of
items can get a company into legal trouble if it is not doing what it is supposed to. Oc-
casionally, the legal trouble may take the form of a criminal case—for example, if doors
default to being locked when power is lost and, as a result, several employees are
trapped and killed during a fire, criminal negligence may be alleged. Legal trouble can
also come in the form of civil cases—for instance, if a company does not remove the ice
on its sidewalks and a pedestrian falls and breaks his ankle, the pedestrian may sue the
company. The company may be found negligent and held liable for damages.
Every organization should have a facility safety officer, whose main job is to under-
stand all the components that make up the facility and what the company needs to do
to protect its assets and stay within compliance. This person should oversee facility
management duties day in and day out, but should also be heavily involved with the
team that has been organized to evaluate the organization’s physical security program.
A physical security program is a collection of controls that are implemented and
maintained to provide the protection levels necessary to be in compliance with the
physical security policy. The policy should embody all the regulations and laws that
must be adhered to and should set the risk level the company is willing to accept.
By this point, the team has carried out a risk analysis, which consisted of identifying
the company’s vulnerabilities, threats, and business impact pertaining to the identified
threats. The program design phase should begin with a structured outline, which will
evolve into a framework. This framework will then be fleshed out with the necessary
controls and countermeasures. The outline should contain the program categories and
the necessary countermeasures. The following is a simplistic example:

CISSP All-in-One Exam Guide
444
I. Deterrence of criminal activity
A. Fences
B. Warning signs
C. Security guards
D. Dogs
II. Delay of intruders to help ensure they can be caught
A. Locks
B. Defense-in-depth measures
C. Access controls
III. Detection of intruders
A. External intruder sensors
B. Internal intruder sensors
IV. Assessment of situations
A. Security guard procedures
B. Damage assessment criteria
V. Response to intrusions and disruptions
A. Communication structure (calling tree)
B. Response force
C. Emergency response procedures
D. Police, fire, medical personnel
The team can then start addressing each phase of the security program, usually start-
ing with the facility.
Facility
I can’t see the building.
Response: That’s the whole idea.
When a company decides to erect a building, it should consider several factors be-
fore pouring the first batch of concrete. Of course, land prices, customer population,
and marketing strategies are reviewed, but as security professionals, we are more inter-
ested in the confidence and protection that a specific location can provide. Some orga-
nizations that deal with top-secret or confidential information and processes make
their facilities unnoticeable so they do not attract the attention of would-be attackers.
The building may be hard to see from the surrounding roads, the company signs and
logos may be small and not easily noticed, and the markings on the building may not
give away any information that pertains to what is going on inside that building. It is a
type of urban camouflage that makes it harder for the enemy to seek out that company
as a target. This is very common for telecommunication facilities that contain critical
infrastructure switches and other supporting technologies. When driving down the
road you might pass three of these buildings, but because they have no features that
actually stand out, you do not even give them a second thought—which is the goal.

Chapter 5: Physical and Environmental Security
445
A company should evaluate how close the facility would be to a police station, fire
station, and medical facilities. Many times, the proximity of these entities raises the real
estate value of properties, but for good reason. If a chemical company that manufactures
highly explosive materials needs to build a new facility, it may make good business sense
to put it near a fire station. (Although the fire station might not be so happy.) If another
company that builds and sells expensive electronic devices is expanding and needs to
move operations into another facility, police reaction time may be looked at when
choosing one facility location over another. Each of these issues—police station, fire sta-
tion, and medical facility proximity—can also reduce insurance rates and must be
looked at carefully. Remember that the ultimate goal of physical security is to ensure the
safety of personnel. Always keep that in mind when implementing any sort of physical
security control. Protect your fellow humans, be your brother’s keeper, and then run.
Some buildings are placed in areas surrounded by hills or mountains to help pre-
vent eavesdropping of electrical signals emitted by the facility’s equipment. In some
cases, the organization itself will build hills or use other landscaping techniques to
guard against eavesdropping. Other facilities are built underground or right into the
side of a mountain for concealment and disguise in the natural environment, and for
protection from radar tools, spying activities, and aerial bomb attacks.
In the United States there is an Air Force base built into the Cheyenne Mountain
close to Colorado Springs, Colorado. The base was built into the mountain and is made
up of an inner complex of buildings, rooms, and tunnels. It has its own air intake sup-
ply, as well as water, fuel, and sewer lines. This is where the North American Aerospace
Defense Command carries out its mission and apparently, according to many popular
movies, where you should be headed if the world is about to be blown up.
Issues with Selecting a Facility Site
When selecting a location for a facility, some of the following items are critical to
the decision-making process:
• Visibility
  • Surrounding terrain
  • Building markings and signs
  • Types of neighbors
  • Population of the area
• Surrounding area and external entities
  • Crime rate, riots, terrorism attacks
  • Proximity to police, medical, and fire stations
  • Possible hazards from surrounding area
• Accessibility
  • Road access
  • Traffic
  • Proximity to airports, train stations, and highways

Construction
We need a little more than glue, tape, and a stapler.
Physical construction materials and structure composition need to be evaluated for
their appropriateness to the site environment, their protective characteristics, their util-
ity, and their costs and benefits. Different building materials provide various levels of
fire protection and have different rates of combustibility, which correlate with their fire
ratings. When making structural decisions, the type of construction material to use
(wood, concrete, or steel) needs to be considered in light of what the building is
going to be used for. If an area will be used to store documents and old
equipment, it has far different needs and legal requirements than if it is going to be
used for employees to work in every day.
The load (how much weight can be held) of a building’s walls, floors, and ceilings
needs to be estimated and projected to ensure the building will not collapse in different
situations. In most cases, this is dictated by local building codes. The walls, ceilings, and
floors must contain the necessary materials to meet the required fire rating and to pro-
tect against water damage. The windows (interior and exterior) may need to provide
ultraviolet (UV) protection, may need to be shatterproof, or may need to be translucent
or opaque, depending on the placement of the window and the contents of the build-
ing. The doors (exterior and interior) may need to have directional openings, have the
same fire rating as the surrounding walls, prohibit forcible entries, display emergency
egress markings, and—depending on placement—have monitoring and attached
alarms. In most buildings, raised floors are used to hide and protect wires and pipes,
and it is important to ensure any raised outlets are properly grounded.
Building codes may regulate all of these issues, but there are still many options
within each category that the physical security program development team should re-
view for extra security protection. The right options should accomplish the company’s
security and functionality needs and still be cost-effective.
When designing and building a facility, the following major items need to be ad-
dressed from a physical security point of view:
• Walls
  • Combustibility of material (wood, steel, concrete)
  • Fire rating
  • Reinforcements for secured areas
• Doors
  • Combustibility of material (wood, pressed board, aluminum)
  • Fire rating
  • Resistance to forcible entry
  • Emergency marking
  • Placement
  • Locked or controlled entrances
  • Alarms
  • Secure hinges
  • Directional opening
  • Electric door locks that revert to an unlocked state for safe evacuation in power outages
  • Type of glass—shatterproof or bulletproof glass requirements
• Natural disaster
  • Likelihood of floods, tornadoes, earthquakes, or hurricanes
  • Hazardous terrain (mudslides, falling rock from mountains, or excessive snow or rain)
• Ceilings
  • Combustibility of material (wood, steel, concrete)
  • Fire rating
  • Weight-bearing rating
  • Drop-ceiling considerations
• Windows
  • Translucent or opaque requirements
  • Shatterproof
  • Alarms
  • Placement
  • Accessibility to intruders
• Flooring
  • Weight-bearing rating
  • Combustibility of material (wood, steel, concrete)
  • Fire rating
  • Raised flooring
  • Nonconducting surface and material
• Heating, ventilation, and air conditioning
  • Positive air pressure
  • Protected intake vents
  • Dedicated power lines
  • Emergency shutoff valves and switches
  • Placement
• Electric power supplies
  • Backup and alternate power supplies
  • Clean and steady power source
  • Dedicated feeders to required areas
  • Placement and access to distribution panels and circuit breakers
• Water and gas lines
  • Shutoff valves—labeled and brightly painted for visibility
  • Positive flow (material flows out of building, not in)
  • Placement—properly located and labeled
• Fire detection and suppression
  • Placement of sensors and detectors
  • Placement of suppression systems
  • Type of detectors and suppression agents
The risk analysis results will help the team determine the type of construction mate-
rial that should be used when constructing a new facility. Several grades of building
construction are available. For example, light frame construction material provides the
least amount of protection against fire and forcible entry attempts. It is composed of
untreated lumber that would be combustible during a fire. Light frame construction
material is usually used to build homes, primarily because it is cheap, but also because
homes typically are not under the same types of fire and intrusion threats that office
buildings are.
Heavy timber construction material is commonly used for office buildings. Combus-
tible lumber is still used in this type of construction, but there are requirements on the
thickness and composition of the materials to provide more protection from fire. The
construction materials must be at least four inches in thickness. Denser woods are used
and are fastened with metal bolts and plates. Whereas light frame construction material
has a fire survival rate of 30 minutes, heavy timber construction material has a fire
survival rate of one hour.
Ground
If you are holding a power cord plug that has two skinny metal pieces and one
fatter, rounder metal piece, which all go into the outlet—what is that fatter,
rounder piece for? It is a ground connector, which is supposed to act as the con-
duit for any excess current to ensure that people and devices are not negatively
affected by a spike in electrical current. So, in the wiring of a building, where do
you think this ground should be connected? Yep, to the ground. Old mother
earth. But many buildings are not wired properly, and the ground connector is
connected to nothing. This can be very dangerous, since the extra current has
nowhere to escape but into our equipment or ourselves.

A building could be made up of incombustible material, such as steel, which pro-
vides a higher level of fire protection than the previously mentioned materials, but
loses its strength under extreme temperatures, something that may cause the building to
collapse. So, although the steel will not burn, it may melt and weaken. If a building
consists of fire-resistant material, the construction material is fire-retardant and may
have steel rods encased inside of concrete walls and support beams. This provides the
most protection against fire and forced entry attempts.
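The construction grades just described can be summarized in a simple lookup table. This is an illustrative sketch only: the fire survival figures are the approximate ones cited above, the threat-level mapping is a hypothetical example, and actual fire ratings are dictated by local building codes.

```python
# Approximate characteristics of the construction grades discussed above.
# Figures are illustrative; real ratings come from local building codes.
CONSTRUCTION_GRADES = {
    "light frame":    {"fire_survival_minutes": 30,   "forced_entry_resistance": "low"},
    "heavy timber":   {"fire_survival_minutes": 60,   "forced_entry_resistance": "moderate"},
    "incombustible":  {"fire_survival_minutes": None, "forced_entry_resistance": "high"},     # steel: won't burn but weakens
    "fire-resistant": {"fire_survival_minutes": None, "forced_entry_resistance": "highest"},  # rebar-reinforced concrete
}

def minimum_grade_for(threat_level: str) -> str:
    """Map an identified threat level to the lowest acceptable grade (hypothetical mapping)."""
    mapping = {"low": "light frame", "moderate": "heavy timber",
               "high": "incombustible", "severe": "fire-resistant"}
    return mapping[threat_level]
```

A risk analysis would feed the threat level into a selection like this; the team still has to reconcile the result with applicable fire codes.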
The team should choose its construction material based on the identified threats of
the organization and the fire codes to be complied with. If a company is just going to
have some office workers in a building and has no real adversaries interested in destroy-
ing the facility, then the light frame or heavy timber construction material would be
used. Facilities for government organizations, which are under threat by domestic and
foreign terrorists, would be built with fire-resistant materials. A financial institution
would also use fire-resistant and reinforcement material within its building. This is es-
pecially true for its exterior walls, through which thieves may attempt to drive vehicles
to gain access to the vaults.
Calculations of approximate penetration times for different types of explosives and
attacks are based on the thickness of the concrete walls and the gauge of rebar used.
(Rebar refers to the steel rods encased within the concrete.) So even if the concrete were
damaged, it would take longer to actually cut or break through the rebar. Using thicker
rebar and properly placing it within the concrete provides even more protection.
Reinforced walls, rebar, and the use of double walls can be used as delaying mecha-
nisms. The idea is that it will take the bad guy longer to get through two reinforced
walls, which gives the response force sufficient time to arrive at the scene and stop the
attacker, we hope.
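The delaying idea above can be sketched as a toy calculation: layered barriers are useful only if their combined penetration delay exceeds the response force's arrival time. All delay and response figures below are hypothetical; real penetration times depend on wall thickness, rebar gauge, and attack method, as noted above.

```python
# Illustrative delay-vs-response check: do stacked barriers buy more time
# than the response force needs to arrive? All numbers are hypothetical.
def sufficient_delay(barrier_delays_minutes, response_time_minutes):
    """Return True if total barrier delay exceeds the response time."""
    return sum(barrier_delays_minutes) > response_time_minutes

# Example: two reinforced walls (6 min each) and a vault door (10 min)
# against a 15-minute guard response.
print(sufficient_delay([6, 6, 10], 15))  # True
```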
Entry Points
Understanding the company needs and types of entry points for a specific building is
critical. The various types of entry points may include doors, windows, roof access, fire
escapes, chimneys, and service delivery access points. Second and third entry points
must also be considered, such as internal doors that lead into other portions of the
building and to exterior doors, elevators, and stairwells. Windows at the ground level
should be fortified, because they could be easily broken. Fire escapes, stairwells to the
roof, and chimneys are many times overlooked as potential entry points.
NOTE Ventilation ducts and utility tunnels can also be used by intruders and
thus must be properly protected with sensors and access control mechanisms.
The weakest portion of the structure, usually its doors and windows, will likely be
attacked first. With regard to doors, the weaknesses usually lie within the frames, hinges,
and door material. The bolts, frames, hinges, and material that make up the door
should all provide the same level of strength and protection. For example, if a company
implements a heavy, nonhollow steel door but uses weak hinges that could be easily
extracted, the company is just wasting money. The attacker can just remove the hinges
and remove this strong and heavy door.

The door and surrounding walls and ceilings should also provide the same level of
strength. If another company has an extremely fortified and secure door, but the sur-
rounding wall materials are made out of regular light frame wood, then it is also wast-
ing money on doors. There is no reason to spend a lot of money on one countermeasure
that can be easily circumvented by breaking a weaker countermeasure in proximity.
Doors Different door types for various functionalities include the following:
• Vault doors
• Personnel doors
• Industrial doors
• Vehicle access doors
• Bullet-resistant doors
Doors can be hollow-core or solid-core. The team needs to understand the various
entry types and the potential forced-entry threats, which will help them determine what
type of door should be implemented. Hollow-core doors can be easily penetrated by kick-
ing or cutting them; thus, they are usually used internally. The team also has a choice of
solid-core doors, which are made up of various materials to provide different fire ratings
and protection from forced entry. As stated previously, the fire rating and protection level
of the door needs to match the fire rating and protection level of the surrounding walls.
Bulletproof doors are also an option if there is a threat that damage could be done
to resources by shooting through the door. These types of doors are constructed in a
manner that involves sandwiching bullet-resistant and bulletproof material between
wood or steel veneers to still give the door some aesthetic qualities while providing the
necessary levels of protection.
Hinges and strike plates should be secure, especially on exterior doors or doors used
to protect sensitive areas. The hinges should have pins that cannot be removed, and the
door frames must provide the same level of protection as the door itself.
Fire codes dictate the number and placement of doors with panic bars on them.
These are the crossbars that release an internal lock to allow a locked door to open.
Panic bars can be on regular entry doors and also on emergency exit doors. Those are
the ones that usually have the sign that indicates the door is not an exit point and that
an alarm will go off if the door is opened. It might seem like fun and a bit tempting to
see if the alarm will really go off or not—but don’t try it. Security people are not known
for their sense of humor.
Mantraps and turnstiles can be used so that an unauthorized individual who enters a
facility cannot get in, or back out, once the mantrap is activated. A mantrap is a small room with two doors. The first
door is locked; a person is identified and authenticated by a security guard, biometric
system, smart card reader, or swipe card reader. Once the person is authenticated and
access is authorized, the first door opens and allows the person into the mantrap. The
first door locks and the person is trapped. The person must be authenticated again be-
fore the second door unlocks and allows him into the facility. Some mantraps use
biometric systems that weigh the person who enters to ensure that only one person at
a time is entering the mantrap area. This is a control to counter piggybacking.
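The two-door mantrap logic can be sketched as a small state machine. The authentication callables here are placeholders for a security guard, biometric reader, or card reader; this is an illustrative model, not a real access-control product.

```python
# Minimal sketch of two-door mantrap logic: authenticate at the first
# door, relock it, then authenticate again before the second door opens.
class Mantrap:
    def __init__(self, authenticate_outer, authenticate_inner):
        self.authenticate_outer = authenticate_outer
        self.authenticate_inner = authenticate_inner

    def enter(self, person) -> bool:
        # First door: identify and authenticate before admitting to the trap.
        if not self.authenticate_outer(person):
            return False  # never admitted
        # Person is now inside the trap; the first door relocks behind them.
        # Second door: authenticate again before releasing into the facility.
        return self.authenticate_inner(person)

# Hypothetical policy: a valid badge at the outer door, badge plus PIN inside.
trap = Mantrap(lambda p: p["badge_valid"],
               lambda p: p["badge_valid"] and p["pin_ok"])
print(trap.enter({"badge_valid": True, "pin_ok": True}))   # True
print(trap.enter({"badge_valid": True, "pin_ok": False}))  # False: held at second door
```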

Doorways with automatic locks can be configured to be fail-safe or fail-secure. A
fail-safe setting means that if a power disruption occurs that affects the automated lock-
ing system, the doors default to being unlocked. Fail-safe deals directly with protecting
people. If people work in an area and there is a fire or the power is lost, it is not a good
idea to lock them in. This would not make you many friends. A fail-secure configura-
tion means that the doors default to being locked if there are any problems with the
power. If people do not need to use specific doors for escape during an emergency, then
these doors can most likely default to fail-secure settings.
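The fail-safe versus fail-secure distinction boils down to the lock's default state on power loss, which can be stated as a tiny decision function. This is a sketch of the concept, not a controller implementation.

```python
# Fail-safe vs. fail-secure: what state does an electric lock default to
# when power is lost? Fail-safe protects people (egress routes open);
# fail-secure protects assets (non-egress doors stay shut).
def lock_state_on_power_loss(configuration: str) -> str:
    if configuration == "fail-safe":
        return "unlocked"
    if configuration == "fail-secure":
        return "locked"
    raise ValueError(f"unknown configuration: {configuration}")

print(lock_state_on_power_loss("fail-safe"))    # unlocked
print(lock_state_on_power_loss("fail-secure"))  # locked
```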
Windows Windows should be properly placed (this is where security and aesthetics
can come to blows) and should have frames of the proper strengths, the necessary glazing
material, and possibly a protective covering. The glazing material, which is
applied to the windows as they are being made, may be standard, tempered, acrylic,
wired, or laminated glass. Standard glass windows are commonly used in residential
homes and are easily broken. Tempered glass is made by heating the glass and then sud-
denly cooling it. This increases its mechanical strength, which means it can handle
more stress and is harder to break. It is usually five to seven times stronger than stan-
dard glass.
Acrylic glass can be made out of polycarbonate acrylic, which is stronger than stan-
dard glass but produces toxic fumes if burned. Polycarbonate acrylics are stronger than
regular acrylics, but both are made out of a type of transparent plastic. Because of their
combustibility, their use may be prohibited by fire codes. The strongest window mate-
rial is glass-clad polycarbonate. It is resistant to a wide range of threats (fire, chemical,
breakage), but, of course, is much more expensive. These types of windows would be
used in areas that are under the greatest threat.
Some windows are made out of glass that has embedded wires—in other words, it
actually has two sheets of glass, with the wiring in between. The wires help reduce the
likelihood of the window being broken or shattering.
Laminated glass has two sheets of glass with a plastic film in between. This added
plastic makes it much more difficult to break the window. As with other types of glass,
laminated glass can come in different depths. The greater the depth (more glass and
plastic), the more difficult it is to break.

A lot of window types have a film on them that provides efficiency in heating and
cooling. They filter out UV rays and are usually tinted, which can make it harder for the
bad guy to peep in and monitor internal activities. Some window types have a different
kind of film applied that makes it more difficult to break them, whether by explosive,
storm, or intruder.
Window Types
A security professional may be involved with the planning phase of building a
facility, and each of these items comes into play when constructing a secure
building and environment. The following sums up the types of windows that can
be used:
• Standard No extra protection. The cheapest and lowest level of protection.
• Tempered Glass is heated and then cooled suddenly to increase its integrity and strength.
• Acrylic A type of plastic instead of glass. Polycarbonate acrylics are stronger than regular acrylics.
• Wired A mesh of wire is embedded between two sheets of glass. This wire helps prevent the glass from shattering.
• Laminated A plastic layer between two outer glass layers. The plastic layer helps increase its strength against breakage.
• Solar window film Provides extra security by being tinted and offers extra strength due to the film's material.
• Security film Transparent film is applied to the glass to increase its strength.

Internal Compartments
Many components that make up a facility must be looked at from a security point of
view. Internal partitions are used to create barriers between one area and another. These
partitions can be used to segment separate work areas, but should never be used in
protected areas that house sensitive systems and devices. Many buildings have dropped
ceilings, meaning the interior partitions do not extend to the true ceiling—only to the
dropped ceiling. An intruder can lift a ceiling panel and climb over the partition. This
example of intrusion is shown in Figure 5-4. In many situations, this would not require
forced entry, specialized tools, or much effort. (In some office buildings, this may even
be possible from a common public-access hallway.) These types of internal partitions
should not be relied upon to provide protection for sensitive areas.

Computer and Equipment Rooms
It used to be necessary to have personnel within the computer rooms for proper main-
tenance and operations. Today, most servers, routers, switches, mainframes, and other
equipment housed in computer rooms can be controlled remotely. This enables com-
puters to live in rooms that have fewer people milling around and spilling coffee. Be-
cause the computer rooms no longer have personnel sitting and working in them for
long periods, the rooms can be constructed in a manner that is efficient for equipment
instead of people.
Smaller systems can be stacked vertically to save space. They should be mounted on
racks or placed inside equipment cabinets. The wiring should be close to the equip-
ment to save on cable costs and to reduce tripping hazards.
Data centers, server rooms, and wiring closets should be located in the core areas of
a facility, near wiring distribution centers. Strict access control mechanisms and proce-
dures should be implemented for these areas. The access control mechanisms may be
smart card readers, biometric readers, or combination locks, as described in Chapter 3.
These restricted areas should have only one access door, but fire code requirements
typically dictate there must be at least two doors to most data centers and server rooms.
Only one door should be used for daily entry and exit, and the other door should be
used only in emergency situations. This second door should not be an access door,
which means people should not be able to come in through this door. It should be
locked, but should have a panic bar that will release the lock if pressed.
Figure 5-4 An intruder can lift ceiling panels and enter a secured area with little effort.

These restricted areas ideally should not be directly accessible from public areas like
stairways, corridors, loading docks, elevators, and restrooms. This helps ensure that the
people who are by the doors to secured areas have a specific purpose for being there,
versus being on their way to the restroom or standing around in a common area gos-
siping about the CEO.
Because data centers usually hold expensive equipment and the company’s critical
data, their protection should be thoroughly thought out before implementation. Data
centers should not be located on the top floors, because it would be more difficult for
an emergency crew to reach them in a timely fashion in case of a fire. By the same token,
data centers should not be located in basements where flooding can affect the systems.
And if a facility is in a hilly area, the data center should be located well above ground
level. Data centers should be located at the core of a building so if there is some type of
attack on the building, the exterior walls and structures will absorb the hit and hope-
fully the data center will not be damaged.
Which access controls and security measures should be implemented for the data
center depends upon the sensitivity of the data being processed and the protection
level required. Alarms on the doors to the data processing center should be activated
during off-hours, and there should be procedures dictating how to carry out access
control during normal business hours, after hours, and during emergencies. If a combi-
nation lock is used to enter the data processing center, the combination should be
changed at least every six months and also after an employee who knows the code
leaves the company.
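The combination-rotation rule above (change at least every six months, and immediately when anyone who knows the code leaves) can be expressed as a small helper. The six-month window comes from the text; the function itself is an illustrative sketch, not a product feature.

```python
# Is a data center door combination due for a change? Per the policy above:
# rotate at least every six months, and immediately when an employee who
# knows the code leaves the company.
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=182)  # approximation of six months

def combination_change_due(last_changed: date, today: date,
                           knower_departed: bool) -> bool:
    return knower_departed or (today - last_changed) >= SIX_MONTHS

print(combination_change_due(date(2012, 1, 1), date(2012, 8, 1), False))  # True (over six months)
print(combination_change_due(date(2012, 1, 1), date(2012, 3, 1), True))   # True (employee left)
```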
The various controls discussed next are shown in Figure 5-5. The team responsible
for designing a new data center (or evaluating a current data center) should understand
all the controls shown in Figure 5-5 and be able to choose what is needed.
The data processing center should be constructed as one room rather than different
individual rooms. The room should be away from any of the building’s water pipes in
case a break in a line causes a flood. The vents and ducts from the HVAC system should
be protected with some type of barrier bars and should be too small for anyone to crawl
through and gain access to the center. The data center must have positive air pressure,
so no contaminants can be sucked into the room and into the computers’ fans.
In many data centers, an emergency Off switch is situated next to the door so some-
one can turn off the power if necessary. If a fire occurs, this emergency Off switch should
be flipped as employees are leaving the room and before the fire suppression agent is
released. This is critical if the suppression agent is water, because water and electricity are
not a good match—especially during a fire. A company can install a fire suppression
system that is tied into this switch, so when a fire is detected, the electricity is automatically
shut off right before the suppression material is released. (The suppression
material could be a type of gas, such as halon or FM-200. Gases are usually a better choice for
environments filled with computers. We will cover different suppression agents in the
“Fire Prevention, Detection, and Suppression” section later in the chapter.)
Portable fire extinguishers should be located close to the equipment and should be
easy to see and access. Smoke detectors or fire sensors should be implemented, and
water sensors should be placed under the raised floors. Since most of the wiring and
cables run under the raised floors, it is important that water does not get to these places
and, if it does, that an alarm sound if water is detected.

NOTE If there is any type of water damage in a data center or facility, mold
and mildew could easily become a problem. Instead of allowing things to
“dry out on their own,” many times it is better to use industry-strength
dehumidifiers, water movers, and sanitizers to ensure secondary damage does
not occur.
Water can cause extensive damage to equipment, flooring, walls, computers, and
facility foundations. It is important that an organization be able to detect leaks and
unwanted water. The detectors should be under raised floors and on dropped ceilings
(to detect leaks from the floor above it). The location of the detectors should be docu-
mented and their position marked for easy access. As smoke and fire detectors should
be tied to an alarm system, so should water detectors. The alarms usually just alert the
necessary staff members and not everyone in the building. The staff members who are
responsible for following up when an alarm sounds should be trained properly on how
to reduce any potential water damage. Before any poking around to see where water is
or is not pooling in places it does not belong, the electricity for that particular zone of
the building should be temporarily turned off.
Figure 5-5 A data center should have many physical security controls.

Water detectors can help prevent damage to
• Equipment
• Flooring
• Walls
• Computers
• Facility foundations
Location of water detectors should be
• Under raised floors
• On dropped ceilings
It is important to maintain the proper temperature and humidity levels within data
centers, which is why an HVAC system should be implemented specifically for this
room. Too high a temperature can cause components to overheat and turn off; too low
a temperature can cause the components to work more slowly. If the humidity is high,
then corrosion of the computer parts can take place; if humidity is low, then static elec-
tricity can be introduced. Because of this, the data center must have its own temperature
and humidity controls, which are separate from the rest of the building.
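The temperature and humidity risks just described can be captured in a simple monitoring check. The numeric bands below are assumed example thresholds for illustration, not values from this text; the failure descriptions mirror the paragraph above.

```python
# Illustrative data center environmental check: flag readings outside
# acceptable bands. Threshold bands are assumed examples.
def environment_alerts(temp_c: float, relative_humidity_pct: float,
                       temp_band=(18.0, 27.0), humidity_band=(40.0, 60.0)):
    alerts = []
    if temp_c > temp_band[1]:
        alerts.append("high temperature: components may overheat and shut off")
    elif temp_c < temp_band[0]:
        alerts.append("low temperature: components may work more slowly")
    if relative_humidity_pct > humidity_band[1]:
        alerts.append("high humidity: risk of corrosion")
    elif relative_humidity_pct < humidity_band[0]:
        alerts.append("low humidity: risk of static electricity")
    return alerts

print(environment_alerts(30.0, 35.0))  # flags both high temperature and low humidity
```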
It is best if the data center is on a different electrical system than the rest of the
building, if possible. Thus, if anything negatively affects the main building’s power, it
will not carry over and affect the center. The data center may require redundant power
supplies, which means two or more feeders coming in from two or more electrical sub-
stations. The idea is that if one of the power company’s substations were to go down,
the company would still be able to receive electricity from the other feeder. But just
because a company has two or more electrical feeders coming into its facility does not
mean true redundancy is automatically in place. Many companies have paid for two
feeders to come into their building, only to find out both feeders were coming from the
same substation! This defeats the whole purpose of having two feeders in the first place.
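The substation pitfall above suggests a simple acceptance check: two feeders only provide true redundancy if they originate from different substations. The feeder records here are hypothetical.

```python
# True redundancy requires at least two feeders AND at least two distinct
# source substations; two feeders from the same substation share its fate.
def truly_redundant(feeders) -> bool:
    substations = {f["substation"] for f in feeders}
    return len(feeders) >= 2 and len(substations) >= 2

print(truly_redundant([{"id": "A", "substation": "SS-1"},
                       {"id": "B", "substation": "SS-1"}]))  # False: same substation
print(truly_redundant([{"id": "A", "substation": "SS-1"},
                       {"id": "B", "substation": "SS-2"}]))  # True
```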
Data centers need to have their own backup power supplies, either an uninterrupt-
ed power supply (UPS) or generators. The different types of backup power supplies are
discussed later in the chapter, but it is important to know at this point that the power
backup must be able to support the load of the data center.
Many companies choose to use large glass panes for the walls of the data center so
personnel within the center can be viewed at all times. This glass should be shatter-
resistant since the window is acting as an exterior wall. The center’s doors should not
be hollow, but rather secure solid-core doors. Doors should open out rather than in so
they don’t damage equipment when opened. Best practices indicate that the door
frame should be fixed to adjoining wall studs and that there should be at least three
hinges per door. These characteristics would make the doors much more difficult to
break down.

Protecting Assets
The main threats that physical security components combat are theft, interruptions to
services, physical damage, compromised system and environment integrity, and unau-
thorized access.
Real loss is determined by the cost to replace the stolen items, the negative effect on
productivity, the negative effect on reputation and customer confidence, fees for con-
sultants that may need to be brought in, and the cost to restore lost data and produc-
tion levels. Many times, companies just perform an inventory of their hardware and
provide value estimates that are plugged into risk analysis to determine what the cost to
the company would be if the equipment were stolen or destroyed. However, the infor-
mation held within the equipment may be much more valuable than the equipment
itself, and proper recovery mechanisms and procedures also need to be plugged into
the risk assessment for a more realistic and fair assessment of cost.
Laptop theft is increasing at incredible rates each year. Laptops have been stolen for years, but in the past they were taken mainly so the hardware could be resold. Now laptops are also stolen for the sensitive data they hold, which can be used in identity theft crimes. It is important to understand that this is a rampant, and potentially very dangerous, crime. Many people claim, "My whole life is on my laptop" (or on their smartphone). Since employees use laptops as they travel, they may carry extremely sensitive company or customer data that can easily fall into the wrong hands. The following list provides many of the protection mechanisms that can be used to protect laptops and the data they hold:
• Inventory all laptops, including serial numbers, so they can be properly
identified if recovered.
• Harden the operating system.
• Password-protect the BIOS.
• Register all laptops with the vendor, and file a report when one is stolen. If a stolen laptop is later sent in for repairs, the vendor can then flag it as stolen.
• Do not check a laptop as luggage when flying.
• Never leave a laptop unattended, and carry it in a nondescript carrying case.
• Engrave the laptop with a symbol or number for proper identification.
• Use a slot lock with a cable to connect a laptop to a stationary object.
• Back up the data from the laptop and store it on a stationary PC or backup
media.
• Use specialized safes if storing laptops in vehicles.
• Encrypt all sensitive data.
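The inventory step in particular lends itself to a simple record structure. As a rough sketch (the fields and sample values here are hypothetical, not from the text), a serial-number lookup for identifying recovered laptops might look like:

```python
from dataclasses import dataclass

@dataclass
class LaptopRecord:
    serial_number: str
    asset_tag: str        # the engraved identifier
    assigned_to: str
    disk_encrypted: bool  # all sensitive data should be encrypted

# Hypothetical inventory, keyed by serial number
inventory = {
    "5CD1234XYZ": LaptopRecord("5CD1234XYZ", "ACME-0042", "j.smith", True),
}

def identify_recovered(serial: str):
    """Look up a recovered laptop by serial number so it can be
    properly identified and returned to its owner."""
    return inventory.get(serial)

rec = identify_recovered("5CD1234XYZ")
print(rec.assigned_to if rec else "not in inventory")
```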

CISSP All-in-One Exam Guide
458
Tracing software can be installed so that your laptop can “phone home” if it is taken
from you. Several products offer this tracing capability. Once installed and configured,
the software periodically sends a signal to a tracking center. If you report that your
laptop has been stolen, the vendor of this software will work with service providers and
law enforcement to track down and return your laptop.
A company may have need for a safe. Safes are commonly used to store backup data
tapes, original contracts, or other types of valuables. The safe should be penetration-
resistant and provide fire protection. The types of safes an organization can choose
from are
• Wall safe: Embedded into the wall and easily hidden
• Floor safe: Embedded into the floor and easily hidden
• Chests: Stand-alone safes
• Depositories: Safes with slots, which allow the valuables to be easily slipped in
• Vaults: Safes that are large enough to provide walk-in access
If a safe has a combination lock, the combination should be changed periodically, and only a small subset of people should have access to the combination or key. The safe should be in a
visible location, so anyone who is interacting with the safe can be seen. The goal is to
uncover any unauthorized access attempts. Some safes have passive or thermal relock-
ing functionality. If the safe has a passive relocking function, it can detect when some-
one attempts to tamper with it, in which case extra internal bolts will fall into place to
ensure it cannot be compromised. If a safe has a thermal relocking function, when a
certain temperature is met (possibly from drilling), an extra lock is implemented to
ensure the valuables are properly protected.
Internal Support Systems
This place has no air conditioning or water. Who would want to break into it anyway?
Having a fortified facility with secure compartmentalized areas and protected assets
is nice, but also having lights, air conditioning, and water within this facility is even
better. Physical security needs to address these support services, because their malfunc-
tion or disruption could negatively affect the organization in many ways.
Although there are many incidents of various power losses here and there for differ-
ent reasons (storms, hurricanes, California nearly running out of electricity), one of the
most notable power losses took place in August 2003, when eight East Coast states and
portions of Canada lost power for several days. There were rumors about a worm caus-
ing this disruption, but the official report blamed it on a software bug in GE Energy’s
XA/21 system. This disaster left over 50 million people without power for days, caused
four nuclear power plants to be shut down, and put a lot of companies in insecure and
chaotic conditions. Security professionals need to be able to help organizations handle
both the small bumps in the road, such as power surges or sags, and the gigantic sink-
holes, such as what happened in the United States and Canada on August 14, 2003.

Electric Power
We don’t need no stinkin’ power supply. Just rub these two sticks together.
Because computing and communication have become so essential in almost every
aspect of life, power failure is a much more devastating event than it was 10 to 15 years
ago. The need for good plans to fall back on is crucial to ensure that a business will not
be drastically affected by storms, high winds, hardware failure, lightning, or other
events that can stop or disrupt power supplies. A continuous supply of electricity as-
sures the availability of company resources; thus, a security professional must be famil-
iar with the threats to electric power and the corresponding countermeasures.
Several types of power backup capabilities exist. Before a company chooses one, it
should calculate the total cost of anticipated downtime and its effects. This information
can be gathered from past records and other businesses in the same area on the same
power grid. The total cost per hour for backup power is derived by dividing the annual
expenditures by the annual standard hours of use.
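That calculation can be sketched in a few lines (the dollar figures below are made-up assumptions for illustration):

```python
def backup_power_cost_per_hour(annual_expenditure, annual_hours_of_use):
    """Derive the per-hour cost of backup power by dividing the annual
    expenditures by the annual standard hours of use."""
    if annual_hours_of_use <= 0:
        raise ValueError("hours of use must be positive")
    return annual_expenditure / annual_hours_of_use

# Hypothetical figures: $26,280 per year, running 8,760 hours (24x7)
print(backup_power_cost_per_hour(26_280, 8_760))  # 3.0 dollars per hour
```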
Large and small issues can cause power failure or fluctuations. The effects manifest
in variations of voltage that can last a millisecond to days. A company can pay to have
two different supplies of power to reduce its risks, but this approach can be costly.
Other, less expensive mechanisms are to have generators or UPSs in place. Some gen-
erators have sensors to detect power failure and will start automatically upon failure.
Depending on the type and size of the generator, it might provide power for hours or
days. UPSs are usually short-term solutions compared to generators.
Smart Grid
Most of our power grid today is not considered "smart." There are power plants that turn something (e.g., coal) into electricity. The electricity goes through a transmission substation, which puts the electricity on long-haul transmission lines. These lines distribute the electricity to large areas. Before the electricity gets to our home or office, it goes through a power substation and a transformer, which change the electrical current and voltage to the proper levels, and the electricity travels over power lines (usually on poles) and connects to our buildings. So our current power grid is similar to a system of rivers and streams: electricity gets to where it needs to go without much technological intelligence involved. This "dumb" system makes it hard to identify disruptions when they happen, deal with high-peak demands, use renewable energy sources, react to attacks, and deploy solutions that would make our overall energy consumption more efficient.
We are moving to smart grids, which means that much more computing software and technology is embedded into the grid to optimize and automate these functions. Some of the goals of a smart grid are self-healing, resistance to physical and cyberattacks, bidirectional communication capabilities, increased efficiency, and better integration of renewable energy sources. We want our grids to be more reliable, resilient, flexible, and efficient. While all of this is wonderful and terrific, it means that almost every component of the new power grid has to be computerized in some manner (smart meters, smart thermostats, automated control software, automated feedback loops, digital scheduling and load shifting, etc.).
The actual definition of "smart grid" is nebulous because it is hard to delineate between what falls within and outside the grid's boundaries, many different technologies are involved, and it is in an immature evolutionary stage. From a security point of view, while the whole grid will be more resilient and centrally controlled, there could now be more attack vectors because most pieces will have some type of technology embedded.
In the past our telephones were "dumb," but now they are small computers, so they are "smart." The increased functionality and intelligence open the doors for more attacks on our individual smartphones. The smart grid is similar to our advances in telephony. We can secure the core infrastructure, but it is the end points that are very difficult to secure. And while telephones are important, power grids are part of every nation's critical infrastructure.

Power Protection
Protecting power can be done in three ways: through UPSs, power line conditioners, and backup sources. UPSs use battery packs that range in size and capacity. A UPS can be online or standby. Online UPS systems use AC line voltage to charge a bank of batteries. When in use, the UPS has an inverter that changes the DC output from the batteries into the required AC form and that regulates the voltage as it powers computer devices. This conversion process is shown in Figure 5-6. Online UPS systems have the normal primary power passing through them day in and day out. They constantly provide power from their own inverters, even when utility power is working properly. Since the environment's electricity passes through this type of UPS all the time, the UPS device is able to quickly detect when a power failure takes place, and it can pick up the load after a power failure much more quickly than a standby UPS.
Standby UPS devices stay inactive until a power line fails. The system has sensors that detect a power failure, and the load is then switched to the battery pack. The switch to the battery pack is what causes the small delay before electricity is provided. So an online UPS picks up the load much more quickly than a standby UPS, but costs more, of course.
Figure 5-6 A UPS device converts DC current from its internal or external batteries to usable AC by using an inverter.

Backup power supplies are necessary when there is a power failure and the outage
will last longer than a UPS can last. Backup supplies can be a redundant line from an-
other electrical substation or from a motor generator, and can be used to supply main
power or to charge the batteries in a UPS system.
A company should identify critical systems that need protection from interrupted
power supplies, and then estimate how long secondary power would be needed and
how much power is required per device. Some UPS devices provide just enough power
to allow systems to shut down gracefully, whereas others allow the systems to run for a
longer period. A company needs to determine whether systems should only have a big
enough power supply to allow them to shut down properly or whether they need a
system that keeps them up and running so critical operations remain available.
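A rough sizing sketch might look like the following. The battery capacity, load, and efficiency figures are hypothetical assumptions, not values from the text:

```python
def ups_runtime_minutes(battery_wh, load_watts, inverter_efficiency=0.9):
    """Estimate how long a UPS can carry a given load.
    battery_wh: usable battery capacity in watt-hours.
    load_watts: total draw of the protected devices."""
    usable_wh = battery_wh * inverter_efficiency
    return usable_wh / load_watts * 60

def sizing_decision(runtime_needed_min, battery_wh, load_watts):
    """Decide whether a UPS only covers a graceful shutdown or can keep
    critical operations running for the required window."""
    runtime = ups_runtime_minutes(battery_wh, load_watts)
    return "keeps running" if runtime >= runtime_needed_min else "shutdown only"

# Hypothetical: a 1,440 Wh battery carrying a 1,200 W load lasts about 65 minutes
print(sizing_decision(30, 1_440, 1_200))
```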
Just having a generator in the closet should not give a company that warm fuzzy
feeling of protection. An alternate power source should be tested periodically to make
sure it works, and to the extent expected. It is never good to find yourself in an emer-
gency only to discover the generator does not work, or someone forgot to buy the gas
necessary to keep the thing running.
Electric Power Issues
Electric power enables us to be productive and functional in many different ways, but
if it is not installed, monitored, and respected properly, it can do us great harm.
When clean power is being provided, the power supply contains no interference or
voltage fluctuation. The possible types of interference (line noise) are electromagnetic
interference (EMI) and radio frequency interference (RFI), which can cause disturbance
to the flow of electric power while it travels across a power line, as shown in Figure 5-7.
Figure 5-7 RFI and EMI can cause line noise on power lines.
EMI can be created by the differences among three wires (hot, neutral, and ground) and the magnetic field they create. Lightning and electrical motors can induce EMI, which could then interrupt the proper flow of electrical current as it travels over wires to, from, and within buildings. RFI can be caused by anything that creates radio waves. Fluorescent lighting is one of the main causes of RFI within buildings today, so does that mean
we need to rip out all the fluorescent lighting? That’s one choice, but we could also just
use shielded cabling where fluorescent lighting could cause a problem. If you take a
break from your reading, climb up into your office’s dropped ceiling, and look around,
you would probably see wires bundled and tied up to the true ceiling. If your office is
using fluorescent lighting, the power and data lines should not be running over, or on
top of, the fluorescent lights. This is because the radio frequencies being given off can
interfere with the data or power current as it travels through these wires. Now, get back
down from the ceiling. We have work to do.
Interference interrupts the flow of an electrical current, and fluctuations can actu-
ally deliver a different level of voltage than what was expected. Each fluctuation can be
damaging to devices and people. The following explains the different types of voltage
fluctuations possible with electric power:
• Power excess
   • Spike: Momentary high voltage
   • Surge: Prolonged high voltage
• Power loss
   • Fault: Momentary power outage
   • Blackout: Prolonged, complete loss of electric power
• Power degradation
   • Sag/dip: Momentary low-voltage condition, from one cycle to a few seconds
   • Brownout: Prolonged power supply that is below normal voltage
   • In-rush current: Initial surge of current required to start a load
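These categories can be expressed as a small classifier. The 120-volt nominal level and the one-second cutoff between "momentary" and "prolonged" are illustrative assumptions; real monitoring equipment uses more precise definitions:

```python
NOMINAL_VOLTS = 120.0     # US single-phase nominal voltage (assumption)
MOMENTARY_SECONDS = 1.0   # rough cutoff between "momentary" and "prolonged"

def classify_power_event(volts, duration_s):
    """Classify a voltage anomaly using the excess/loss/degradation
    categories: spike, surge, fault, blackout, sag, or brownout."""
    momentary = duration_s < MOMENTARY_SECONDS
    if volts == 0:
        return "fault" if momentary else "blackout"      # power loss
    if volts > NOMINAL_VOLTS:
        return "spike" if momentary else "surge"         # power excess
    if volts < NOMINAL_VOLTS:
        return "sag" if momentary else "brownout"        # power degradation
    return "clean"

print(classify_power_event(180, 0.01))  # spike
print(classify_power_event(0, 7200))    # blackout
print(classify_power_event(95, 600))    # brownout
```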
Electric Power Definitions
The following list summarizes many of the electric power concepts discussed so far:
• Ground: The pathway to the earth to enable excessive voltage to dissipate
• Noise: Electromagnetic or frequency interference that disrupts the power flow and can cause fluctuations
• Transient noise: A short duration of power line disruption
• Clean power: Electrical current that does not fluctuate
• EMI: Electromagnetic interference
• RFI: Radio frequency interference

When an electrical device is turned on, it can draw a large amount of current, which
is referred to as in-rush current. If the device sucks up enough current, it can cause a sag
in the available power for surrounding devices. This could negatively affect their perfor-
mance. As stated earlier, it is a good idea to have the data processing center and devices
on a different electrical wiring segment from that of the rest of the facility, if possible,
so the devices will not be affected by these issues. For example, if you are in a building
or house without efficient wiring and you turn on a vacuum cleaner or microwave, you
may see the lights quickly dim because of this in-rush current. The drain on the power
supply caused by in-rush currents still happens in other environments when these types
of electrical devices are used—you just might not be able to see the effects. Any type of
device that would cause such a dramatic in-rush current should not be used on the
same electrical segment as data processing systems.
Surge A surge is a prolonged rise in voltage from a power source, and it is one of the most common power problems. Its source can be a strong lightning strike, a power plant going online or offline, a shift in the commercial utility power grid, or electrical equipment within a business starting and stopping. Surges can cause a lot of damage very quickly and are controlled with surge protectors, which use a device called a metal oxide varistor (MOV) to shunt the excess voltage to ground when a surge occurs. Most computers have a built-in surge protector in their power supplies, but these are baby surge protectors and cannot provide protection against the damage that larger surges (say, from storms) can cause. So, you need to ensure all devices are properly plugged into larger surge protectors, whose only job is to absorb any extra current before it is passed on to electrical devices.
Blackout A blackout is when the voltage drops to zero. This can be caused by light-
ning, a car taking out a power line, storms, or failure to pay the power bill. It can last for
seconds or days. This is when a backup power source is required for business continuity.
Brownout When power companies are experiencing high demand, they frequently reduce the voltage in an electrical grid, which is referred to as a brownout. Constant-voltage transformers can be used to regulate this fluctuation of power: they accept a range of input voltages and release only the expected 120 volts of alternating current to devices.
Noise Noise on power lines can be a result of lightning, the use of fluorescent light-
ing, a transformer being hit by an automobile, or other environmental or human ac-
tivities. Frequency ranges overlap, which can affect electrical device operations. Light-
ning sometimes produces voltage spikes on communications and power lines, which
can destroy equipment or alter data being transmitted. When generators are switched
on because power loads have increased, they, too, can cause voltage spikes that can be
harmful and disruptive. Storms and intense cold or heat can put a heavier load on gen-
erators and cause a drop in voltage. Each of these instances is an example of how nor-
mal environmental behaviors can affect power voltage, eventually adversely affecting
equipment, communications, or the transmission of data.

Because these and other occurrences are common, mechanisms should be in place
to detect unwanted power fluctuations and protect the integrity of your data processing
environment. Voltage regulators and line conditioners can be used to ensure a clean and
smooth distribution of power. The primary power runs through a regulator or condi-
tioner. They have the capability to absorb extra current if there is a spike, and to store
energy to add current to the line if there is a sag. The goal is to keep the current flowing
at a nice, steady level so neither motherboard components nor employees get fried.
Many data centers are constructed to take power-sensitive equipment into consider-
ation. Because surges, sags, brownouts, blackouts, and voltage spikes frequently cause
data corruption, the centers are built to provide a high level of protection against these
events. Other types of environments usually are not built with these things in mind and
do not provide this level of protection. Offices usually have different types of devices
connected and plugged into the same outlets. Outlet strips are plugged into outlet
strips, which are connected to extension cords. This causes more line noise and a reduc-
tion of voltage to each device. Figure 5-8 depicts an environment that can cause line
noise, voltage problems, and possibly a fire hazard.
Preventive Measures and Good Practices
Don’t stand in a pool of water with a live electrical wire.
Response: Hold on, I need to write that one down.
When dealing with electric power issues, the following items can help protect de-
vices and the environment:
• Employ surge protectors to protect from excessive current.
• Shut down devices in an orderly fashion to help avoid data loss or damage to
devices due to voltage changes.
• Employ power line monitors to detect frequency and voltage amplitude changes.
• Use regulators to keep voltage steady and the power clean.
• Protect distribution panels, master circuit breakers, and transformer cables
with access controls.
• Provide protection from magnetic induction through shielded lines.
• Use shielded cabling for long cable runs.
• Do not run data or power lines directly over fluorescent lights.
• Use three-prong connections or adapters if using two-prong connections.
• Do not plug outlet strips and extension cords into each other.

Figure 5-8 This configuration can cause a lot of line noise and poses a fire hazard.
Environmental Issues
Improper environmental controls can cause damage to services, hardware, and lives.
Interruption of some services can cause unpredicted and unfortunate results. Power,
heating, ventilation, air-conditioning, and air-quality controls can be complex and
contain many variables. They all need to be operating properly and to be monitored
regularly.
During facility construction, the physical security team must make certain that water,
steam, and gas lines have proper shutoff valves, as shown in Figure 5-9, and positive
drains, which means their contents flow out instead of in. If there is ever a break in a
main water pipe, the valve to shut off water flow must be readily accessible. Similarly, in
case of fire in a building, the valve to shut off the gas lines must be readily accessible. In
case of a flood, a company wants to ensure that material cannot travel up through the
water pipes and into its water supply or facility. Facility, operations, and security person-
nel should know where these shutoff valves are, and there should be strict procedures to
follow in these types of emergencies. This will help reduce the potential damage.
Figure 5-9 Water, steam, and gas lines should have emergency shutoff valves.
Most electronic equipment must operate in a climate-controlled atmosphere. Al-
though it is important to keep the atmosphere at a proper working temperature, it is
important to understand that the components within the equipment can suffer from
overheating even in a climate-controlled atmosphere if the internal computer fans are
not cleaned or are blocked. When devices are overheated, the components can expand
and contract, which causes components to change their electronic characteristics, re-
ducing their effectiveness or damaging the system overall.
NOTE The climate issues involved with a data processing environment are
why it needs its own separate HVAC system. Maintenance procedures should
be documented and properly followed. HVAC activities should be recorded
and reviewed annually.
Maintaining appropriate temperature and humidity is important in any facility, es-
pecially facilities with computer systems. Improper levels of either can cause damage to
computers and electrical devices. High humidity can cause corrosion, and low humidity
can cause excessive static electricity. This static electricity can short out devices, cause the
loss of information, or provide amusing entertainment for unsuspecting employees.
Lower temperatures can cause mechanisms to slow or stop, and higher tempera-
tures can cause devices to use too much fan power and eventually shut down. Table 5-1
lists different components and their corresponding damaging temperature levels.
In drier climates, or during the winter, the air contains less moisture, which can
cause static electricity when two dissimilar objects touch each other. This electricity usu-
ally travels through the body and produces a spark from a person’s finger that can re-
lease several thousand volts. This can be more damaging than you would think.
Usually the charge is released on a system casing and is of no concern, but sometimes
it is released directly to an internal computer component and causes damage. People
who work on the internal parts of a computer usually wear antistatic armbands to re-
duce the chance of this happening.
In more humid climates, or during the summer, more humidity is in the air, which
can also affect components. Particles of silver can begin to move away from connectors
onto copper circuits, which cement the connectors into their sockets. This can adverse-
ly affect the electrical efficiency of the connection. A hygrometer is usually used to mon-
itor humidity. It can be manually read, or an automatic alarm can be set up to go off if
the humidity passes a set threshold.
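A hygrometer-driven alarm of the kind described can be sketched as follows; the 40-60 percent band is a commonly cited target range for data centers, used here as an assumption:

```python
def humidity_alarm(relative_humidity, low=40.0, high=60.0):
    """Flag relative humidity that has passed a set threshold.
    Low humidity risks static electricity; high humidity risks corrosion."""
    if relative_humidity < low:
        return "too dry: static electricity risk"
    if relative_humidity > high:
        return "too humid: corrosion risk"
    return "ok"

# Simulated hygrometer readings
for reading in (25, 50, 80):
    print(reading, humidity_alarm(reading))
```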
Table 5-1 Components Affected by Specific Temperatures

Material or Component                        Damaging Temperature
Computer systems and peripheral devices      175°F
Magnetic storage devices                     100°F
Paper products                               350°F
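The thresholds in Table 5-1 can drive a simple monitoring check. This is only an illustrative sketch; the dictionary keys are shortened labels for the table's rows:

```python
# Damaging temperatures from Table 5-1 (degrees Fahrenheit)
DAMAGE_THRESHOLDS_F = {
    "computer systems and peripherals": 175,
    "magnetic storage devices": 100,
    "paper products": 350,
}

def at_risk(current_temp_f):
    """Return the materials whose damaging temperature the current
    reading meets or exceeds."""
    return [material for material, limit in DAMAGE_THRESHOLDS_F.items()
            if current_temp_f >= limit]

print(at_risk(120))  # magnetic media are the first to be at risk
```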

Ventilation
Can I smoke in the server room?
Response: Security!
Ventilation has several requirements that must be met to ensure a safe and comfort-
able environment. A closed-loop recirculating air-conditioning system should be in-
stalled to maintain air quality. “Closed-loop” means the air within the building is
reused after it has been properly filtered, instead of bringing outside air in. Positive
pressurization and ventilation should also be implemented to control contamination.
Positive pressurization means that when an employee opens a door, the air goes out,
and outside air does not come in. If a facility were on fire, you would want the smoke
to go out the doors instead of being pushed back in when people are fleeing.
The assessment team needs to understand the various types of contaminants, how
they can enter an environment, the damage they could cause, and the steps to ensure
that a facility is protected from dangerous substances or high levels of average con-
taminants. Airborne material and particle concentrations must be monitored for inap-
propriate levels. Dust can affect a device’s functionality by clogging up the fan that is
supposed to be cooling the device. Excessive concentrations of certain gases can acceler-
ate corrosion and cause performance issues or failure of electronic devices. Although
most disk drives are hermetically sealed, other storage devices can be affected by air-
borne contaminants. Air-quality devices and ventilation systems deal with these issues.
Preventive Steps Against Static Electricity
The following are some simple measures to prevent static electricity:
• Use antistatic flooring in data processing areas.
• Ensure proper humidity.
• Have proper grounding for wiring and outlets.
• Don't have carpeting in data centers, or use static-free carpeting if necessary.
• Wear antistatic bands when working inside computer systems.

Fire Prevention, Detection, and Suppression
We can either try to prevent fires or have one really expensive weenie-roast.
The subject of physical security would not be complete without a discussion on fire safety. A company must meet national and local standards pertaining to fire prevention, detection, and suppression methods. Fire prevention includes training employees on how to react properly when faced with a fire, supplying the right equipment and ensuring it is in working order, making sure there is an easily reachable fire suppression supply, and storing combustible elements in the proper manner. Fire prevention may also include using proper noncombustible construction materials and designing the facility with containment measures that provide barriers to minimize the spread of fire and smoke. These thermal or fire barriers can be made of different types of noncombustible construction material with a fire-resistant coating applied.
Fire detection response systems come in many different forms. Manual detection
response systems are the red pull boxes you see on many building walls. Automatic
detection response systems have sensors that react when they detect the presence of fire
or smoke. We will review different types of detection systems in the next section.
Fire suppression is the use of a suppression agent to put out a fire. Fire suppression
can take place manually through handheld portable extinguishers, or through auto-
mated systems such as water sprinkler systems, or halon or CO2 discharge systems. The
upcoming “Fire Suppression” section reviews the different types of suppression agents
and where they are best used. Automatic sprinkler systems are widely used and highly
effective in protecting buildings and their contents. When deciding upon the type of
fire suppression systems to install, a company needs to evaluate many factors, includ-
ing an estimate of the occurrence rate of a possible fire, the amount of damage that
could result, the types of fires that would most likely take place, and the types of sup-
pression systems to choose from.
Fire protection processes should consist of implementing early smoke or fire detec-
tion devices and shutting down systems until the source of the fire is eliminated. A
warning signal may be sounded by a smoke or fire detector before the suppression agent
is released, so that if it is a false alarm or a small fire that can be handled without the
automated suppression system, someone has time to shut down the suppression system.
Types of Fire Detection
Fires present a dangerous security threat because they can damage hardware and data
and risk human life. Smoke, high temperatures, and corrosive gases from a fire can
cause devastating results. It is important to evaluate the fire safety measurements of a
building and the different sections within it.
A fire begins because something ignited it. Ignition sources can be failure of an
electrical device, improper storage of combustible materials, carelessly discarded ciga-
rettes, malfunctioning heating devices, and arson. A fire needs fuel (paper, wood, liq-
uid, and so on) and oxygen to continue to burn and grow. The more fuel per square
foot, the more intense the fire will become. A facility should be built, maintained, and
operated to minimize the accumulation of fuels that can feed fires.
There are four classes (A, B, C, and D) of fire, which are explained in the “Fire Sup-
pression” section. You need to know the differences between the types of fire so you
know how to properly extinguish each type. Portable fire extinguishers have markings
that indicate what type of fire they should be used on, as illustrated in Figure 5-10. The
markings denote what types of chemicals are within the canisters and what types of
fires they have been approved to be used on. Portable extinguishers should be located
within 50 feet of any electrical equipment, and also near exits. The extinguishers should
be marked clearly, with an unobstructed view. They should be easily reachable and
operational by employees, and inspected quarterly.
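The class-to-agent pairings covered in the upcoming "Fire Suppression" section can be summarized in a small lookup table. The agent choices below reflect the commonly taught pairings and are illustrative, not a substitute for the markings on an approved extinguisher:

```python
# Standard fire classes, their fuels, and typical suppression agents
FIRE_CLASSES = {
    "A": ("common combustibles (wood, paper)", "water or foam"),
    "B": ("flammable liquids", "CO2 or foam"),
    "C": ("electrical equipment", "CO2 or dry chemical (nonconductive)"),
    "D": ("combustible metals", "dry powder"),
}

def pick_extinguisher(fire_class):
    """Return the suppression approach for a given fire class."""
    fuel, agent = FIRE_CLASSES[fire_class.upper()]
    return f"Class {fire_class.upper()} ({fuel}): use {agent}"

print(pick_extinguisher("c"))
```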

A lot of computer systems are made of components that are not combustible but
that will melt or char if overheated. Most computer circuits use only two to five volts of
direct current, which usually cannot start a fire. If a fire does happen in a computer
room, it will most likely be an electrical fire caused by overheating of wire insulation or
by overheating components that ignite surrounding plastics. Prolonged smoke usually
occurs before combustion.
Several types of detectors are available, each of which works in a different way. The
detector can be activated by smoke or heat.
Fire Resistant Ratings
Fire resistant ratings are the result of tests carried out in laboratories using spe-
cific configurations of environmental settings. The American Society for Testing
and Materials (ASTM) is the organization that creates the standards that dictate
how these tests should be performed and how to properly interpret the test re-
sults. ASTM accredited testing centers carry out the evaluations in accordance
with these standards and assign fire resistant ratings that are then used in federal
and state fire codes. The tests evaluate the fire resistance of different types of ma-
terials in various environmental configurations. Fire resistance represents the
ability of a laboratory-constructed assembly to contain a fire for a specific period
of time. For example, a 5/8-inch-thick drywall sheet installed on each side of a
wood stud provides a one-hour rating. If the thickness of this drywall is doubled,
then this would be given a two-hour rating. The rating system is used to classify
different building components.
Figure 5-10 Portable extinguishers are marked to indicate what type of fire they should be used on.

CISSP All-in-One Exam Guide
470
Smoke Activated Smoke-activated detectors are good early-warning devices.
They can be used to sound a warning alarm before the suppression system activates. A
photoelectric device, also referred to as an optical detector, detects the variation in light
intensity. The detector produces a beam of light across a protected area, and if the beam
is obstructed, the alarm sounds. Figure 5-11 illustrates how a photoelectric device works.
Another type of photoelectric device samples the surrounding air by drawing air
into a pipe. If the light source is obscured, the alarm will sound.
Heat Activated Heat-activated detectors can be configured to sound an alarm ei-
ther when a predefined temperature (fixed temperature) is reached or when the tem-
perature increases over a period of time (rate-of-rise). Rate-of-rise temperature sensors
usually provide a quicker warning than fixed-temperature sensors because they are
more sensitive, but they can also cause more false alarms. The sensors can either be
spaced uniformly throughout a facility, or implemented in a line type of installation,
which is operated by a heat-sensitive cable.
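The difference between the two sensor types above can be sketched in a few lines of code. This is purely illustrative; the threshold and rate values below are invented for the example, not taken from any fire code or standard:

```python
# Hypothetical sketch of the two heat-activated alarm behaviors:
# a fixed set point versus a rate-of-rise limit.

FIXED_THRESHOLD_F = 135.0    # assumed fixed-temperature set point
RISE_LIMIT_F_PER_MIN = 15.0  # assumed rate-of-rise limit

def fixed_temperature_alarm(temp_f):
    """Alarm only once the predefined temperature is reached."""
    return temp_f >= FIXED_THRESHOLD_F

def rate_of_rise_alarm(prev_temp_f, temp_f, minutes_elapsed):
    """Alarm when temperature climbs faster than the configured rate."""
    rate = (temp_f - prev_temp_f) / minutes_elapsed
    return rate >= RISE_LIMIT_F_PER_MIN

# A fast 25-degree rise in one minute trips the rate-of-rise sensor first,
# even though the room is still well below the fixed set point.
print(rate_of_rise_alarm(75.0, 100.0, 1.0))  # True
print(fixed_temperature_alarm(100.0))        # False
```

This also shows why rate-of-rise sensors generate more false alarms: a harmless but rapid temperature swing (say, a heater kicking on) can satisfy the rate condition without ever approaching the fixed threshold.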
It is not enough to have these fire and smoke detectors installed in a facility; they
must be installed in the right places. Detectors should be installed both on and above
suspended ceilings and raised floors, because companies run many types of wires in
both places that could start an electrical fire. No one would know about the fire until it
broke through the floor or dropped ceiling if detectors were not placed in these areas.
Detectors should also be located in enclosures and air ducts, because smoke can gather
in these areas before entering other spaces. It is important that people are alerted about
a fire as quickly as possible so damage may be reduced, fire suppression activities may
start quickly, and lives may be saved. Figure 5-12 illustrates the proper placement of
smoke detectors.
Figure 5-11 A photoelectric device uses a light emitter and a receiver.

Fire Suppression
How about if I just spit on the fire?
Response: I’m sure that will work just fine.
It is important to know the different types of fires and what should be done to
properly suppress them. Each fire type has a rating that indicates what materials are
burning. Table 5-2 shows the four types of fire and their suppression methods, which
all employees should know.
Figure 5-12 Smoke detectors should be located above suspended ceilings, below raised floors, and in air vents.
Automatic Dial-Up Alarm
Fire detection systems can be configured to call the local fire station, and possibly
the police station, to report a detected fire. The system plays a prerecorded mes-
sage that gives the necessary information so officials can properly prepare for the
stated emergency and arrive at the right location. A recording of someone scream-
ing “We are all melting” would not be helpful to fire officials.

You can suppress a fire in several ways, all of which require that certain precautions
be taken. In many buildings, suppression agents located in different areas are designed
to initiate after a specific trigger has been set off. Each agent has a zone of coverage,
meaning an area that the suppression device is responsible for. If a fire ignites within
a certain zone, it is the responsibility of that zone's suppression device to initiate
and then suppress the fire. Different types of suppression agents available include water, halon,
foams, CO2, and dry powders. CO2 is good for putting out fires but bad for many types
of life forms. If an organization uses CO2, the suppression-releasing device should have
a delay mechanism within it that makes sure the agent does not start applying CO2 to
the area until after an audible alarm has sounded and people have been given time to
evacuate. CO2 is a colorless, odorless substance that is potentially lethal because it re-
moves oxygen from the air. Gas masks do not provide protection against CO2. This type
of fire suppression mechanism is best used in unattended facilities and areas.
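The delayed-release behavior described above can be sketched as a simple decision function. The 30-second evacuation delay and the abort flag below are invented for illustration; real systems follow the timings mandated by their manufacturer and applicable fire codes:

```python
# Illustrative sketch only: CO2 release is held until an audible alarm has
# sounded for a full evacuation delay and no one has aborted the discharge.

def co2_discharge_decision(seconds_since_alarm, evac_delay_s=30, aborted=False):
    """Return True only after the alarm has sounded for the full
    evacuation delay and no abort has been signaled."""
    if aborted:
        return False
    return seconds_since_alarm >= evac_delay_s

print(co2_discharge_decision(10))                # False: still in evacuation window
print(co2_discharge_decision(45))                # True: delay elapsed, discharge
print(co2_discharge_decision(45, aborted=True))  # False: manually held off
```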
For Class B and C fires, specific types of dry powders can be used, which include
sodium or potassium bicarbonate, calcium carbonate, or monoammonium phosphate.
The first three powders interrupt the chemical combustion of a fire. Monoammonium
phosphate melts at low temperatures and excludes oxygen from the fuel.
Foams are mainly water-based and contain a foaming agent that allows them to
float on top of a burning substance to exclude the oxygen.
NOTE There is actually a Class K fire, for commercial kitchens. These fires
should be put out with a wet chemical, which is usually a solution of potassium
acetate. This chemical works best when putting out cooking oil fires.
A fire needs fuel, oxygen, and high temperatures. Table 5-3 shows how different
suppression substances interfere with these elements of fire.
Fire Class | Type of Fire         | Elements of Fire                    | Suppression Method
A          | Common combustibles  | Wood products, paper, and laminates | Water, foam
B          | Liquid               | Petroleum products and coolants     | Gas, CO2, foam, dry powders
C          | Electrical           | Electrical equipment and wires      | Gas, CO2, dry powders
D          | Combustible metals   | Magnesium, sodium, potassium        | Dry powder

Table 5-2 Four Types of Fire and Their Suppression Methods
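The class-to-method mapping in Table 5-2 can be expressed as a simple lookup. The structure and names below are hypothetical, shown only to make the mapping concrete (Class K, from the Note above, is included as well):

```python
# Hypothetical helper: Table 5-2's fire classes mapped to the
# suppression methods listed there, plus Class K from the Note.

SUPPRESSION_BY_CLASS = {
    "A": ["water", "foam"],                      # common combustibles
    "B": ["gas", "CO2", "foam", "dry powders"],  # liquids (petroleum, coolants)
    "C": ["gas", "CO2", "dry powders"],          # electrical
    "D": ["dry powder"],                         # combustible metals
    "K": ["wet chemical"],                       # commercial kitchens
}

def suppression_methods(fire_class):
    """Return the suppression methods for a fire class, case-insensitively."""
    methods = SUPPRESSION_BY_CLASS.get(fire_class.upper())
    if methods is None:
        raise ValueError(f"Unknown fire class: {fire_class}")
    return methods

print(suppression_methods("c"))  # ['gas', 'CO2', 'dry powders']
```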
Plenum Area
Wiring and cables are strung through plenum areas, such as the space above
dropped ceilings, the space in wall cavities, and the space under raised floors.
Plenum areas should have fire detectors. Also, only plenum-rated cabling should
be used in plenum areas; this is cabling made of material that does not release
hazardous gases when it burns.

By law, companies that have halon extinguishers do not have to replace them right
away, but the extinguishers cannot be refilled. When an extinguisher's lifetime runs
out, it should be replaced with an FM-200 extinguisher or one that uses another
EPA-approved chemical.
NOTE Halon has not been manufactured since January 1, 1992, by
international agreement. The Montreal Protocol banned halon in 1987, and
countries were given until 1992 to comply with these directives. The most
effective replacement for halon is FM-200, which is similar to halon but does
not damage the ozone layer.
The HVAC system should be connected to the fire alarm and suppression system so
it shuts down properly if a fire is detected. A fire needs oxygen, and an HVAC system
can feed oxygen to the fire; it can also spread deadly smoke into all areas of the
building. Many fire alarm systems are therefore configured to shut down the HVAC
system when an alarm is triggered.
Combustion Element  | Suppression Method              | How Suppression Works
Fuel                | Soda acid                       | Removes fuel
Oxygen              | Carbon dioxide                  | Removes oxygen
Temperature         | Water                           | Reduces temperature
Chemical combustion | Gas (halon or halon substitute) | Interferes with the chemical reactions between elements

Table 5-3 How Different Substances Interfere with Elements of Fire
Halon
Halon is a gas that was widely used in the past to suppress fires because it inter-
feres with the chemical combustion of the elements within a fire. It mixes quick-
ly with the air and does not cause harm to computer systems and other data
processing devices. It was used mainly in data centers and server rooms.
It was discovered that halon has chemicals (chlorofluorocarbons) that de-
plete the ozone and that concentrations greater than 10 percent are dangerous to
people. Halon used on extremely hot fires degrades into toxic chemicals, which
is even more dangerous to humans.
Some really smart people figured that the ozone was important to keep
around, which caused halon to be federally restricted, and no companies are al-
lowed to purchase and install new halon extinguishers. Companies that still have
halon systems have been instructed to replace them with nontoxic extinguishers.
The following are some of the EPA-approved replacements for halon:
• FM-200
• NAF-S-III
• CEA-410
• FE-13
• Inergen
• Argon
• Argonite

Water Sprinklers
I'm hot. Go pull that red thingy on the wall. I need some water.
Water sprinklers typically are simpler and less expensive than halon and FM-200
systems, but they can cause water damage. In an electrical fire, water can increase the
intensity of the fire, because it can work as a conductor for electricity, only making the
situation worse. If water is going to be used in any environment with electrical
equipment, the electricity must be turned off before the water is released. Sensors
should be used to shut down the electric power before water sprinklers activate. Each
sprinkler head should activate individually to avoid wide-area damage, and there
should be shutoff valves so the water supply can be stopped if necessary.
A company should take great care in deciding which suppression agent and system
is best for it. Four main types of water sprinkler systems are available: wet pipe, dry pipe,
preaction, and deluge.
• Wet pipe Wet pipe systems always contain water in the pipes and are usually
discharged by temperature control–level sensors. One disadvantage of wet
pipe systems is that the water in the pipes may freeze in colder climates. Also,
a nozzle or pipe break can cause extensive water damage. These types of
systems are also called closed head systems.
• Dry pipe In dry pipe systems, the water is not actually held in the pipes.
The water is contained in a "holding tank" until it is released. The pipes hold
pressurized air, which is reduced when a fire or smoke alarm is activated,
allowing the water valve to be opened by the water pressure. Water is not
allowed into the pipes that feed the sprinklers until an actual fire is detected.
First, a heat or smoke sensor is activated; then, the water fills the pipes leading
to the sprinkler heads, the fire alarm sounds, the electric power supply is
disconnected, and finally water is allowed to flow from the sprinklers. These
systems are best used in colder climates because the pipes will not freeze. Figure
5-13 depicts a dry pipe system.
• Preaction Preaction systems are similar to dry pipe systems in that the
water is not held in the pipes, but is released when the pressurized air within
the pipes is reduced. Once this happens, the pipes are filled with water, but
it is not released right away. A thermal-fusible link on the sprinkler head
has to melt before the water is released. The purpose of combining these
two techniques is to give people more time to respond to false alarms or to
small fires that can be handled by other means. Putting out a small fire with
a handheld extinguisher is better than losing a lot of electrical equipment
to water damage. These systems are usually used only in data processing
environments rather than the whole building, because of the higher cost of
these types of systems.
•Deluge A deluge system has its sprinkler heads wide open to allow a larger
volume of water to be released in a shorter period. Because the water being
released is in such large volumes, these systems are usually not used in data
processing environments.
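The dry pipe activation sequence described above can be captured as an ordered list of steps. The step names below are my own shorthand for the stages the text lists:

```python
# Toy model of the dry pipe sequence: sensor trips, pipes fill,
# alarm sounds, power is cut, and only then does water flow.

DRY_PIPE_SEQUENCE = [
    "heat_or_smoke_sensor_activated",
    "water_fills_pipes",
    "fire_alarm_sounds",
    "electric_power_disconnected",
    "water_flows_from_sprinklers",
]

def next_step(current_step):
    """Return the step that follows current_step, or None at the end."""
    i = DRY_PIPE_SEQUENCE.index(current_step)
    return DRY_PIPE_SEQUENCE[i + 1] if i + 1 < len(DRY_PIPE_SEQUENCE) else None

print(next_step("fire_alarm_sounds"))  # electric_power_disconnected
```

A preaction system would insert one more gate before the final step: the thermal-fusible link on the sprinkler head must melt before water is released.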
Perimeter Security
Halt! Who goes there?
The first line of defense is perimeter control at the site location, to prevent unau-
thorized access to the facility. As mentioned earlier in this chapter, physical security
should be implemented by using a layered defense approach. For example, before an
intruder can get to the written recipe for your company’s secret barbeque sauce, she will
need to climb or cut a fence, slip by a security guard, pick a door lock, circumvent a
biometric access control reader that protects access to an internal room, and then break
into the safe that holds the recipe. The idea is that if an attacker breaks through one
control layer, there will be others in her way before she can obtain the company’s crown
jewels.
NOTE It is also important to have a diversity of controls. For example,
if one key works on four different door locks, the intruder has to obtain
only one key. Each entry should have its own individual key or authentication
combination.
Figure 5-13 Dry pipe systems do not hold water in the pipes.

This defense model should work in two main modes: one mode during normal
facility operations and another mode during the time the facility is closed. When the
facility is closed, all doors should be locked with monitoring mechanisms in strategic
positions to alert security personnel of suspicious activity. When the facility is in opera-
tion, security gets more complicated because authorized individuals need to be distin-
guished from unauthorized individuals. Perimeter security deals with facility and
personnel access controls, external boundary protection mechanisms, intrusion detec-
tion, and corrective actions. The following sections describe the elements that make up
these categories.
Facility Access Control
In physical security, access control must be enforced through both physical and
technical components. Physical access controls use mechanisms to identify individuals
who are attempting to enter a facility or area. They ensure that the right individuals
get in and the wrong individuals stay out, and they provide an audit trail of these actions.
Having personnel within sensitive areas is one of the best security controls because they
can personally detect suspicious behavior. However, they need to be trained on what
activity is considered suspicious and how to report such activity.

Before a company can put into place the proper protection mechanisms, it needs to
conduct a detailed review to identify which individuals should be allowed into what
areas. Access control points can be identified and classified as external, main, and sec-
ondary entrances. Personnel should enter and exit through a specific entry, deliveries
should be made to a different entry, and sensitive areas should be restricted. Figure 5-14
illustrates the different types of access control points into a facility. After a company has
identified and classified the access control points, the next step is to determine how to
protect them.
Locks
Locks are inexpensive access control mechanisms that are widely accepted and used.
They are considered delaying devices to intruders. The longer it takes to break or pick a
lock, the longer a security guard or police officer has to arrive on the scene if the in-
truder has been detected. Almost any type of a door can be equipped with a lock, but
keys can be easily lost and duplicated, and locks can be picked or broken. If a company
depends solely on a lock-and-key mechanism for protection, an individual who has the
key can come and go as he likes without control and can remove items from the prem-
ises without detection. Locks should be used as part of the protection scheme, but
should not be the sole protection scheme.
Locks vary in functionality. Padlocks can be used on chained fences, preset locks are
usually used on doors, and programmable locks (requiring a combination to unlock)
are used on doors or vaults. Locks come in all types and sizes. It is important to have
the right type of lock so it provides the correct level of protection.
To the curious mind or a determined thief, a lock is considered a little puzzle to
solve, not a deterrent. In other words, locks may be merely a challenge, not necessarily
something to stand in the way of malicious activities. Thus, you need to make the chal-
lenge difficult, through the complexity, strength, and quality of the locking mechanisms.
Figure 5-14 Access control points should be identified, marked, and monitored properly.

NOTE The delay time provided by the lock should match the penetration
resistance of the surrounding components (door, door frame, hinges). A smart
thief takes the path of least resistance, which may be to pick the lock, remove
the pins from the hinges, or just kick down the door.
Mechanical Locks Two main types of mechanical locks are available: the warded
lock and the tumbler lock. The warded lock is the basic padlock, as shown in Figure
5-15. It has a spring-loaded bolt with a notch cut in it. The key fits into this notch and
slides the bolt from the locked to the unlocked position. The lock has wards in it, which
are metal projections around the keyhole, as shown in Figure 5-16. The correct key for
a specific warded lock has notches in it that fit in these projections and a notch to slide
the bolt back and forth. These are the cheapest locks, because of their lack of any real
sophistication, and are also the easiest to pick.
The tumbler lock has more pieces and parts than a warded lock. As shown in Figure
5-17, the key fits into a cylinder, which raises the lock's metal pieces to the correct height
so the bolt can slide to the locked or unlocked position. Once all of the metal pieces are
at the correct level, the internal bolt can be turned. The proper key has the required size
and sequence of notches to move these metal pieces into their correct positions.
The three types of tumbler locks are the pin tumbler, wafer tumbler, and lever tum-
bler. The pin tumbler lock, shown in Figure 5-17, is the most commonly used tumbler
lock. The key has to have just the right grooves to put all the spring-loaded pins in the
right position so the lock can be locked or unlocked.
Wafer tumbler locks (also called disc tumbler locks) are the small, round locks you
usually see on file cabinets. They use flat discs (wafers) instead of pins inside the locks.
They often are used as car and desk locks. This type of lock does not provide much
protection because it can be easily circumvented.
NOTE Some locks have interchangeable cores, which allow the core of
the lock to be taken out. You would use this type of lock if you wanted one
key to open several locks; you would just fit all of those locks with the same core.
Figure 5-15 A warded lock

Combination locks, of course, require the correct combination of numbers to unlock
them. These locks have internal wheels that have to line up properly before being un-
locked. A user spins the lock interface left and right by so many clicks, which lines up
the internal wheels. Once the correct turns have taken place, all the wheels are in the
Figure 5-16 A key fits into a notch to turn the bolt to unlock the lock.
Figure 5-17 Tumbler lock

right position for the lock to release and open the door. The more wheels within the
locks, the more protection provided. Electronic combination locks do not use internal
wheels, but rather have a keypad that allows a person to type in the combination in-
stead of turning a knob with a combination faceplate. An example of an electronic
combination lock is shown in Figure 5-18.
Cipher locks, also known as programmable locks, are keyless and use keypads to
control access into an area or facility. The lock requires a specific combination to be
entered into the keypad and possibly a swipe card. They cost more than traditional
locks, but their combinations can be changed, specific combination sequence values
can be locked out, and personnel who are in trouble or under duress can enter a spe-
cific code that will open the door and initiate a remote alarm at the same time. Thus,
compared to traditional locks, cipher locks can provide a much higher level of security
and control over who can access a facility.
The following are some functionalities commonly available on many cipher com-
bination locks that improve the performance of access control and provide for increased
security levels:
• Door delay If a door is held open for a given time, an alarm will trigger to
alert personnel of suspicious activity.
• Key override A specific combination can be programmed for use in
emergency situations to override normal procedures or for supervisory
overrides.
• Master keying Enables supervisory personnel to change access codes and
other features of the cipher lock.
• Hostage alarm If an individual is under duress and/or held hostage, a
combination he enters can communicate this situation to the guard station
and/or police station.
If a door is accompanied by a cipher lock, it should have a corresponding visibility
shield so a bystander cannot see the combination as it is keyed in. Automated cipher
locks must have a backup battery system and be set to unlock during a power failure so
personnel are not trapped inside during an emergency.
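A couple of the cipher lock features above can be sketched in code. The combination values and the 20-second door-delay limit below are invented for illustration, not taken from any real product:

```python
# Hypothetical cipher-lock logic covering two features described above:
# a duress (hostage) code and a door-delay alarm.

VALID_CODE = "4821"
DURESS_CODE = "4829"     # opens the door AND silently signals the guard station
DOOR_HELD_LIMIT_S = 20

def evaluate_entry(code):
    """Return (door_opens, silent_alarm) for the entered combination."""
    if code == DURESS_CODE:
        return True, True
    return code == VALID_CODE, False

def door_delay_alarm(seconds_door_open):
    """Trigger an alarm once the door has been held open too long."""
    return seconds_door_open > DOOR_HELD_LIMIT_S

print(evaluate_entry("4821"))  # (True, False)
print(evaluate_entry("4829"))  # (True, True): duress code opens door, alerts guards
print(door_delay_alarm(35))    # True: door held open too long
```

Note that the duress code must open the door normally; a code that visibly failed would tip off the attacker that an alarm had been raised.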
Figure 5-18 An electronic combination lock

NOTE It is important to change the combination of locks and to use random
combination sequences. Often, people do not change their combinations or
clean the keypads, which allows an intruder to know what key values are used
in the combination, because they are the dirty and worn keys. The intruder
then just needs to figure out the right combination of these values.
Some cipher locks require all users to know and use the same combination, which
does not allow for any individual accountability. Some of the more sophisticated ci-
pher locks permit specific codes to be assigned to unique individuals. This provides
more accountability, because each individual is responsible for keeping his access code
secret, and entry and exit activities can be logged and tracked. These are usually referred
to as smart locks, because they are designed to allow only authorized individuals access
at certain doors at certain times.
NOTE Hotel key cards are also known as smart cards. They are programmed
by the nice hotel guy or gal behind the counter. The access code on the card
can allow access to a hotel room, workout area, business area, and better
yet—the mini bar.
Device Locks Unfortunately, hardware has a tendency to “walk away” from facili-
ties; thus, device locks are necessary to thwart these attempts. Cable locks consist of a
vinyl-coated steel cable that can secure a computer or peripheral to a desk or other
stationary components, as shown in Figure 5-19.
The following are some of the device locks available and their capabilities:
• Switch controls Cover on/off power switches
• Slot locks Secure the system to a stationary component by the use of a steel
cable that is connected to a bracket mounted in a spare expansion slot
• Port controls Block access to disk drives or unused serial or parallel ports
• Peripheral switch controls Secure a keyboard by inserting an on/off switch
between the system unit and the keyboard input slot
• Cable traps Prevent the removal of input/output devices by passing their
cables through a lockable unit
Figure 5-19 FMJ/PAD.LOCK's notebook security cable kit secures a notebook by enabling the user to attach the device to a stationary component within an area.

Administrative Responsibilities It is important for a company not only to
choose the right type of lock for the right purpose, but also to follow proper mainte-
nance and procedures. Keys should be assigned by facility management, and this as-
signment should be documented. Procedures should be written out detailing how keys
are to be assigned, inventoried, and destroyed when necessary, and what should hap-
pen if and when keys are lost. Someone on the company’s facility management team
should be assigned the responsibility of overseeing key and combination maintenance.
Most organizations have master keys and submaster keys for the facility manage-
ment staff. A master key opens all the locks within the facility, and the submaster keys
open one or more locks. Each lock has its own individual unique keys as well. So if a
facility has 100 offices, the occupant of each office can have his or her own key. A mas-
ter key allows access to all offices for security personnel and for emergencies. If one
security guard is responsible for monitoring half the facility, the guard can be assigned
one of the submaster keys for just those offices.
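The master/submaster/individual key hierarchy can be modeled as a simple mapping from keys to the locks they open. The office and key names below are hypothetical:

```python
# Toy model of the key hierarchy described above: a master key opens every
# lock, a submaster opens its subset, and an office key opens only its own lock.

OFFICES = [f"office-{n}" for n in range(1, 101)]  # a facility with 100 offices

KEYRING = {
    "master": set(OFFICES),                # opens all locks in the facility
    "submaster-west": set(OFFICES[:50]),   # one guard's half of the facility
    "office-7-key": {"office-7"},          # an occupant's individual key
}

def key_opens(key, lock):
    """Return True if the given key opens the given lock."""
    return lock in KEYRING.get(key, set())

print(key_opens("master", "office-99"))         # True
print(key_opens("submaster-west", "office-7"))  # True
print(key_opens("office-7-key", "office-8"))    # False
```

The model also makes the risk plain: losing "master" compromises every lock at once, which is why master and submaster keys must be so carefully guarded.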
Since these master and submaster keys are powerful, they must be properly guarded
and not widely shared. A security policy should outline what portions of the facility
and which device types need to be locked. As a security professional, you should under-
stand what type of lock is most appropriate for each situation, the level of protection
provided by various types of locks, and how these locks can be circumvented.
Lock Strengths
Basically, three grades of locks are available:
• Grade 1 Commercial and industrial use
• Grade 2 Heavy-duty residential/light-duty commercial
• Grade 3 Residential/consumer
The cylinders within the locks fall into three main categories:
• Low security No pick or drill resistance provided (can fall within any
of the three grades of locks)
• Medium security A degree of pick resistance protection provided
(uses tighter and more complex keyways [notch combination]; can
fall within any of the three grades of locks)
• High security Pick resistance protection through many different
mechanisms (used only in grade 1 and 2 locks)

Circumventing Locks Each lock type has corresponding tools that can be used to
pick it (open it without the key). A tension wrench is a tool shaped like an L that is used
to apply tension to the internal cylinder of a lock. The lock picker uses a lock pick to
manipulate the individual pins to their proper placement. Once certain pins are
"picked" (put in their correct place), the tension wrench holds them down while the
lock picker figures out the correct settings for the other pins. After the intruder
determines the proper pin placement, the wrench is used to turn and open the lock.
Intruders may carry out another technique, referred to as raking. To circumvent a
pin tumbler lock, a lock pick is pushed to the back of the lock and quickly slid out
while providing upward pressure. This movement makes many of the pins fall into
place. A tension wrench is also inserted to hold the pins that pop into the right place. If
all the pins do not slide to the necessary height for the lock to open, the intruder holds
the tension wrench and uses a thinner pick to move the rest of the pins into place.
Lock bumping is a tactic that intruders can use to force the pins in a tumbler lock to
their open position by using a special key called a bump key. The stronger the material
that makes up the lock, the smaller the chance that this type of lock attack would be
successful.
Now, if this is all too much trouble for the intruder, she can just drill the lock, use
bolt cutters, attempt to break through the door or the doorframe, or remove the hinges.
There are just so many choices for the bad guys.
Personnel Access Controls
Proper identification verifies whether the person attempting to access a facility
or area should actually be allowed in. Identification and authentication can be performed
by matching an anatomical attribute (biometric system), using smart or memory cards

(swipe cards), presenting a photo ID to a security guard, using a key, or providing a card
and entering a password or PIN.
A common problem with controlling authorized access into a facility or area is
called piggybacking. This occurs when an individual gains unauthorized access by using
someone else’s legitimate credentials or access rights. Usually an individual just follows
another person closely through a door without providing any credentials. The best pre-
ventive measures against piggybacking are to have security guards at access points and
to educate employees about good security practices.
If a company wants to use a card badge reader, it has several types of systems to
choose from. Individuals usually have cards that have embedded magnetic strips that
contain access information. The reader can just look for simple access information within
the magnetic strip, or it can be connected to a more sophisticated system that scans the
information, makes more complex access decisions, and logs badge IDs and access times.
If the card is a memory card, then the reader just pulls information from it and makes
an access decision. If the card is a smart card, the individual may be required to enter a
PIN or password, which the reader compares against the information held within the
card or in an authentication server. (Memory and smart cards are covered in Chapter 3.)
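The two reader behaviors can be sketched as follows. This is not a real badge-system API; the card layout, employee IDs, and PIN below are invented for illustration:

```python
# Hypothetical sketch of the reader logic described above: a memory card
# yields an access decision directly, while a smart card also requires a
# PIN check. Every attempt is logged to provide an audit trail.

import time

ACCESS_LOG = []
AUTHORIZED_IDS = {"emp-1001", "emp-1002"}

def badge_access(card, pin=None, door="main"):
    """Decide access for a badge swipe and append an audit-trail entry."""
    if card["type"] == "smart" and pin != card["pin"]:
        granted = False  # smart card presented with wrong (or no) PIN
    else:
        granted = card["id"] in AUTHORIZED_IDS
    ACCESS_LOG.append({"badge": card["id"], "door": door,
                       "granted": granted, "time": time.time()})
    return granted

memory_card = {"type": "memory", "id": "emp-1001"}
smart_card = {"type": "smart", "id": "emp-1002", "pin": "0420"}

print(badge_access(memory_card))             # True
print(badge_access(smart_card, pin="9999"))  # False: wrong PIN
print(badge_access(smart_card, pin="0420"))  # True
```

In a production system the PIN comparison would typically happen on the card itself or against an authentication server, as the text notes, rather than in the reader's own code.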
These access cards can be used with user-activated readers, which just means the user
actually has to do something—swipe the card or enter a PIN. System sensing access con-
trol readers, also called transponders, recognize the presence of an approaching object
within a specific area. This type of system does not require the user to swipe the card
through the reader. The reader sends out interrogating signals and obtains the access
code from the card without the user having to do anything. Spooky Star Trek magic.
NOTE Electronic access control (EAC) tokens is a generic term used
to describe proximity authentication devices, such as proximity readers,
programmable locks, or biometric systems, which identify and authenticate
users before allowing them entrance into physically controlled areas.
External Boundary Protection Mechanisms
Let’s build a fort and let only the people who know the secret handshake inside!
Proximity protection components are usually put into place to provide one or more
of the following services:
• Control pedestrian and vehicle traffic flows
• Provide various levels of protection for different security zones
• Provide buffers and delaying mechanisms to protect against forced entry attempts
• Limit and control entry points
These services can be provided by using the following control types:
• Access control mechanisms Locks and keys, an electronic card access
system, personnel awareness
• Physical barriers Fences, gates, walls, doors, windows, protected vents,
vehicular barriers
• Intrusion detection Perimeter sensors, interior sensors, annunciation
mechanisms
• Assessment Guards, CCTV cameras
• Response Guards, local law enforcement agencies
• Deterrents Signs, lighting, environmental design
Several types of perimeter protection mechanisms and controls can be put into
place to protect a company’s facility, assets, and personnel. They can deter would-be
intruders, detect intruders and unusual activities, and provide ways of dealing with
these issues when they arise. Perimeter security controls can be natural (hills, rivers) or
manmade (fencing, lighting, gates). Landscaping is a mix of the two. In the beginning
of this chapter, we explored CPTED and how this approach is used to reduce the likeli-
hood of crime. Landscaping is a tool employed in the CPTED method. Sidewalks, bush-
es, and created paths can point people to the correct entry points, and trees and spiky
bushes can be used as natural barriers. These bushes and trees should be placed such
that they cannot be used as ladders or accessories to gain unauthorized access to unap-
proved entry points. Also, there should not be an overwhelming number of trees and
bushes, which could provide intruders with places to hide. In the following sections, we
look at the manmade components that can work within the landscaping design.
Fencing
I just want a little fence to keep out all the little mean people.
Fencing can be quite an effective physical barrier. Although the presence of a fence
may only delay dedicated intruders in their access attempts, it can work as a psychologi-
cal deterrent by telling the world that your company is serious about protecting itself.
Fencing can provide crowd control and helps control access to entrances and facili-
ties. However, fencing can be costly and unsightly. Many companies plant bushes or
trees in front of the fence that surrounds their buildings for aesthetics and to make the
building less noticeable. But this type of vegetation can damage the fencing over time
or negatively affect its integrity. The fencing needs to be properly maintained, because
if a company has a sagging, rusted, pathetic fence, it is equivalent to telling the world
that the company is not truly serious and disciplined about protection. But a nice,
shiny, intimidating fence can send a different message—especially if the fencing is
topped with three strands of barbed wire.
When deciding upon the type of fencing, several factors should be considered. The
gauge of the metal should correlate to the types of physical threats the company would
most likely face. After carrying out the risk analysis (covered earlier in the chapter), the
physical security team should understand the probability of enemies attempting to cut
the fencing, drive through it, or climb over or crawl under it. Understanding these threats
will help the team determine the necessary gauge and mesh sizing of the fence wiring.
The risk analysis results will also help indicate what height of fencing the organiza-
tion should implement. Fences come in varying heights, and each height provides a
different level of security:
• Fences three to four feet high only deter casual trespassers.
• Fences six to seven feet high are considered too high to climb easily.

CISSP All-in-One Exam Guide
486
• Fences eight feet high (possibly with strands of barbed or razor wire at the top)
mean you are serious about protecting your property. They often deter the
more determined intruder.
The barbed wire on top of fences can be tilted in or out, which also provides extra
protection. If the organization is a prison, it would have the barbed wire on top of the
fencing pointed in, which makes it harder for prisoners to climb and escape. If the or-
ganization is a military base, the barbed wire would be tilted out, making it harder for
someone to climb over the fence and gain access to the premises.
Critical areas should have fences at least eight feet high to provide the proper level
of protection. The fencing should not sag in any areas and must be taut and securely
connected to the posts. The fencing should not be easily circumvented by pulling up its
posts. The posts should be buried sufficiently deep in the ground and should be se-
cured with concrete to ensure the posts cannot be dug up or tied to vehicles and ex-
tracted. If the ground is soft or uneven, this might provide ways for intruders to slip or
dig under the fence. In these situations, the fencing should actually extend into the dirt
to thwart these types of attacks.
Fences work as “first line of defense” mechanisms. A few other controls can be used
also. Strong and secure gates need to be implemented. It does no good to install a
highly fortified and expensive fence and then have an unlocked or weenie gate that al-
lows easy access.
Gauges and Mesh Sizes
The gauge of fence wiring is the thickness of the wires used within the fence
mesh. The lower the gauge number, the larger the wire diameter:
• 11 gauge = 0.0907-inch diameter
• 9 gauge = 0.1144-inch diameter
• 6 gauge = 0.162-inch diameter
The mesh sizing is the minimum clear distance between the wires. Common
mesh sizes are 2 inches, 1 inch, and 3/8 inch. It is more difficult to climb or cut
fencing with smaller mesh sizes, and the heavier gauged wiring is harder to cut.
The following list indicates the strength levels of the most common gauge and
mesh sizes used in chain-link fencing today:
• Extremely high security 3/8-inch mesh, 11 gauge
• Very high security 1-inch mesh, 9 gauge
• High security 1-inch mesh, 11 gauge
• Greater security 2-inch mesh, 6 gauge
• Normal industrial security 2-inch mesh, 9 gauge
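If you want to compare fencing options side by side, the pairings above map naturally to a small lookup table. This Python sketch is purely illustrative; the level names and pairings come straight from the list above:

```python
# Hypothetical lookup of chain-link security levels by (mesh size, wire gauge),
# following the pairings listed above. Mesh in inches; lower gauge = thicker wire.
SECURITY_LEVELS = {
    (0.375, 11): "Extremely high security",
    (1.0, 9): "Very high security",
    (1.0, 11): "High security",
    (2.0, 6): "Greater security",
    (2.0, 9): "Normal industrial security",
}

def fence_security_level(mesh_inches, gauge):
    """Return the listed security level for a mesh/gauge pairing, if any."""
    return SECURITY_LEVELS.get((mesh_inches, gauge), "Unlisted combination")

# fence_security_level(1.0, 9) -> "Very high security"
```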

Gates basically have four distinct classifications:
• Class I Residential usage
• Class II Commercial usage, where general public access is expected;
examples include a public parking lot entrance, a gated community, or a
self-storage facility
• Class III Industrial usage, where limited access is expected; an example is a
warehouse property entrance not intended to serve the general public
• Class IV Restricted access; this includes a prison entrance that is monitored
either in person or via closed circuitry
Each gate classification has its own long list of implementation and maintenance
guidelines in order to ensure the necessary level of protection. These classifications and
guidelines are developed by Underwriters Laboratories (UL), a nonprofit organization
that tests, inspects, and classifies electronic devices, fire protection equipment, and spe-
cific construction materials. This is the group that certifies these different items to en-
sure they are in compliance with national building codes. Their specific code, UL-325,
deals with garage doors, drapery, gates, and louver and window operators and systems.
So, whereas in the information security world we look to NIST for our best prac-
tices and industry standards, in the physical security world, we look to UL for the same
type of direction.
Bollards
Bollards usually look like small concrete pillars outside a building. Sometimes compa-
nies try to dress them up by putting flowers or lights in them to soften the look of a
protected environment. They are placed by the sides of buildings that have the most
immediate threat of someone driving a vehicle through the exterior wall. They are usu-
ally placed between the facility and a parking lot and/or between the facility and a road
that runs close to an exterior wall. Within the United States after September 11, 2001,
many military and government institutions, which did not have bollards, hauled in
huge boulders to surround and protect sensitive buildings. They provided the same
type of protection that bollards would provide. These were not overly attractive, but
provided the sense that the government was serious about protecting those facilities.
PIDAS Fencing
Perimeter Intrusion Detection and Assessment System (PIDAS) is a type of fencing
that has sensors located on the wire mesh and at the base of the fence. It is used
to detect if someone attempts to cut or climb the fence. It has a passive cable vi-
bration sensor that sets off an alarm if an intrusion is detected. PIDAS is very
sensitive and can cause many false alarms.

Lighting
Many of the items mentioned in this chapter are things people take for granted day in
and day out during our usual busy lives. Lighting is certainly one of those items you
would probably not give much thought to, unless it wasn’t there. Unlit (or improperly
lit) parking lots and parking garages have invited many attackers to carry out criminal
activity that they may not have engaged in otherwise with proper lighting. Breaking
into cars, stealing cars, and attacking employees as they leave the office are the more
common types of attacks that take place in such situations. A security professional
should understand that the right illumination needs to be in place, that no dead spots
(unlit areas) should exist between the lights, and that all areas where individuals may
walk should be properly lit. A security professional should also understand the various
types of lighting available and where they should be used.
Wherever an array of lights is used, each light covers its own zone or area. The zone
each light covers depends upon the illumination of light produced, which usually has
a direct relationship to the wattage capacity of the bulbs. In most cases, the higher the
lamp’s wattage, the more illumination it produces. It is important that the zones of il-
lumination coverage overlap. For example, if a company has an open parking lot, then
light poles must be positioned within the correct distance of each other to eliminate
any dead spots. If the lamps that will be used provide a 30-foot radius of illumination,
then the light poles should be erected less than 30 feet apart so there is an overlap be-
tween the areas of illumination.
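The pole-spacing rule described above can be sketched in a few lines of Python. Spacing the poles at less than one illumination radius guarantees overlap; the 0.9 safety factor is an illustrative assumption, not a published standard:

```python
import math

# Sketch of the pole-spacing rule: keep spacing under the lamp's illumination
# radius so adjacent zones of coverage overlap and no dead spots remain.
def poles_needed(lot_length_ft, lamp_radius_ft, safety_factor=0.9):
    """Number of evenly spaced poles to cover a straight run with overlap."""
    spacing = lamp_radius_ft * safety_factor  # spacing stays under one radius
    return math.ceil(lot_length_ft / spacing) + 1

# A 300-foot lot edge with 30-foot-radius lamps:
# spacing = 27 ft -> ceil(300 / 27) + 1 = 13 poles
```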
NOTE Critical areas need to have illumination that reaches at least eight
feet high with an intensity of two foot-candles. A foot-candle is a unit of
illumination measurement.
If an organization does not implement the right types of lights and ensure they
provide proper coverage, it increases the probability of criminal activity, accidents, and
lawsuits.
Exterior lights that provide protection usually require less illumination intensity
than interior working lighting, except for areas that require security personnel to in-
spect identification credentials for authorization. It is also important to have the correct
lighting when using various types of surveillance equipment. The correct contrast be-
tween a potential intruder and background items needs to be provided, which only
happens with the correct illumination and placement of lights. If the light is going to
bounce off of dark, dirty, or darkly painted surfaces, then more illumination is required
for the necessary contrast between people and the environment. If the area has clean
concrete and light-colored painted surfaces, then not as much illumination is required.
This is because when the same amount of light falls on an object and the surrounding
background, an observer must depend on the contrast to tell them apart.
When lighting is installed, it should be directed toward areas where potential in-
truders would most likely be coming from and directed away from the security force
posts. For example, lighting should be pointed at gates or exterior access points, and the
guard locations should be more in the shadows, or under a lower amount of illumina-
tion. This is referred to as glare protection for the security force. If you are familiar with

military operations, you might know that when you are approaching a military entry
point, there is a fortified guard building with lights pointing toward the oncoming cars.
A large sign instructs you to turn off your headlights, so the guards are not temporarily
blinded by your lights and have a clear view of anything coming their way.
Lights used within the organization’s security perimeter should be directed out-
ward, which keeps the security personnel in relative darkness and allows them to easily
view intruders beyond the company’s perimeter.
An array of lights that provides an even amount of illumination across an area is
usually referred to as continuous lighting. Examples are the evenly spaced light poles in
a parking lot, light fixtures that run across the outside of a building, or series of fluores-
cent lights used in parking garages. If the company building is relatively close to an-
other company’s property, a railway, an airport, or a highway, the owner may need to
ensure the lighting does not “bleed over” property lines in an obtrusive manner. Thus,
the illumination needs to be controlled, which just means an organization should erect
lights and use illumination in such a way that it does not blind its neighbors or any
passing cars, trains, or planes.
You probably are familiar with the special home lighting gadgets that turn certain
lights on and off at predetermined times, giving the illusion to potential burglars that a
house is occupied even when the residents are away. Companies can use a similar tech-
nology, which is referred to as standby lighting. The security personnel can configure
the times that different lights turn on and off, so potential intruders think different ar-
eas of the facility are populated.
NOTE Redundant or backup lights should be available in case of power
failures or emergencies. Special care must be given to understand what type
of lighting is needed in different parts of the facility in these types of situations.
This lighting may run on generators or battery packs.
Responsive area illumination takes place when an IDS detects suspicious activities
and turns on the lights within a specific area. When this type of technology is plugged
into automated IDS products, there is a high likelihood of false alarms. Instead of con-
tinuously having to dispatch a security guard to check out these issues, a CCTV camera
can be installed to scan the area for intruders.
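The responsive-illumination hand-off can be sketched as a simple event handler. This is a hedged illustration only; the zone names and the shape of the detection event are hypothetical, not taken from any particular IDS product:

```python
# Hypothetical sketch of responsive area illumination: an IDS detection event
# switches on every light registered for the zone that raised the alarm.
def respond_to_detection(event, zone_lights):
    """Turn on all lights in the zone named by the detection event."""
    for light in zone_lights.get(event["zone"], []):
        light["on"] = True
    return zone_lights

lights = {"loading-dock": [{"id": "L1", "on": False}, {"id": "L2", "on": False}]}
respond_to_detection({"zone": "loading-dock", "type": "motion"}, lights)
# Both loading-dock lights are now on.
```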
If intruders want to disrupt the security personnel or decrease the probability of be-
ing seen while attempting to enter a company’s premises or building, they could at-
tempt to turn off the lights or cut power to them. This is why lighting controls and
switches should be in protected, locked, and centralized areas.
Surveillance Devices
Usually, installing fences and lights does not provide the necessary level of protection a
company needs to protect its facility, equipment, and employees. Areas need to be under
surveillance so improper actions are noticed and taken care of before damage occurs.
Surveillance can happen through visual detection or through devices that use sophisti-
cated means of detecting abnormal behavior or unwanted conditions. It is important
that every organization have a proper mix of lighting, security personnel, IDSs, and
surveillance technologies and techniques.

Visual Recording Devices
Because surveillance is based on sensory perception, surveillance devices usually work
in conjunction with guards and other monitoring mechanisms to extend their capa-
bilities and range of perception. A closed-circuit TV (CCTV) system is a commonly used
monitoring device in most organizations, but before purchasing and implementing a
CCTV, you need to consider several items:
• The purpose of CCTV To detect, assess, and/or identify intruders
• The type of environment the CCTV camera will work in Internal or
external areas
• The field of view required Large or small area to be monitored
• Amount of illumination of the environment Lit areas, unlit areas, areas
affected by sunlight
• Integration with other security controls Guards, IDSs, alarm systems
The reason you need to consider these items before you purchase a CCTV product
is that there are so many different types of cameras, lenses, and monitors that make up
the different CCTV products. You must understand what is expected of this physical
security control, so that you purchase and implement the right type.
CCTVs are made up of cameras, transmitters, receivers, a recording system, and a
monitor. The camera captures the data and transmits it to a receiver, which allows the
data to be displayed on a monitor. The data are recorded so they can be reviewed at a
later time if needed. Figure 5-20 shows how multiple cameras can be connected to one
multiplexer, which allows several different areas to be monitored at one time. The mul-
tiplexer accepts video feed from all the cameras and interleaves these transmissions
over one line to the central monitor. This is more effective and efficient than the older
systems that require the security guard to physically flip a switch from one environment
to the next. In these older systems, the guard can view only one environment at a time,
which, of course, makes it more likely that suspicious activities will be missed.
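The interleaving the multiplexer performs can be modeled as a round-robin over the camera feeds. This is a minimal conceptual sketch, with labeled strings standing in for video frames:

```python
# Minimal sketch of how a multiplexer interleaves several camera feeds onto
# one line: one frame from each camera in turn, round-robin style.
def multiplex(camera_feeds):
    """Yield frames from each feed in rotation until all feeds are exhausted."""
    iterators = [iter(feed) for feed in camera_feeds]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)  # this camera's feed is exhausted

feeds = [["cam1-f0", "cam1-f1"], ["cam2-f0", "cam2-f1"]]
# list(multiplex(feeds)) -> ['cam1-f0', 'cam2-f0', 'cam1-f1', 'cam2-f1']
```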
A CCTV sends the captured data from the camera’s transmitter to the monitor’s re-
ceiver, usually through a coaxial cable, instead of broadcasting the signals over a public
network. This is where the term “closed-circuit” comes in. This circuit should be tam-
perproof, which means an intruder cannot manipulate the video feed that the security
guard is monitoring. The most common type of attack is to replay previous recordings
without the security personnel knowing it. For example, if an attacker is able to com-
promise a company’s CCTV and play the recording from the day before, the security
guard would not know an intruder is in the facility carrying out some type of crime.
This is one reason why CCTVs should be used in conjunction with intruder detection
controls, which we address in the next section.
NOTE CCTVs should have some type of recording system. Digital recorders
save images to hard drives and allow advanced search techniques that are
not possible with videotape recorders. Digital recorders use advanced
compression techniques, which drastically reduce the storage media
requirements.

Most of the CCTV cameras in use today employ light-sensitive chips called charge-
coupled devices (CCDs). The CCD is an electrical circuit that receives input light from
the lens and converts it into an electronic signal, which is then displayed on the moni-
tor. Images are focused through a lens onto the CCD chip surface, which forms the
electrical representation of the optical image. It is this technology that allows for the
capture of extraordinary detail of objects and precise representation, because it has sen-
sors that work in the infrared range, which extends beyond human perception. The
CCD sensor picks up this extra “data” and integrates it into the images shown on the
monitor to allow for better granularity and quality in the video.
CCDs are also used in fax machines, photocopiers, bar code readers, and even tele-
scopes. CCTVs that use CCDs allow more granular information within an environment
to be captured and shown on the monitor compared to the older CCTV technology that
relied upon cathode ray tubes (CRTs).
Two main types of lenses are used in CCTV: fixed focal length and zoom (varifocal).
The focal length of a lens defines its effectiveness in viewing objects from a horizontal
and vertical view. The focal length value relates to the angle of view that can be achieved.
Short focal length lenses provide wider-angle views, while long focal length lenses pro-
vide a narrower view. The size of the images shown on a monitor, along with the area
covered by one camera, is defined by the focal length. For example, if a company imple-
ments a CCTV camera in a warehouse, the focal length lens values should be between
2.8 and 4.3 millimeters (mm) so the whole area can be captured. If the company
implements another CCTV camera that monitors an entrance, that lens value should be
around 8mm, which allows a smaller area to be monitored.

Figure 5-20 Several cameras can be connected to a multiplexer.
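The focal-length-to-coverage relationship can be illustrated with the standard angle-of-view formula. The 1/3-inch sensor width (about 4.8 mm) assumed here is a common CCTV format chosen purely for illustration:

```python
import math

# Horizontal angle of view for a given focal length, assuming a sensor about
# 4.8 mm wide (a 1/3-inch format, used here only as an illustrative value).
SENSOR_WIDTH_MM = 4.8

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm=SENSOR_WIDTH_MM):
    """Shorter focal lengths yield wider angles of view, and vice versa."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 2.8 mm lens covers roughly an 81-degree horizontal view (a whole warehouse),
# while an 8 mm lens narrows to roughly 33 degrees (a single entrance).
```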
NOTE Fixed focal length lenses are available in various fields of view: wide,
medium, and narrow. A lens that provides a “normal” focal length creates a
picture that approximates the field of view of the human eye. A wide-angle
lens has a short focal length, and a telephoto lens has a long focal length.
When a company selects a fixed focal length lens for a particular view of an
environment, it should understand that if the field of view needs to be changed
(wide to narrow), the lens must be changed.
So, if we need to monitor a large area, we use a lens with a smaller focal length value.
Great, but what if a security guard hears a noise or thinks he sees something suspicious?
A fixed focal length lens is stationary, meaning the guard cannot move the camera from
one point to the other and properly focus the lens automatically. The zoom lenses pro-
vide flexibility by allowing the viewer to change the field of view to different angles and
distances. The security personnel usually have a remote-control component integrated
within the centralized CCTV monitoring area that allows them to move the cameras and
zoom in and out on objects as needed. When both wide scenes and close-up captures are
needed, a zoom lens is best. This type of lens allows the focal length to change from
wide angle to telephoto while maintaining the focus of the image.
To understand the next characteristic, depth of field, think about pictures you might
take while on vacation with your family. For example, if you want to take a picture of
your spouse with the Grand Canyon in the background, the main object of the picture
is your spouse. Your camera is going to zoom in and use a shallow depth of focus. This
provides a softer backdrop, which will lead the viewers of the photograph to the fore-
ground, which is your spouse. Now, let’s say you get tired of taking pictures of your
spouse and want to get a scenic picture of just the Grand Canyon itself. The camera
would use a greater depth of focus, so there is not such a distinction between objects in
the foreground and background.
The depth of field is necessary to understand when choosing the correct lenses and
configurations for your company’s CCTV. The depth of field refers to the portion of the
environment that is in focus when shown on the monitor. The depth of field varies
depending upon the size of the lens opening, the distance of the object being focused
on, and the focal length of the lens. The depth of field increases as the size of the lens
opening decreases, the subject distance increases, or the focal length of the lens de-
creases. So, if you want to cover a large area and not focus on specific items, it is best to
use a wide-angle lens and a small lens opening.
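These three depth-of-field trends can be checked numerically with the standard hyperfocal-distance approximation, H = f² / (N · c): a smaller H means a deeper field. The circle-of-confusion value used here (0.015 mm) is an assumed figure for a small sensor, included only for illustration:

```python
# Illustrative check of the depth-of-field trends described above, using the
# hyperfocal-distance approximation H = f^2 / (N * c), where f is the focal
# length, N the f-number (larger N = smaller lens opening), and c the circle
# of confusion (0.015 mm here, an assumed small-sensor value).
def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.015):
    """Focusing at H keeps everything from H/2 to infinity acceptably sharp."""
    return focal_length_mm ** 2 / (f_number * coc_mm)

wide_open = hyperfocal_mm(8, 1.4)    # large opening -> shallow depth of field
stopped_down = hyperfocal_mm(8, 8)   # small opening -> much deeper field
short_lens = hyperfocal_mm(2.8, 8)   # short focal length -> deeper still
assert short_lens < stopped_down < wide_open
```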
CCTV lenses have irises, which control the amount of light that enters the lens.
Manual iris lenses have a ring around the CCTV lens that can be manually turned and
controlled. A lens with a manual iris would be used in areas that have fixed lighting,
since the iris cannot self-adjust to changes of light. An auto iris lens should be used in
environments where the light changes, as in an outdoor setting. As the environment

brightens, this is sensed by the iris, which automatically adjusts itself. Security person-
nel will configure the CCTV to have a specific fixed exposure value, which the iris is
responsible for maintaining. On a sunny day, the iris lens closes to reduce the amount
of light entering the camera, while at night, the iris opens to capture more light—just
like our eyes.
When choosing the right CCTV for the right environment, you must determine the
amount of light present in the environment. Different CCTV camera and lens products
have specific illumination requirements to ensure the best quality images possible. The
illumination requirements are usually represented in the lux value, which is a metric
used to represent illumination strengths. The illumination can be measured by using a
light meter. The intensity of light (illumination) is measured and represented in mea-
surement units of lux or foot-candles. (The conversion between the two is one foot-
candle = 10.76 lux.) The illumination measurement is not something that can be
accurately provided by the vendor of a light bulb, because the environment can directly
affect the illumination. This is why illumination strengths are most effectively mea-
sured where the light source is implemented.
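The foot-candle/lux conversion mentioned above is simple enough to capture directly:

```python
# Conversion between the two illumination units discussed above.
# 1 foot-candle = 10.76 lux (the text's rounded factor; more precisely 10.7639).
FC_TO_LUX = 10.76

def footcandles_to_lux(fc):
    return fc * FC_TO_LUX

def lux_to_footcandles(lux):
    return lux / FC_TO_LUX

# The two-foot-candle requirement for critical areas is roughly 21.5 lux.
```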
Next, you need to consider the mounting requirements of the CCTV cameras. The
cameras can be implemented in a fixed mounting or in a mounting that allows the cam-
eras to move when necessary. A fixed camera cannot move in response to security per-
sonnel commands, whereas cameras that provide PTZ capabilities can pan, tilt, or zoom
(PTZ) as necessary.
So, buying and implementing a CCTV system may not be as straightforward as it
seems. As a security professional, you would need to understand the intended use of
the CCTV, the environment that will be monitored, and the functionalities that will be
required by the security staff that will use the CCTV on a daily basis. The different com-
ponents that can make up a CCTV product are shown in Figure 5-21.
Great—your assessment team has done all of its research and bought and imple-
mented the correct CCTV system. Now it would be nice if someone actually watched
the monitors for suspicious activities. Realizing that monitor watching is a mentally
deadening activity may lead your team to implement a type of annunciator system. Dif-
ferent types of annunciator products are available that can either “listen” for noise and
activate electrical devices, such as lights, sirens, or CCTV cameras, or detect movement.
Instead of expecting a security guard to stare at a CCTV monitor for eight hours straight,
the guard can carry out other activities and be alerted by an annunciator if movement
is detected on a screen.
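The annunciator hand-off can be sketched as follows. The frame-differencing logic is a trivial stand-in for a real motion-detection product, and the alert message and threshold are invented for illustration:

```python
# Sketch of an annunciator: rather than a guard staring at monitors, movement
# between consecutive frames raises an alert. Frames are flat pixel lists here.
def frames_differ(prev_frame, curr_frame, threshold=10):
    """Flag movement when enough pixels changed between consecutive frames."""
    changed = sum(1 for p, c in zip(prev_frame, curr_frame) if p != c)
    return changed >= threshold

def annunciate(prev_frame, curr_frame, notify):
    """Call the notify function only when movement is detected."""
    if frames_differ(prev_frame, curr_frame):
        notify("Movement detected -- review monitor")
```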
Intrusion Detection Systems
Surveillance techniques are used to watch for unusual behaviors, whereas intrusion
detection devices are used to sense changes that take place in an environment. Both are
monitoring methods, but they use different devices and approaches. This section ad-
dresses the types of technologies that can be used to detect the presence of an intruder.
One such technology, a perimeter scanning device, is shown in Figure 5-22.

Figure 5-21 A CCTV product can comprise several components.
Figure 5-22 Different perimeter scanning devices work by covering a specific area.

IDSs are used to detect unauthorized entries and to alert a responsible entity to re-
spond. These systems can monitor entries, doors, windows, devices, or removable cov-
erings of equipment. Many work with magnetic contacts or vibration-detection devices
that are sensitive to certain types of changes in the environment. When a change is de-
tected, the IDS device sounds an alarm either in the local area or in both the local area
and a remote police or guard station.
IDSs can be used to detect changes in the following:
• Beams of light
• Sounds and vibrations
• Motion
• Different types of fields (microwave, ultrasonic, electrostatic)
• Electrical circuit
IDSs can be used to detect intruders by employing electromechanical systems (mag-
netic switches, metallic foil in windows, pressure mats) or volumetric systems. Volu-
metric systems are more sensitive because they detect changes in subtle environmental
characteristics, such as vibration, microwaves, ultrasonic frequencies, infrared values,
and photoelectric changes.
Electromechanical systems work by detecting a change or break in a circuit. The electri-
cal circuits can be strips of foil embedded in or connected to windows. If the window
breaks, the foil strip breaks, which sounds an alarm. Vibration detectors can detect move-
ment on walls, screens, ceilings, and floors when the fine wires embedded within the
structure are broken. Magnetic contact switches can be installed on windows and doors.
If the contacts are separated because the window or door is opened, an alarm will sound.
Another type of electromechanical detector is a pressure pad. This is placed under-
neath a rug or portion of the carpet and is activated after hours. If someone steps on the
pad, an alarm initiates, because no one is supposed to be in this area during this time.
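Conceptually, all of these electromechanical detectors reduce to the same check: is each circuit still closed? A minimal sketch, with hypothetical sensor names:

```python
# Minimal sketch of electromechanical detection: each sensor reports whether
# its circuit is still closed; any break (cut foil, opened contact, broken
# embedded wire, triggered pressure pad) should raise an alarm.
def scan_circuits(sensors):
    """Return the names of sensors whose circuits are no longer closed."""
    return [name for name, closed in sensors.items() if not closed]

sensors = {"window-foil-1": True, "door-contact-2": False, "vib-wire-3": True}
# scan_circuits(sensors) -> ['door-contact-2']
```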
Types of volumetric IDSs are photoelectric, acoustical-seismic, ultrasonic, and mi-
crowave.
A photoelectric system, or photometric system, detects the change in a light beam and
thus can be used only in windowless rooms. These systems work like photoelectric
smoke detectors, which emit a beam that hits the receiver. If this beam of light is inter-
rupted, an alarm sounds. The beams emitted by the photoelectric cell can be cross-
sectional and can be invisible or visible beams. Cross-sectional means that one area can
have several different light beams extending across it, which is usually carried out by
using hidden mirrors to bounce the beam from one place to another until it hits the
light receiver. These are the most commonly used systems in the movies. You have
probably seen James Bond and other noteworthy movie spies or criminals use night-
vision goggles to see the invisible beams and then step over them.
A passive infrared (PIR) system identifies changes in heat waves in an area it is
configured to monitor. If the temperature of particles within the air rises, it could be an
indication of the presence of an intruder, so an alarm is sounded.
An acoustical detection system uses microphones installed on floors, walls, or ceil-
ings. The goal is to detect any sound made during a forced entry. Although these systems

are easily installed, they are very sensitive and cannot be used in areas open to sounds of
storms or traffic. Vibration sensors are similar and are also implemented to detect forced
entry. Financial institutions may choose to implement these types of sensors on exterior
walls, where bank robbers may attempt to drive a vehicle through. They are also com-
monly used around the ceiling and flooring of vaults to detect someone trying to make
an unauthorized bank withdrawal.
Wave-pattern motion detectors differ in the frequency of the waves they monitor. The
different frequencies are microwave, ultrasonic, and low frequency. All of these devices
generate a wave pattern that is sent over a sensitive area and reflected back to a receiver.
If the pattern is returned undisturbed, the device does nothing. If the pattern returns
altered because something in the room is moving, an alarm sounds.
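The transmit-and-compare principle behind wave-pattern detectors can be sketched directly. The tolerance value is an illustrative assumption, and the wave pattern is a simple list of amplitudes:

```python
# Sketch of the wave-pattern principle: compare the reflected pattern against
# what was transmitted, and alarm on any disturbance beyond a tolerance.
def pattern_disturbed(transmitted, received, tolerance=0.05):
    """True if the returned wave pattern differs beyond the tolerance."""
    return any(abs(t - r) > tolerance for t, r in zip(transmitted, received))

sent = [0.0, 0.5, 1.0, 0.5, 0.0]
assert not pattern_disturbed(sent, sent)                   # undisturbed: no alarm
assert pattern_disturbed(sent, [0.0, 0.5, 0.2, 0.5, 0.0])  # altered: alarm
```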
Aproximity detector, or capacitance detector, emits a measurable magnetic field. The
detector monitors this magnetic field, and an alarm sounds if the field is disrupted.
These devices are usually used to protect specific objects (artwork, cabinets, or a safe)
versus protecting a whole room or area. Capacitance change in an electrostatic field can
be used to catch a bad guy, but first you need to understand what capacitance change
means. An electrostatic IDS creates an electrostatic field, which is just an electric
field associated with static electric charges. All objects have a static electric charge.
They are all made up of many subatomic particles, and when everything is stable and
static, these particles constitute one holistic electric charge. This means there is a bal-
ance between the electric capacitance and inductance. Now, if an intruder enters the
area, his subatomic particles will mess up this lovely balance in the electrostatic field,
causing a capacitance change, and an alarm will sound. So if you want to rob a com-
pany that uses these types of detectors, leave the subatomic particles that make up your
body at home.
The type of motion detector that a company chooses to implement, its power capac-
ity, and its configurations dictate the number of detectors needed to cover a sensitive area.
Also, the size and shape of the room and the items within the room may cause barriers,
in which case more detectors would be needed to provide the necessary level of coverage.
IDSs are support mechanisms intended to detect and announce an attempted intru-
sion. They will not prevent or apprehend intruders, so they should be seen as an aid to
the organization’s security forces.
Intrusion Detection System Characteristics
IDSs are very valuable controls to use in every physical security program, but sev-
eral issues need to be understood before implementing them:
• They are expensive and require human intervention to respond to the
alarms.
• A redundant power supply and emergency backup power are necessary.
• They can be linked to a centralized security system.
• They should have a fail-safe configuration, which defaults to “activated.”
• They should detect, and be resistant to, tampering.

Chapter 5: Physical and Environmental Security
497
Patrol Force and Guards
One of the best security mechanisms is a security guard and/or a patrol force to moni-
tor a facility’s grounds. This type of security control is more flexible than other security
mechanisms, provides good response to suspicious activities, and works as a great de-
terrent. However, it can be a costly endeavor, because it requires a salary, benefits, and
time off. People are sometimes unreliable. Screening and bonding are important parts
of selecting a security guard, but they provide only a certain level of assurance. One issue
arises if the security guard decides to make exceptions for people who do not follow the
organization’s approved policies. Because basic human nature is to trust and help peo-
ple, a seemingly innocent favor can put an organization at risk.
IDSs and physical protection measures ultimately require human intervention. Se-
curity guards can be at a fixed post or can patrol specific areas. Different organizations
will have different needs from security guards. They may be required to check individ-
ual credentials and enforce filling out a sign-in log. They may be responsible for moni-
toring IDSs and expected to respond to alarms. They may need to issue and recover
visitor badges, respond to fire alarms, enforce rules established by the company within
the building, and control what materials can come into or go out of the environment.
The guard may need to verify that doors, windows, safes, and vaults are secured; report
identified safety hazards; enforce restrictions of sensitive areas; and escort individuals
throughout facilities.
The security guard should have clear and decisive tasks that she is expected to fulfill.
The guard should be fully trained on the activities she is expected to perform and on
the responses expected from her in different situations. She should also have a central
control point to check in to, two-way radios to ensure proper communication, and the
necessary access into areas she is responsible for protecting.
The best security has a combination of security mechanisms and does not depend
on just one component of security. Thus, a security guard should be accompanied by
other surveillance and detection mechanisms.
Dogs
Dogs have proven to be highly useful in detecting intruders and other unwanted condi-
tions. Their hearing and sight outperform those of humans, and their intelligence and
loyalty can be used for protection.
The best security dogs go through intensive training to respond to a wide range of
commands and to perform many tasks. Dogs can be trained to hold an intruder at bay
until security personnel arrive or to chase an intruder and attack. Some dogs are trained
to smell smoke so they can alert personnel to a fire.
Of course, dogs cannot always know the difference between an authorized person
and an unauthorized person, so if an employee goes into work after hours, he can have
more on his hands than expected. Dogs can provide a good supplementary security
mechanism, or a company can ask the security guard to bare his teeth at the sight of an
unknown individual instead. Whatever works.

CISSP All-in-One Exam Guide
498
Auditing Physical Access
Physical access control systems can use software and auditing features to produce audit
trails or access logs pertaining to access attempts. The following information should be
logged and reviewed:
• The date and time of the access attempt
• The entry point at which access was attempted
• The user ID employed when access was attempted
• Any unsuccessful access attempts, especially if during unauthorized hours
As with audit logs produced by computers, access logs are useless unless someone
actually reviews them. A security guard may be required to review these logs, but a se-
curity professional or a facility manager should also review these logs periodically.
Management needs to know where entry points into the facility exist and who attempts
to use them.
Audit and access logs are detective, not preventive. They are used to piece together a
situation after the fact instead of attempting to prevent an access attempt in the first place.
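The review process described above can be sketched in code. Assuming a hypothetical record layout with the four logged fields listed earlier (the field names and business hours here are illustrative, not from any particular product), a minimal Python filter that flags failed attempts during unauthorized hours might look like this:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record layout covering the four fields the text says
# should be logged; real access control systems will differ.
@dataclass
class AccessAttempt:
    timestamp: datetime
    entry_point: str
    user_id: str
    succeeded: bool

def after_hours_failures(log, open_hour=7, close_hour=19):
    """Return failed attempts made outside business hours -- the entries
    the text singles out for attention during periodic review."""
    return [
        attempt for attempt in log
        if not attempt.succeeded
        and not (open_hour <= attempt.timestamp.hour < close_hour)
    ]

log = [
    AccessAttempt(datetime(2024, 3, 1, 14, 5), "lobby", "jsmith", True),
    AccessAttempt(datetime(2024, 3, 1, 2, 17), "server room", "unknown", False),
]
flagged = after_hours_failures(log)   # the 2 A.M. failure is flagged
```

A report like `flagged` is only the detective half; as the text notes, someone still has to review it and act on it.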
Testing and Drills
Having fire detectors, portable extinguishers, and suppression agents is great, but peo-
ple also need to be properly trained on what to do when a fire (or other type of emer-
gency) takes place. An evacuation and emergency response plan must be developed and
actually put into action. The plan needs to be documented and to be easily accessible
in times of crisis. People who are assigned specific tasks must be taught and informed
how to fulfill those tasks, and dry runs must be done to walk people through different
emergency situations. The drills should take place at least once a year, and the entire
program should be continually updated and improved.
The tests and drills prepare personnel for what they may be faced with and provide
a controlled environment to learn the tasks expected of them. These tests and drills also
point out issues that may not have been previously thought about and addressed in the
planning process.
The exercise should have a predetermined scenario that the company may indeed
be faced with one day. Specific parameters and a scope of the exercise must be worked
out before sounding the alarms. The team of testers must agree upon what exactly is
getting tested and how to properly determine success or failure. The team must agree
upon the timing and duration of the exercise, who will participate in the exercise, who
will receive which assignments, and what steps should be taken. During evacuation,
specific people should be given lists of the employees they are responsible for, so they
can verify that everyone on their lists has escaped the building. This is the only way the
organization will know whether someone is still inside and who that person is.
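The roster check described above amounts to a set difference: each designated checker compares an assigned list against the people confirmed outside. A minimal sketch (the names are illustrative placeholders):

```python
# Evacuation accounting as a set difference. Each designated person
# compares an assigned roster against those confirmed at the assembly
# point; anyone left over may still be inside the building.
assigned = {"alice", "bob", "carol", "dan"}    # employees on one checker's list
accounted_for = {"alice", "carol", "dan"}      # confirmed at the assembly point

missing = assigned - accounted_for             # who to report to responders
```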

Summary
Our distributed environments have put much more responsibility on the individual user,
facility management, and administrative procedures and controls than in the old days.
Physical security is not just the night guard who carries around a big flashlight. Now, se-
curity can be extremely technical, comes in many forms, and raises many liability and
legal issues. Natural disasters, fires, floods, intruders, vandals, environmental issues, con-
struction materials, and power supplies all need to be planned for and dealt with.
Every organization should develop, implement, and maintain a physical security
program that contains the following control categories: deterrence, delay, detection,
assessment, and response. It is up to the organization to determine its acceptable risk
level and the specific controls required to fulfill the responsibility of each category.
Physical security is not often considered when people think of organizational secu-
rity and company asset protection, but real threats and risks need to be addressed and
planned for. Who cares if a hacker can get through an open port on the web server if the
building is burning down?
Quick Tips
• Physical security is usually the first line of defense against environmental risks
and unpredictable human behavior.
• Crime Prevention Through Environmental Design (CPTED) combines the
physical environment and sociology issues that surround it to reduce crime
rates and the fear of crime.
• The value of property within the facility and the value of the facility itself need
to be ascertained to determine the proper budget for physical security so that
security controls are cost-effective.
• Automated environmental controls help minimize the resulting damage and
speed the recovery process. Manual controls can be time-consuming and error-
prone, and require constant attention.
• Construction materials and structure composition need to be evaluated for
their protective characteristics, their utility, and their costs and benefits.
• Some physical security controls may conflict with the safety of people. These
issues need to be addressed; human life is always more important than
protecting a facility or the assets it contains.
• When looking at locations for a facility, consider local crime, natural disaster
possibilities, and distance to hospitals, police and fire stations, airports, and
railroads.

• The HVAC system should maintain the appropriate temperature and humidity
levels and provide closed-loop recirculating air-conditioning and positive
pressurization and ventilation.
• High humidity can cause corrosion, and low humidity can cause static electricity.
• Dust and other air contaminants may adversely affect computer hardware, and
should be kept to acceptable levels.
• Administrative controls include drills and exercises of emergency procedures,
simulation testing, documentation, inspections and reports, prescreening of
employees, post-employment procedures, delegation of responsibility and
rotation of duties, and security-awareness training.
• Emergency procedure documentation should be readily available and
periodically reviewed and updated.
• Proximity identification devices can be user-activated (action needs to be
taken by a user) or system sensing (no action needs to be taken by the user).
• A transponder is a proximity identification device that does not require action
by the user. The reader transmits signals to the device, and the device responds
with an access code.
• Exterior fencing can be costly and unsightly, but can provide crowd control
and help control access to the facility.
• If interior partitions do not go all the way up to the true ceiling, an intruder
can remove a ceiling tile and climb over the partition into a critical portion of
the facility.
• Intrusion detection devices include motion detectors, CCTVs, vibration
sensors, and electromechanical devices.
• Intrusion detection devices can be penetrated, are expensive to install and
monitor, require human response, and are subject to false alarms.
• CCTV enables one person to monitor a large area, but should be coupled with
alerting functions to ensure proper response.
• Security guards are expensive but provide flexibility in response to security
breaches and can deter intruders from attempting an attack.
• A cipher lock uses a keypad and is programmable.
• Company property should be marked as such, and security guards should
be trained how to identify when these items leave the facility in an improper
manner.
• Floors, ceilings, and walls need to be able to hold the necessary load and
provide the required fire rating.
• Water, steam, and gas lines need to have shutoff valves and positive drains
(substance flows out instead of in).
• The threats to physical security are interruption of services, theft, physical
damage, unauthorized disclosure, and loss of system integrity.

• The primary power source is what is used in day-to-day operations, and the
alternate power source is a backup in case the primary source fails.
• Power companies usually plan and implement brownouts when they are
experiencing high demand.
• Power noise is a disturbance of power and can be caused by electromagnetic
interference (EMI) or radio frequency interference (RFI).
• EMI can be caused by lightning, motors, and the current difference between
wires. RFI can be caused by electrical system mechanisms, fluorescent lighting,
and electrical cables.
• Power transient noise is a disturbance imposed on a power line that causes
electrical interference.
• Power regulators condition the line to keep voltage steady and clean.
• UPS factors that should be reviewed are the size of the electrical load the UPS
can support, the speed with which it can assume the load when the primary
source fails, and the amount of time it can support the load.
• Shielded lines protect from electrical and magnetic induction, which causes
interference to the power voltage.
• Perimeter protection is used to deter trespassing and to enable people to enter
a facility through a few controlled entrances.
• Smoke detectors should be located on and above suspended ceilings, below
raised floors, and in air ducts to provide maximum fire detection.
• A fire needs high temperatures, oxygen, and fuel. To suppress it, one or more
of those items needs to be reduced or eliminated.
• Gases like halon, FM-200, and other halon substitutes interfere with the
chemical reaction of a fire.
• The HVAC system should be turned off before activation of a fire suppressant
to ensure it stays in the needed area and that smoke is not distributed to
different areas of the facility.
• Portable fire extinguishers should be located within 50 feet of electrical
equipment and should be inspected quarterly.
• CO2 is a colorless, odorless, and potentially lethal substance because it
removes the oxygen from the air in order to suppress fires.
• Piggybacking, when unauthorized access is achieved to a facility via another
individual’s legitimate access, is a common concern with physical security.
• Halon is no longer available because it depletes the ozone layer. FM-200 or other
similar substances are used instead of halon.
• Proximity systems require human response, can cause false alarms, and
depend on a constant power supply, so these protection systems should be
backed up by other types of security systems.

• Dry pipe systems reduce the accidental discharge of water because the water
does not enter the pipes until an automatic fire sensor indicates there is an
actual fire.
• In locations with freezing temperatures where broken pipes cause problems,
dry pipes should be used.
• A preaction pipe delays water release.
• CCTVs are best used in conjunction with other monitoring and intrusion alert
methods.
• CPTED provides three main strategies, which are natural access control,
natural surveillance, and natural territorial reinforcement.
• Window types that should be understood are standard, tempered, acrylic,
wired, and laminated.
• A Perimeter Intrusion Detection and Assessment System (PIDAS) is a type of
fence that has a passive cable vibration sensor that sets off an alarm if an
intrusion is detected.
• Security lighting can be continuous, controlled, standby, or responsive.
• CCTV lenses can be fixed focal length or zoom, which control the focal length,
depth of focus, and depth of field.
• An IDS can be a photoelectric system, passive infrared system, acoustical
detection system, wave-pattern motion detector, or proximity detector.
Questions
Please remember that these questions are formatted and asked in a certain way for a
reason. You must remember that the CISSP exam is asking questions at a conceptual
level. Questions may not always have the perfect answer, and the candidate is advised
against always looking for the perfect answer. The candidate should look for the best
answer in the list.
1. What is the first step that should be taken when a fire has been detected?
A. Turn off the HVAC system and activate fire door releases.
B. Determine which type of fire it is.
C. Advise individuals within the building to leave.
D. Activate the fire suppression system.
2. A company needs to implement a CCTV system that will monitor a large area
outside the facility. Which of the following is the correct lens combination
for this?
A. A wide-angle lens and a small lens opening
B. A wide-angle lens and a large lens opening

C. A wide-angle lens and a large lens opening with a small focal length
D. A wide-angle lens and a large lens opening with a large focal length
3. When should a Class C fire extinguisher be used instead of a Class A fire
extinguisher?
A. When electrical equipment is on fire
B. When wood and paper are on fire
C. When a combustible liquid is on fire
D. When the fire is in an open area
4. Which of the following is not a true statement about CCTV lenses?
A. Lenses that have a manual iris should be used in outside monitoring.
B. Zoom lenses will carry out focus functionality automatically.
C. Depth of field increases as the size of the lens opening decreases.
D. Depth of field increases as the focal length of the lens decreases.
5. How does halon fight fires?
A. It reduces the fire’s fuel intake.
B. It reduces the temperature of the area and cools the fire out.
C. It disrupts the chemical reactions of a fire.
D. It reduces the oxygen in the area.
6. What is a mantrap?
A. A trusted security domain
B. A logical access control mechanism
C. A double-door room used for physical access control
D. A fire suppression device
7. What is true about a transponder?
A. It is a card that can be read without sliding it through a card reader.
B. It is a biometric proximity device.
C. It is a card that a user swipes through a card reader to gain access to a
facility.
D. It exchanges tokens with an authentication server.
8. When is a security guard the best choice for a physical access control
mechanism?
A. When discriminating judgment is required
B. When intrusion detection is required
C. When the security budget is low
D. When access controls are in place

9. Which of the following is not a characteristic of an electrostatic intrusion
detection system?
A. It creates an electrostatic field and monitors for a capacitance change.
B. It can be used as an intrusion detection system for large areas.
C. It produces a balance between the electric capacitance and inductance of
an object.
D. It can detect if an intruder comes within a certain range of an object.
10. What is a common problem with vibration-detection devices used for
perimeter security?
A. They can be defeated by emitting the right electrical signals in the
protected area.
B. The power source is easily disabled.
C. They cause false alarms.
D. They interfere with computing devices.
11. Which of the following is an example of glare protection?
A. Using automated iris lenses with short focal lengths
B. Using standby lighting, which is produced by a CCTV camera
C. Directing light toward entry points and away from a security force post
D. Ensuring that the lighting system uses positive pressure
12. Which of the following is not a main component of CPTED?
A. Natural access control
B. Natural surveillance
C. Territorial reinforcement
D. Target hardening
13. Which problems may be caused by humidity in an area with electrical
devices?
A. High humidity causes excess electricity, and low humidity causes
corrosion.
B. High humidity causes corrosion, and low humidity causes static electricity.
C. High humidity causes power fluctuations, and low humidity causes static
electricity.
D. High humidity causes corrosion, and low humidity causes power
fluctuations.

14. What does positive pressurization pertaining to ventilation mean?
A. When a door opens, the air comes in.
B. When a fire takes place, the power supply is disabled.
C. When a fire takes place, the smoke is diverted to one room.
D. When a door opens, the air goes out.
15. Which of the following answers contains a category of controls that does not
belong in a physical security program?
A. Deterrence and delaying
B. Response and detection
C. Assessment and detection
D. Delaying and lighting
16. Which is not an administrative control pertaining to emergency procedures?
A. Intrusion detection systems
B. Awareness and training
C. Drills and inspections
D. Delegation of duties
17. If an access control has a fail-safe characteristic but not a fail-secure
characteristic, what does that mean?
A. It defaults to no access.
B. It defaults to being unlocked.
C. It defaults to being locked.
D. It defaults to sounding a remote alarm instead of a local alarm.
18. Which of the following is not considered a delaying mechanism?
A. Locks
B. Defense-in-depth measures
C. Warning signs
D. Access controls
19. What are the two general types of proximity identification devices?
A. Biometric devices and access control devices
B. Swipe card devices and passive devices
C. Preset code devices and wireless devices
D. User-activated devices and system sensing devices

20. Which of the following answers best describes the relationship between a risk
analysis, acceptable risk level, baselines, countermeasures, and metrics?
A. The risk analysis output is used to determine the proper countermeasures
required. Baselines are derived to measure these countermeasures. Metrics
are used to track countermeasure performance to ensure baselines are
being met.
B. The risk analysis output is used to help management understand and
set an acceptable risk level. Baselines are derived from this level. Metrics
are used to track countermeasure performance to ensure baselines are
being met.
C. The risk analysis output is used to help management understand and set
baselines. An acceptable risk level is derived from these baselines. Metrics
are used to track countermeasure performance to ensure baselines are
being met.
D. The risk analysis output is used to help management understand and set
an acceptable risk level. Baselines are derived from the metrics. Metrics
are used to track countermeasure performance to ensure baselines are
being met.
21. Most of today’s CCTV systems use charge-coupled devices. Which of the
following is not a characteristic of these devices?
A. Receives input through the lenses and converts it into an electronic signal
B. Captures signals in the infrared range
C. Provides better-quality images
D. Records data on hard drives instead of tapes
22. Which is not a drawback to installing intrusion detection and monitoring
systems?
A. It’s expensive to install.
B. It cannot be penetrated.
C. It requires human response.
D. It’s subject to false alarms.
23. What is a cipher lock?
A. A lock that uses cryptographic keys
B. A lock that uses a type of key that cannot be reproduced
C. A lock that uses a token and perimeter reader
D. A lock that uses a keypad
24. If a cipher lock has a door delay option, what does that mean?
A. After a door is open for a specific period, the alarm goes off.
B. It can only be opened during emergency situations.

C. It has a hostage alarm capability.
D. It has supervisory override capability.
25. Which of the following best describes the difference between a warded lock
and a tumbler lock?
A. A tumbler lock is more simplistic and easier to circumvent than
a warded lock.
B. A tumbler lock uses an internal bolt, and a warded lock uses internal
cylinders.
C. A tumbler lock has more components than a warded lock.
D. A warded lock is mainly used externally, and a tumbler lock is used
internally.
26. During the construction of her company’s facility, Mary has been told that
light frame construction material has been used to build the internal walls.
Which of the following best describes why Mary is concerned about this issue?
i. It provides the least amount of protection against fire.
ii. It provides the least amount of protection against forcible entry attempts.
iii. It is noncombustible.
iv. It provides the least amount of protection for mounting walls and windows.
A. i, iii
B. i, ii
C. ii, iii
D. ii, iii, iv
27. Which of the following is not true pertaining to facility construction
characteristics?
i. Calculations of approximate penetration times for different types of
explosives and attacks are based on the thickness of the concrete walls and
the gauge of rebar used.
ii. Using thicker rebar and properly placing it within the concrete provides
increased protection.
iii. Reinforced walls, rebar, and the use of double walls can be used as
delaying mechanisms.
iv. Steel rods encased in concrete are referred to as rebar.
A. All of them
B. None of them
C. iii
D. i, ii

28. It is important to choose the correct type of windows when building a facility.
Each type of window provides a different level of protection. Which of the
following is a correct description of window glass types?
i. Standard glass is made by heating the glass and then suddenly cooling it.
ii. Tempered glass windows are commonly used in residential homes and are
easily broken.
iii. Acrylic glass has two sheets of glass with a plastic film in between.
iv. Laminated glass can be made out of polycarbonate acrylic, which is
stronger than standard glass but produces toxic fumes if burned.
A. ii, iii
B. ii, iii, iv
C. None of them
D. All of them
29. Sandy needs to implement the right type of fencing in an area where there is
no foot traffic or observation capabilities. Sandy has decided to implement a
Perimeter Intrusion Detection and Assessment System. Which of the following
is not a characteristic of this type of fence?
i. It has sensors located on the wire mesh and at the base of the fence.
ii. It cannot detect if someone attempts to cut or climb the fence.
iii. It has a passive cable vibration sensor that sets off an alarm if an intrusion
is detected.
iv. It can cause many false alarms.
A. i
B. ii
C. iii, iv
D. i, ii, iv
30. CCTV lenses have irises, which control the amount of light that enters the
lens. Which of the following has an incorrect characteristic of the types of
CCTV irises that are available?
i. Automated iris lenses have a ring around the CCTV lens that can be
manually turned and controlled.
ii. A lens with a manual iris would be used in areas that have fixed lighting,
since the iris cannot self-adjust to changes of light.
iii. An auto iris lens should be used in environments where the light changes,
as in an outdoor setting.
iv. As the environment brightens, this is sensed by the manual iris, which
automatically adjusts itself.

A. i, iv
B. i, ii, iii
C. i, ii
D. i, ii, iv
Answers
1. C. Human life takes precedence. Although the other answers are important
steps in this type of situation, the first step is to warn others and save as many
lives as possible.
2. A. The depth of field refers to the portion of the environment that is in focus
when shown on the monitor. The depth of field varies depending upon the
size of the lens opening, the distance of the object being focused on, and the
focal length of the lens. The depth of field increases as the size of the lens
opening decreases, the subject distance increases, or the focal length of the
lens decreases. So if you want to cover a large area and not focus on specific
items, it is best to use a wide-angle lens and a small lens opening.
3. A. A Class C fire is an electrical fire. Thus, an extinguisher with the proper
suppression agent should be used. The following table shows the fire types,
their attributes, and suppression methods:
Fire Class   Type of Fire           Elements of Fire                       Suppression Method
A            Common combustibles    Wood products, paper, and laminates    Water, foam
B            Liquid                 Petroleum products and coolants        Gas, CO2, foam, dry powders
C            Electrical             Electrical equipment and wires         Gas, CO2, dry powders
D            Combustible metals     Magnesium, sodium, potassium           Dry powder
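The table above can be restated as a simple lookup, which makes the exam point concrete: selecting a suppression agent starts from the fire class. This is only an illustrative sketch of the table, not a substitute for the relevant fire codes:

```python
# Suppression methods per fire class, transcribed from the table above.
SUPPRESSION = {
    "A": {"water", "foam"},                      # common combustibles
    "B": {"gas", "CO2", "foam", "dry powders"},  # liquids
    "C": {"gas", "CO2", "dry powders"},          # electrical
    "D": {"dry powder"},                         # combustible metals
}

def appropriate(fire_class, agent):
    """True if the agent is listed for that fire class in the table."""
    return agent in SUPPRESSION.get(fire_class, set())
```

Note that water is appropriate for a Class A fire but never appears under Class C, which is exactly why the question's answer is an electrical-rated extinguisher.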
4. A. Manual iris lenses have a ring around the CCTV lens that can be manually
turned and controlled. A lens that has a manual iris would be used in an area
that has fixed lighting, since the iris cannot self-adjust to changes of light. An
auto iris lens should be used in environments where the light changes, such
as an outdoor setting. As the environment brightens, this is sensed by the
iris, which automatically adjusts itself. Security personnel will configure the
CCTV to have a specific fixed exposure value, which the iris is responsible for
maintaining. The other answers are true.
5. C. Halon is a type of gas used to interfere with the chemical reactions between
the elements of a fire. A fire requires fuel, oxygen, high temperatures, and
chemical reactions to burn properly. Different suppressant agents have been
developed to attack each aspect of a fire: CO2 displaces the oxygen, water
reduces the temperature, and soda acid removes the fuel.

6. C. A mantrap is a small room with two doors. The first door is locked; a person
is identified and authenticated by a security guard, biometric system, smart
card reader, or swipe card reader. Once the person is authenticated and access
is authorized, the first door opens and allows the person into the mantrap. The
first door locks and the person is trapped. The person must be authenticated
again before the second door unlocks and allows him into the facility.
7. A. A transponder is a type of physical access control device that does not
require the user to slide a card through a reader. The reader and card
communicate directly. The card and reader have a receiver, transmitter, and
battery. The reader sends signals to the card to request information. The card
sends the reader an access code.
8. A. Although many effective physical security mechanisms are on the market
today, none can look at a situation, make a judgment about it, and decide what
the next step should be. A security guard is employed when a company needs to
have a countermeasure that can think and make decisions in different scenarios.
9. B. An electrostatic IDS creates an electrostatic field, which is just an electric
field associated with static electric charges. The IDS creates a balanced
electrostatic field between itself and the object being monitored. If an intruder
comes within a certain range of the monitored object, there is capacitance
change. The IDS can detect this change and sound an alarm.
10. C. This type of system is sensitive to sounds and vibrations and detects the
changes in the noise level of an area it is placed within. This level of sensitivity
can cause many false alarms. These devices do not emit any waves; they only
listen for sounds within an area and are considered passive devices.
11. C. When lighting is installed, it should be directed toward areas where
potential intruders would most likely be coming from, and directed away
from the security force posts. For example, lighting should be pointed at gates
or exterior access points, and the guard locations should be in the shadows, or
under a lower amount of illumination. This is referred to as “glare protection”
for the security force.
12. D. Natural access control is the use of the environment to control access to
entry points, such as using landscaping and bollards. An example of natural
surveillance is the construction of pedestrian walkways so there is a clear line
of sight of all the activities in the surroundings. Territorial reinforcement gives
people a sense of ownership of a property, giving them a greater tendency to
protect it. These concepts are all parts of CPTED. Target hardening has to do
with implementing locks, security guards, and proximity devices.
13. B. High humidity can cause corrosion, and low humidity can cause excessive
static electricity. Static electricity can short-out devices or cause loss of
information.
14. D. Positive pressurization means that when someone opens a door, the air
goes out, and outside air does not come in. If a facility were on fire and the
doors were opened, positive pressure would cause the smoke to go out instead
of being pushed back into the building.

Chapter 5: Physical and Environmental Security
15. D. The categories of controls that should make up any physical security
program are deterrence, delaying, detection, assessment, and response.
Lighting is a control itself, not a category of controls.
16. A. Awareness and training, drills and inspections, and delegation of duties are
all items that have a direct correlation to proper emergency procedures. It is
management’s responsibility to ensure that these items are in place, properly
tested, and carried out. Intrusion detection systems are technical or physical
controls—not administrative.
17. B. A fail-safe setting means that if a power disruption were to affect the
automated locking system, the doors would default to being unlocked. A fail-
secure configuration means a door would default to being locked if there were
any problems with the power.
18. C. Every physical security program should have delaying mechanisms, which have
the purpose of slowing down an intruder so security personnel can be alerted and
arrive at the scene. A warning sign is a deterrence control, not a delaying control.
19. D. A user-activated system requires the user to do something: swipe the card
through the reader and/or enter a code. A system sensing device recognizes
the presence of the card and communicates with it without the user needing
to carry out any activity.
20. B. The physical security team needs to carry out a risk analysis, which will
identify the organization’s vulnerabilities, threats, and business impacts. The
team should present these findings to management and work with them to
define an acceptable risk level for the physical security program. From there,
the team should develop baselines (minimum levels of security) and metrics
to properly evaluate and determine whether the baselines are being met by the
implemented countermeasures. Once the team identifies and implements the
countermeasures, the countermeasures’ performance should be continually
evaluated and expressed in the previously created metrics. These performance
values are compared against the set baselines. If the baselines are continually
maintained, then the security program is successful because the company’s
acceptable risk level is not being exceeded.
21. D. The CCD is an electrical circuit that receives input light from the lens and
converts it into an electronic signal, which is then displayed on the monitor.
Images are focused through a lens onto the CCD chip surface, which forms
the electrical representation of the optical image. This technology allows the
capture of extraordinary details of objects and precise representation because
it has sensors that work in the infrared range, which extends beyond human
perception. The CCD sensor picks up this extra “data” and integrates it into
the images shown on the monitor, to allow for better granularity and quality
in the video. CCD does not record data.
22. B. Monitoring and intrusion detection systems are expensive, require someone
to respond when they set off an alarm, and, because of their level of sensitivity,
can cause several false alarms. Like any other type of technology or device, they
have their own vulnerabilities that can be exploited and penetrated.

CISSP All-in-One Exam Guide
23. D. Cipher locks, also known as programmable locks, use keypads to control
access into an area or facility. The lock can require a swipe card and a specific
combination that’s entered into the keypad.
24. A. A security guard would want to be alerted when a door has been open for
an extended period. It may be an indication that something is taking place
other than a person entering or exiting the door. A security system can have
a threshold set so that if the door is open past the defined time period, an
alarm sounds.
25. C. The tumbler lock has more pieces and parts than a warded lock. The key
fits into a cylinder, which raises the lock metal pieces to the correct height so
the bolt can slide to the locked or unlocked position. A warded lock is easier
to circumvent than a tumbler lock.
26. B. Light frame construction material provides the least amount of protection
against fire and forcible entry attempts. It is composed of untreated lumber
that would be combustible during a fire. Light frame construction material is
usually used to build homes, primarily because it is cheap, but also because
homes typically are not under the same types of fire and intrusion threats that
office buildings are.
27. B. Calculations of approximate penetration times for different types of
explosives and attacks are based on the thickness of the concrete walls and
the gauge of rebar used. (Rebar refers to the steel rods encased within the
concrete.) So even if the concrete were damaged, it would take longer to
actually cut or break through the rebar. Using thicker rebar and properly
placing it within the concrete provides even more protection. Reinforced
walls, rebar, and the use of double walls can be used as delaying mechanisms.
The idea is that it will take the bad guy longer to get through two reinforced
walls, which gives the response force sufficient time to arrive at the scene and
stop the attacker.
28. C. Standard glass windows are commonly used in residential homes and are
easily broken. Tempered glass is made by heating the glass and then suddenly
cooling it. This increases its mechanical strength, which means it can handle
more stress and is harder to break. It is usually five to seven times stronger
than standard glass. Acrylic glass can be made out of polycarbonate acrylic,
which is stronger than standard glass but produces toxic fumes if burned.
Laminated glass has two sheets of glass with a plastic film in between. This
added plastic makes it much more difficult to break the window.
29. B. Perimeter Intrusion Detection and Assessment System (PIDAS) is a type of
fencing that has sensors located on the wire mesh and at the base of the fence.
It is used to detect if someone attempts to cut or climb the fence. It has a
passive cable vibration sensor that sets off an alarm if an intrusion is detected.
PIDAS is very sensitive and can cause many false alarms.

30. A. CCTV lenses have irises, which control the amount of light that enters
the lens. Manual iris lenses have a ring around the CCTV lens that can be
manually turned and controlled. A lens with a manual iris would be used
in areas that have fixed lighting, since the iris cannot self-adjust to changes
of light. An auto iris lens should be used in environments where the light
changes, as in an outdoor setting. As the environment brightens, this is
sensed by the iris, which automatically adjusts itself. Security personnel will
configure the CCTV to have a specific fixed exposure value, which the iris is
responsible for maintaining. On a sunny day, the iris lens closes to reduce the
amount of light entering the camera, while at night, the iris opens to capture
more light—just like our eyes.

CHAPTER 6
Telecommunications and
Network Security
This chapter presents the following:
• OSI and TCP/IP models
• Protocol types and security issues
• LAN, WAN, MAN, intranet, and extranet technologies
• Cable types and data transmission types
• Network devices and services
• Communications security management
• Telecommunications devices and technologies
• Remote connectivity technologies
• Wireless technologies
• Threat and attack types
Telecommunications and networking use various mechanisms, devices, software, and
protocols that are interrelated and integrated. Networking is one of the more complex
topics in the computer field, mainly because so many technologies are involved and are
evolving. Our current technologies are improving in functionality and security
exponentially, and every month there seem to be new “emerging” technologies that we
have to learn, understand, implement, and secure. A network administrator must know
how to configure networking software, protocols and services, and devices; deal with
interoperability issues; install, configure, and interface with telecommunications soft-
ware and devices; and troubleshoot effectively. A security professional must understand
these issues and be able to analyze them a few levels deeper to recognize fully where
vulnerabilities can arise within each of these components and then know what to do
about them. This can be an overwhelming and challenging task. However, if you are
knowledgeable, have a solid practical skill set, and are willing to continue to learn, you
can have more career opportunities than you know what to do with.
While almost every country in the world has had to deal with hard economic times,
one industry that has not been greatly affected by the downturn is information
security. Organizations and government agencies do not have a large enough pool

of people with the necessary skill set to hire from, and the attacks against these entities
are only increasing and becoming more critical. Security is a good business to be in, if
you are truly knowledgeable, skilled, and disciplined.
Ten years ago it seemed possible to understand and secure a network and every-
thing that resided within it. As technology grew in importance in every aspect of our
lives over the years, however, almost every component that made up a traditional net-
work grew in complexity. We still need to know the basics (routers, firewalls, TCP/IP
protocols, cabling, switching technologies, etc.), but now we also need to understand
data loss prevention, web and e-mail security, mobile technologies, antimalware prod-
ucts, virtualization, cloud computing, endpoint security solutions, radio-frequency
identification (RFID), virtual private network protocols, social networking threats,
wireless technologies, continuous monitoring capabilities, and more. Our society has
come up with so many different real-time communication technologies (instant
messaging, IP telephony, video conferencing, SMS, etc.) that we had to develop unified
communication models to allow for interoperability and optimization. The IEEE standards
that define various editions and components of wireless local area network (LAN) tech-
nologies have gone through the whole alphabet (802.11a, 802.11b, 802.11c, 802.11d,
802.11e, 802.11f, etc.) and we have had to start doubling up on our letters, as in IEEE
802.11ac. Mobile communication technology has gone from 1G to 4G, with some half
G’s in between (2.5G, 3.5G). And as the technology increases in complexity and the
attackers become more determined and creative, we not only need to understand basic
attack types (buffer overflows, fragmentation attacks, DoS, viruses, social engineering),
but also the more advanced (client-side, injection, fuzzing, pointer manipulation,
cache poisoning, etc.).
A network used to be a construct with boundaries, but today most environments do
not have clear-cut boundaries because most communication gadgets are some type of
computer (smartphones, tablet PCs, medical devices, and appliances). These devices do
not stay within the walls of an office, because people are road warriors, telecommuting,
and working from virtual offices. The increased use of outsourcing also increases the
boundaries of our traditional networks and with so many entities needing access, the
boundaries are commonly porous in nature.
As our technologies continue to explode with complexity, the threats of compro-
mise from attackers continue to increase—not just in volume but in criticality. Today’s
attackers are commonly part of organized crime rings or funded by nation states. This
means that the attackers are trained, organized, and very focused. Various ways of
stealing funds (siphoning, identity theft, money mules, carding) are rampant; the
theft of intellectual property is continuously on the rise; and cyber warfare is
becoming better known. When the Stuxnet worm negatively affected Iran’s uranium
enrichment infrastructure in 2010, the world had a better idea of what malware is capable of.
Today’s security professional needs to understand many things on many different
levels because the world of technology is only getting more complex and the risks are
only increasing. In this chapter we will start with the basics of networking and tele-
communications and build upon them and identify many of the security issues that
are involved.

Telecommunications
Telecommunications is the electrical transmission of data among systems, whether through
analog, digital, or wireless transmission types. The data can flow through copper wires;
coaxial cable; airwaves; the telephone company’s public-switched telephone network
(PSTN); and a service provider’s fiber cables, switches, and routers. Definitive lines exist
between the media used for transmission, the technologies, the protocols, and whose
equipment is being used. However, the definitive lines get blurry when one follows how
data created on a user’s workstation flows within seconds through a complex path of
Ethernet cables, to a router that divides the company’s network and the rest of the world,
through the Asynchronous Transfer Mode (ATM) switch provided by the service provider,
to the many switches the packets traverse throughout the ATM cloud, on to another
company’s network, through its router, and to another user’s workstation. Each piece is
interesting, but when they are all integrated and work together, it is awesome.
Telecommunications usually refers to telephone systems, service providers, and car-
rier services. Most telecommunications systems are regulated by governments and in-
ternational organizations. In the United States, telecommunications systems are
regulated by the Federal Communications Commission (FCC); this oversight covers both
voice and data transmissions. In Canada, agreements are managed through Spectrum,
Information Technologies and Telecommunications (SITT), Industry Canada. Globally,
organizations develop policies, recommend standards, and work together to provide
standardization and the capability for different technologies to properly interact.
The main standards organizations are the International Telecommunication Union
(ITU) and the International Organization for Standardization (ISO). Their models and standards
have shaped our technology today, and the technological issues governed by these or-
ganizations are addressed throughout this chapter.
Open Systems Interconnection Reference Model
I don’t understand what all of these protocols are doing.
Response: Okay, let’s make a model to explain it then.
ISO is a worldwide federation that works to provide international standards. In the
early 1980s, ISO worked to develop a protocol set that would be used by all vendors
throughout the world to allow the interconnection of network devices. This movement
was fueled with the hopes of ensuring that all vendor products and technologies could
communicate and interact across international and technical boundaries. The actual
protocol set did not catch on as a standard, but the model of this protocol set, the OSI
model, was adopted and is used as an abstract framework to which most operating sys-
tems and protocols adhere.
Many people think that the OSI reference model arrived at the beginning of the
computing age as we know it and helped shape and provide direction for many, if not
all, networking technologies. However, this is not true. In fact, it was introduced in
1984, at which time the basics of the Internet had already been developed and imple-
mented, and the basic Internet protocols had been in use for many years. The Transmis-
sion Control Protocol/Internet Protocol (TCP/IP) suite actually has its own model that

is often used today when examining and understanding networking issues. Figure 6-1
shows the differences between the OSI and TCP/IP networking models. In this chapter,
we will focus more on the OSI model.
NOTE The host-to-host layer is sometimes called the transport layer in
the TCP/IP model. The application layer in the TCP/IP architecture model is
equivalent to a combination of the application, presentation, and session layers
in the OSI model.
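The mapping described in this note can be sketched as a simple lookup. The grouping below follows the text above (the exact layer names in the four-layer TCP/IP model vary by source, so treat the labels as illustrative):

```python
# Mapping of OSI layers to the TCP/IP model layers described in the note.
# Application, presentation, and session collapse into the TCP/IP
# application layer; the lower-layer grouping is a common convention.
OSI_TO_TCPIP = {
    "application":  "application",
    "presentation": "application",
    "session":      "application",
    "transport":    "host-to-host (transport)",
    "network":      "internet",
    "data link":    "network access",
    "physical":     "network access",
}

def tcpip_layer(osi_layer: str) -> str:
    """Return the TCP/IP model layer that covers a given OSI layer."""
    return OSI_TO_TCPIP[osi_layer.lower()]
```

For example, `tcpip_layer("session")` returns `"application"`, reflecting the collapse of the top three OSI layers.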
Protocol
A network protocol is a standard set of rules that determines how systems will commu-
nicate across networks. Two different systems that use the same protocol can commu-
nicate and understand each other despite their differences, similar to how two people
can communicate and understand each other by using the same language.
The OSI reference model, as described by ISO Standard 7498, provides important
guidelines used by vendors, engineers, developers, and others. The model segments the
networking tasks, protocols, and services into different layers. Each layer has its own
responsibilities regarding how two computers communicate over a network. Each layer
has certain functionalities, and the services and protocols that work within that layer
fulfill them.
The OSI model’s goal is to help others develop products that will work within an
open network architecture. An open network architecture is one that no vendor owns,
that is not proprietary, and that can easily integrate various technologies and vendor
implementations of those technologies. Vendors have used the OSI model as a jump-
ing-off point for developing their own networking frameworks. These vendors use the
Figure 6-1
The OSI and TCP/IP
networking models

OSI model as a blueprint and develop their own protocols and services to produce
functionality that is different from, or overlaps, that of other vendors. However, because
these vendors use the OSI model as their starting place, integration of other vendor
products is an easier task, and the interoperability issues are less burdensome than if
the vendors had developed their own networking framework from scratch.
Although computers communicate in a physical sense (electronic signals are passed
from one computer over a wire to the other computer), they also communicate through
logical channels. Each protocol at a specific OSI layer on one computer communicates
with a corresponding protocol operating at the same OSI layer on another computer.
This happens through encapsulation.
Here’s how encapsulation works: A message is constructed within a program on one
computer and is then passed down through the network protocol’s stack. A protocol at
each layer adds its own information to the message; thus, the message grows in size as
it goes down the protocol stack. The message is then sent to the destination computer,
and the encapsulation is reversed by taking the packet apart through the same steps
used by the source computer that encapsulated it. At the data link layer, only the infor-
mation pertaining to the data link layer is extracted, and the message is sent up to the
next layer. Then at the network layer, only the network layer data are stripped and pro-
cessed, and the packet is again passed up to the next layer, and so on. This is how com-
puters communicate logically. The information stripped off at the destination
computer informs it how to interpret and process the packet properly. Data encapsula-
tion is shown in Figure 6-2.
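The encapsulation process just described can be sketched in a few lines of Python. The layer names and header contents here are purely illustrative, not a real protocol stack:

```python
# Illustrative sketch of encapsulation: each layer wraps the message with
# its own header on the way down the stack, and the destination strips the
# headers off in reverse order on the way up.
LAYERS = ["application", "transport", "network", "data link"]

def encapsulate(message: str) -> str:
    # Headers are added top-down, so the data link header ends up outermost.
    for layer in LAYERS:
        message = f"[{layer} hdr]{message}"
    return message

def decapsulate(packet: str) -> str:
    # The destination reverses the steps: each layer strips only its own
    # header and passes the rest up to the next layer.
    for layer in reversed(LAYERS):
        prefix = f"[{layer} hdr]"
        assert packet.startswith(prefix), f"expected {layer} header"
        packet = packet[len(prefix):]
    return packet
```

The message grows as it moves down the stack and is restored to its original form at the destination, just as the paragraph above describes.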

A protocol at each layer has specific responsibilities and control functions it per-
forms, as well as data format syntaxes it expects. Each layer has a special interface (con-
nection point) that allows it to interact with three other layers: (1) communications
from the interface of the layer above it, (2) communications to the interface of the
layer below it, and (3) communications with the same layer in the interface of the target
packet address. The control functions, added by the protocols at each layer, are in the
form of headers and trailers of the packet.
The benefit of modularizing these layers, and the functionality within each layer, is
that various technologies, protocols, and services can interact with each other and pro-
vide the proper interfaces to enable communications. This means a computer can use an
application protocol developed by Novell, a transport protocol developed by Apple, and
a data link protocol developed by IBM to construct and send a message over a network.
The protocols, technologies, and computers that operate within the OSI model are con-
sidered open systems. Open systems are capable of communicating with other open sys-
tems because they implement international standard protocols and interfaces. The
specification for each layer’s interface is very structured, while the actual code that makes
up the internal part of the software layer is not defined. This makes it easy for vendors to
write plug-ins in a modularized manner. Systems are able to integrate the plug-ins into
the network stack seamlessly, gaining the vendor-specific extensions and functions.
Understanding the functionalities that take place at each OSI layer and the corre-
sponding protocols that work at those layers helps you understand the overall commu-
nication process between computers. Once you understand this process, a more detailed
look at each protocol will show you the full range of options each protocol provides
and the security weaknesses embedded into each of those options.
Figure 6-2 Each OSI layer protocol adds its own information to the data packet.

Application Layer
Hand me your information. I will take it from here.
The application layer, layer 7, works closest to the user and provides file transmis-
sions, message exchanges, terminal sessions, and much more. This layer does not in-
clude the actual applications, but rather the protocols that support the applications.
When an application needs to send data over the network, it passes instructions and the
data to the protocols that support it at the application layer. This layer processes and
properly formats the data and passes them down to the next layer within the OSI
model. This happens until the data the application layer constructed contain the es-
sential information from each layer necessary to transmit the data over the network.
The data are then put on the network cable and are transmitted until they arrive at the
destination computer.
As an analogy, let’s say that you write a letter that you would like to send to your
congressman. Your job is to write the letter, my job is to figure out how to get it to him,
and the congressman’s job is to totally ignore you and your comments. You (the ap-
plication) create the content (message) and hand it to me (application layer protocol).
I put the content into an envelope, write the congressman’s address on the envelope
(insert headers and trailers), and put it into the mailbox (pass it onto the next protocol
in the network stack). When I check the mailbox a week later, there is a message ad-
dressed to you. I open the envelope (strip off headers and trailers) and give you the
message (pass message up to the application).
Some examples of the protocols working at this layer are the Simple Mail Transfer
Protocol (SMTP), Hypertext Transfer Protocol (HTTP), Line Printer Daemon (LPD),
File Transfer Protocol (FTP), Telnet, and Trivial File Transfer Protocol (TFTP). Figure 6-3
shows how applications communicate with the underlying protocols through applica-
tion programming interfaces (APIs). If a user makes a request to send an e-mail mes-
sage through her e-mail client Outlook, the e-mail client sends this information to
SMTP. SMTP adds its information to the user’s message and passes it down to the pre-
sentation layer.
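As a concrete illustration of an application-layer protocol adding its own information, Python’s standard email library can build the RFC 822-style message that SMTP carries; the addresses below are made up for the example:

```python
from email.message import EmailMessage

# Build the message the way an e-mail client hands it to SMTP:
# the application supplies the content, and the application-layer
# protocol's format supplies the headers.
msg = EmailMessage()
msg["From"] = "user@example.com"    # made-up address
msg["To"] = "friend@example.com"    # made-up address
msg["Subject"] = "Layer 7 example"
msg.set_content("The body the application created.")

# as_string() shows the headers prepended to the body, mirroring the
# header-plus-data structure encapsulation produces.
wire_form = msg.as_string()
```

Printing `wire_form` shows the `From`, `To`, and `Subject` headers above a blank line and the body, which is the form the message takes before the lower layers wrap it further.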
Attacks at Different Layers
As we examine the different layers of a common network stack, we will also look
at the specific attack types that can take place at each layer. One concept to under-
stand at this point is that a network can be used as a channel for an attack or the
network can be the target of an attack. If the network is a channel for an attack, this
means the attacker is using the network as a resource. For example, when an at-
tacker sends a virus from one system to another system, the virus travels through
the network channel. If an attacker carries out a denial of service (DoS) attack,
which sends a large amount of bogus traffic over a network link to bog it down,
then the network itself is the target. As you will see throughout this chapter, it is
important to understand how attacks take place and where they take place so that
the correct countermeasures can be put into place.

Presentation Layer
You will now be transformed into something that everyone can understand.
The presentation layer, layer 6, receives information from the application layer pro-
tocol and puts it in a format all computers following the OSI model can understand.
This layer provides a common means of representing data in a structure that can be
properly processed by the end system. This means that when a user creates a Word
document and sends it out to several people, it does not matter whether the receiving
computers have different word processing programs; each of these computers will be
able to receive this file and understand and present it to its user as a document. It is the
data representation processing that is done at the presentation layer that enables this to
take place. For example, when a Windows 7 computer receives a file from another com-
puter system, information within the file’s header indicates what type of file it is. The
Windows 7 operating system has a list of file types it understands and a table describing
what program should be used to open and manipulate each of these file types. For ex-
ample, the sender could create a Word file in Word 2010, while the receiver uses Open
Office. The receiver can open this file because the presentation layer on the sender’s
system converted the file to American Standard Code for Information Interchange
(ASCII), and the receiver’s computer knows it opens these types of files with its word
processor, Open Office.
The presentation layer is not concerned with the meaning of data, but with the syn-
tax and format of those data. It works as a translator, translating the format an applica-
tion is using to a standard format used for passing messages over a network. If a user
uses a Corel application to save a graphic, for example, the graphic could be a Tagged
Image File Format (TIFF), Graphic Interchange Format (GIF), or Joint Photographic
Experts Group (JPEG) format. The presentation layer adds information to tell the des-
tination computer the file type and how to process and present it. This way, if the user
sends this graphic to another user who does not have the Corel application, the user’s
operating system can still present the graphic because it has been saved into a standard
format. Figure 6-4 illustrates the conversion of a file into different standard file types.
Figure 6-3 Applications send requests to an API, which is the interface to the supporting protocol.

This layer also handles data compression and encryption issues. If a program re-
quests a certain file to be compressed and encrypted before being transferred over the
network, the presentation layer provides the necessary information for the destination
computer. It provides information on how the file was encrypted and/or compressed so
that the receiving system knows what software and processes are necessary to decrypt
and decompress the file. Let’s say I compress a file using WinZip and send it to you.
When your system receives this file, it will look at data within the header and know
what application can decompress the file. If your system has WinZip installed, then the
file can be decompressed and presented to you in its original form. If your system does
not have an application that understands the compression/decompression instruc-
tions, the file will be presented to you with an unassociated icon.
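The WinZip scenario can be mimicked with Python’s standard zlib module: the sender tags the payload with a small header naming the compression scheme, so the receiver knows how to restore it. The one-line header format here is invented for illustration:

```python
import zlib

def pack(data: bytes) -> bytes:
    # The invented header tells the receiver how the payload was
    # transformed, which is the presentation-layer role in the text.
    return b"encoding=zlib\n" + zlib.compress(data)

def unpack(blob: bytes) -> bytes:
    # Split off the first line (the header); everything after it is payload.
    header, _, payload = blob.partition(b"\n")
    if header == b"encoding=zlib":
        return zlib.decompress(payload)
    # No matching handler: the "unassociated icon" case in the text.
    raise ValueError("unknown encoding")
```

A receiver that recognizes the header decompresses and presents the original data; one that does not can only report that it has no application for the format.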
There are no protocols that work at the presentation layer. Network services work at
this layer, and when a message is received from a different computer, the service basi-
cally tells the application protocol, “This is an ASCII file and it is encrypted with Win-
Zip. Now you figure out what to do with it.”
Session Layer
I don’t want to talk to another computer. I want to talk to an application.
When two applications need to communicate or transfer data between themselves,
a connection may need to be set up between them. The session layer, layer 5, is respon-
sible for establishing a connection between the two applications, maintaining it during
the transfer of data, and controlling the release of this connection. A good analogy for
the functionality within this layer is a telephone conversation. When Kandy wants to
call a friend, she uses the telephone. The telephone network circuitry and protocols set
up the connection over the telephone lines and maintain that communication path,
and when Kandy hangs up, they release all the resources they were using to keep that
connection open.
Similar to how telephone circuitry works, the session layer works in three phases:
connection establishment, data transfer, and connection release. It provides session
restart and recovery if necessary and provides the overall maintenance of the session.
When the conversation is over, this path is broken down and all parameters are set back
Figure 6-4
The presentation
layer receives data
from the application
layer and puts it into
a standard format.

to their original settings. This process is known as dialog management. Figure 6-5 depicts
the three phases of a session. Some protocols that work at this layer are Structured
Query Language (SQL), NetBIOS, and remote procedure call (RPC).
The session layer protocol can enable communication between two applications to
happen in three different modes:
• Simplex: Communication takes place in one direction.
• Half-duplex: Communication takes place in both directions, but only one
application can send information at a time.
• Full-duplex: Communication takes place in both directions, and both
applications can send information at the same time.
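Half-duplex turn-taking can be modeled with a toy class (real links enforce this in hardware or in the data link protocol; this is only a sketch of the rule):

```python
class HalfDuplexChannel:
    """Toy model: both sides can transmit, but only one at a time."""

    def __init__(self):
        self.busy_sender = None  # which side currently holds the channel

    def send(self, sender: str, message: str) -> bool:
        # Refuse the transmission if the other side is mid-send.
        if self.busy_sender not in (None, sender):
            return False
        self.busy_sender = sender
        return True

    def finish(self, sender: str) -> None:
        # Release the channel so the other side may transmit.
        if self.busy_sender == sender:
            self.busy_sender = None
```

A simplex channel would accept sends from only one fixed side, and a full-duplex channel would drop the busy check entirely.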
Many people have a hard time understanding the difference between what takes
place at the session layer versus the transport layer because their definitions sound
similar. Session layer protocols control application-to-application communication,
whereas the transport layer protocols handle computer-to-computer communication.
For example, if you are using a product that is working in a client/server model, in real-
ity you have a small piece of the product on your computer (client portion) and the
larger piece of the software product is running on a different computer (server portion).
The communication between these two pieces of the same software product needs to be
controlled, which is why session layer protocols even exist. Session layer protocols take
on the functionality of middleware, which allows software on two different computers
to communicate.
Figure 6-5
The session
layer sets up the
connection, maintains
it, and tears it down
once communication
is completed.

Session layer protocols provide interprocess communication channels, which allow
a piece of software on one system to call upon a piece of software on another system
without the programmer having to know the specifics of the software on the receiving
system. The programmer of a piece of software can write a function call that calls upon
a subroutine. The subroutine could be local to the system or be on a remote system. If
the subroutine is on a remote system, the request is carried over a session layer protocol.
The result that the remote system provides is then returned to the requesting system over
the same session layer protocol. This is how RPC works. A piece of software can execute
components that reside on another system. This is the core of distributed computing. We
will be looking at standards and technologies (CORBA, DCOM, SOAP, .Net Framework)
that are used by programmers to provide this type of functionality in Chapter 10.
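The RPC pattern described above can be sketched with Python's standard-library XML-RPC modules. This is a minimal illustration, not the book's example: the `add` subroutine, the loopback host, and the OS-chosen port are all assumptions made for the demo.

```python
# Sketch of RPC: the caller invokes what looks like a local function,
# but the subroutine actually executes in the server process.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # The "remote subroutine": it runs on the server side.
    return a + b

# Port 0 asks the OS for any free port on loopback.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
host, port = server.server_address
threading.Thread(target=server.handle_request, daemon=True).start()

# The client calls add() as if it were local; the RPC protocol carries
# the request to the remote system and the result back.
proxy = ServerProxy(f"http://{host}:{port}")
result = proxy.add(2, 3)
print(result)  # -> 5
```

The programmer of the client never sees the marshaling or the network traffic; the proxy object hides the remote call, which is exactly the middleware role the text describes.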
CAUTION One security issue common to RPC (and similar interprocess
communication software) is the lack of authentication or the use of
weak authentication. Secure RPC can be implemented, which requires
authentication to take place before two computers located in different
locations can communicate with each other. Authentication can take place
using shared secrets, public keys, or Kerberos tickets. Session layer protocols
need to provide secure authentication capabilities.
Session layer protocols are the least used protocols in a network environment; thus,
many of them should be disabled on systems to decrease the chance of them getting
exploited. RPC and similar distributed computing calls usually only need to take place
within a network; thus, firewalls should be configured so this type of traffic is not allowed
into or out of a network. Firewall filtering rules should be in place to stop this
type of unnecessary and dangerous traffic.
The next section will dive into the functionality of the transport layer protocols.
Transport Layer
How do I know if I lose a piece of the message?
Response: The transport layer will fix it for you.
When two computers are going to communicate through a connection-oriented
protocol, they will first agree on how much information each computer will send at a
time, how to verify the integrity of the data once received, and how to determine whether
a packet was lost along the way. The two computers agree on these parameters through
a handshaking process at the transport layer, layer 4. The agreement on these issues
before transferring data helps provide more reliable data transfer, error detection, correction,
recovery, and flow control, and it optimizes the network services needed to
perform these tasks. The transport layer provides end-to-end data transport services and
establishes the logical connection between two communicating computers.
NOTE Connection-oriented protocols, such as TCP, provide reliable data
transmission when compared to connectionless protocols, such as User
Datagram Protocol (UDP). This distinction is covered in more detail in the
“TCP/IP Model” section, later in the chapter.

CISSP All-in-One Exam Guide
The functionality of the session and transport layers is similar insofar as they both
set up some type of session or virtual connection for communication to take place. The
difference is that protocols that work at the session layer set up connections between
applications, whereas protocols that work at the transport layer set up connections between
computer systems. For example, we can have three different applications on computer A
communicating to three applications on computer B. The session layer protocols
keep track of these different sessions. You can think of the transport layer protocol as
the bus. It does not know or care what applications are communicating with each other.
It just provides the mechanism to get the data from one system to another.
The transport layer receives data from many different applications and assembles
the data into a stream to be properly transmitted over the network. The main protocols
that work at this layer are TCP, UDP, Secure Sockets Layer (SSL), and Sequenced Packet
Exchange (SPX). Information is passed down from different entities at higher layers to
the transport layer, which must assemble the information into a stream, as shown in
Figure 6-6. The stream is made up of the various data segments passed to it. Just like a
bus can carry a variety of people, the transport layer protocol can carry a variety of
application data types.
NOTE Different references can place specific protocols at different layers.
For example, many references place the SSL protocol in the session layer, while
other references place it in the transport layer. It is not that one is right or
wrong. The OSI model tries to draw boxes around reality, but some protocols
straddle the different layers. SSL is made up of two protocols—one works in
the lower portion of the session layer and the other works in the transport
layer. For purposes of the CISSP exam, SSL resides in the transport layer.
Figure 6-6 TCP formats data from applications into a stream to be prepared for transmission.

Network Layer
Many roads lead to Rome.
The main responsibilities of the network layer, layer 3, are to insert information into
the packet’s header so it can be properly addressed and routed, and then to actually
route the packets to their proper destination. In a network, many routes can lead to one
destination. The protocols at the network layer must determine the best path for the
packet to take. Routing protocols build and maintain their routing tables. These tables
are maps of the network, and when a packet must be sent from computer A to computer M,
the protocols check the routing table, add the necessary information to the
packet’s header, and send it on its way.
The protocols that work at this layer do not ensure the delivery of the packets. They
depend on the protocols at the transport layer to catch any problems and resend packets
if necessary. IP is a common protocol working at the network layer, although other
routing and routed protocols work there as well. Some of the other protocols are the
Internet Control Message Protocol (ICMP), Routing Information Protocol (RIP), Open
Shortest Path First (OSPF), Border Gateway Protocol (BGP), and Internet Group Management
Protocol (IGMP). Figure 6-7 shows that a packet can take many routes and
that the network layer enters routing information into the header to help the packet
arrive at its destination.
Figure 6-7 The network layer determines the most efficient path for each packet to take.

Data Link Layer
As we continue down the protocol stack, we are getting closer to the actual transmission
channel (i.e., network wire) over which all these data will travel. The outer format of the
data packet changes slightly at each layer, and it comes to a point where it needs to be
translated into the LAN or wide area network (WAN) technology binary format for
proper line transmission. This happens at the data link layer, layer 2.
LAN and WAN technologies can use different protocols, network interface cards
(NICs), cables, and transmission methods. Each of these components has a different
header data format structure, and they interpret electricity voltages in different ways.
The data link layer is where the network stack knows what format the data frame must
be in to transmit properly over Token Ring, Ethernet, ATM, or Fiber Distributed Data
Interface (FDDI) networks. If the network is an Ethernet network, for example, all the
computers will expect packet headers to be a certain length, the flags to be positioned
in certain field locations within the header, and the trailer information to be in a certain
place with specific fields. Compared to Ethernet, Token Ring network technology
has different frame header lengths, flag values, and header formats.
The data link layer is responsible for proper communication within the network
components and for changing the data into the necessary format (electrical voltage) for
the physical layer. It also reorders frames that are received out of sequence
and notifies upper-layer protocols when there are transmission error conditions.
The data link layer is divided into two functional sublayers: the Logical Link Control
(LLC) and the Media Access Control (MAC). The LLC, defined in the IEEE 802.2 specification,
communicates with the protocol immediately above it, the network layer. The
MAC will have the appropriately loaded protocols to interface with the protocol requirements
of the physical layer.
As data is passed down the network stack it has to go from the network layer to the
data link layer. The protocol at the network layer does not know if the underlying network
is Ethernet, Token Ring, or ATM; it does not need to have this type of insight. The
protocol at the network layer just adds its header and trailer information to the packet
and it passes it on to the next layer, which is the LLC sublayer. The LLC layer takes care
of flow control and error checking. Data coming from the network layer passes down
through the LLC sublayer and goes to MAC. The technology at the MAC sublayer knows
if the network is Ethernet, Token Ring, or ATM, so it knows how to put the last header
and trailer on the packet before it “hits the wire” for transmission.
The IEEE MAC specification for Ethernet is 802.3, Token Ring is 802.5, wireless LAN
is 802.11, and so on. So when you see a reference to an IEEE standard, such as 802.11,
802.16, or 802.3, it refers to the protocol working at the MAC sublayer of the data link
layer of a protocol stack.
Some of the protocols that work at the data link layer are the Point-to-Point Protocol
(PPP), ATM, Layer 2 Tunneling Protocol (L2TP), FDDI, Ethernet, and Token Ring.
Figure 6-8 shows the two sublayers that make up the data link layer.

Each network technology (Ethernet, ATM, FDDI, and so on) defines the compatible
physical transmission type (coaxial, twisted pair, fiber, wireless) that is required to enable
network communication. Each network technology also has defined electronic signaling
and encoding patterns. For example, if the MAC sublayer received a bit with the
value of 1 that needed to be transmitted over an Ethernet network, the MAC sublayer
technology would tell the physical layer to create 0.5 volts in electricity. In the “language
of Ethernet” this means that 0.5 volts is the encoding value for a bit with the value of 1.
If the next bit the MAC sublayer receives is 0, the MAC layer would tell the physical layer
to transmit 0 volts. The different network types will have different encoding schemes. So
a bit value of 1 in an ATM network might actually be encoded to the voltage value of
0.85. It is just a sophisticated Morse code system. The receiving end will know when it
receives a voltage value of 0.85 that a bit with the value of 1 has been transmitted.
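The "sophisticated Morse code" idea can be sketched as a simple lookup from bit values to signal levels. The 0.5 V and 0.85 V figures are the chapter's simplified examples; the 0 V entries and the table itself are illustrative assumptions, not real line-coding schemes.

```python
# Toy bit-to-voltage encoder; the values are illustrative, not real line codes.
ENCODINGS = {
    "ethernet": {1: 0.5, 0: 0.0},   # chapter's example: 1 -> 0.5 volts
    "atm": {1: 0.85, 0: 0.0},       # chapter's example: 1 -> 0.85 volts
}

def encode(bits, technology):
    """Map each bit to the voltage the physical layer should emit."""
    return [ENCODINGS[technology][bit] for bit in bits]

print(encode([1, 0, 1], "ethernet"))  # -> [0.5, 0.0, 0.5]
```

The receiving MAC sublayer would run the same table in reverse, turning measured voltages back into bits.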
Network cards bridge the data link and physical layers. Data is passed down through
the first six layers and reaches the network card driver at the data link layer. Depending
on the network technology being used (Ethernet, Token Ring, FDDI, and so on), the
network card driver encodes the bits at the data link layer, which are then turned into
electricity states at the physical layer and placed onto the wire for transmission.
NOTE When the data link layer applies the last header and trailer to the data
message, this is referred to as framing. The unit of data is now called a frame.
Figure 6-8 The data link layer is made up of two sublayers.

Physical Layer
Everything ends up as electrical signals anyway.
The physical layer, layer 1, converts bits into voltage for transmission. Signals and
voltage schemes have different meanings for different LAN and WAN technologies, as
covered earlier. If a user sends data through his dial-up software and out his modem
onto a telephone line, the data format, electrical signals, and control functionality are
much different than if that user sends data through the NIC and onto an unshielded
twisted pair (UTP) wire for LAN communication. The mechanisms that control this
data going onto the telephone line, or the UTP wire, work at the physical layer. This
layer controls synchronization, data rates, line noise, and transmission techniques.
Specifications for the physical layer include the timing of voltage changes, voltage levels,
and the physical connectors for electrical, optical, and mechanical transmission.
NOTE APSTNDP—To remember all the layers within the OSI model in
the correct order, memorize “All People Seem To Need Data Processing.”
Remember that you are starting at layer 7, the application layer, at the top.
Functions and Protocols in the OSI Model
For the exam, you will need to know the functionality that takes place at the different
layers of the OSI model, along with specific protocols that work at each layer. The fol-
lowing is a quick overview of each layer and its components.
Application
The protocols at the application layer handle file transfer, virtual terminals, network
management, and fulfilling networking requests of applications. A few of the protocols
that work at this layer include
• File Transfer Protocol (FTP)
• Trivial File Transfer Protocol (TFTP)
• Simple Network Management Protocol (SNMP)
• Simple Mail Transfer Protocol (SMTP)
• Telnet
• Hypertext Transfer Protocol (HTTP)
Presentation
The services of the presentation layer handle translation into standard formats, data compression
and decompression, and data encryption and decryption. No protocols work at
this layer, just services. The following lists some of the presentation layer standards:
• American Standard Code for Information Interchange (ASCII)
• Extended Binary Coded Decimal Interchange Code (EBCDIC)
• Tagged Image File Format (TIFF)

• Joint Photographic Experts Group (JPEG)
• Moving Picture Experts Group (MPEG)
• Musical Instrument Digital Interface (MIDI)
Session
The session layer protocols set up connections between applications; maintain dialog
control; and negotiate, establish, maintain, and tear down the communication channel.
Some of the protocols that work at this layer include
• Network File System (NFS)
• NetBIOS
• Structured Query Language (SQL)
• Remote procedure call (RPC)
Transport
The protocols at the transport layer handle end-to-end transmission and segmentation
of a data stream. The following protocols work at this layer:
• Transmission Control Protocol (TCP)
• User Datagram Protocol (UDP)
• Secure Sockets Layer (SSL)/Transport Layer Security (TLS)
• Sequenced Packet Exchange (SPX)
Network
The responsibilities of the network layer protocols include internetworking service, addressing,
and routing. The following lists some of the protocols that work at this layer:
• Internet Protocol (IP)
• Internet Control Message Protocol (ICMP)
• Internet Group Management Protocol (IGMP)
• Routing Information Protocol (RIP)
• Open Shortest Path First (OSPF)
• Internetwork Packet Exchange (IPX)
Data Link
The protocols at the data link layer convert data into LAN or WAN frames for transmission
and define how a computer accesses a network. This layer is divided into the Logical
Link Control (LLC) and the Media Access Control (MAC) sublayers. Some protocols
that work at this layer include the following:
• Address Resolution Protocol (ARP)
• Reverse Address Resolution Protocol (RARP)

• Point-to-Point Protocol (PPP)
• Serial Line Internet Protocol (SLIP)
• Ethernet
• Token Ring
• FDDI
• ATM
Physical
Network interface cards and drivers convert bits into electrical signals and control the
physical aspects of data transmission, including optical, electrical, and mechanical requirements.
The following are some of the standard interfaces at this layer:
• EIA-422, EIA-423, RS-449, RS-485
• 10BASE-T, 10BASE2, 10BASE5, 100BASE-TX, 100BASE-FX, 100BASE-T,
1000BASE-T, 1000BASE-SX
• Integrated Services Digital Network (ISDN)
• Digital subscriber line (DSL)
• Synchronous Optical Networking (SONET)
Tying the Layers Together
Pick up all of these protocols from the floor and put them into a stack—a network stack.
The OSI model is used as a framework for many network-based products and is
used by many types of vendors. Various types of devices and protocols work at different
parts of this seven-layer model. The main reason that a Cisco switch, Microsoft web
server, a Barracuda firewall, and a Belkin wireless access point can all communicate
properly on one network is because they all work within the OSI model. They do not
have their own individual ways of sending data; they follow a standardized manner of
communication, which allows for interoperability and allows a network to be a network.
If a product does not follow the OSI model, it will not be able to communicate
with other devices on the network because the other devices will not understand its
proprietary way of communicating.
The different device types work at specific OSI layers. For example, computers can
interpret and process data at each of the seven layers, but routers can understand information
only up to the network layer because a router’s main function is to route packets,
which does not require knowledge about any further information within the
packet. A router peels back the header information until it reaches the network layer
data, where the routing and IP address information is located. The router looks at this
information to make its decisions on where the packet should be routed. Bridges and
switches understand only up to the data link layer, and repeaters understand traffic
only at the physical layer. So if you hear someone mention a “layer 3 device,” the person
is referring to a device that works at the network layer. A “layer 2 device” works at
the data link layer. Figure 6-9 shows what level of the OSI model each type of device
works within.

NOTE Some techies like to joke that all computer problems reside at
layer 8. The OSI model does not have an eighth layer, and what these people
are referring to is the user of a computer. So if someone states that there is
a problem at layer 8, this is code for “the user is an idiot.”
Let’s walk through an example. I open an FTP client on my computer and connect
to an FTP server on my network. In my FTP client I choose to download a Word document
from the server. The FTP server now has to move this file over the network to my
computer. The server sends this document to the FTP application protocol on its network
stack. This FTP protocol puts headers and trailers on the document and passes it
down to the presentation layer. A service at the presentation layer adds a header that
indicates this document is in ASCII format so my system knows how to open the file
when it is received.
Figure 6-9 Each device works at a particular layer within the OSI model.

This bundle is then handed to the transport layer TCP, which also adds a header and
trailer, which include source and destination port values. The bundle continues down
the network stack to the IP protocol, which provides a source IP address (FTP server)
and a destination IP address (my system). The bundle goes to the data link layer, and
the server’s NIC driver encodes the bundle to be able to be transmitted over the Ethernet
connection between the server and my system.
TCP/IP Model
Transmission Control Protocol/Internet Protocol (TCP/IP) is a suite of protocols that
governs the way data travel from one device to another. Besides its eponymous two
main protocols, TCP/IP includes other protocols as well, which we will cover in this
chapter.
IP is a network layer protocol and provides datagram routing services. IP’s main task
is to support internetwork addressing and packet routing. It is a connectionless protocol
that envelops data passed to it from the transport layer. The IP protocol addresses
the datagram with the source and destination IP addresses. The protocols within the
TCP/IP suite work together to break down the data passed from the application layer
into pieces that can be moved along a network. They work with other protocols to
transmit the data to the destination computer and then reassemble the data back into
a form that the application layer can understand and process.
Two main protocols work at the transport layer: TCP and UDP. TCP is a reliable and
connection-oriented protocol, which means it ensures packets are delivered to the destination
computer. If a packet is lost during transmission, TCP has the ability to identify
this issue and resend the lost or corrupted packet. TCP also supports packet sequencing
(to ensure each and every packet was received), flow and congestion control, and error
detection and correction. UDP, on the other hand, is a best-effort and connectionless
protocol. It has neither packet sequencing nor flow and congestion control, and the
destination does not acknowledge every packet it receives.
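UDP's fire-and-forget behavior can be seen directly with the standard socket API. This loopback sketch sends one datagram with no connection setup and no acknowledgment; the addresses, the OS-chosen port, and the payload are all illustrative assumptions.

```python
import socket

# A "listening" UDP endpoint: just a bound socket, no handshake machinery.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port

# The sender transmits immediately: no connection setup, no ACK expected.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"status update", receiver.getsockname())

data, peer = receiver.recvfrom(1024)
print(data)  # -> b'status update'
sender.close()
receiver.close()
```

If the datagram were lost on a real network, neither side would ever know; that is the trade the text describes UDP making in exchange for speed and low overhead.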
IP
IP is a connectionless protocol that provides the addressing and routing capabilities
for each package of data.
The data, IP, and network relationship can be compared to the relationship
between a letter and the postal system:
• Data = Letter
• IP = Addressed envelope
• Network = Postal system
The message is the letter, which is enveloped and addressed by IP, and the
network and its services enable the message to be sent from its origin to its destination,
like the postal system.

TCP
TCP is referred to as a connection-oriented protocol because before any user data are
actually sent, handshaking takes place between the two systems that want to communicate.
Once the handshaking completes successfully, a virtual connection is set up between
the two systems. UDP is considered a connectionless protocol because it does
not go through these steps. Instead, UDP sends out messages without first contacting
the destination computer and does not know if the packets were received properly or
dropped. Figure 6-10 shows the difference between a connection-oriented and a
connectionless protocol.
UDP and TCP sit together on the transport layer, and developers can choose which
to use when developing applications. Many times, TCP is the transport protocol of
choice because it provides reliability and ensures the packets are delivered. For example,
SMTP is used to transmit e-mail messages and uses TCP because it must make sure
the data are delivered. TCP provides a full-duplex, reliable communication mechanism,
and if any packets are lost or damaged, they are re-sent; however, TCP requires a lot of
system overhead compared to UDP.
If a programmer knows data dropped during transmission is not detrimental to the
application, he may choose to use UDP because it is faster and requires fewer resources.
For example, UDP is a better choice than TCP when a server sends status information
to all listening nodes on the network. A node will not be negatively affected if, by some
chance, it did not receive this status information because the information will be resent
every 60 seconds.
UDP and TCP are transport protocols that applications use to get their data across
a network. They both use ports to communicate with upper OSI layers and to keep track
of various conversations that take place simultaneously. The ports are also the mechanism
used to identify how other computers access services. When a TCP or UDP
Figure 6-10 Connection-oriented versus connectionless protocol functionality

message is formed, source and destination ports are contained within the header information
along with the source and destination IP addresses. This makes up a socket, and
is how packets know where to go (by the address) and how to communicate with the
right service or protocol on the other computer (by the port number). The IP address
acts as the doorway to a computer, and the port acts as the doorway to the actual protocol
or service. To communicate properly, the packet needs to know these doors. Figure
6-11 shows how packets communicate with applications and services through ports.
Figure 6-11 The packet can communicate with upper-layer protocols and services through a port.

The difference between TCP and UDP can also be seen in the message formats. Because
TCP offers more services than UDP, it must contain much more information
within its packet header format, as shown in Figure 6-12. Table 6-1 lists the major differences
between TCP and UDP.
Port Types
Port numbers up to 1023 (0 to 1023) are called well-known ports, and almost
every computer in the world has the exact same protocol mapped to the exact
same port number. That is why they are called well known—everyone follows this
same standardized approach. This means that on almost every computer, port 25
is mapped to SMTP, port 21 is mapped to FTP, port 80 is mapped to HTTP, and so
on. This mapping between lower-numbered ports and specific protocols is a de
facto standard, which just means that we all do this and that we do not have a
standards body dictating that it absolutely has to be done this way. The fact that
almost everyone follows this approach translates to more interoperability among
systems all over the world.
NOTE
NOTE Ports 0 to 1023 can be used only by privileged system or root
processes.
Because this is a de facto standard and not a standard that absolutely must be
followed, administrators can map different protocols to different port numbers if
that fits their purpose.
The following shows some of the most commonly used protocols and the
ports to which they are usually mapped:
• Telnet port 23
• SMTP port 25
• HTTP port 80
• SNMP ports 161 and 162
• FTP ports 21 and 20
Registered ports are 1024 to 49151, which can be registered with the Internet
Corporation for Assigned Names and Numbers (ICANN) for a particular use.
Vendors register specific ports to map to their proprietary software. Dynamic ports
are 49152 to 65535 and are available to be used by any application on an “as
needed” basis.
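The well-known mappings listed above can be checked against the services database most operating systems ship; Python's `socket.getservbyname` reads it. The lookup assumes a standard services file is present (on typical systems it reports ftp 21, telnet 23, smtp 25, and http 80).

```python
import socket

# Look up the conventional TCP port for each well-known service name.
for service in ("ftp", "telnet", "smtp", "http"):
    print(service, socket.getservbyname(service, "tcp"))
```

Because the well-known mapping is only a de facto standard, an administrator who remaps a service to another port changes nothing here; this lookup reflects convention, not what a given server actually listens on.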

Figure 6-12 TCP carries a lot more information within its segment because it offers more services than UDP.
• Reliability. TCP ensures that packets reach their destinations and returns ACKs when packets are received; it is a reliable protocol. UDP does not return ACKs and does not guarantee that a packet will reach its destination; it is an unreliable protocol.
• Connection. TCP is connection-oriented: it performs handshaking and develops a virtual connection with the destination computer. UDP is connectionless: it does no handshaking and does not set up a virtual connection.
• Packet sequencing. TCP uses sequence numbers within headers to make sure each packet within a transmission is received. UDP does not use sequence numbers.
• Congestion controls. With TCP, the destination computer can tell the source if it is overwhelmed and thus slow the transmission rate. With UDP, the destination computer does not communicate back to the source computer about flow control.
• Usage. TCP is used when reliable delivery is required and is suitable for relatively small amounts of data transmission. UDP is used when reliable delivery is not required and high volumes of data need to be transmitted, such as in streaming video and status broadcasts.
• Speed and overhead. TCP uses a considerable amount of resources and is slower than UDP. UDP uses fewer resources and is faster than TCP.
Table 6-1 Major Differences between TCP and UDP

TCP Handshake
Every proper dialog begins with a polite handshake.
TCP must set up a virtual connection between two hosts before any data are sent.
This means the two hosts must agree on certain parameters, data flow, windowing, error
detection, and options. These issues are negotiated during the handshaking phase,
as shown in Figure 6-13.
The host that initiates communication sends a synchronize (SYN) packet to the
receiver. The receiver acknowledges this request by sending a SYN/ACK packet. This
packet translates into, “I have received your request and am ready to communicate with
you.” The sending host acknowledges this with an acknowledgment (ACK) packet,
which translates into, “I received your acknowledgment. Let’s start transmitting our
data.” This completes the handshaking phase, after which a virtual connection is set up,
and actual data can now be passed. The connection that has been set up at this point is
considered full duplex, which means transmission in both directions is possible using
the same transmission line.
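In practice the three-way handshake is carried out by the operating system's TCP stack, not by the application. This loopback sketch merely triggers it: `connect()` sends the SYN, the listener's stack answers with SYN/ACK, and `connect()` returns once the final ACK is sent. The addresses and the payload are illustrative assumptions.

```python
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
listener.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # blocks until the handshake completes
conn, peer = listener.accept()           # virtual connection now established

client.sendall(b"hello")                 # full duplex: either side may send now
data = conn.recv(5)
print(data)  # -> b'hello'
for s in (client, conn, listener):
    s.close()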
If an attacker sends a target system SYN packets with a spoofed address, then the
victim system replies to the spoofed address with SYN/ACK packets. Each time the victim
system receives one of these SYN packets it sets aside resources to manage the new
connection. If the attacker floods the victim system with SYN packets, eventually the
victim system allocates all of its available TCP connection resources and can no longer
process new requests. This is a type of DoS that is referred to as a SYN flood. To thwart
this type of attack you can use SYN proxies, which limit the number of open and abandoned
network connections. The SYN proxy is a piece of software that resides between
the sender and receiver and passes TCP traffic on to the receiving system only if the TCP
handshake process completes successfully.
Another attack vector we need to understand is TCP sequence numbers. One of the
values that is agreed upon during a TCP handshake between two systems is the sequence
numbers that will be inserted into the packet headers. Once the sequence number
is agreed upon, if a receiving system receives a packet from the sending system that
does not have this predetermined value, it will disregard the packet. This means that an
attacker cannot just spoof the address of a sending system to fool a receiving system; the
attacker has to spoof the sender’s address and use the correct sequence number values.
If an attacker can correctly predict the TCP sequence numbers that two systems will use,
then she can create packets containing those numbers and fool the receiving system
into thinking that the packets are coming from the authorized sending system. She can
then take over the TCP connection between the two systems, which is referred to as TCP
session hijacking.
Figure 6-13 The TCP three-way handshake

Data Structures
What’s in a name?
As stated earlier, the message is formed and passed to the application layer from a
program and sent down through the protocol stack. Each protocol at each layer adds its
own information to the message and passes it down to the next level. This activity is
referred to as data encapsulation. As the message is passed down the stack, it goes
through a sort of evolution, and each stage has a specific name that indicates what is
taking place. When an application formats data to be transmitted over the network, the
data are called a message or data. The message is sent to the transport layer, where TCP
does its magic on the data. The bundle of data is now a segment. The segment is sent to
the network layer. The network layer adds routing and addressing, and now the bundle
is called a packet. The network layer passes off the packet to the data link layer, which
frames the packet with a header and a trailer, and now it is called a frame. Figure 6-14
illustrates these stages.
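The encapsulation stages can be sketched by tacking a labeled "header" onto the data at each layer. The header strings below are placeholders, not real TCP, IP, or Ethernet formats.

```python
def encapsulate(message: bytes) -> bytes:
    """Walk a message down the stack, naming each stage as the text does."""
    segment = b"TCP-HDR|" + message                # transport layer: segment
    packet = b"IP-HDR|" + segment                  # network layer: packet
    frame = b"FRM-HDR|" + packet + b"|FRM-TRL"     # data link layer: frame
    return frame                                   # physical layer sends the bits

frame = encapsulate(b"hello")
print(frame)  # -> b'FRM-HDR|IP-HDR|TCP-HDR|hello|FRM-TRL'
```

Decapsulation on the receiving system is the same walk in reverse: each layer strips its own header (and trailer) and hands the remainder up the stack.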
NOTE If the message is being transmitted over TCP, it is referred to as a
“segment.” If it is being transmitted over UDP, it is referred to as a “datagram.”
Sometimes when an author refers to a segment, she is specifying the stage in which
the data are located within the protocol stack. If the literature is describing routers,
which work at the network layer, the author might use the word “packet” because the
data at this level have routing and addressing information attached. If an author is describing
network traffic and flow control, she might use the word “frame” because all
data actually end up in the frame format before they are put on the network wire.
Protocol data units (PDUs) by layer: application, data; transport, segments; network, packets; data link, frames; physical, bits.
Figure 6-14 The data go through their own evolutionary stages as they pass through the layers within the network stack.
The important thing here is that you understand the various steps a data package
goes through when it moves up and down the protocol stack.
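The naming stages described above can be sketched as nested wrappers. This is a toy illustration, not a real protocol stack: the header and trailer strings are placeholders, whereas real headers carry binary fields such as ports and addresses.

```python
def encapsulate(message: bytes) -> bytes:
    """Wrap application data the way each protocol stack stage is named."""
    data = message                              # application layer: data/message
    segment = b"TCP_HDR|" + data                # transport layer: segment
    packet = b"IP_HDR|" + segment               # network layer: packet (routing/addressing added)
    frame = b"FRM_HDR|" + packet + b"|FRM_TRL"  # data link layer: frame (header AND trailer)
    return frame                                # the physical layer transmits the frame as bits

print(encapsulate(b"hello"))  # b'FRM_HDR|IP_HDR|TCP_HDR|hello|FRM_TRL'
```

Decapsulation on the receiving system simply strips each wrapper off in the reverse order.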
IP Addressing
Take a right at the router and a left at the access server. I live at 10.10.2.3.
Each node on a network must have a unique IP address. Today, the most com-
monly used version of IP is IP version 4 (IPv4), but its addresses are in such high de-
mand that their supply has started to run out. IP version 6 (IPv6) was created to address
this shortage. (IPv6 also has many security features built into it that are not part of
IPv4.) IPv6 is covered later in this chapter.
IPv4 uses 32 bits for its addresses, whereas IPv6 uses 128 bits; thus, IPv6 provides
more possible addresses with which to work. Each address has a host portion and a
network portion, and the addresses are grouped into classes and then into subnets. The
subnet mask of the address differentiates the groups of addresses that define the sub-
nets of a network. IPv4 address classes are listed in Table 6-2.
For any given IP network within an organization, all nodes connected to the net-
work can have different host addresses but a common network address. The host ad-
dress identifies every individual node, whereas the network address is the identity of
the network they are all connected to; therefore, it is the same for each one of them. Any
traffic meant for nodes on this network will be sent to the prescribed network address.
A subnet is created from the host portion of an IP address to designate a “sub” net-
work. This allows us to further break the host portion of the address into two or more
logical groupings, as shown in Figure 6-15. A network can be logically partitioned to
reduce administration headaches, improve traffic performance, and potentially improve security. As an
analogy, let’s say you work at Toddlers R Us and you are responsible for babysitting 100
toddlers. If you kept all 100 toddlers in one room, you would probably end up killing
a few of them or yourself. To better manage these kids, you could break them up into
groups. The three-year-olds go in the yellow room, the four-year-olds go in the green
Class A   0.0.0.0 to 127.255.255.255     The first byte is the network portion, and the remaining three bytes are the host portion.
Class B   128.0.0.0 to 191.255.255.255   The first two bytes are the network portion, and the remaining two bytes are the host portion.
Class C   192.0.0.0 to 223.255.255.255   The first three bytes are the network portion, and the remaining one byte is the host portion.
Class D   224.0.0.0 to 239.255.255.255   Used for multicast addresses.
Class E   240.0.0.0 to 255.255.255.255   Reserved for research.

Table 6-2 IPv4 Addressing
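The class boundaries in Table 6-2 can be checked with a few lines of Python. This is an illustrative sketch using only the standard library; it simply maps the first byte of an address to the classful range the table assigns it.

```python
import ipaddress

def ipv4_class(addr: str) -> str:
    """Classify an IPv4 address per Table 6-2, based on its first byte."""
    first_byte = int(ipaddress.IPv4Address(addr)) >> 24  # top 8 bits of the 32-bit address
    if first_byte <= 127:
        return "A"
    if first_byte <= 191:
        return "B"
    if first_byte <= 223:
        return "C"
    if first_byte <= 239:
        return "D (multicast)"
    return "E (reserved)"

print(ipv4_class("10.10.2.3"))   # A
print(ipv4_class("172.16.0.1"))  # B
print(ipv4_class("192.0.2.4"))   # C
```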

room, and the five-year-olds go in the blue room. This is what a network administrator
would do—break up and separate computer nodes to be able to better control them.
Instead of putting them into physical rooms, the administrator puts them into logical
rooms (subnets).
To continue with our analogy, when you put your toddlers in different rooms, you
would have physical barriers that separate them—walls. Network subnetting is not
physical; it is logical. This means you would not have physical walls separating your
individual subnets, so how do you keep them separate? This is where subnet masks
come into play. A subnet mask defines smaller networks inside a larger network, just
like individual rooms are defined within a building.
Subnetting allows large IP ranges to be divided into smaller, logical, and more tan-
gible network segments. Consider an organization with several divisions, such as IT,
Accounting, HR, and so on. Creating subnets for each division breaks the networks into
logical partitions that route traffic directly to recipients without dispersing data all over
the network. This drastically reduces the traffic load across the network, reducing the
possibility of network congestion and excessive broadcast packets in the network. Im-
plementing network security policies is also much more effective across logically cate-
gorized subnets with a demarcated perimeter, as compared to a large, cluttered, and
complex network.
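The “rooms within a building” idea can be demonstrated with Python’s standard ipaddress module. The address block and division names below are made up for illustration; the point is that one range is split into four logical subnets, each with its own mask that tells routers which “room” a host belongs to.

```python
import ipaddress

# Split one /24 block into four /26 "rooms," one per division.
building = ipaddress.ip_network("192.168.10.0/24")
rooms = list(building.subnets(prefixlen_diff=2))  # /24 -> four /26 subnets

for division, room in zip(["IT", "Accounting", "HR", "Sales"], rooms):
    print(division, room, "mask:", room.netmask)
# IT 192.168.10.0/26 mask: 255.255.255.192
# Accounting 192.168.10.64/26 mask: 255.255.255.192
# ...

# The subnet mask determines which room a given host falls into:
host = ipaddress.ip_address("192.168.10.70")
print([str(r) for r in rooms if host in r])  # ['192.168.10.64/26']
```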
Figure 6-15 Subnets create logical partitions.

Chapter 6: Telecommunications and Network Security
543
Subnetting is particularly beneficial in keeping down routing table sizes, because
external routers can send data directly to the actual network segment without having to
worry about the internal architecture of that network or about getting the data to individual
hosts. That job can be handled by the internal routers, which can determine the indi-
vidual hosts in a subnetted environment; this saves the external routers the hassle of
analyzing all 32 bits of an IP address, since they need only look at the “masked” bits.
NOTE To really understand subnetting, you need to dig down into how
IP addresses work at the binary level. You should not have to calculate any
subnets for the CISSP exam, but for a better understanding of how this
stuff works under the hood, visit http://compnetworking.about.com/od/
workingwithipaddresses/a/subnetmask.htm.
If the traditional subnet masks are used, they are referred to as classful or classical
IP addresses. If an organization needs to create subnets that do not follow these tradi-
tional sizes, then it would use classless IP addresses. This just means a different subnet
mask would be used to define the network and host portions of the addresses. After it
became clear that available IP addresses were running out as more individuals and cor-
porations participated on the Internet, classless interdomain routing (CIDR) was creat-
ed. A Class B address range is usually too large for most companies, and a Class C
address range is too small, so CIDR provides the flexibility to increase or decrease the
class sizes as necessary. CIDR is the method to specify more flexible IP address classes.
CIDR is also referred to as supernetting.
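A short sketch of the flexibility CIDR adds, again with the standard ipaddress module and made-up example ranges: a /22 prefix (a “supernet”) aggregates four contiguous Class C–sized /24 blocks, while a /28 carves a Class C–sized block into much smaller 16-address chunks.

```python
import ipaddress

# Supernetting: one /22 covers the same space as four /24 (Class C-sized) networks.
supernet = ipaddress.ip_network("198.51.96.0/22")
print(supernet.num_addresses)  # 1024
print([str(n) for n in supernet.subnets(new_prefix=24)])
# ['198.51.96.0/24', '198.51.97.0/24', '198.51.98.0/24', '198.51.99.0/24']

# Classless subnetting: a /28 is smaller than any classful network.
small = ipaddress.ip_network("203.0.113.16/28")
print(small.num_addresses)     # 16
```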
NOTE To better understand CIDR, visit the following resource:
www.tcpipguide.com/free/t_IPClasslessAddressingClasslessInterDomainRoutingCI.htm.
Although each node has an IP address, people usually refer to their hostname rath-
er than their IP address. Hostnames, such as www.logicalsecurity.com, are easier for
humans to remember than IP addresses, such as 10.13.84.4. However, the use of these
two nomenclatures requires mapping between the hostnames and IP addresses, be-
cause the computer understands only the numbering scheme. This process is addressed
in the “Domain Name Service” section later in this chapter.
NOTE IP provides addressing, packet fragmentation, and packet timeouts. To
ensure that packets do not traverse a network forever, IP provides
a Time to Live (TTL) value that is decremented every time the packet passes
through a router. IP can also provide a Type of Service (ToS) capability, which
means it can prioritize different packets for time-sensitive functions.

IPv6
What happened to version 5?
Response: It smelled funny.
IPv6, also called IP next generation (IPng), not only has a larger address space than
IPv4 to support more IP addresses; it has some capabilities that IPv4 does not and it
accomplishes some of the same tasks differently. All of the specifics of the new func-
tions within IPv6 are beyond the scope of this book, but we will look at a few of them,
because IPv6 is the way of the future. IPv6 allows for scoped addresses, which enables
an administrator to restrict specific addresses for specific servers or file and print shar-
ing, for example. IPv6 has Internet Protocol Security (IPSec) integrated into the proto-
col stack, which provides end-to-end secure transmission and authentication. IPv6 has
more flexibility and routing capabilities and allows for Quality of Service (QoS) prior-
ity values to be assigned to time-sensitive transmissions. The protocol offers autocon-
figuration, which makes administration much easier, and it does not require network
address translation (NAT) to extend its address space.
NAT was developed because IPv4 addresses were running out. Although the NAT
technology is extremely useful, it has caused a lot of overhead and transmission prob-
lems because it breaks the client/server model that many applications use today. One
reason the industry did not jump on the IPv6 bandwagon when it came out years ago
is that NAT was developed, which reduced the speed at which IP addresses were being
depleted. Although the conversion rate from IPv4 to IPv6 is slow in some parts of the
world and the implementation process is quite complicated, the industry is making the
shift because of all the benefits that IPv6 brings to the table.
NOTE
NOTE NAT is covered in the “Network Address Translation” section later in
this chapter.
The IPv6 specification, as outlined in RFC 2460, lays out the differences and bene-
fits of IPv6 over IPv4. A few of the differences are as follows:
• IPv6 increases the IP address size from 32 bits to 128 bits to support more
levels of addressing hierarchy, a much greater number of addressable nodes,
and simpler autoconfiguration of addresses.
• The scalability of multicast routing is improved by adding a “scope” field to
multicast addresses. Also, a new type of address called an anycast address is
defined, which is used to send a packet to any one of a group of nodes.
• Some IPv4 header fields have been dropped or made optional to reduce the
common-case processing cost of packet handling and to limit the bandwidth
cost of the IPv6 header. This is illustrated in Figure 6-16.
• Changes in the way IP header options are encoded allow for more efficient
forwarding, less stringent limits on the length of options, and greater
flexibility for introducing new options in the future.

• A new capability is added to enable the labeling of packets belonging to
particular traffic “flows” for which the sender requests special handling, such
as nondefault QoS or “real-time” service.
• Extensions to support authentication, data integrity, and (optional) data
confidentiality are also specified for IPv6.
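The jump from 32 to 128 address bits is easy to see with the standard ipaddress module. The address below is illustrative, taken from the IPv6 documentation prefix (2001:db8::/32).

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)       # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)     # 2001:db8::1  (a run of zero groups collapses to '::')
print(addr.max_prefixlen)  # 128 address bits...
print(ipaddress.IPv4Address("10.10.2.3").max_prefixlen)  # ...versus 32 for IPv4
```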
IPv4 limits packets to 65,535 octets of payload, and IPv6 extends this size to
4,294,967,295 octets. These larger packets are referred to as jumbograms and improve
performance over high-maximum transmission unit (MTU) links. Currently most of
the world still uses IPv4, but IPv6 is being deployed more rapidly. This means that there
are “pockets” of networks using IPv4 and “pockets” of networks using IPv6 that still
need to communicate. This communication takes place through different tunneling
techniques, which either encapsulate IPv6 packets within IPv4 packets or carry out
automated address translations. Automatic tunneling is a technique where the routing
infrastructure automatically determines the tunnel endpoints so that protocol tunneling
can take place without preconfiguration. In the 6to4 tunneling method the tunnel
endpoints are determined by using a well-known IPv4 anycast address on the remote
side and by embedding IPv4 address data within IPv6 addresses on the local side. Teredo is
another automatic tunneling technique that uses UDP encapsulation so that NAT ad-
dress translations are not affected. Intra-Site Automatic Tunnel Addressing Protocol (ISA-
TAP) treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4
address to a link-local IPv6 address.
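The address embedding that 6to4 and Teredo rely on can be inspected with Python’s ipaddress module. The sample addresses below are illustrative (the Teredo example is the one used in the Python library documentation): a 6to4 address carries its IPv4 endpoint in the bits after the 2002::/16 prefix, and a Teredo (2001::/32) address encodes both the Teredo server and the obfuscated client address.

```python
import ipaddress

# 6to4: 2002:c000:0204::/48 embeds the IPv4 address 0xC0000204 = 192.0.2.4.
print(ipaddress.ip_address("2002:c000:204::1").sixtofour)  # 192.0.2.4

# Teredo: the server and the (XOR-obfuscated) client IPv4 addresses are both embedded.
server, client = ipaddress.ip_address("2001:0:4136:e378:8000:63bf:3fff:fdd2").teredo
print(server, client)  # 65.54.227.120 192.0.2.45
```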
Figure 6-16 IPv4 versus IPv6 headers

6to4 and Teredo are intersite tunneling mechanisms, and ISATAP is an intrasite
mechanism. So the first two are used for connectivity between different networks, and
ISATAP is used for connectivity of systems within a specific network. Notice in Figure 6-17
that 6to4 and Teredo are used on the Internet and ISATAP is used within an intranet.
While many of these automatic tunneling techniques reduce administration over-
head because network administrators do not have to configure each and every system
and network device with two different IP addresses, there are security risks that need to
be understood. Many times users and network administrators do not know that auto-
matic tunneling capabilities are enabled, thus they do not ensure that these different
tunnels are secured and/or are being monitored. If you are an administrator of a net-
work and have intrusion detection systems (IDS), intrusion prevention systems (IPS),
and firewalls that are only configured to monitor and restrict IPv4 traffic, then all IPv6
traffic could be traversing your network insecurely. Attackers use these protocol tunnels
and misconfigurations to get past these types of security devices so that malicious ac-
tivities can take place unnoticed. If you are a user and have a host-based firewall that
only understands IPv4 and your operating system has a dual IPv4/IPv6 networking
stack, traffic could be bypassing your firewall without being monitored and logged. The
use of Teredo can actually open ports in NAT devices that allow for unintended traffic
in and out of a network. It is critical that people who are responsible for configuring
and maintaining systems and networks understand the differences between IPv4 and
IPv6 and how the various tunneling mechanisms work so that all vulnerabilities are
identified and properly addressed. Products and software may need to be updated to
address both traffic types, proxies may need to be deployed to manage traffic commu-
nication securely, IPv6 should be disabled if not needed, and security appliances need
to be configured to monitor all traffic types.
[Figure 6-17 Various IPv4 to IPv6 tunneling techniques. The figure shows domain
controllers, NAP servers, a certification authority, an internal CRL distribution point, a
network location server, and DNS servers, alongside application servers running IPv4,
application servers running ISATAP, and application servers running native IPv6.]

Layer 2 Security Standards
As frames pass from one network device to another device, attackers can sniff the data;
modify the headers; redirect the traffic; spoof traffic; carry out man-in-the-middle at-
tacks, DoS attacks, and replay attacks; and indulge in other malicious activities. It has
become necessary to secure network traffic at the frame level, which is layer 2 of the OSI
model.
802.1AE is the IEEE MAC Security standard (MACSec), which defines a security in-
frastructure to provide data confidentiality, data integrity, and data origin authentica-
tion. Where a Virtual Private Network (VPN) connection provides protection at the
higher networking layers, MACSec provides hop-by-hop protection at layer 2, as shown
in Figure 6-18.
MACSec integrates security protection into wired Ethernet networks to secure LAN-
based traffic. Only authenticated and trusted devices on the network can communicate
with each other. Unauthorized devices are prevented from communicating via the net-
work, which helps prevent attackers from installing rogue devices and redirecting traffic
between nodes in an unauthorized manner. When a frame arrives at a device that is
configured with MACSec, the MACSec Security Entity (SecY) decrypts the frame if nec-
essary and computes an integrity check value (ICV) on the frame and compares it with
the ICV that was sent with the frame. If the ICVs match, the device processes the frame.
If they do not match, the device handles the frame according to a preconfigured policy,
such as discarding it.
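The check-then-accept logic the SecY applies can be sketched as follows. This is a conceptual illustration only: real MACSec computes its ICV with AES-GCM over the frame (and also handles encryption), whereas this sketch substitutes HMAC-SHA256 purely to show the integrity comparison and the discard policy.

```python
import hashlib
import hmac
import os

key = os.urandom(32)  # stands in for the hop's agreed MACSec key

def protect(frame: bytes) -> bytes:
    """Sender side: append an integrity check value (ICV) to the frame."""
    return frame + hmac.new(key, frame, hashlib.sha256).digest()

def receive(protected: bytes):
    """Receiver side: recompute the ICV and compare it with the one sent."""
    frame, icv = protected[:-32], protected[-32:]
    expected = hmac.new(key, frame, hashlib.sha256).digest()
    if hmac.compare_digest(icv, expected):
        return frame  # ICVs match: process the frame
    return None       # mismatch: apply the configured policy, e.g., discard

wire = protect(b"ethernet payload")
tampered = wire[:-1] + bytes([wire[-1] ^ 1])  # flip one bit in transit
print(receive(wire))      # b'ethernet payload'
print(receive(tampered))  # None (frame discarded)
```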
The IEEE 802.1AR standard specifies unique per-device identifiers (DevID) and the
management and cryptographic binding of a device (router, switch, access point) to its
identifiers. A verifiable unique device identity allows establishment of the trustworthi-
ness of devices, and thus facilitates secure device provisioning.
As a security administrator, you really only want devices that are allowed on your
network to be plugged into your network. But how do you properly and uniquely iden-
tify devices? The manufacturer’s serial number is not available for a protocol to review,
and MAC addresses, hostnames, and IP addresses are easily spoofed. 802.1AR defines a globally
unique per-device secure identifier cryptographically bound to the device through the
use of public key cryptography and digital certificates. These unique hardware-based cre-
dentials can be used with the Extensible Authentication Protocol-Transport Layer Secu-
rity (EAP-TLS) authentication framework. Each device that is compliant with IEEE
802.1AR comes with a single built-in initial secure device identity (iDevID). The iDev-
ID is an instance of the general concept of a DevID, which is intended to be used with
authentication protocols such as EAP, which is supported by IEEE 802.1X.
[Figure 6-18 MACSec provides layer 2 frame protection: traffic between a workstation,
a chain of switches, and a server is encrypted hop by hop.]

So 802.1AR provides a unique ID for a device. 802.1AE provides data encryption,
integrity, and origin authentication functionality. 802.1AF carries out key agreement
functions for the session keys used for data encryption. Each of these standards pro-
vides specific parameters to work within an 802.1X EAP-TLS framework, as shown in
Figure 6-19.
As Figure 6-19 shows, when a new device is installed on the network, it cannot just
start communicating with other devices, receive an IP address from a Dynamic Host
Configuration Protocol (DHCP) server, resolve names with the Domain Name Service
(DNS) server, etc. The device cannot carry out any network activity until it is authorized
to do so. So 802.1X port authentication kicks in, which means that only authentication
data are allowed to travel from the new device to the authenticating server. The authen-
tication data is the digital certificate and hardware identity associated with that device
(802.1AR), which is processed by EAP-TLS. Once the device is authenticated, usually by
a Remote Authentication Dial-In User Service (RADIUS) server, encryption keying mate-
rial is negotiated and agreed upon between surrounding network devices. Once the
keying material is installed, then data encryption and frame integrity checking can take
place (802.1AE) as traffic goes from one network device to the next.
These IEEE standards are new and evolving and at different levels of implementa-
tion by various vendors. One way the unique hardware identity and cryptographic ma-
terial are embedded in new network devices is through the use of a Trusted Platform
Module, which is discussed in Chapter 7.
[Figure 6-19 Layer 2 security protocols. A new infrastructure device joins a network
containing an upstream device, an authentication server, and a certificate authority:
0. Physically install new device. 1. An 802.1X conversation starts. 2. EAP-TLS messages
are forwarded. 3. Key material is returned and stored. 4. Session keys are generated.
5. MACSec encryption is enabled. The exchange draws on IEEE 802.1AR (device identity),
IEEE 802.1AF (key agreement), IEEE 802.1AE (MACSec), and an IETF key management
framework.]

Key Terms
• Open Systems Interconnection (OSI) model International standardization
of system-based network communication through a modular seven-layer
architecture.
• TCP/IP model Standardization of device-based network communication
through a modular four-layer architecture. Specific to the IP suite, created
in 1970 by an agency of the U.S. Department of Defense (DoD).
• Transmission Control Protocol (TCP) Core protocol of the TCP/IP
suite, which provides connection-oriented, end-to-end, reliable network
connectivity.
• Internet Protocol (IP) Core protocol of the TCP/IP suite. Provides packet
construction, addressing, and routing functionality.
• User Datagram Protocol (UDP) Connectionless, unreliable transport layer
protocol, which is considered a “best effort” protocol.
• Ports Software constructs that allow for application- or service-specific
communication between systems on a network. Ports are broken down
into categories: well known (0–1023), registered (1024–49151), and
dynamic (49152–65535).
• SYN flood DoS attack in which an attacker sends a succession of SYN
packets with the goal of overwhelming the victim system so that it is
unresponsive to legitimate traffic.
• Session hijacking Attack method that allows an attacker to overtake and
control a communication session between two systems.
• IPv6 IP version 6 is the successor to IP version 4 and provides 128-bit
addressing, an integrated IPSec security protocol, simplified header
formats, and some automated configuration.
• Subnet Logical subdivision of a network that improves network
administration and helps reduce network traffic congestion. Process of
segmenting a network into smaller networks through the use of an
addressing scheme made up of network and host portions.
• Classless Interdomain Routing Variable-length subnet masking, which
allows a network to be divided into different-sized subnets. The goal is
to increase the efficiency of the use of IP addresses, since classful
addressing schemes commonly end up with unused addresses.
• 6to4 Transition mechanism for migrating from IPv4 to IPv6. It allows
systems to use IPv6 to communicate if their traffic has to traverse an
IPv4 network.
• Teredo Transition mechanism for migrating from IPv4 to IPv6. It allows
systems to use IPv6 to communicate if their traffic has to traverse an
IPv4 network, but also performs its function behind NAT devices.

Types of Transmission
Physical data transmission can happen in different ways (analog or digital); can use
different synchronization schemes (synchronous or asynchronous); can use either one
sole channel over a transmission medium (baseband) or several different channels over
a transmission medium (broadband); and transmission can take place as electrical
voltage, radiowave, microwave, or infrared signals. These transmission types and their
characteristics are described in the following sections.
Analog and Digital
Would you like your signals wavy or square?
A signal is just some way of moving information in a physical format from one
point to another point. I can signal a message to you through nodding my head, waving
my signaling method. In the world of technology, we have specific carrier signals that
are in place to move data from one system to another system. The carrier signal is like
a horse, which takes a rider (data) from one place to another place. Data can be trans-
mitted through analog or digital signaling formats. If you are moving data through an
analog transmission technology (e.g., radio), then the data is represented by the char-
acteristics of the waves that are carrying it. For example, a radio station uses a transmit-
ter to put its data (music) onto a wave that will extend all the way to your antenna. The
information is stripped off by the receiver in your radio and presented to you in its
original format—a song. The data is encoded onto the carrier signal and is represented
by various amplitude and frequency values, as shown in Figure 6-20.
Data being represented in wave values (analog) are different from data being repre-
sented in discrete voltage values (digital). As an analogy, compare an analog clock and
a digital clock. An analog clock has hands that continuously rotate on the face of the
clock. To figure out what time it is you have to interpret the position of the hands and
map their positions to specific values. So you have to know that if the large hand is on
the number 1 and the small hand is on the number 6, this actually means 1:30. The
individual and specific location of the hands corresponds to a value. A digital clock
does not take this much work. You just look at it and it gives you a time value in the
format of number:number. There is no mapping work involved with a digital clock
because it provides you with data in clear-cut formats.
Key Terms (continued)
• Intra-Site Automatic Tunnel Addressing Protocol An IPv6 transition
mechanism meant to transmit IPv6 packets between dual-stack nodes on
top of an IPv4 network.
• IEEE 802.1AE (MACSec) Standard that specifies a set of protocols to meet
the security requirements for protecting data traversing Ethernet LANs.
• IEEE 802.1AR Standard that specifies unique per-device identifiers (DevID)
and the management and cryptographic binding of a device (router, switch,
access point) to its identifiers.

An analog clock can represent different values as the hands move forward—1:35
and 1 second, 1:35 and 2 seconds, 1:35 and 3 seconds. Each movement of the hands
represents a specific value just like the individual data points on a wave in an analog
transmission. A digital clock provides discrete values without having to map anything.
The same is true with digital transmissions: the value is always either a 1 or a 0—no
need for mapping to find the actual value.
Computers have always worked in a binary and digital manner (1 or 0). When our
telecommunication infrastructure was purely analog, each system that needed to com-
municate over a telecommunication line had to have a modem (modulator/demodulator),
which would modulate the digital data into an analog signal. The sending system’s
modem would modulate the data onto the signal, and the receiving system’s modem
would demodulate the data off the signal.
Digital signals are more reliable than analog signals over a long distance and pro-
vide a clear-cut and efficient signaling method because the voltage is either on (1) or
not on (0), compared to interpreting the waves of an analog signal. Extracting digital
signals from a noisy carrier is relatively easy. It is difficult to extract analog signals from
background noise because the amplitudes and frequencies of the waves slowly lose
form. This is because an analog signal could have an infinite number of values or states,
whereas a digital signal exists in discrete states. A digital signal is a square wave, which
does not have all of the possible values of the different amplitudes and frequencies of
an analog signal. Digital systems are superior to analog systems in that they can trans-
port more data transmissions on the same line at higher quality over longer distances.
Digital systems can implement compression mechanisms to increase data throughput,
provide signal integrity through repeaters that “clean up” the transmissions, and multi-
plex different types of data (voice, data, video) onto the same transmission channel. As
we will see in following sections, most telecommunication technologies have moved
from analog to digital transmission technologies.
[Figure 6-20 Analog signals are measured in amplitude and frequency, whereas digital
signals represent binary digits as electrical pulses.]

NOTE Bandwidth refers to the number of electrical pulses that can be
transmitted over a link within a second, and these electrical pulses carry
individual bits of information. Bandwidth is the data transfer capability of
a connection and is commonly associated with the amount of available
frequencies and speed of a link. Data throughput is the actual amount of
data that can be carried over this connection. Data throughput values can be
higher than bandwidth values if compression mechanisms are implemented.
But if links are highly congested or there are interference issues, the data
throughput values can be lower. Both bandwidth and data throughput are
measured in bits per second.
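The bandwidth/throughput relationship can be put into numbers. The figures below are invented purely for illustration: a 10 Mbps link with 2:1 compression can deliver more than 10 Mbps of application data, while congestion pushes throughput below the raw bandwidth.

```python
bandwidth_bps = 10_000_000  # raw signaling capacity of the link (10 Mbps)

# With 2:1 lossless compression, each transmitted bit carries two bits of data:
throughput_with_compression = bandwidth_bps * 2
print(throughput_with_compression)  # 20000000 -> throughput above bandwidth

# On a congested link, only a fraction of the capacity carries useful data:
throughput_congested = int(bandwidth_bps * 0.6)
print(throughput_congested)         # 6000000 -> throughput below bandwidth
```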
Asynchronous and Synchronous
It’s all about timing.
Analog and digital transmission technologies deal with the format in which data is
moved from one system to another. Asynchronous and synchronous transmission types
are similar to the cadence rules we use for conversation synchronization. Asynchronous
and synchronous network technologies provide synchronization rules to govern how
systems communicate to each other. If you have ever spoken over a satellite phone you
have probably experienced problems with communication synchronization. You and
the other person talking do not allow for the necessary delay that satellite communica-
tion requires, so you “speak over” one another. Once you figure out the delay in the
connection, you resynchronize your timing so that only one person’s data (voice) is
transmitting at one time so that each person can properly understand the full conversa-
tion. Proper pauses frame your words in a way to make them understandable.
Synchronization through communication also happens when we write messages to
each other. Properly placed commas, periods, and semicolons provide breaks in text so
that the person reading the message can better understand the information. If you see
“stickwithmekidandyouwillweardiamonds” without the proper punctuation, it is more
difficult for you to understand. This is why we have grammar rules. If someone writes
you a letter starting from the bottom and right side of a piece of paper and you do not
know this, you will not be able to read his message properly.
Technological communication protocols also have their own grammar and syn-
chronization rules when it comes to the transmission of data. If two systems are com-
municating over a network protocol that employs asynchronous timing, then start and
stop bits are used. The sending system sends a “start” bit, then sends its character, and
then sends a “stop” bit. This happens for the whole message. The receiving system
knows when a character is starting and stopping; thus, it knows how to interpret each
character of the message. If the systems are communicating over a network protocol
that uses synchronous timing, then no start and stop bits are added. The whole message
is sent without artificial breaks, and the receiving system needs to know how to inter-
pret the information without these bits.
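The start/stop framing can be sketched in a few lines. This is a toy model (real asynchronous links such as UART serial also involve idle line states, optional parity bits, and timing tolerances): each 8-bit character is wrapped in a start bit (0) and a stop bit (1), and the receiver uses those bits to find character boundaries.

```python
def frame_async(message: bytes) -> str:
    """Sender: wrap every character in a start bit (0) and a stop bit (1)."""
    return "".join("0" + format(byte, "08b") + "1" for byte in message)

def deframe_async(stream: str) -> bytes:
    """Receiver: use the start/stop bits to recover each character."""
    chars = []
    for i in range(0, len(stream), 10):  # 10 bits per framed character
        unit = stream[i:i + 10]
        assert unit[0] == "0" and unit[-1] == "1", "framing error"
        chars.append(int(unit[1:-1], 2))
    return bytes(chars)

framed = frame_async(b"Hi")
print(framed)                 # 00100100010011010011
print(deframe_async(framed))  # b'Hi'
```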

NOTE If you have ever had a friend who talked constantly with no breaks
between sentences or subjects, this is how synchronous communication takes
place. It is a constant stream of data, with no breaks in between. Synchronous
communication is also similar to a whole novel with no punctuation. There
are no stops and starts between words or sentences; it is just a constant
bombardment of data. This can be more efficient in that less data is
being transmitted (no punctuation marks), but unless the receiver can
understand data in this format, it will be a wasted effort.
I stated earlier that in our reality we actually use synchronous and asynchronous
tactics without fully realizing it. If I write you a letter, I put spaces between my words—
similar to start and stop bits—so you can quickly identify each word as its own unit. I
use punctuation (commas, periods) in my text to let you know when an idea is ending
or changing. I use these tools to synchronize my message to you. If I speak the same
message to you, I do not actually state punctuation marks, but instead I insert pauses,
which provide a rhythm and cadence to make it easier for you to understand. This is
similar to synchronous communication, but a clock pulse is used to set the “rhythm
and cadence” for the transmission.
If two systems are going to communicate using a synchronous transmission tech-
nology, they do not use start and stop bits, but the synchronization of the transfer of
data takes place through a timing sequence, which is initiated by a clock pulse.
It is the data link protocol that has the synchronization rules embedded into it. So
when a message goes down a system's network stack, if a data link protocol such as
High-level Data Link Control (HDLC) is being used, then a clocking sequence is in place.
(The receiving system also has to be using this protocol so it can interpret the data.)
If the message is going down a network stack and a protocol such as ATM is at the data
link layer, then the message is framed with start and stop indicators.
Data link protocols that employ synchronous timing mechanisms are commonly
used in environments that have systems that transfer large amounts of data in a predict-
able manner (i.e., mainframe environment). Environments that contain systems that
send data in a nonpredictable manner (i.e., Internet connections) commonly have sys-
tems with protocols that use asynchronous timing mechanisms.
So, synchronous communication protocols transfer data as a stream of bits instead
of framing them in start and stop bits. The synchronization can happen between two
systems using a clocking mechanism, or a signal can be encoded into the data stream
to let the receiver synchronize with the sender of the message. This synchronization
needs to take place before the first message is sent. The sending system can transmit a
digital clock pulse to the receiving system, which translates into, “We will start here and
work in this type of synchronization scheme.”

CISSP All-in-One Exam Guide
554
Broadband and Baseband
How many channels can you shove into this one wire?
So analog transmission means that data is being moved as waves, and digital trans-
mission means that data is being moved as discrete electric pulses. Synchronous trans-
mission means that two devices control their conversations with a clocking mechanism,
and asynchronous means that systems use start and stop bits for communication syn-
chronization. Now let’s look at how many individual communication sessions can take
place at one time.
A baseband technology uses the entire communication channel for its transmission,
whereas a broadband technology divides the communication channel into individual
and independent subchannels so that different types of data can be transmitted simul-
taneously. Baseband permits only one signal to be transmitted at a time, whereas
broadband carries several signals over different subchannels. For example, a coaxial
cable TV (CATV) system is a broadband technology that delivers multiple television
channels over the same cable. This system can also provide home users with Internet
access, but these data are transmitted at a different frequency spectrum than the TV
channels.
As an analogy, baseband technology only provides a one-lane highway for data to
get from one point to another point. A broadband technology provides a data highway
made up of many different lanes, so that not only can more data be moved from one
point to another point, but different types of data can travel over the individual lanes.
Communication Characteristics
• Synchronous
• Robust error checking, commonly through cyclic redundancy
checking (CRC)
• Timing component for data transmission synchronization
• Used for high-speed, high-volume transmissions
• Minimal overhead compared to asynchronous communication
• Asynchronous
• No timing component
• Surrounds each byte with processing bits
• Parity bit used for error control
• Each byte requires three bits of instruction (start, stop, parity)
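The framing and error-control items in the list above can be sketched in a few lines of Python. This is a toy illustration of the concepts (start, stop, and parity bits for asynchronous framing; a CRC block check for synchronous transfer), not an implementation of any real link protocol:

```python
import zlib

def parity_bit(byte):
    """Even parity: the bit is set so the total count of 1s is even."""
    return bin(byte).count("1") % 2

def frame_async(byte):
    """Asynchronous framing: start bit (0), 8 data bits, parity, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [parity_bit(byte)] + [1]

def crc_sync(payload: bytes):
    """Synchronous-style block check: one CRC-32 over the whole payload,
    far less overhead than per-byte framing on a large transfer."""
    return zlib.crc32(payload)

frame = frame_async(ord("A"))  # 11 bits on the wire for 8 bits of data
print(frame)                   # -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```

Note that asynchronous framing costs 3 extra bits per byte, while the CRC costs only 4 bytes per block, which is why synchronous transfer wins for high-volume, predictable traffic.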

Any transmission technology that “chops up” one communication channel into
multiple channels is considered broadband. The communication channel is usually
some frequency spectrum, and the broadband technology provides delineation between
these frequencies and techniques on how to modulate the data onto the individual fre-
quency channels. To continue with our analogy, we could have one large highway that
could fit eight individual lanes, but unless we have something that defines these lanes
and there are rules for how these lanes are used, this is a baseband connection. If we
take the same highway and lay down painted white lines, traffic signs, on and off ramps,
and rules that drivers have to follow, now we are talking about broadband.
A digital subscriber line (DSL) uses one single phone line and constructs a set of
high-frequency channels for Internet data transmissions. A cable modem uses the
available frequency spectrum that is provided by a cable TV carrier to move Internet
traffic to and from a household. Mobile broadband devices implement individual
channels over a cellular connection, and Wi-Fi broadband technology moves data to
and from an access point over a specified frequency set. We will cover these technolo-
gies more in-depth throughout the chapter, but for now you just need to understand
that they are different ways of cutting up one channel into individual channels for
higher data transfer and that they provide the capability to move different types of
traffic at the same time.
How Do These Work Together?
If you are new to networking, it can be hard to understand how the OSI model,
analog and digital, synchronous and asynchronous, and baseband and broad-
band technologies interrelate and differentiate. You can think of the OSI model
as a structure to build different languages. If you and I are going to speak to each
other in English, we have to follow the rules of this language to be able to under-
stand each other. If we are going to speak French, we still have to follow the rules
of language (OSI model), but the individual letters that make up the words are in
a different order. The OSI model is a generic structure that can be used to define
many different “languages” for devices to be able to talk to each other. Once we
agree that we are going to communicate using English, I can speak my message to
you, thus my words move over continuous airwaves (analog). Or I can choose to
send my message to you through Morse code, which uses individual discrete val-
ues (digital). I can send you all of my words with no pauses or punctuation (syn-
chronous) or insert pauses and punctuation (asynchronous). If I am the only one
speaking to you at a time, this would be analogous to baseband. If I have ten of
my friends speaking to you at one time, this would be broadband.

Next, let’s look at the different ways we connect the many devices that make up
small and large networks around the world.
Cabling
Why are cables so important?
Response: Without them, our electrons would fall onto the floor.
Network cabling and wiring are important when setting up a network or extending
an existing one. Particular types of cables must be used with specific data link layer
technologies. Cable types vary in speeds, maximum lengths, and connectivity issues
with NICs. In the 1970s and 1980s, coaxial cable was the way to go, but in the late
1980s, twisted-pair wiring hit the scene, and today it is the most popular networking
cable used.
Electrical signals travel as currents through cables and can be negatively affected by
many factors within the environment, such as motors, fluorescent lighting, magnetic
forces, and other electrical devices. These items can corrupt the data as it travels through
the cable, which is why cable standards are used to indicate cable type, shielding, trans-
mission rates, and maximum distance a particular type of cable can be used.
Key Terms
• Digital signals Binary digits are represented and transmitted as
discrete electrical pulses. Digital signaling allows for higher data
transfer rates and higher data integrity compared to analog signaling.
• Analog signals A continuously varying electromagnetic wave that
represents and transmits data. Carrier signals vary by amplitude
and frequency.
• Asynchronous communication Transmission sequencing technology
that uses start and stop bits or a similar encoding mechanism. Used in
environments that transmit a variable amount of data in a periodic
fashion.
• Synchronous communication Transmission sequencing technology
that uses a clocking pulse or timing scheme for data transfer
synchronization.
• Baseband transmission Uses the full bandwidth for only one
communication channel and has a low data transfer rate compared to
broadband.
• Broadband transmission Divides the bandwidth of a communication
channel into many channels, enabling different types of data to be
transmitted at one time.

Cabling has bandwidth values associated with it, which is different from data
throughput values. Although these two terms are related, they are indeed different. The
bandwidth of a cable indicates the highest frequency range it uses—for instance,
10Base-T uses 10 MHz and 100Base-TX uses 80 MHz. This is different from the actual
amount of data that can be pushed through a cable. The data throughput rate is the
actual amount of data that goes through the wire after compression and encoding have
been used. 10Base-T has a data rate of 10 Mbps, and 100Base-TX has a data rate of 100
Mbps. The bandwidth can be thought of as the size of the pipe, and the data through-
put rate is the actual amount of data that travels per second through that pipe.
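The pipe analogy can be made concrete with a quick calculation. 100Base-TX really does use 4B/5B encoding (4 data bits carried in every 5 line bits), which is how a 125-Mbaud signal yields a 100-Mbps data rate; treat the function itself as a back-of-the-envelope sketch rather than a cable-spec calculator:

```python
def data_throughput(symbol_rate_baud, bits_per_symbol, encoding_efficiency):
    """Usable data rate = raw signaling rate reduced by line-encoding overhead."""
    return symbol_rate_baud * bits_per_symbol * encoding_efficiency

# 100Base-TX: 125 Mbaud signaling, 1 bit per symbol, 4B/5B encoding
# (4 data bits in every 5 line bits) -> the familiar 100 Mbps data rate.
rate = data_throughput(125e6, 1, 4 / 5)
print(rate / 1e6, "Mbps")  # -> 100.0 Mbps
```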
Bandwidth is just one of the characteristics we will look at as we cover various ca-
bling types in the following sections.
Coaxial Cable
Coaxial cable has a copper core that is surrounded by a shielding layer and grounding
wire, as shown in Figure 6-21. This is all encased within a protective outer jacket. Com-
pared to twisted-pair cable, coaxial cable is more resistant to electromagnetic interfer-
ence (EMI), provides a higher bandwidth, and supports the use of longer cable lengths.
So, why is twisted-pair cable more popular? Twisted-pair cable is cheaper and easier to
work with, and the move to switched environments that provide hierarchical wiring
schemes has overcome the cable-length issue of twisted-pair cable.
Coaxial cabling is used as a transmission line for radio frequency signals. If you
have cable TV, you have coaxial cabling entering your house and the back of your TV.
The various TV channels are carried over different radio frequencies. We will cover cable
modems later in this chapter, which is a technology that allows you to use some of the
“empty” TV frequencies for Internet connectivity.
Twisted-Pair Cable
This cable is kind of flimsy. Why do we use it?
Response: It’s cheap and easy to work with.
Twisted-pair cabling has insulated copper wires surrounded by an outer protective
jacket. If the cable has an outer foil shielding, it is referred to as shielded twisted pair
(STP), which adds protection from radio frequency interference and electromagnetic
interference. Twisted-pair cabling, which does not have this extra outer shielding, is
called unshielded twisted pair (UTP).
Figure 6-21 Coaxial cable

The twisted-pair cable contains copper wires that twist around each other, as shown
in Figure 6-22. This twisting of the wires protects the integrity and strength of the sig-
nals they carry. Each wire forms a balanced circuit, because the voltage in each pair uses
the same amplitude, just with opposite phases. The tighter the twisting of the wires, the
more resistant the cable is to interference and attenuation. UTP has several categories
of cabling, each of which has its own unique characteristics.
The twisting of the wires, the type of insulation used, the quality of the conductive
material, and the shielding of the wire determine the rate at which data can be trans-
mitted. The UTP ratings indicate which of these components were used when the cables
were manufactured. Some types are more suitable and effective for specific uses and
environments. Table 6-3 lists the cable ratings.
Copper cable has been around for many years. It is inexpensive and easy to use. A
majority of the telephone systems today use copper cabling with the rating of voice
grade. Twisted-pair wiring is the preferred network cabling, but it also has its draw-
backs. Copper actually resists the flow of electrons, which causes a signal to degrade
after it has traveled a certain distance. This is why cable lengths are recommended for
copper cables; if these recommendations are not followed, a network could experience
signal loss and data corruption. Copper also radiates energy, which means information
can be monitored and captured by intruders. UTP is the least secure networking cable
compared to coaxial and fiber. If a company requires higher speed, higher security, and
cables to have longer runs than what is allowed in copper cabling, fiber-optic cable may
be a better choice.
Fiber-Optic Cable
Hey, I can’t tap into this fiber cable.
Response: Exactly.
Twisted-pair cable and coaxial cable use copper wires as their data transmission
media, but fiber-optic cable uses a type of glass that carries light waves, which represent
the data being transmitted. The glass core is surrounded by a protective cladding, which
in turn is encased within an outer jacket.
Figure 6-22 Twisted-pair cabling uses copper wires.

Because it uses glass, fiber-optic cabling has higher transmission speeds that allow
signals to travel over longer distances. Fiber cabling is not as affected by attenuation
and EMI when compared to cabling that uses copper. It does not radiate signals, as does
UTP cabling, and is difficult to eavesdrop on; therefore, fiber-optic cabling is much
more secure than UTP, STP, or coaxial.
UTP Category | Characteristics | Usage
Category 1 | Voice-grade telephone cable for up to 1 Mbps transmission rate | Not recommended for network use, but modems can communicate over it.
Category 2 | Data transmission up to 4 Mbps | Used in mainframe and minicomputer terminal connections, but not recommended for high-speed networking.
Category 3 | 10 Mbps for Ethernet and 4 Mbps for Token Ring | Used in 10Base-T network installations.
Category 4 | 16 Mbps | Usually used in Token Ring networks.
Category 5 | 100 Mbps; has high twisting and thus low crosstalk | Used in 100Base-TX, CDDI, Ethernet, and ATM installations; most widely used in network installations.
Category 6 | 10 Gbps | Used in new network installations requiring high-speed transmission. Standard for Gigabit Ethernet.
Category 7 | 10 Gbps | Used in new network installations requiring higher-speed transmission.
Table 6-3 UTP Cable Ratings
Fiber Components
Fiber-optic cables are made up of a light source, an optical cable, and a light
detector.
• Light sources Convert electrical signals into light signals
  • Light-emitting diodes (LEDs)
  • Diode lasers
• Optical fiber cable Data travels as light
  • Single mode Small glass core; used for high-speed data transmission
  over long distances. Less susceptible to attenuation than multimode fiber.
  • Multimode Large glass core; able to carry more data than single-mode
  fiber, though best for shorter distances because of its higher
  attenuation level.
• Light detector Converts light signals back into electrical signals

Using fiber-optic cable sounds like the way to go, so you might wonder why you
would even bother with UTP, STP, or coaxial. Unfortunately, fiber-optic cable is expen-
sive and difficult to work with. It is usually used in backbone networks and environ-
ments that require high data transfer rates. Most networks use UTP and connect to a
backbone that uses fiber.
NOTE The price of fiber and the cost of installation have been continuously
decreasing, while the demand for more bandwidth only increases. More
organizations and service providers are installing fiber directly to the end user.
Cabling Problems
Cables are extremely important within networks, and when they experience problems,
the whole network could experience problems. This section addresses some of the more
common cabling issues many networks experience.
Noise
Noise on a line is usually caused by surrounding devices or by characteristics of the wir-
ing’s environment. Noise can be caused by motors, computers, copy machines, fluores-
cent lighting, and microwave ovens, to name a few. This background noise can combine
with the data being transmitted over the cable and distort the signal, as shown in Figure
6-23. The more noise there is interacting with the cable, the more likely the receiving
end will not receive the data in the form originally transmitted.
Cabling Connection Types
Cables follow universal standards to allow for interoperability and connectivity
between common devices and environments. The standards are developed and
maintained by the Telecommunications Industry Association (TIA) and the Elec-
tronic Industries Association (EIA). The TIA/EIA-568-B standard enables the de-
sign and implementation of structured cabling systems for commercial buildings.
The majority of the standards define cabling types, distances, connectors, cable
system architectures, cable termination standards and performance characteris-
tics, cable installation requirements, and methods of testing installed cable. The
following are commonly used physical interface connection standards:
• RJ-11 is often used for terminating telephone wires.
• RJ-45 is often used to terminate twisted-pair cables in Ethernet
environments.
• BNC (British Naval Connector) is often used for terminating coaxial
cables. It is used to connect various types of radio, television, and other
radio-frequency electronic equipment. (Also referred to as the Bayonet
Neill–Concelman connector.)

Attenuation
Attenuation is the loss of signal strength as it travels. The longer a cable, the more at-
tenuation occurs, which causes the signal carrying the data to deteriorate. This is why
standards include suggested cable-run lengths.
The effects of attenuation increase with higher frequencies; thus, 100Base-TX at
80MHz has a higher attenuation rate than 10Base-T at 10MHz. This means that cables
used to transmit data at higher frequencies should have shorter cable runs to ensure
attenuation does not become an issue.
If a networking cable is too long, attenuation may occur. Basically, the data are in
the form of electrons, and these electrons have to “swim” through a copper wire. How-
ever, this is more like swimming upstream, because there is a lot of resistance on the
electrons working in this media. After a certain distance, the electrons start to slow
down and their encoding format loses form. If the form gets too degraded, the receiving
system cannot interpret them any longer. If a network administrator needs to run a ca-
ble longer than its recommended segment length, she needs to insert a repeater or
some type of device that will amplify the signal and ensure it gets to its destination in
the right encoding format.
Attenuation can also be caused by cable breaks and malfunctions. This is why ca-
bles should be tested. If a cable is suspected of attenuation problems, cable testers can
inject signals into the cable and read the results at the end of the cable.
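A repeater decision like the one described above boils down to a simple loss budget, since attenuation grows linearly in decibels with cable length. The power and sensitivity numbers below are invented for illustration, not taken from any cable datasheet:

```python
def received_power_dbm(tx_power_dbm, loss_db_per_100m, length_m):
    """Attenuation grows linearly in dB with cable length."""
    return tx_power_dbm - loss_db_per_100m * (length_m / 100)

def needs_repeater(tx_power_dbm, loss_db_per_100m, length_m, rx_sensitivity_dbm):
    """True if the signal arrives too weak for the receiver to decode."""
    received = received_power_dbm(tx_power_dbm, loss_db_per_100m, length_m)
    return received < rx_sensitivity_dbm

# Illustrative numbers only: 20 dB of loss per 100 m, receiver needs -20 dBm.
print(needs_repeater(0, 20, length_m=90, rx_sensitivity_dbm=-20))   # -> False
print(needs_repeater(0, 20, length_m=150, rx_sensitivity_dbm=-20))  # -> True
```

This also shows why higher-frequency cabling gets shorter recommended runs: a larger loss-per-100m figure pushes the "needs a repeater" threshold closer.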
Crosstalk
Crosstalk is a phenomenon that occurs when electrical signals of one wire spill over to the
signals of another wire. When the different electrical signals mix, their integrity degrades
and data corruption can occur. UTP is much more vulnerable to crosstalk than STP or
coaxial because it does not have extra layers of shielding to help protect against it.
As stated earlier, the two-wire pairs within twisted-pair cables form a balanced cir-
cuit because they both have the same amplitude, just with different phases. Crosstalk
and background noise can throw off this balance, and the wire can actually start to act
like an antenna, which means it will be more susceptible to picking up other noises in
the environment.
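The balanced-circuit behavior is easy to demonstrate numerically: because the two wires carry the signal with opposite phases, subtracting them at the receiver cancels any noise that couples onto both wires equally. The voltage samples below are made up for the demonstration:

```python
def differential_receive(wire_a, wire_b):
    """A balanced pair carries the signal with opposite phase on each wire;
    the receiver subtracts them, so noise that hits both wires equally cancels."""
    return [(a - b) / 2 for a, b in zip(wire_a, wire_b)]

signal = [1, -1, 1, 1, -1]              # data encoded as +/-1 volt levels
noise = [0.5, -0.25, 0.25, 0.125, 0.5]  # crosstalk coupled onto BOTH wires

wire_a = [s + n for s, n in zip(signal, noise)]   # signal plus noise
wire_b = [-s + n for s, n in zip(signal, noise)]  # inverted signal, same noise

print(differential_receive(wire_a, wire_b))  # -> [1.0, -1.0, 1.0, 1.0, -1.0]
```

The noise vanishes entirely in this idealized model; in a real cable the cancellation is only as good as the balance of the pair, which is why crosstalk that throws off the balance is a problem.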
Figure 6-23 Background noise can merge with an electronic signal and alter the signal's integrity.

Fire Rating of Cables
This cable smells funny when it’s on fire.
Just as buildings must meet certain fire codes, so must wiring schemes. A lot of
companies string their network wires in drop ceilings—the space between the ceiling
and the next floor—or under raised floors. This hides the cables and prevents people
from tripping over them. However, when wires are strung in places like this, they are
more likely to catch on fire without anyone knowing about it. Some cables, when
burning, produce hazardous gases that spread throughout the building quickly. Net-
work cabling that is placed in these types of areas, called plenum space, must meet a
specific fire rating to ensure it will not produce and release harmful chemicals in case
of a fire. A ventilation system’s components are usually located in this plenum space, so
if toxic chemicals were to get into that area, they could easily spread throughout the
building in minutes.
Nonplenum cables usually have a polyvinyl chloride (PVC) jacket covering, whereas
plenum-rated cables have jacket covers made of fluoropolymers. When setting up a
network or extending an existing network, it is important you know which wire types
are required in which situation.
Cables should be installed in unexposed areas so they are not easily tripped over,
damaged, or eavesdropped upon. The cables should be strung behind walls and in
protected spaces such as dropped ceilings. In environments that require extensive security,
wires are encapsulated within pressurized conduits so if someone attempts to access a
wire, the pressure of the conduit will change, causing an alarm to sound and a message
to be sent to the security staff.
NOTE While a lot of the world’s infrastructure is wired and thus uses
one of these types of cables, remember that a growing percentage of our
infrastructure is not wired. We will cover these technologies later in the
chapter (mobile, wireless, satellite, etc.).
Networking Foundations
We really need to connect all of these resources together.
Most users on a network need to use the same type of resources, such as print serv-
ers, portals, file servers, Internet connectivity, etc. Why not just string all the systems
together and have these resources available to all? Great idea! We’ll call it networking!
Networking has made amazing advances in just a short period of time. In the begin-
ning of the computer age, mainframes were the name of the game. They were isolated
powerhouses, and many had “dumb” terminals hanging off them. However, this was
not true networking. In the late 1960s and early 1970s, some technical researchers
came up with ways of connecting all the mainframes and Unix systems to enable them
to communicate. This marked the Internet’s baby steps.
Microcomputers evolved and were used in many offices and work areas. Slowly,
dumb terminals got a little smarter and more powerful as users needed to share office
resources. And bam! Ethernet was developed, which allowed for true networking. There
was no turning back after this.

While access to shared resources was a major drive in the evolution of networking,
today the infrastructure that supports these shared resources and the services these com-
ponents provide is really the secret to the secret sauce. As we will see, networks are made
up of routers, switches, web servers, proxies, firewalls, name resolution technologies, pro-
tocols, IDS, IPS, storage systems, antimalware software, virtual private networks, demili-
tarized zone (DMZ), data loss prevention solutions, e-mail systems, cloud computing,
web services, authentication services, redundant technologies, public key infrastructure,
private branch exchange (PBX), and more. While functionality is critical, there are other
important requirements that need to be understood when architecting a network, such as
scalability, redundancy, performance, security, manageability, and maintainability.
Infrastructure provides foundational capabilities that support almost every aspect
of our lives. When most people think of technology, they focus on the end systems that
they interact with—laptops, mobile phones, tablet PCs, workstations—or the applica-
tions they use: e-mail, fax, Facebook, websites, instant messaging, Twitter, online bank-
ing. Most people do not even give a thought to how this stuff works under the covers,
and many people do not fully realize all the other stuff dependent upon technology:
medical devices, critical infrastructure, weapon systems, transportation, satellites, tele-
phony, etc. People say it is love that makes the world go around, but let them experi-
ence one day without the Internet. We are all more dependent upon the Matrix than we
fully realize, and as security professionals we need to not only understand the Matrix
but also secure it.
Network Topology
How should we connect all these devices together?
The physical arrangement of computers and devices is called a network topology.
Topology refers to the manner in which a network is physically connected and shows
the layout of resources and systems. A difference exists between the physical network
topology and the logical topology. A network can be configured as a physical star but
work logically as a ring, as in the Token Ring technology.
The best topology for a particular network depends on such things as how nodes
are supposed to interact; which protocols are used; the types of applications that are
available; the reliability, expandability, and physical layout of a facility; existing wiring;
and the technologies implemented. The wrong topology or combination of topologies
can negatively affect the network’s performance, productivity, and growth possibilities.
This section describes the basic types of network topologies. Most networks are
much more complex and are usually implemented using a combination of topologies.
Ring Topology
A ring topology has a series of devices connected by unidirectional transmission links,
as shown in Figure 6-24. These links form a closed loop and do not connect to a central
system, as in a star topology (discussed later). In a physical ring formation, each node
is dependent upon the preceding nodes. In simple networks, if one system fails, all
other systems could be negatively affected because of this interdependence. Today,
most networks have redundancy in place or other mechanisms that will protect a whole
network from being affected by just one workstation misbehaving, but one disadvan-
tage of using a ring topology is that this possibility exists.

Bus Topology
In a simple bus topology, a single cable runs the entire length of the network. Nodes are
attached to the network through drop points on this cable. Data communications
transmit the length of the medium, and each packet transmitted has the capability of
being “looked at” by all nodes. Each node decides to accept or ignore the packet, de-
pending upon the packet’s destination address.
Bus topologies are of two main types: linear and tree. The linear bus topology has a
single cable with nodes attached. A tree topology has branches from the single cable,
and each branch can contain many nodes.
In simple implementations of a bus topology, if one workstation fails, other sys-
tems can be negatively affected because of the degree of interdependence. In addition,
because all nodes are connected to one main cable, the cable itself becomes a potential
single point of failure. Traditionally, Ethernet uses bus and star topologies.
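The "every node looks at every packet, keeps only its own" behavior of a bus can be sketched as follows; the addresses and frame layout are invented for illustration:

```python
def deliver_on_bus(frame, nodes):
    """On a shared bus every node sees every frame; each node keeps a frame
    only if the destination address matches its own (or is broadcast)."""
    dst = frame["dst"]
    return [n for n in nodes if n == dst or dst == "ff:ff:ff:ff:ff:ff"]

nodes = ["aa:00:00:00:00:01", "aa:00:00:00:00:02", "aa:00:00:00:00:03"]
print(deliver_on_bus({"dst": "aa:00:00:00:00:02", "data": b"hi"}, nodes))
# -> ['aa:00:00:00:00:02']  (the other nodes ignore the frame)
```

The fact that every node can see every frame is also why a shared bus is easy to sniff, a security point that comes up again with switched versus hubbed Ethernet.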
Star Topology
In a star topology, all nodes connect to a central device such as a switch. Each node has
a dedicated link to the central device. The central device needs to provide enough
throughput that it does not turn out to be a detrimental bottleneck for the network as
a whole. Because a central device is required, it is a potential single point of failure, so
redundancy may need to be implemented. Switches can be configured in flat or hierar-
chical implementations so larger organizations can use them.
When one workstation fails on a star topology, it does not affect other systems, as
in the ring or bus topologies. In a star topology, each system is not as dependent on
others as it is dependent on the central connection device. This topology generally re-
quires less cabling than other types of topologies. As a result, cut cables are less likely,
and detecting cable problems is an easier task.
Not many networks use true linear bus and ring topologies anymore. A ring topol-
ogy can be used for a backbone network, but most networks are constructed in a star
topology because it enables the network to be more resilient and not as affected if an
individual node experiences a problem.
Figure 6-24 A ring topology forms a closed-loop connection.

Mesh Topology
This network is a mess!
Response: We like to call it a mesh.
In a mesh topology, all systems and resources are connected to each other in a way
that does not follow the uniformity of the previous topologies, as shown in Figure 6-25.
This arrangement is usually a network of interconnected routers and switches that pro-
vides multiple paths to all the nodes on the network. In a full mesh topology, every
node is directly connected to every other node, which provides a great degree of redun-
dancy. In a partial mesh topology, every node is not directly connected. The Internet is
an example of a partial mesh topology.
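The redundancy-versus-cost trade-off between these topologies is easy to quantify: a full mesh needs n(n-1)/2 links, while a star needs only n-1. A quick sketch:

```python
def full_mesh_links(n):
    """Every node connects directly to every other node: n(n-1)/2 links."""
    return n * (n - 1) // 2

def star_links(n):
    """Every node connects only to the central device: n - 1 links."""
    return n - 1

for n in (5, 10, 50):
    print(n, "nodes:", full_mesh_links(n), "mesh links vs", star_links(n), "star links")
# 50 nodes already needs 1,225 mesh links, which is why full mesh is rare
# outside small backbone cores and partial mesh is used instead.
```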
A summary of the different network topologies and their important characteristics
is provided in Table 6-4.
Media Access Technologies
The physical topology of a network is the lower layer, or foundation, of a network. It
determines what type of media will be used and how the media will be connected be-
tween different systems. Media access technologies deal with how these systems com-
municate over this media and are usually represented in protocols, NIC drivers, and
interfaces. LAN access technologies set up the rules of how computers will communi-
cate on a network, how errors are handled, the maximum transmission unit (MTU) size
of frames, and much more. These rules enable all computers and devices to communi-
cate and recover from problems, and enable users to be productive in accomplishing
their networking tasks. Each participating entity needs to know how to communicate
properly so all other systems will understand the transmissions, instructions, and re-
quests. This is taken care of by the LAN media access technology.
NOTE An MTU is a parameter that indicates how much data a frame can
carry on a specific network. Different types of network technologies may
require different MTU sizes, which is why frames are sometimes fragmented.
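The note above can be illustrated with a toy fragmenter. The 20-byte header figure mirrors a minimal IP header, but this is a sketch of the idea, not an IP implementation (no flags, offsets, or reassembly logic):

```python
def fragment(payload: bytes, mtu: int, header: int = 20):
    """Split a payload into fragments whose total size fits the link MTU.
    Each fragment leaves room for one header's worth of overhead."""
    chunk = mtu - header  # room left for data in each frame
    return [payload[i:i + chunk] for i in range(0, len(payload), chunk)]

frags = fragment(b"x" * 4000, mtu=1500)
print(len(frags), [len(f) for f in frags])  # -> 3 [1480, 1480, 1040]
```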
Figure 6-25 In a mesh topology, each node is connected to all other nodes, which provides for redundant paths.

These technologies reside at the data link layer of the OSI model. Remember that as
a message is passed down through a network stack it is encapsulated by the protocols
and services at each layer. When the data message reaches the data link layer, the proto-
col at this layer adds the necessary headers and trailers that will allow the message to
traverse a specific type of network (Ethernet, Token Ring, FDDI, etc.). The protocol
and network driver work at the data link layer, and the NIC works at the physical layer,
but they have to work together and be compatible. If you install a new server on an
Ethernet network, you must implement an Ethernet NIC and driver.
The LAN-based technologies we will cover in the next sections are Ethernet, Token
Ring, and FDDI.
A local area network (LAN) is a network that provides shared communication and
resources in a relatively small area. What defines a LAN, as compared to a WAN, de-
pends on the physical medium, encapsulation protocols, and media access technology.
For example, a LAN could use 10Base-T cabling, TCP/IP protocols, and Ethernet media
access technology, and it could enable users who are in the same local building to com-
municate. A WAN, on the other hand, could use fiber-optic cabling, the L2TP encapsu-
lation protocol, and ATM media access technology, and could enable users from one
building to communicate with users in another building in another state (or country).
A WAN connects LANs over great distances geographically. Most of the differences be-
tween these technologies are found at the data link layer.
The term “local” in the context of a LAN refers not so much to the geographical area
as to the limitations of a LAN with regard to the shared medium, the number of de-
vices and computers that can be connected to it, the transmission rates, the types of
cable that can be used, and the compatible devices. If a network administrator develops
a very large LAN that would more appropriately be multiple LANs, too much traffic
could result in a big performance hit, or the cabling could be too long, in which case
attenuation (signal loss) becomes a factor. Environments where there are too many
nodes, routers, and switches may be overwhelmed, and administration of these networks could get complex, which opens the door for errors, collisions, and security holes. The network administrator should follow the specifications of the technology he is using, and once he has maxed out these numbers, he should consider implementing two or more LANs instead of one large LAN. LANs are defined by their physical topologies, data link layer technologies, protocols, and devices used. The following sections cover these topics and how they interrelate.

Bus: Uses a linear, single cable for all computers attached. All traffic travels the full cable and can be viewed by all other computers. Problem: if one station experiences a problem, it can negatively affect surrounding computers on the same cable.
Ring: All computers are connected by a unidirectional transmission link, and the cable is in a closed loop. Problem: if one station experiences a problem, it can negatively affect surrounding computers on the same ring.
Star: All computers are connected to a central device, which provides more resilience for the network. Problem: the central device is a single point of failure.
Tree: A bus topology with branches off of the main cable.
Mesh: Computers are connected to each other, which provides redundancy. Problem: requires more expense in cabling and extra effort to track down cable faults.
Table 6-4 Summary of Network Topologies

Chapter 6: Telecommunications and Network Security
567
Ethernet
Ethernet is a resource-sharing technology that enables several devices to communicate
on the same network. Ethernet usually uses a bus or star topology. If a linear bus topol-
ogy is used, all devices connect to one cable. If a star topology is used, each device is
connected to a cable that is connected to a centralized device, such as a switch. Ethernet
was developed in the 1970s, became commercially available in 1980, and was officially defined through the IEEE 802.3 standard.
Ethernet has seen quite an evolution in its short history, from purely coaxial cable
installations that worked at 10 Mbps to mostly Category 5 twisted-pair cable that works
at speeds of 100 Mbps, 1,000 Mbps (1 Gbps), and 10 Gbps.
Ethernet is defined by the following characteristics:
• Contention-based technology (all resources use the same shared
communication medium)
• Uses broadcast and collision domains
• Uses the carrier sense multiple access with collision detection (CSMA/CD)
access method
• Supports full duplex communication
• Can use coaxial, twisted-pair, or fiber-optic cabling types
• Is defined by standard IEEE 802.3
Ethernet addresses how computers share a common network and how they deal
with collisions, data integrity, communication mechanisms, and transmission controls.
These are the common characteristics of Ethernet, but Ethernet does vary in the type of
cabling schemes and transfer rates it can supply. Several types of Ethernet implementa-
tions are available, as outlined in Table 6-5. The following sections discuss 10Base2,
10Base5, and 10Base-T, which are common implementations.
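The frame structure underlying these characteristics can be illustrated with a short Python sketch. This is a simplified view (the helper names are ours, and a real frame also carries a preamble and a trailing CRC); it unpacks the 14-byte Ethernet II header: destination MAC, source MAC, and EtherType.

```python
import struct

def _mac(raw: bytes) -> str:
    """Render six raw bytes as a colon-separated MAC address."""
    return ":".join(f"{octet:02x}" for octet in raw)

def parse_ethernet_header(frame: bytes) -> dict:
    """Parse the 14-byte Ethernet II header: 6-byte destination MAC,
    6-byte source MAC, and a 2-byte EtherType in network byte order."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {"dst": _mac(dst), "src": _mac(src), "ethertype": hex(ethertype)}

# A broadcast frame (all-ones destination MAC) carrying an IPv4
# payload, signaled by EtherType 0x0800:
frame = bytes.fromhex("ffffffffffff" "00163e112233" "0800") + b"payload"
hdr = parse_ethernet_header(frame)
```

The all-ones destination address in the example is the broadcast address, which every NIC on the local segment will pick up.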
Q&A
Question A LAN is said to cover a relatively small geographical area.
When is a LAN no longer a LAN?
Answer When two distinct LANs are connected by a router, the
result is an internetwork, not a larger LAN. Each distinct LAN has
its own addressing scheme, broadcast domain, and communication
mechanisms. If two LANs are connected by a different data link layer
technology, such as frame relay or ATM, they are considered a WAN.

10Base2 10Base2, ThinNet, uses coaxial cable. It has a maximum cable length of 185 meters, provides 10-Mbps transmission rates, and requires BNC connectors to attach network devices.
10Base5 10Base5, ThickNet, uses a thicker coaxial cable that is not as flexible as
ThinNet and is more difficult to work with. However, ThickNet can have longer cable
segments than ThinNet and was used as the network backbone. ThickNet is more resis-
tant to electrical interference than ThinNet and is usually preferred when stringing wire
through electrically noisy environments that contain heavy machinery and magnetic
fields. ThickNet also requires BNCs because it uses coaxial cables.
10Base-T 10Base-T uses twisted-pair copper wiring instead of coaxial cabling. Twisted-pair wiring uses one pair of wires to transmit data and another pair to receive data. 10Base-T is usually implemented in a star topology, which provides easy network configuration. In a star topology, all systems are connected to centralized devices, which can be
in a flat or hierarchical configuration.
10Base-T networks have RJ-45 connector faceplates to which the computer con-
nects. The wires usually run behind walls and connect the faceplate to a punchdown
block within a wiring closet. The punchdown block is often connected to a 10Base-T
hub that serves as a doorway to the network’s backbone cable or to a central switch.
This type of configuration is shown in Figure 6-26.
100Base-TX Not surprisingly, 10 Mbps was considered heaven-sent when it first
arrived on the networking scene, but soon many users were demanding more speed and
power. The smart people had to gather into small rooms and hit the whiteboards with
ideas, calculations, and new technologies. The result of these meetings, computations,
engineering designs, and testing was Fast Ethernet.
Fast Ethernet is regular Ethernet, except that it runs at 100 Mbps over twisted-pair
wiring instead of at 10 Mbps. Around the same time Fast Ethernet arrived, another 100-Mbps technology was developed: 100VG-AnyLAN. This technology did not use Ethernet's traditional CSMA/CD and did not catch on like Fast Ethernet did.
Fast Ethernet uses the traditional CSMA/CD (explained in the “CSMA” section later
in the chapter) and the original frame format of Ethernet. This is why it is used in many
enterprise LAN environments today. One environment can run 10- and 100-Mbps net-
work segments that can communicate via 10/100 hubs or switches.
10Base2, ThinNet: Coaxial cable, 10 Mbps
10Base5, ThickNet: Coaxial cable, 10 Mbps
10Base-T: UTP, 10 Mbps
100Base-TX, Fast Ethernet: UTP, 100 Mbps
1000Base-T, Gigabit Ethernet: UTP, 1,000 Mbps
1000Base-X: Fiber, 1,000 Mbps
Table 6-5 Ethernet Implementation Types

1000Base-T Improved Ethernet technology has allowed for gigabit speeds over a Category 5 wire. In the 1000Base-T version, all four pairs of the unshielded twisted-pair cable are used for simultaneous transmission in both directions for a maximum distance of 100 meters. Negotiation takes place on two pairs, so if two gigabit devices are connected through a cable with only two pairs, the devices will successfully choose "gigabit" as the highest common denominator, but the link will never actually come up because gigabit transmission requires all four pairs.
1000Base-X 1000Base-X refers to Gigabit Ethernet transmission over fiber, where
options include 1000Base-CX, 1000Base-LX, 1000Base-SX, 1000Base-LX10, 1000Base-
BX10, or the nonstandard -ZX implementations.
Gigabit Ethernet builds on top of the Ethernet protocol, but increases speed tenfold
over Fast Ethernet to 1000 Mbps, or 1 gigabit per second (Gbps). Gigabit Ethernet al-
lows Ethernet to scale from 10/100 Mbps at the desktop to 100 Mbps up the riser to
1000 Mbps in the data center.
Figure 6-26 Ethernet hosts connect to a punchdown block within the wiring closet, which is
connected to the backbone via a hub or switch.

We will touch upon Ethernet again later in the chapter because it has beaten out many
of the other competing media access technologies. While Ethernet started off as just a
LAN technology, it has evolved and is commonly used in metropolitan area networks
(MANs) also.
Token Ring
Where’s my magic token? I have something to say.
Response: We aren’t giving it to you.
Like Ethernet, Token Ring is a LAN media access technology that enables the com-
munication and sharing of networking resources. The Token Ring technology was orig-
inally developed by IBM and then defined by the IEEE 802.5 standard. It uses a
token-passing technology with a star-configured topology. The ring part of the name
pertains to how the signals travel, which is in a logical ring. Each computer is connected
to a central hub, called a Multistation Access Unit (MAU). Physically, the topology can
be a star, but the signals and transmissions are passed in a logical ring.
A token-passing technology is one in which a device cannot put data on the network
wire without having possession of a token, a control frame that travels in a logical circle
and is “picked up” when a system needs to communicate. This is different from Ether-
net, in which all the devices attempt to communicate at the same time. This is why
Ethernet is referred to as a “chatty protocol” and has collisions. Token Ring does not
endure collisions, since only one system can communicate at a time, but this also
means communication takes place more slowly compared to Ethernet.
At first, Token Ring technology had the ability to transmit data at 4 Mbps. Later, it
was improved to transmit at 16 Mbps. When a frame is put on the wire, each computer
looks at it to see whether the frame is addressed to it. If the frame does not have that
specific computer’s address, the computer puts the frame back on the wire, properly
amplifies the message, and passes it to the next computer on the ring.
Token Ring employs a couple of mechanisms to deal with problems that can occur
on this type of network. The active monitor mechanism removes frames that are con-
tinually circulating on the network. This can occur if a computer locks up or is taken
offline for one reason or another and cannot properly receive a token destined for it.
With the beaconing mechanism, if a computer detects a problem with the network, it
sends a beacon frame. This frame generates a failure domain, which is between the
computer that issued the beacon and its neighbor downstream. The computers and
devices within this failure domain will attempt to reconfigure certain settings to try to
work around the detected fault. Figure 6-27 depicts a Token Ring network in a physical
star configuration.
Token Ring networks were popular in the 1980s and 1990s, and although some are
still around, Ethernet has become much more popular and has taken over the LAN
networking market.
FDDI
Fiber Distributed Data Interface (FDDI) technology, developed by the American Na-
tional Standards Institute (ANSI), is a high-speed, token-passing, media access technol-
ogy. FDDI has a data transmission speed of up to 100 Mbps and is usually used as a

backbone network using fiber-optic cabling. FDDI also provides fault tolerance by of-
fering a second counter-rotating fiber ring. The primary ring has data traveling clock-
wise and is used for regular data transmission. The second ring transmits data in a
counterclockwise fashion and is invoked only if the primary ring goes down. Sensors
watch the primary ring and, if it goes down, invoke a ring wrap so the data will be di-
verted to the second ring. Each node on the FDDI network has relays that are connected
to both rings, so if a break in the ring occurs, the two rings can be joined.
Figure 6-27 A Token Ring network
Q&A
Question Where do the differences between Ethernet, Token Ring,
and FDDI lie?
Answer These media access technologies work at the data link layer
of the OSI model. The data link layer is actually made up of a MAC
sublayer and an LLC sublayer. These media access technologies live at
the MAC layer and have to interface with the LLC layer. These media
access technologies carry out the framing functionality of a network
stack, which prepares each packet for network transmission. These
technologies differ in network capabilities, transmission speed, and
the physical medium they interact with.

When FDDI is used as a backbone network, it usually connects several different
networks, as shown in Figure 6-28.
Before Fast Ethernet and Gigabit Ethernet hit the market, FDDI was used mainly as
campus and service provider backbones. Because FDDI can be employed for distances
up to 100 kilometers, it was often used in MANs. The benefit of FDDI is that it can work
over long distances and at high speeds with minimal interference. It enables several
tokens to be present on the ring at the same time, causing more communication to take
place simultaneously, and it provides predictable delays that help connected networks
and devices know what to expect and when.
NOTE FDDI-2 provides fixed bandwidth that can be allocated for specific applications. This makes it work more like a broadband connection with QoS capabilities, which allows for voice, video, and data to travel over the same lines.
A version of FDDI, Copper Distributed Data Interface (CDDI), can work over UTP
cabling. Whereas FDDI would be used more as a MAN, CDDI can be used within a LAN
environment to connect network segments.
Figure 6-28 FDDI rings can be used as backbones to connect different LANs.

Devices that connect to FDDI rings fall into one of the following categories:
• Single-attachment station (SAS) Attaches to only one ring (the primary) through a concentrator
• Dual-attachment station (DAS) Has two ports, and each port provides a connection for both the primary and the secondary rings
• Single-attached concentrator (SAC) Concentrator that connects an SAS device to the primary ring
• Dual-attached concentrator (DAC) Concentrator that connects DAS, SAS, and SAC devices to both rings
The different FDDI device types are illustrated in Figure 6-29.
NOTE Ring topologies are considered deterministic, meaning that the rate of the traffic flow can be predicted. Since traffic can only flow if a token is in place, the maximum time that a node will have to wait to receive traffic can be determined. This can be beneficial for time-sensitive applications.
Figure 6-29 FDDI device types

Table 6-6 sums up the important characteristics of the technologies described in the
preceding sections.
Media Sharing
There are 150 devices on this network. How can they all use this one network wire properly?
No matter what type of media access technology is being used, the main resource
that has to be shared by all systems and devices on the network is the network transmis-
sion channel. This transmission channel could be Token Ring over shielded twisted-pair cabling, Ethernet over UTP, FDDI over fiber, or Wi-Fi over a frequency spectrum. There must be
methods in place to make sure that each system gets access to the channel, that the
system’s data is not corrupted during transmission, and that there is a way to control
traffic in peak times.
The different media access technologies covered in the previous sections have their
own specific media-sharing capabilities, which are covered next.
Token Passing A token is a 24-bit control frame used to control which computers communicate at what intervals. The token is passed from computer to computer, and only the computer that has the token can actually put frames onto the wire. The token grants a computer the right to communicate. Once a message is attached, the token carries the data to be transmitted along with source and destination address information. When a system has data it needs to transmit, it has to wait to receive the token. The computer then connects its
message to the token and puts it on the wire. Each computer checks this message to
determine whether it is addressed to it, which continues until the destination com-
puter receives the message. The destination computer makes a copy of the message and
flips a bit to tell the source computer it did indeed get its message. Once this gets back
to the source computer, it removes the frames from the network. The destination computer makes a copy of the message, but only the originator of the message can remove the message from the token and the network.

Ethernet (IEEE 802.3): Uses broadcast and collision domains; uses the CSMA/CD access method; can use coaxial, twisted-pair, or fiber-optic media; transmission speeds of 10 Mbps to 1 Gbps.
Token Ring (IEEE 802.5): Token-passing media access method; transmission speeds of 4 to 16 Mbps; uses an active monitor and beaconing.
FDDI (ANSI standard, based on IEEE 802.4): Dual counter-rotating rings for fault tolerance; transmission speeds of 100 Mbps; operates over long distances at high speeds and is therefore used as a backbone; CDDI works over UTP.
Table 6-6 LAN Media Access Methods
If a computer that receives the token does not have a message to transmit, it sends
the token to the next computer on the network. An empty token has a header, data
field, and trailer, but a token that has an actual message has a new header, destination
address, source address, and a new trailer.
This type of media-sharing method is used by Token Ring and FDDI technologies.
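The rules above can be sketched as a logical simulation in Python (the station names and data structures are invented for illustration; real Token Ring frames carry far more state): only the current token holder may transmit, and the addressee copies the frame as it passes around the ring.

```python
from dataclasses import dataclass, field

@dataclass
class Station:
    name: str
    outbox: list = field(default_factory=list)   # (destination, data) pairs
    inbox: list = field(default_factory=list)

def circulate_token(ring, rounds=1):
    """Pass the token around the ring. Only the token holder may
    transmit; the addressee copies the frame, and the frame is removed
    by its originator once it has traveled the full ring."""
    for _ in range(rounds):
        for holder in ring:
            if not holder.outbox:
                continue                     # nothing to say, pass the token on
            dest_name, data = holder.outbox.pop(0)
            # The frame travels the logical ring; the addressee copies it,
            # then it returns to the originator, who takes it off the wire.
            for station in ring:
                if station.name == dest_name:
                    station.inbox.append(data)

a, b, c = Station("A"), Station("B"), Station("C")
a.outbox.append(("C", "hello"))
circulate_token([a, b, c])
print(c.inbox)   # ['hello']
```

Because stations transmit strictly in turn, the maximum wait for the token is predictable, which is the deterministic behavior described earlier for ring topologies.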
NOTE Some applications and network protocols work better if they can communicate at determined intervals, instead of "whenever the data arrives." In token-passing technologies, traffic arrives in this deterministic manner because not all systems can communicate at one time; only the system that has control of the token can communicate.
CSMA Ethernet protocols define how nodes are to communicate, recover from er-
rors, and access the shared network cable. Ethernet uses CSMA to provide media-shar-
ing capabilities. There are two distinct types of CSMA: CSMA/CD and CSMA/CA.
A transmission is called a carrier, so if a computer is transmitting frames, it is per-
forming a carrier activity. When computers use the carrier sense multiple access with
collision detection (CSMA/CD) protocol, they monitor the transmission activity, or car-
rier activity, on the wire so they can determine when would be the best time to transmit
data. Each node monitors the wire continuously and waits until the wire is free before
it transmits its data. As an analogy, consider several people gathered in a group talking
here and there about this and that. If a person wants to talk, she usually listens to the
current conversation and waits for a break before she proceeds to talk. If she does not
wait for the first person to stop talking, she will be speaking at the same time as the
other person, and the people around them may not be able to understand fully what
each is trying to say.
When using the CSMA/CD access method, computers listen for the absence of a
carrier tone on the cable, which indicates that no other system is transmitting data. If
two computers sense this absence and transmit data at the same time, a collision can
take place. A collision happens when two or more frames collide, which most likely cor-
rupts both frames. If a computer puts frames on the wire and its frames collide with
another computer’s frames, it will abort its transmission and alert all other stations that
a collision just took place. All stations will execute a random collision timer to force a
delay before they attempt to transmit data. This random collision timer is called the
back-off algorithm.
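The back-off behavior can be sketched in a few lines of Python. The 51.2-microsecond slot time, the doubling window capped at 1,023 slots, and the 16-attempt limit reflect classic 10-Mbps Ethernet; treat the code itself as an illustrative simulation, not an implementation of the 802.3 standard.

```python
import random

SLOT_TIME_US = 51.2   # slot time for 10-Mbps Ethernet, in microseconds

def backoff_delay(collision_count: int) -> float:
    """Truncated binary exponential back-off: after the nth collision,
    wait a random number of slot times drawn from [0, 2**k - 1], where
    k = min(n, 10); a station gives up after 16 attempts."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions; frame dropped")
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

# After the first collision a station waits either 0 or 1 slot times;
# the possible range doubles with every further collision.
delays = sorted({backoff_delay(1) for _ in range(200)})
```

The randomness is the point: if every station waited the same fixed time after a collision, they would simply collide again.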
NOTE Collisions are usually reduced by dividing a network with routers or switches.

Carrier sense multiple access with collision avoidance (CSMA/CA) is a medium-shar-
ing method in which each computer signals its intent to transmit data before it actually
does so. This tells all other computers on the network not to transmit data right now
because doing so could cause a collision. Basically, a system listens to the shared me-
dium to determine whether it is busy or free. Once the system identifies that the “coast
is clear” and it can put its data on the wire, it sends out a broadcast to all other systems,
telling them it is going to transmit information. It is similar to saying, “Everyone shut
up. I am going to talk now.” Each system will wait a period of time before attempting
to transmit data to ensure collisions do not take place. The wireless LAN technology
802.11 uses CSMA/CA for its media access functionality.
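The listen-announce-transmit sequence can be illustrated with a tiny Python sketch (all names are invented; this models the logic, not any real 802.11 machinery): a station senses the medium, announces its intent, and transmits only if no one else holds the channel.

```python
class Medium:
    """A single shared channel, as in a contention-based network."""
    def __init__(self):
        self.busy = False     # is someone currently transmitting?
        self.intents = []     # stations that have announced intent

def try_transmit(medium: Medium, station: str, frame: str):
    """Return the transmitted frame, or None if the station must defer."""
    if medium.busy:
        return None                  # carrier sensed: medium in use, defer
    medium.intents.append(station)   # broadcast intent: "I am going to talk"
    if medium.intents != [station]:
        return None                  # another station also announced; back off
    medium.busy = True
    return frame

m = Medium()
print(try_transmit(m, "A", "frame-1"))   # 'frame-1' (medium was free)
print(try_transmit(m, "B", "frame-2"))   # None (A now holds the medium)
```

The announcement step is what distinguishes collision avoidance from collision detection: the goal is to prevent the collision, rather than to detect and recover from it.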
NOTE When there is just one transmission medium (i.e., UTP cable) that has to be shared by all nodes and devices in a network, this is referred to as a contention-based environment. Each system has to "compete" to use the transmission line, which can cause contention.
Collision Domains As indicated in the preceding section, a collision occurs on
Ethernet networks when two computers transmit data at the same time. Other comput-
ers on the network detect this collision because the overlapping signals of the collision
increase the voltage of the signal above a specific threshold. The more devices on a
contention-based network, the more likely collisions will occur, which increases net-
work latency (data transmission delays). A collision domain is a group of computers that
are contending, or competing, for the same shared communication medium.
An unacceptable amount of collisions can be caused by a highly populated net-
work, a damaged cable or connector, too many repeaters, or cables that exceed the rec-
ommended length. If a cable is longer than what is recommended by the Ethernet
specification, two computers on opposite ends of the cable may transmit data at the
same time. Because the computers are so far away from each other, they may both trans-
mit data and not realize that a collision took place. The systems then go merrily along
with their business, unaware that their packets have been corrupted. If the cable is too
long, the computers may not listen long enough for evidence of a collision. If the des-
tination computers receive these corrupted frames, they then have to send a request to
the source system to retransmit the message, causing even more traffic.
Carrier-Sensing and Token-Passing Access Methods
Overall, carrier-sensing access methods are faster than token-passing access meth-
ods, but the former do have the problem of collisions. A network segment with
many devices can cause too many collisions and slow down the network’s perfor-
mance. Token-passing technologies do not have problems with collisions, but
they do not perform at the speed of carrier-sensing technologies. Network routers
can help significantly in isolating the network resources for both the CSMA/CD
and the token-passing methods.

Chapter 6: Telecommunications and Network Security
577
These types of problems are dealt with mainly by implementing collision domains.
An Ethernet network has broadcast and collision domains. One subnet will be on the
same broadcast and collision domain if it is not separated by routers or bridges. If the
same subnet is divided by bridges, the bridges can enable the broadcast traffic to pass
between the different parts of a subnet, but not the collisions, as shown in Figure 6-30.
This is how collision domains are formed. Isolating collision domains reduces the
amount of collisions that take place on a network and increases its overall performance.
Another benefit of restricting and controlling broadcast and collision domains is
that it makes sniffing the network and obtaining useful information more difficult for
an intruder as he traverses the network. A useful tactic for attackers is to install a Trojan
horse that sets up a network sniffer on the compromised computer. The sniffer is usu-
ally configured to look for a specific type of information, such as usernames and pass-
words. If broadcast and collision domains are in effect, the compromised system will
have access only to the broadcast and collision traffic within its specific subnet or
broadcast domain. The compromised system will not be able to listen to traffic on
other broadcast and collision domains, and this can greatly reduce the amount of traf-
fic and information available to an attacker.
Polling Hi. Do you have anything you would like to say?
The third type of media-sharing method is polling. In an environment where a poll-
ing LAN media access and sharing method is used, some systems are configured as
primary stations and others are configured as secondary stations. At predefined inter-
vals, the primary station asks the secondary station if it has anything to transmit. This
is the only time a secondary station can communicate.
Polling is a method of monitoring multiple devices and controlling network access transmission. If polling is used to monitor devices, the primary device communicates with each secondary device at regular intervals to check its status. The primary device then logs the response it receives and moves on to the next device. If polling is used for network access,
the primary station asks each device if it has something to communicate to another device.
Network access transmission polling is used mainly with mainframe environments.
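A minimal Python sketch of this primary/secondary exchange (all names are illustrative): secondaries queue frames but transmit only when the primary polls them.

```python
class Secondary:
    def __init__(self, name, pending=None):
        self.name = name
        self.pending = list(pending or [])   # frames waiting to be sent

    def poll(self):
        """Answer the primary's poll: hand over one pending frame, or None."""
        return self.pending.pop(0) if self.pending else None

def polling_cycle(secondaries):
    """The primary asks each secondary in turn whether it has data;
    a secondary transmits only when polled."""
    received = []
    for s in secondaries:
        frame = s.poll()
        if frame is not None:
            received.append((s.name, frame))
    return received

stations = [Secondary("S1", ["report"]), Secondary("S2"), Secondary("S3", ["alarm"])]
print(polling_cycle(stations))   # [('S1', 'report'), ('S3', 'alarm')]
```

Because the primary dictates who speaks and when, there is no contention at all; the trade-off is that the primary is both a bottleneck and a single point of control.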
Figure 6-30
Collision domains
within one broadcast
domain

So remember that there are different media access technologies (Ethernet, Token
Ring, FDDI, Wi-Fi) that work at the data link and physical layers of the OSI model.
These technologies define the data link protocol, NIC and NIC driver specifications,
and media interface requirements. These individual media access technologies have
their own way of allowing systems to share the one available network transmission medium: Ethernet uses CSMA/CD, Token Ring uses tokens, FDDI uses tokens, Wi-Fi uses CSMA/CA, and mainframe media access technology uses polling. The media-sharing technology is a subcomponent of the media access technology.
Key Terms
• Unshielded twisted pair Cabling in which copper wires are twisted together for the purposes of canceling out EMI from external sources. UTP cables are found in many Ethernet networks and telephone systems.
• Shielded twisted pair Twisted-pair cables are often shielded in an attempt to prevent RFI and EMI. This shielding can be applied to individual pairs or to the collection of pairs.
• Attenuation Gradual loss in intensity of any kind of flux through a medium. As an electrical signal travels down a cable, the signal can degrade and distort or corrupt the data it is carrying.
• Crosstalk A signal on one channel of a transmission creates an undesired effect in another channel by interacting with it. The signal from one cable "spills over" into another cable.
• Plenum cables Cable is jacketed with a fire-retardant plastic cover that does not release toxic chemicals when burned.
• Ring topology Each system connects to two other systems, forming a single, unidirectional network pathway for signals, thus forming a ring.
• Bus topology Systems are connected to a single transmission channel (i.e., network cable), forming a linear construct.
• Star topology Network consists of one central device, which acts as a conduit to transmit messages. The central device, to which all other nodes are connected, provides a common connection point for all nodes.
• Mesh topology Network where each system must not only capture and disseminate its own data, but also serve as a relay for other systems; that is, it must collaborate to propagate the data in the network.
• Ethernet Common LAN media access technology standardized by IEEE 802.3. Uses 48-bit MAC addressing, works in contention-based networks, and has extended outside of just LAN environments.
• Token Ring LAN media access technology that controls network communication traffic through the use of token frames. This technology has been mostly replaced by Ethernet.

• Fiber Distributed Data Interface Ring-based token network protocol that was derived from the IEEE 802.4 token bus timed token protocol. It can work in LAN or MAN environments and provides fault tolerance through dual-ring architecture.
• Carrier sense multiple access with collision detection A media access control method that uses a carrier sensing scheme. When a transmitting system detects another signal while transmitting a frame, it stops transmitting that frame, transmits a jam signal, and then waits for a random time interval before trying to resend the frame. This reduces collisions on a network.
• Carrier sense multiple access with collision avoidance A media access control method that uses a carrier sensing scheme. A system wishing to transmit data has to first listen to the channel for a predetermined amount of time to determine whether or not another system is transmitting on the channel. If the channel is sensed as "idle," then the system is permitted to begin the transmission process. If the channel is sensed as "busy," the system defers its transmission for a random period of time.

Transmission Methods
A packet may need to be sent to only one workstation, to a set of workstations, or to all workstations on a particular subnet. If a packet needs to go from the source computer to one particular system, a unicast transmission method is used. If the packet needs to go to a specific group of systems, the sending system uses the multicast method. If a system wants all computers on its subnet to receive a message, it will use the broadcast method.

Unicast is pretty simple because it has a source address and a destination address. The data go from point A to Z, it is a one-to-one transmission, and everyone is happy. Multicast is a bit different in that it is a one-to-many transmission. Multicasting enables one computer to send data to a selective group of computers. A good example of multicasting is tuning into a radio station on a computer. Some computers have software that enables the user to determine whether she wants to listen to country western, pop, or a talk radio station, for example. Once the user selects one of these genres, the software must tell the NIC driver to pick up not only packets addressed to its specific MAC address, but also packets that contain a specific multicast address.

The difference between broadcast and multicast is that in a broadcast one-to-all transmission, everyone gets the data, whereas in a multicast, only the few who have chosen to receive the data actually get them. So how does a server three states away multicast to one particular computer on a specific network and no other networks in between? Good question, glad you asked. The user who elects to receive a multicast actually has to tell her local router she wants to get frames with this particular multicast address passed her way. The local router must tell the router upstream, and this process continues so each router between the source and destination knows where to pass this multicast data. This ensures that the user can get her rock music without other networks being bothered with this extra data. (The user does not actually need to tell her local router anything; the software on her computer communicates to a gateway router to handle and pass along the information.)
IPv4 multicast protocols use a Class D address, which is a special address space
designed especially for multicasting. It can be used to send out information, multime-
dia data, and even real-time video and voice clips.
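The Class D range can be recognized programmatically. The following Python sketch (the addresses are arbitrary examples) checks whether an IPv4 address falls within the multicast address space:

```python
import ipaddress

def is_class_d(addr: str) -> bool:
    """Class D spans 224.0.0.0 through 239.255.255.255 -- every
    address whose top four bits are 1110 is a multicast address."""
    return ipaddress.IPv4Address(addr) in ipaddress.IPv4Network("224.0.0.0/4")

print(is_class_d("239.255.0.1"))  # True: Class D multicast address
print(is_class_d("10.0.0.1"))     # False: ordinary unicast address
```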
Internet Group Management Protocol (IGMP) is used to report multicast group
memberships to routers. When a user chooses to accept multicast traffic, she becomes
a member of a particular multicast group. IGMP is the mechanism that allows her com-
puter to inform the local routers that she is part of this group and to send traffic with a
specific multicast address to her system. IGMP can be used for online streaming video
and gaming activities. The protocol allows for efficient use of the necessary resources
when supporting these types of applications.
Like most protocols, IGMP has gone through a few different versions, each improv-
ing upon the earlier one. In version 1, multicast agents periodically send queries to
systems on the network they are responsible for and update their databases, indicating
which system belongs to which group membership. Version 2 provides more granular
query types and allows a system to signal to the agent when it wants to leave a group.
Version 3 allows a system to specify the specific sources from which it wants to receive
multicast traffic.
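In practice, a host does not speak IGMP directly; it asks its operating system to join a group, and the OS emits the membership report on its behalf. The following is a minimal Python sketch of that request (the group address and port are hypothetical examples):

```python
import socket
import struct

def join_multicast_group(group: str, port: int) -> socket.socket:
    """Bind a UDP socket and join `group`; the final setsockopt call
    is what causes the OS to send an IGMP membership report so that
    local routers start forwarding traffic for this multicast address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # 4-byte group address followed by 4-byte local interface (INADDR_ANY)
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

Closing the socket (or an explicit IP_DROP_MEMBERSHIP) corresponds to the leave-group signaling that version 2 of IGMP added.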
NOTE The previous statements pertain to IPv4. IPv6 is more
than just an upgrade to the original IP protocol; it functions differently in many
respects, which has caused many interoperability issues and delay in its full
deployment. IPv6 handles multicasting differently compared to IPv4.
Network Protocols and Services
Some protocols, such as UDP, TCP, IP, and IGMP, were addressed in earlier sections.
Networks are made up of these and many other types of protocols that provide an array
of functionality. Networks are also made up of many different services, as in DHCP,
DNS, e-mail, and others. The services that network infrastructure components provide
directly support the functionality required of the users of the network. Protocols usu-
ally provide a communication channel for these services to use so that they can carry
out their jobs. Networks are complex because there are layers of protocols and services
that all work together simultaneously and hopefully seamlessly. We will cover some of
the core protocols and services that are used in all networks today.
Address Resolution Protocol
This IP does me no good! I need a MAC!
On a TCP/IP network, each computer and network device requires a unique IP ad-
dress and a unique physical hardware address. Each NIC has a unique physical address
that is programmed into the ROM chips on the card by the manufacturer. The physical

Chapter 6: Telecommunications and Network Security
581
address is also referred to as the Media Access Control (MAC) address. The network
layer works with and understands IP addresses, and the data link layer works with and
understands physical MAC addresses. So, how do these two types of addresses work
together since they operate at different layers?
NOTE A MAC address is unique because the first 24 bits represent the
manufacturer code and the last 24 bits represent the unique serial number
assigned by the manufacturer.
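That 24/24 split can be illustrated in a few lines of Python (the MAC value below is just an example):

```python
def split_mac(mac: str):
    """Split a 48-bit MAC address into its manufacturer code (the OUI,
    first 24 bits) and the vendor-assigned serial (last 24 bits)."""
    octets = mac.lower().split(":")
    if len(octets) != 6:
        raise ValueError("expected six colon-separated octets")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui)     # 00:1a:2b -> identifies the manufacturer
print(serial)  # 3c:4d:5e -> unique serial from that manufacturer
```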
When data come from the application layer, they go to the transport layer for se-
quence numbers, session establishment, and streaming. The data are then passed to the
network layer, where routing information is added to each packet and the source and
destination IP addresses are attached to the data bundle. Then this goes to the data link
layer, which must find the MAC address and add it to the header portion of the frame.
When a frame hits the wire, it only knows what MAC address it is heading toward. At
this lower layer of the OSI model, the mechanisms do not even understand IP ad-
dresses. So if a computer cannot resolve the IP address passed down from the network
layer to the corresponding MAC address, it cannot communicate with that destination
computer.
NOTE A frame is data that are fully encapsulated, with all of the necessary
headers and trailers.
MAC and IP addresses must be properly mapped so they can be correctly resolved.
This happens through the Address Resolution Protocol (ARP). When the data link layer
receives a frame, the network layer has already attached the destination IP address to it,
but the data link layer cannot understand the IP address and thus invokes ARP for help.
ARP broadcasts a frame requesting the MAC address that corresponds with the destina-
tion IP address. Each computer on the subnet receives this broadcast frame, and all but
the computer that has the requested IP address ignore it. The computer that has the
destination IP address responds with its MAC address. Now ARP knows what hardware
address corresponds with that specific IP address. The data link layer takes the frame,
adds the hardware address to it, and passes it on to the physical layer, which enables the
frame to hit the wire and go to the destination computer. ARP maps the hardware ad-
dress and associated IP address and stores this mapping in its table for a predefined
amount of time. This caching is done so that when another frame destined for the same
IP address needs to hit the wire, ARP does not need to broadcast its request again. It just
looks in its table for this information.
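The caching behavior just described can be modeled in a few lines. The sketch below is a toy illustration of an ARP table with entry aging (the timeout value is arbitrary), not how any particular operating system implements it:

```python
import time

class ArpCache:
    """Toy model of an ARP table: resolved IP-to-MAC mappings are
    kept for a limited time so the next frame to the same destination
    can skip the broadcast request."""
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._entries = {}          # ip -> (mac, expiry_time)

    def learn(self, ip: str, mac: str) -> None:
        self._entries[ip] = (mac, time.monotonic() + self.ttl)

    def lookup(self, ip: str):
        entry = self._entries.get(ip)
        if entry is None:
            return None             # miss: caller must ARP-broadcast
        mac, expiry = entry
        if time.monotonic() > expiry:
            del self._entries[ip]   # stale mapping aged out
            return None
        return mac

cache = ArpCache(ttl_seconds=60)
cache.learn("10.0.0.7", "aa:aa:aa:aa:aa:aa")
print(cache.lookup("10.0.0.7"))   # aa:aa:aa:aa:aa:aa (cache hit)
print(cache.lookup("10.0.0.9"))   # None: would trigger a broadcast
```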
Sometimes attackers alter a system’s ARP table so it contains incorrect information.
This is called ARP table cache poisoning. The attacker’s goal is to receive packets in-
tended for another computer. This is a type of masquerading attack. For example, let’s

say that Bob’s computer has an IP of 10.0.0.1 and a MAC address of bb:bb:bb:bb:bb:bb
and Alice’s computer has an IP of 10.0.0.7 and MAC address of aa:aa:aa:aa:aa:aa and an
attacker has an IP address of 10.0.0.3 and a MAC address of cc:cc:cc:cc:cc:cc as shown in
Figure 6-31. If the attacker modifies the MAC tables in Bob’s and Alice’s systems and
maps his MAC address to their IP addresses, all traffic can be sent to his system without
Bob and Alice being aware of it. This attack type is shown in the figure.
So ARP is critical for a system to communicate, but it can be manipulated to allow
traffic to be sent to unintended systems. ARP is a rudimentary protocol and does not
have any security measures built in to protect itself from these types of attacks. Net-
works should have IDS sensors monitoring for this type of activity so that administra-
tors can be alerted if this type of malicious activity is underway.
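A sensor watching for ARP poisoning typically flags an IP address whose advertised MAC address suddenly changes. A simplified sketch of that idea follows (the addresses echo the Bob/Alice example above):

```python
def detect_arp_anomalies(observed):
    """Flag IP addresses whose advertised MAC changes over time --
    the signature of the cache-poisoning scenario described above.
    `observed` is a sequence of (ip, mac) pairs as seen on the wire."""
    bindings, alerts = {}, []
    for ip, mac in observed:
        prior = bindings.setdefault(ip, mac)
        if prior != mac:
            alerts.append(f"{ip} moved from {prior} to {mac}")
            bindings[ip] = mac
    return alerts

traffic = [
    ("10.0.0.1", "bb:bb:bb:bb:bb:bb"),
    ("10.0.0.7", "aa:aa:aa:aa:aa:aa"),
    ("10.0.0.7", "cc:cc:cc:cc:cc:cc"),  # attacker rebinding Alice's IP
]
for alert in detect_arp_anomalies(traffic):
    print(alert)
```

A real IDS correlates this with gratuitous-ARP frequency and switch port data, but the core heuristic is the same.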
Dynamic Host Configuration Protocol
Can you just throw out addresses as necessary? I am too tired to do it manually.
A computer can receive its IP addresses in a few different ways when it first boots
up. If it has a statically assigned address, nothing needs to happen. It already has the
configuration settings it needs to communicate and work on the intended network. If a
computer depends upon a Dynamic Host Configuration Protocol (DHCP) server to as-
sign it the correct IP address, it boots up and makes a request to the DHCP server. The
DHCP server assigns the IP address, and everyone is happy.
Figure 6-31 ARP poisoning attack

DHCP is a UDP-based protocol that allows servers to assign IP addresses to network
clients in real time. Unlike static addressing, where IP addresses are manually configured,
the DHCP server automatically checks for available IP addresses and assigns one to
the client. This eliminates the possibility of IP address conflicts
that occur if two systems are assigned identical IP addresses, which could cause loss of
service. On the whole, DHCP considerably reduces the effort involved in managing
large-scale IP networks.
The DHCP server assigns IP addresses in real time from a specified range when a client
connects to the network; this differs from static addressing, where each system is
individually assigned a specific IP address when it comes online. In a standard DHCP-
based network, the client computer broadcasts a DHCPDISCOVER message on the net-
work in search of the DHCP server. Once the DHCP server receives the
DHCPDISCOVER request, the server responds with a DHCPOFFER packet, offering the
client an IP address. The server assigns the IP address based on the availability of that
address and in compliance with its network administration policies.
The DHCPOFFER packet that the server responds with contains the assigned IP address
information and configuration settings for client-side services.
Once the client receives the settings sent by the server through the DHCPOFFER, it
responds to the server with a DHCPREQUEST packet confirming its acceptance of the
allotted settings. The server now acknowledges with a DHCPACK packet, which in-
cludes the validity period (lease) for the allocated parameters.
So as shown in Figure 6-32, the DHCP client yells out to the network, “Who can
help me get an address?” The DHCP server responds with an offer: “Here is an address
and the parameters that go with it.” The client accepts this gracious offer with the
DHCPREQUEST message, and the server acknowledges this message. Now the client
can start interacting with other devices on the network and the user can waste his valu-
able time on Facebook.
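The D-O-R-A exchange can be sketched as a toy server (the pool addresses and behavior are illustrative only; a real DHCP server also tracks lease timers, relay agents, and configuration options):

```python
class TinyDhcpServer:
    """Toy model of the DORA exchange described above:
    Discover -> Offer -> Request -> Acknowledge."""
    def __init__(self, pool):
        self.free = list(pool)
        self.leases = {}                  # client MAC -> leased IP

    def handle_discover(self, mac):
        if not self.free:
            return None                   # pool exhausted: no offer
        return ("DHCPOFFER", self.free[0])

    def handle_request(self, mac, ip):
        if ip not in self.free:
            return ("DHCPNAK", None)      # offer no longer valid
        self.free.remove(ip)
        self.leases[mac] = ip
        return ("DHCPACK", ip)            # real ACK also carries lease time

server = TinyDhcpServer(["10.0.0.50", "10.0.0.51"])
msg, offered = server.handle_discover("aa:aa:aa:aa:aa:aa")
msg2, leased = server.handle_request("aa:aa:aa:aa:aa:aa", offered)
print(msg, offered, "->", msg2, leased)
```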

Unfortunately, both the client and server segments of the DHCP are vulnerable to
falsified identity. On the client end, attackers can masquerade their systems to appear
as valid network clients. This enables rogue systems to become a part of an organiza-
tion’s network and potentially infiltrate other systems on the network. An attacker may
create an unauthorized DHCP server on the network and start responding to clients
searching for a DHCP server. A DHCP server controlled by an attacker can compro-
mise client system configurations, carry out man-in-the-middle attacks, route traffic to
unauthorized networks, and a lot more, with the end result of jeopardizing the entire
network.
An effective method to shield networks from unauthenticated DHCP clients is
DHCP snooping on network switches. DHCP snooping ensures that DHCP servers can
assign IP addresses only to selected systems, identified by their MAC addresses. Also,
advanced network switches now have the capability to direct clients toward legitimate
DHCP servers to get IP addresses and to restrict rogue systems from becoming DHCP
servers on the network.
Diskless workstations do not have a full operating system but have just enough
code to know how to boot up and broadcast for an IP address, and they may have a
pointer to the server that holds the operating system. The diskless workstation knows
its hardware address, so it broadcasts this information so that a listening server can as-
sign it the correct IP address. As with ARP, Reverse Address Resolution Protocol (RARP)
frames go to all systems on the subnet, but only the RARP server responds. Once the
RARP server receives this request, it looks in its table to see which IP address matches
the broadcast hardware address. The server then sends a message containing the
matching IP address back to the requesting computer. The system now has an IP address and can
function on the network.
Figure 6-32 The four stages of the Discover, Offer, Request, and Acknowledgment (D-O-R-A) process

The Bootstrap Protocol (BOOTP) was created after RARP to enhance the functional-
ity that RARP provides for diskless workstations. The diskless workstation can receive its
IP address, the name server address for future name resolutions, and the default gate-
way address from the BOOTP server. BOOTP usually provides more functionality to
diskless workstations than does RARP.
The evolution of this protocol has unfolded as follows: RARP evolved into BOOTP,
which evolved into DHCP.
Internet Control Message Protocol
The Internet Control Message Protocol (ICMP) is basically IP’s “messenger boy.” ICMP
delivers status messages, reports errors, replies to certain requests, reports routing infor-
mation, and is used to test connectivity and troubleshoot problems on IP networks.
The most familiar use of ICMP is the ping utility.
When a person wants to test connectivity to another system, he may ping it, which
sends out ICMP ECHO REQUEST frames. The replies on his screen that are returned to
the ping utility are called ICMP ECHO REPLY frames and are responding to the ECHO
REQUEST frames. If a reply is not returned within a predefined time period, the ping
utility sends more ECHO REQUEST frames. If there is still no reply, ping indicates the
host is unreachable.
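Under the hood, an ECHO REQUEST is just an 8-byte ICMP header (type 8, code 0) plus an optional payload, protected by the Internet checksum. The sketch below builds one; actually sending it would require a raw socket and administrative privileges, so only packet construction is shown:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Ones-complement sum (RFC 1071) used in the ICMP header."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP type 8 (Echo), code 0; checksum is computed over the whole
    message with the checksum field zeroed first."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=0x1234, seq=1)
# A correctly checksummed ICMP message re-sums to zero.
print(internet_checksum(pkt))   # prints 0
```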
ICMP also indicates when problems occur with a specific route on the network and
tells surrounding routers about better routes to take based on the health and conges-
tion of the various pathways. Routers use ICMP to send messages in response to packets
that could not be delivered. The router selects the proper ICMP response and sends it
back to the requesting host, indicating that problems were encountered with the trans-
mission request.
ICMP is used by other connectionless protocols, not just IP, because connectionless
protocols do not have any way of detecting and reacting to transmission errors, as do
connection-oriented protocols. In these instances, the connectionless protocol may use
ICMP to send error messages back to the sending system to indicate networking problems.
As you can see in Table 6-7, ICMP is a protocol that is used for many different net-
working purposes. This table lists the various messages that can be sent to systems and
devices through the ICMP protocol.
Attacks Using ICMP The ICMP protocol was developed to send status messages,
not to hold or transmit user data. But someone figured out how to insert some data
inside of an ICMP packet, which can be used to communicate to an already compro-
mised system. Loki is actually a client/server program used by hackers to set up back
doors on systems. The attacker targets a computer and installs the server portion of the
Loki software. This server portion “listens” on a port, which is the back door an at-
tacker can use to access the system. To gain access and open a remote shell to this com-
puter, an attacker sends commands inside of ICMP packets. This is usually successful,
because most routers and firewalls are configured to allow ICMP traffic to come and go
out of the network, based on the assumption that this is safe because ICMP was devel-
oped to not hold any data or a payload.

Type Name
0 Echo Reply
1 Unassigned
2 Unassigned
3 Destination Unreachable
4 Source Quench
5 Redirect
6 Alternate Host Address
7 Unassigned
8 Echo
9 Router Advertisement
10 Router Solicitation
11 Time Exceeded
12 Parameter Problem
13 Timestamp
14 Timestamp Reply
15 Information Request
16 Information Reply
17 Address Mask Request
18 Address Mask Reply
19 Reserved (for Security)
20–29 Reserved (for Robustness Experiment)
30 Traceroute
31 Datagram Conversion Error
32 Mobile Host Redirect
33 IPv6 Where-Are-You
34 IPv6 I-Am-Here
35 Mobile Registration Request
36 Mobile Registration Reply
37 Domain Name Request
38 Domain Name Reply
39 SKIP
40 Photuris
41 ICMP messages utilized by experimental mobility protocols
such as Seamoby
Table 6-7 ICMP Message Types

Just as any tool that can be used for good can also be used for evil, attackers com-
monly use ICMP to redirect traffic. The redirected traffic can go to the attacker’s
dedicated system or into a “black hole.” Routers use ICMP messages to update each
other on network link status. An attacker could send a bogus ICMP message with incor-
rect information, which could cause the routers to divert network traffic to where the
attacker indicates it should go.
ICMP is also used as the core protocol for a network tool called Traceroute. Trace-
route is used to diagnose network connections, but since it gathers a lot of important
network statistics, attackers use the tool to map out a victim’s network. This is similar
to a burglar “casing the joint,” meaning that the more the attacker learns about the
environment, the easier it can be for her to exploit some critical targets. So while the
Traceroute tool is a valid networking program, a security administrator might configure
the IDS sensors to monitor for extensive use of this tool because it could indicate that
an attacker is attempting to map out the network’s architecture.
The Ping of Death attack is based upon the use of oversized ICMP packets. If a sys-
tem does not know how to handle ICMP packets larger than the maximum allowed
packet size of 65,535 bytes, it can become unstable and freeze or crash. An attacker can
send a target system several oversized ICMP packets that cannot actually be processed.
This is a DoS attack that is carried out to render the target system unable to process
legitimate traffic.
Another common attack using this protocol is the Smurf attack. In this situation the
attacker sends an ICMP ECHO REQUEST packet with a spoofed source address to a
victim’s network broadcast address. This means that each system on the victim’s subnet
receives an ICMP ECHO REQUEST packet. Each system then replies to that request with
an ICMP ECHO REPLY packet to the spoof address provided in the packets—which is
the victim’s address. All of these response packets go to the victim system and over-
whelm it because it is being bombarded with packets it does not necessarily know how
to process. The victim system may freeze, crash, or reboot. The Smurf attack is illus-
trated in Figure 6-33.
A similar attack to the Smurf attack is the Fraggle attack. The steps and the goal of
the different attack types are the same, but Fraggle uses the UDP protocol and Smurf
uses the ICMP protocol. They are both DDoS attacks that use spoofed source addresses
and use unknowing systems to attack a victim computer.
The countermeasures to these types of attacks are to use firewall rules that only al-
low the necessary ICMP packets into the network and the use of IDS or IPS to watch for
suspicious activities. Host-based protection (host firewalls and host IDS) can also be
installed and configured to identify this type of suspicious behavior.
Simple Network Management Protocol
Simple Network Management Protocol (SNMP) was released to the networking world in
1988 to help with the growing demand for managing networked IP devices. Companies
use many types of products that use SNMP to view the status of their network, traffic
flows, and the hosts within the network. Since these tasks are commonly carried out
using graphical user interface (GUI)–based applications, many people do not have a
full understanding of how the protocol actually works. The protocol is important to

understand because it can provide a wealth of information to attackers, and you should
understand the amount of information that is available to the ones who wish to do you
harm, how they actually access this data, and what can be done with it.
The two main components within SNMP are managers and agents. The manager is
the server portion, which polls different devices to check status information. The server
component also receives trap messages from agents and provides a centralized place to
hold all network-wide information.
The agent is a piece of software that runs on a network device, which is commonly
integrated into the operating system. The agent has a list of objects that it is to keep
track of, which is held in a database-like structure called the Management Information
Base (MIB). An MIB is a logical grouping of managed objects that contain data used for
specific management tasks and status checks.
When the SNMP manager component polls the individual agent installed on a spe-
cific device, the agent pulls the data it has collected from the MIB and sends it to the
manager. Figure 6-34 illustrates how data pulled from different devices are located in
one centralized location (SNMP manager). This allows the network administrator to
have a holistic view of the network and the devices that make up that network.
NOTE The trap operation allows the agent to inform the manager of an
event, instead of having to wait to be polled. For example, if an interface on a
router goes down, an agent can send a trap message to the manager. This is
the only way an agent can communicate with the manager without first being
polled.
Figure 6-33 Smurf attack

It might be necessary to restrict which managers can request information of an
agent, so communities were developed to establish a trust between specific agents and
managers. A community string is basically a password a manager uses to request data
from the agent, and there are two main community strings with different levels of ac-
cess: read-only and read-write. As the names imply, the read-only community string
allows a manager to read data held within a device’s MIB and the read-write string al-
lows a manager to read the data and modify it. If an attacker can uncover the read-write
string she could change values held within the MIB, which could reconfigure the device.
Since the community string is a password, it should be hard to guess and protected.
It should contain mixed-case alphanumeric strings that are not dictionary words. This
is not always the case in many networks. The usual default read-only community string
is “public” and the read-write string is “private.” Many companies do not change these,
so anyone who can connect to port 161 can read the status information of a device and
potentially reconfigure it. Different vendors may put in their own default community
string values, but companies may still not take the necessary steps to change them. At-
tackers usually have lists of default vendor community string values, so they can be
easily discovered and used against networks.
To make matters worse, the community strings are sent in cleartext in SNMP v1 and
v2, so even if a company does the right thing by changing the default values they are
still easily accessible to any attacker with a sniffer. For the best protection community
strings should be changed often, and different network segments should use different
community strings, so that if one string is compromised an attacker cannot gain access
to all the devices in the network.

Figure 6-34 Agents provide the manager with SNMP data.

The SNMP ports (161 and 162) should not be open to
untrusted networks, like the Internet, and if needed they should be filtered to ensure
only authorized individuals can connect to them. If these ports need to be available to
an untrusted network, configure the router or firewall to only allow UDP traffic to come
and go from preapproved network-management stations. While versions 1 and 2 of this
protocol send the community string values in cleartext, version 3 has cryptographic
functionality, which provides encryption, message integrity, and authentication securi-
ty. So, SNMP v3 should be implemented for more granular protection.
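The guidance above — no vendor defaults, no single-case dictionary-style words, reasonable length — can be expressed as a simple audit heuristic. This is a hypothetical sketch, not a complete password policy:

```python
DEFAULT_STRINGS = {"public", "private"}   # common vendor defaults

def audit_community_string(value: str):
    """Return a list of weaknesses found in an SNMP community string,
    per the guidance above. A heuristic sketch, not a policy engine."""
    findings = []
    if value.lower() in DEFAULT_STRINGS:
        findings.append("vendor default string")
    if len(value) < 8:
        findings.append("shorter than 8 characters")
    if value.isalpha() and (value.islower() or value.isupper()):
        findings.append("single-case alphabetic (likely dictionary word)")
    return findings

print(audit_community_string("public"))              # three findings
print(audit_community_string("n0c-R0utersWest-17"))  # []
```

Remember that in SNMP v1 and v2 even a strong string crosses the wire in cleartext, which is why v3's cryptographic protections remain the real fix.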
If the proper countermeasures are not put into place, then an attacker can gain ac-
cess to a wealth of device-oriented data that can be used in her follow-up attacks. The
following are just some data sets held within MIB SNMP objects that attackers would
be interested in:
• .server.svSvcTable.svSvcEntry.svSvcName
• Running services
• .server.svShareTable.svShareEntry.svShareName
• Share names
• .server.svShareTable.svShareEntry.svSharePath
• Share paths
• .server.svShareTable.svShareEntry.svShareComment
• Comments on shares
• .server.svUserTable.svUserEntry.svUserName
• Usernames
• .domain.domPrimaryDomain
• Domain name
Gathering this type of data allows an attacker to map out the target network and
enumerate the nodes that make up the network.
As with all tools, SNMP is for good purposes (network management) and for bad
purposes (target mapping, device reconfiguration). We need to understand both sides
of all tools available to us.
Domain Name Service
I don’t understand numbers. I understand words.
Imagine how hard it would be to use the Internet if we had to remember actual
specific IP addresses to get to various websites. The Domain Name Service (DNS) is a
method of resolving hostnames to IP addresses so names can be used instead of IP ad-
dresses within networked environments.
NOTE DNS provides hostname-to-IP address translation similar to how a
phone book maps a person’s name to the corresponding phone number.
We remember people and company names better than phone numbers or IP
addresses.

The first iteration of the Internet was made up of about 100 computers (versus over
1 billion now), and a list was kept that mapped every system’s hostname to its IP ad-
dress. This list was kept on an FTP server so everyone could access it. It did not take long
for the task of maintaining this list to become overwhelming, and the computing com-
munity looked to automate it.
When a user types a uniform resource locator (URL) into his web browser, the URL
is made up of words or letters that are in a sequence that makes sense to that user, such
as www.shonharris.com. However, these words are only for humans—computers work
with IP addresses. So after the user enters this URL and presses ENTER, behind the scenes
his computer is actually being directed to a DNS server that will resolve this URL, or
hostname, into an IP address that the computer understands. Once the hostname has
been resolved to an IP address, the computer knows how to get to the web server hold-
ing the requested web page.
Many companies have their own DNS servers to resolve their internal hostnames.
These companies usually also use the DNS servers at their Internet service providers
(ISPs) to resolve hostnames on the Internet. An internal DNS server can be used to re-
solve hostnames on the entire LAN network, but usually more than one DNS server is
used so the load can be split up and so redundancy and fault tolerance are in place.
Within DNS servers, DNS namespaces are split up administratively into zones. One
zone may contain all hostnames for the marketing and accounting departments, and
another zone may contain hostnames for the administration, research, and legal de-
partments. The DNS server that holds the files for one of these zones is said to be the
authoritative name server for that particular zone. A zone may contain one or more do-
mains, and the DNS server holding those host records is the authoritative name server
for those domains.
The DNS server contains records that map hostnames to IP addresses, which are re-
ferred to as resource records. When a user’s computer needs to resolve a hostname to an IP
address, it looks to its networking settings to find its DNS server. The computer then
sends a request, containing the hostname, to the DNS server for resolution. The DNS
server looks at its resource records and finds the record with this particular hostname,
retrieves the address, and replies to the computer with the corresponding IP address.
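That lookup amounts to a keyed search of the zone's resource records. A toy model follows (example.com and the 203.0.113.x documentation addresses are placeholders):

```python
# Toy zone data for a hypothetical domain: hostname -> A record.
ZONE = {
    "www.example.com":  "203.0.113.10",
    "mail.example.com": "203.0.113.25",
}

def resolve(hostname: str):
    """Mimic the lookup described above: find the resource record for
    the hostname and return its address, or None, which a real resolver
    would escalate to another DNS server."""
    return ZONE.get(hostname.lower().rstrip("."))

print(resolve("WWW.example.com."))   # 203.0.113.10
print(resolve("ftp.example.com"))    # None -> ask a higher-level server
```

Hostnames are case-insensitive and may carry a trailing dot (the root), which is why the sketch normalizes both before the lookup.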
It is recommended that a primary and a secondary DNS server cover each zone. The
primary DNS server contains the actual resource records for a zone, and the secondary
DNS server contains copies of those records. Users can use the secondary DNS server to
resolve names, which takes a load off of the primary server. If the primary server goes
down for any reason or is taken offline, users can still use the secondary server for name
resolution. Having both a primary and secondary DNS server provides fault tolerance
and redundancy to ensure users can continue to work if something happens to one of
these servers.
The primary and secondary DNS servers synchronize their information through a
zone transfer. After changes take place to the primary DNS server, those changes must
be replicated to the secondary DNS server. It is important to configure the DNS server
to allow zone transfers to take place only between the specific servers. For years now,
attackers have been carrying out unauthorized zone transfers to gather very useful net-
work information from victims’ DNS servers. An unauthorized zone transfer provides
the attacker with information on almost every system within the network. The attacker

now knows the hostname and IP address of each system, system alias names, PKI server,
DHCP server, DNS servers, etc. This allows an attacker to carry out very targeted attacks
on specific systems. If I were the attacker and I had a new exploit for DHCP software,
now I know the IP address of the company’s DHCP server and I can send my attack
parameters directly to that system. Also, since the zone transfer can provide data on all
of the systems in the network, the attacker can map out the network. He knows what
subnets are being used, which systems are in each subnet, and where the critical net-
work systems reside. This is analogous to you allowing a burglar into your house with
the freedom of identifying where you keep your jewels, expensive stereo equipment,
piggy bank, and keys to your car, which will allow him to more easily steal these items
when you are on vacation. Unauthorized zone transfers can take place if the DNS serv-
ers are not properly configured to restrict this type of activity.
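Conceptually, the restriction is an access control list keyed on the requester's address: only the configured secondary may pull the zone. A minimal sketch of that check (the addresses are hypothetical):

```python
# Hypothetical secondary DNS server permitted to pull the zone.
ALLOWED_SECONDARIES = {"203.0.113.53"}

def permit_zone_transfer(requester_ip: str) -> bool:
    """Model the restriction described above: only the configured
    secondary may perform a zone transfer; everyone else is refused,
    which blocks the reconnaissance-by-zone-transfer attack."""
    return requester_ip in ALLOWED_SECONDARIES

print(permit_zone_transfer("203.0.113.53"))   # True: our secondary
print(permit_zone_transfer("198.51.100.7"))   # False: refused
```

Production name servers express the same idea with configuration directives (for example, an allow-transfer style option) rather than application code, often combined with TSIG-signed transfer requests.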
Internet DNS and Domains
Networks on the Internet are connected in a hierarchical structure, as are the different
DNS servers, as shown in Figure 6-35. While performing routing tasks, if a router does
not know the necessary path to the requested destination, that router passes the packet
up to a router above it. The router above it knows about all the routers below it. This
router has a broader view of the routing that takes place on the Internet and has a better
chance of getting the packet to the correct destination. This holds true with DNS servers
also. If one DNS server does not know which DNS server holds the necessary resource
record to resolve a hostname, it can pass the request up to a DNS server above it.
The naming scheme of the Internet resembles an inverted tree with the root servers
at the top. Lower branches of this tree are divided into top-level domains, with second-
level domains under each. The most common top-level domains are as follows:
•COM Commercial
•EDU Education
•MIL U.S. military organization
•INT International treaty organization
•GOV Government
•ORG Organizational
•NET Networks
So how do all of these DNS servers play together in the Internet playground? When
a user types in a URL to access a web site that sells computer books, for example, his
computer asks its local DNS server if it can resolve this hostname to an IP address. If the
primary DNS cannot resolve the hostname, it must query a higher-level DNS server,
ultimately ending at an authoritative DNS server for the specified domain. Because this
web site is most likely not on the corporate network, the local LAN DNS server will not

Chapter 6: Telecommunications and Network Security
usually know the necessary IP address of that web site. The DNS server does not just
reject the user’s request, but rather passes it on to another DNS on the Internet. The
request for this hostname resolution continues through different DNS servers until it
reaches one that knows the IP address. The requested host’s IP information is reported
back to the user’s computer. The user’s computer then attempts to access the web site
using the IP address, and soon the user is buying computer books, happy as a clam.
DNS server and hostname resolution is extremely important in corporate networking
and Internet use. Without it, users would have to remember and type in the IP address for
each web site and individual system, instead of the name. That would be a mess.
Figure 6-35 The DNS naming hierarchy is similar to the routing hierarchy on the Internet.

CISSP All-in-One Exam Guide
DNS Resolution Components
Your computer has a DNS resolver, which is responsible for sending out requests
to DNS servers for host IP address information. If your system did not have this
resolver, when you type in www.logicalsecurity.com in your browser, you would
not get to this web site because your system does not actually know what www.
logicalsecurity.com means. When you type in this URL, your system’s resolver has
the IP address of a DNS server it is supposed to send its host-to-IP mapping reso-
lution request to. Your resolver can send out a non-recursive query or a recursive
query to the DNS server. A non-recursive query means that the request just goes to
that specified DNS server and either the answer is returned to the resolver or an
error is returned. A recursive query means that the request can be passed on from
one DNS server to another one until the DNS server with the correct information
is identified. In Figure 6-36, you can follow the succession of requests that com-
monly takes place. Your system’s resolver first checks to see if it already has the
necessary hostname-to-IP address mapping cached or if it is in a local HOSTS file.
If the necessary information is not found, the resolver sends the request to the
local DNS server. If the local DNS server does not have the information, it sends
the request to a different DNS server.
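The lookup order the resolver follows (cache, then HOSTS file, then the configured DNS server) can be sketched in Python. This is a toy simulation with hypothetical names and addresses, not a real resolver:

```python
# Sketch of the client-side resolution order described above:
# resolver cache first, then the local HOSTS file, then the DNS server.
# The data structures are illustrative stand-ins.

def resolve(hostname, cache, hosts_file, query_dns):
    """Return an IP for hostname, consulting sources in order."""
    if hostname in cache:               # 1. previously resolved and cached
        return cache[hostname]
    if hostname in hosts_file:          # 2. static local HOSTS entry
        return hosts_file[hostname]
    ip = query_dns(hostname)            # 3. ask the configured DNS server
    cache[hostname] = ip                # cache the answer for next time
    return ip

cache = {}
hosts = {"intranet.local": "10.0.0.5"}
dns = lambda name: "203.0.113.7"        # pretend the DNS server answered

print(resolve("intranet.local", cache, hosts, dns))   # 10.0.0.5 (HOSTS hit)
print(resolve("www.example.com", cache, hosts, dns))  # 203.0.113.7 (DNS answer)
print("www.example.com" in cache)                     # True (now cached)
```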
Figure 6-36 DNS resolution steps

DNS Threats
As stated earlier, not every DNS server knows the IP address of every hostname it is
asked to resolve. When a request for a hostname-to-IP address mapping arrives at a
DNS server (server A), the server reviews its resource records to see if it has the necessary
information to fulfill this request. If the server does not have a resource record for this
hostname, it forwards the request to another DNS server (server B), which in turn re-
views its resource records and, if it has the mapping information, sends the informa-
tion back to server A. Server A caches this hostname-to-IP address mapping in its mem-
ory (in case another client requests it) and sends the information on to the requesting
client.
With the preceding information in mind, consider a sample scenario. Andy the at-
tacker wants to make sure that any time one of his competitor’s customers tries to visit
the competitor’s web site, the customer is instead pointed to Andy’s web site. Therefore,
Andy installs a tool that listens for requests that leave DNS server A asking other DNS
servers if they know how to map the competitor’s hostname to its IP address. Once
Andy sees that server A sends out a request to server B to resolve the competitor’s host-
name, Andy quickly sends a message to server A indicating that the competitor’s
hostname resolves to Andy’s web site’s IP address. Server A’s software accepts the first
response it gets, so server A caches this incorrect mapping information and sends it on
to the requesting client. Now when the client tries to reach Andy’s competitor’s web
site, she is instead pointed to Andy’s web site. This will happen subsequently to any
user who uses server A to resolve the competitor’s hostname to an IP address, because
this information is cached on server A.
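The weakness Andy exploits is that server A caches whichever answer arrives first, with no check on who sent it. A toy model of this first-response-wins behavior (with made-up hostnames and addresses):

```python
# Toy model of the attack above: server A caches whichever answer for a
# pending query arrives first, with no sender authentication.

def accept_first_response(pending_query, responses, cache):
    """Naive server A: cache the first answer matching a pending query."""
    for sender, hostname, ip in responses:        # responses in arrival order
        if hostname == pending_query:
            cache[hostname] = ip                  # no authentication of sender!
            return sender
    return None

cache = {}
responses = [
    ("attacker", "www.competitor.com", "6.6.6.6"),        # Andy answers first
    ("server-B", "www.competitor.com", "198.51.100.10"),  # real answer, too late
]
winner = accept_first_response("www.competitor.com", responses, cache)
print(winner, cache["www.competitor.com"])   # attacker 6.6.6.6
```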
NOTE In Chapter 3 we covered DNS pharming attacks, which are very similar.
Previous vulnerabilities that have allowed this type of activity to take place have
been addressed, but this type of attack is still taking place because when server A re-
ceives a response to its request, it does not authenticate the sender.
The HOSTS file resides on the local computer and can contain static host-
name-to-IP mapping information. If you do not want your system to query a DNS
server, you can add the necessary data in the HOSTS file and your system will first
check its contents before reaching out to a DNS server. HOSTS files are not as commonly used as they were in the past because hostname-to-IP mappings are dynamic in nature, so it would be difficult to maintain this static list as the IP addresses of end systems change. But some people use them to reduce the risk of an attacker sending
their system a bogus IP address that points them to a malicious web site. These
attack types are covered in the following sections.

Mitigating DNS threats involves numerous measures, the most important of which is the use of stronger authentication mechanisms such as DNSSEC (Domain Name System Security Extensions), which is part of many current implementations of DNS server software.
DNSSEC implements PKI and digital signatures, which allows DNS servers to validate
the origin of a message to ensure that it is not spoofed and potentially malicious. If
DNSSEC were enabled on server A, then server A would, upon receiving a response,
validate the digital signature on the message before accepting the information to make
sure that the response is from an authorized DNS server. So even if an attacker sends a
message to a DNS server, the DNS server would discard it because the message would
not contain a valid digital signature. DNSSEC allows DNS servers to send and receive
authorized messages between themselves and thwarts the attacker’s goal of poisoning a
DNS cache table.
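The validate-before-cache step can be illustrated with a toy public-key signature. This is deliberately not real DNSSEC: the textbook-RSA key below is absurdly small, and real DNSSEC uses standardized algorithms, key sizes, and record formats. It only shows the principle that a response is accepted when its signature verifies under the zone's public key:

```python
import hashlib

# Toy textbook-RSA signature check illustrating the DNSSEC idea: the
# resolver caches a response only if its signature verifies under the
# zone's public key. Keys here are tiny demo values, not real crypto.
p, q = 61, 53
n = p * q                  # public modulus (3233)
e = 17                     # public exponent
d = 413                    # private exponent (e*d = 1 mod lcm(p-1, q-1))

def digest(record):
    return int.from_bytes(hashlib.sha256(record.encode()).digest(), "big") % n

def sign(record):          # done by the authoritative zone's private key
    return pow(digest(record), d, n)

def verify(record, sig):   # done by the validating resolver
    return pow(sig, e, n) == digest(record)

record = "www.example.com A 198.51.100.10"
sig = sign(record)
print(verify(record, sig))              # True: response accepted and cached
print(verify(record, (sig + 1) % n))    # False: forged signature rejected
```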
This sounds simple enough, but for DNSSEC to be rolled out properly, all of the
DNS servers on the Internet would have to participate in a PKI to be able to validate
digital signatures. The implementation of Internet-wide PKIs simultaneously and seam-
lessly has proved to be difficult.
Despite the fact that DNSSEC requires greater resources than the traditional DNS,
more and more organizations globally are opting to use DNSSEC. The U.S. government
has committed to using DNSSEC for all its top-level domains (.gov, .mil). Countries
such as Brazil, Sweden, and Bulgaria have already implemented DNSSEC on their top-
level domains. In addition, Internet Corporation for Assigned Names and Numbers
(ICANN) has made an agreement with VeriSign to implement DNSSEC on all of its top-
level domains (.com, .net, .org, and so on). So we are getting there, slowly but surely.
Now let’s discuss another (indirectly related) predicament in securing DNS traffic—
that is, the manipulation of the HOSTS file, a technique frequently used by malware.
The HOSTS file is used by the operating system to map hostnames to IP addresses as
described before. The HOSTS file is a plaintext file located in the %SystemRoot%\System32\drivers\etc folder in Windows and at /etc/hosts on UNIX/Linux systems. The
file simply consists of a list of IP addresses with their corresponding hostnames.
DNS Splitting
Organizations should implement split DNS, which means a DNS server in the
DMZ handles external hostname-to-IP resolution requests, while an internal
DNS server handles only internal requests. This helps ensure that the internal
DNS has layers of protection and is not exposed by being “Internet facing.” The
internal DNS server should only contain resource records for the internal com-
puter systems, and the external DNS server should only contain resource records
for the systems the organization wants the outside world to be able to connect to.
If the external DNS server is compromised and it has the resource records for all
of the internal systems, now the attacker has a lot of “inside knowledge” and can
carry out targeted attacks. External DNS servers should only contain information
on the systems within the DMZ that the organization wants others on the Inter-
net to be able to communicate with (web servers, external mail server, etc.).
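The split between the two views can be sketched as a simple lookup rule: internal clients are answered from the internal zone data, and everyone else sees only the DMZ-facing records. The networks and records below are hypothetical:

```python
import ipaddress

# Sketch of split DNS: internal clients see internal records; Internet
# clients see only the externally published records.
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")
internal_zone = {"hr.example.com": "10.1.1.20", "www.example.com": "10.1.1.80"}
external_zone = {"www.example.com": "203.0.113.80", "mail.example.com": "203.0.113.25"}

def answer(client_ip, name):
    zone = (internal_zone
            if ipaddress.ip_address(client_ip) in INTERNAL_NET
            else external_zone)           # Internet-facing view
    return zone.get(name)                 # None = record not exposed

print(answer("10.0.5.9", "hr.example.com"))      # 10.1.1.20 (internal view)
print(answer("198.51.100.7", "hr.example.com"))  # None: hidden from outside
print(answer("198.51.100.7", "www.example.com")) # 203.0.113.80 (DMZ record)
```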

Depending on its configuration, the computer refers to the HOSTS file before issu-
ing a DNS request to a DNS server. Most operating systems give preference to IP addresses returned from the HOSTS file over those returned by a DNS server, because the HOSTS file is generally under the direct control of the local system administrator.
As covered previously, in the early days of the Internet and prior to the conception
of the DNS, HOSTS files were the primary source of determining a host’s network ad-
dresses from its hostname. With the increase in the number of hosts connected to the
Internet, maintaining HOSTS files became next to impossible and ultimately led to the
creation of the DNS.
Due to the important role of HOSTS files, they are frequently targeted by malware
to propagate across systems connected on a local network. Once a malicious program
takes over the HOSTS file, it can divert traffic from its intended destination to web sites
hosting malicious content, for example. A common example of HOSTS file manipula-
tion carried out by malware involves blocking users from visiting antivirus update web
sites. This is usually done by mapping target hostnames to the loopback interface IP
address 127.0.0.1. The most effective technique for preventing HOSTS file intrusions is
to set it as a read-only file and implement a host-based IDS that watches for critical file
modification attempts.
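A simple integrity check in the spirit of that host-based monitoring could flag HOSTS entries that redirect security-related domains to the loopback address. The watched domain names and file contents below are purely illustrative:

```python
# Sketch of a HOSTS integrity check: flag entries that map
# security-related hostnames to the loopback address, the pattern
# malware commonly plants. Domains shown are illustrative.

WATCHED = {"update.avvendor.example", "downloads.avvendor.example"}

def suspicious_entries(hosts_text):
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if ip in ("127.0.0.1", "::1") and WATCHED.intersection(names):
            hits.append(line)                  # AV site pinned to loopback
    return hits

sample = """
127.0.0.1   localhost
127.0.0.1   update.avvendor.example   # planted by malware
10.0.0.5    intranet.local
"""
print(suspicious_entries(sample))
```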
Attackers don’t always have to go through all this trouble to divert traffic to rogue
destinations. They can also use some very simple techniques that are surprisingly effec-
tive in routing naive users to unintended destinations. The most common approach is
known as URL hiding. Hypertext Markup Language (HTML) documents and e-mails
allow users to attach or embed hyperlinks in any given text, such as the “Click Here”
links you commonly see in e-mail messages or web pages. Attackers misuse hyperlinks
to deceive unsuspecting users into clicking rogue links.
Let’s say a malicious attacker displays innocuous-looking link text, www.good.site, but embeds behind it a hyperlink to an abusive web site, www.bad.site. People are likely to click the www.good.site link without knowing that they are actually being taken to the bad site. In
addition, attackers also use character encoding to obscure web addresses that may
arouse user suspicion.
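This URL-hiding trick can be spotted mechanically: if the visible link text looks like a web address but differs from the host the hyperlink actually points to, the link is suspect. A small sketch using Python's standard HTML parser (the sample HTML is illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Sketch of detecting URL hiding: visible link text that names one site
# while the underlying href points somewhere else.

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None
        self.mismatches = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
    def handle_data(self, data):
        text = data.strip()
        if self.href and text.startswith("www."):
            host = urlparse(self.href).netloc
            if host and host != text:            # visible text != real host
                self.mismatches.append((text, host))
    def handle_endtag(self, tag):
        if tag == "a":
            self.href = None

checker = LinkChecker()
checker.feed('<a href="http://www.bad.site">www.good.site</a>')
print(checker.mismatches)   # [('www.good.site', 'www.bad.site')]
```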
We’ll now have a look at some legal aspects of domain registration. Although these
do not pose a direct security risk to your DNS servers or your IT infrastructure, ignorance
of them may risk your very domain name on the Internet, thus jeopardizing your entire
online presence. Awareness of domain grabbing and cyber squatting issues will help you
better plan out your online presence and allow you to steer clear of these traps.
ICANN promotes a governance model that follows a first-come, first-served policy when registering domain names, regardless of trademark considerations. This has led to a race among individuals to secure attractive and prominent domains. Among
these are cyber squatters, individuals who register prominent or established names, hop-
ing to sell these later to real-world businesses that may require these names to establish
their online presence. So if you were preparing to launch a huge business called Securi-
tyRUS, a cyber squatter could go purchase this domain name, and its various formats,
at a low price. This person knows you will need this domain name for your web site, so
they will mark up the price by 1,000 percent and force you to pay this higher rate.

Another tactic employed by cyber squatters is to watch for top-used domain names
that are approaching their re-registration date. If you forget to re-register the domain
name you have used for the last ten years, a cyber squatter can purchase the name and
then require you to pay a huge amount of money just to use the name you have owned
and used for years. These are opportunist types of attacks.
To protect your organization from these threats, it is essential that you register a
domain as soon as your company conceives of launching a new brand or applies for a
new trademark. Registering important domains for longer periods, such as for five or
ten years, instead of annually renewing them reduces the chances of domains slipping
out to cyber squatters. Another technique is to register nearby domains as well. For ex-
ample, if you own the domain something.com, registering some-thing.com and some-
thing.net may be a good idea because this will prevent someone else from occupying
these domains for furtive purposes.
Key Terms
• Internet Group Management Protocol (IGMP) Used by systems and adjacent routers on IP networks to establish and maintain multicast group memberships.
• Media access control (MAC) Data communication protocol sublayer of the data link layer specified in the OSI model. It provides hardware addressing and channel access control mechanisms that make it possible for several nodes to communicate within a multiple-access network that incorporates a shared medium.
• Address Resolution Protocol (ARP) A networking protocol used for resolution of network layer IP addresses into link layer MAC addresses.
• Dynamic Host Configuration Protocol (DHCP) A network configuration service for hosts on IP networks. It provides IP addressing, DNS server, subnet mask, and other important network configuration data to each host through automation.
• DHCP snooping A series of techniques applied to ensure the security of an existing DHCP infrastructure by tracking physical locations, ensuring that only authorized DHCP servers are accessible, and ensuring that hosts use only addresses assigned to them.
• Reverse Address Resolution Protocol (RARP) and Bootstrap Protocol (BootP) Networking protocols used by host computers to request their IP addresses from an administrative configuration server.
• Internet Control Message Protocol (ICMP) A core protocol of the IP suite used to send status and error messages.
• Ping of Death A DoS attack that involves sending malformed or oversized ICMP packets to a target.

E-mail Services
I think e-mail is delivered by an e-mail fairy wearing a purple dress.
Response: Exactly.
A user has an e-mail client that is used to create, modify, address, send, receive, and
forward messages. This e-mail client may provide other functionality, such as a per-
sonal address book and the ability to add attachments, set flags, recall messages, and
store messages within different folders.
A user’s e-mail message is of no use unless it can actually be sent somewhere. This
is where Simple Mail Transfer Protocol (SMTP) comes in. Within the e-mail client, SMTP works as a message transfer agent, as shown in Figure 6-37, and moves the message from the
user’s computer to the mail server when the user clicks the Send button. SMTP also
functions as a message transfer protocol between e-mail servers. Last, SMTP is a mes-
sage-exchange addressing standard, and most people are used to seeing its familiar
addressing scheme: something@somewhere.com.
Many times, a message needs to travel throughout the Internet and through differ-
ent mail servers before it arrives at its destination mail server. SMTP is the protocol that carries this message, and it works on top of TCP because TCP is a reliable protocol that provides sequencing and acknowledgments to help ensure the e-mail message arrives successfully at its destination.
The user’s e-mail client must be SMTP-compliant to be properly configured to use
this protocol. The e-mail client provides an interface to the user so the user can create
and modify messages as needed, and then the client passes the message off to the SMTP application layer protocol. So, to use the analogy of sending a letter via the post office, the e-mail client is the typewriter that a person uses to write the message, SMTP is the mail courier who picks up the mail and delivers it to the post office, and the post office is the mail server. The mail server has the responsibility of understanding where the message is heading and properly routing the message to that destination.

Key Terms (continued)
• Smurf attack A DDoS attack that floods the target system with spoofed broadcast ICMP packets.
• Fraggle attack A DDoS attack that floods the target system with a large amount of UDP echo traffic sent to IP broadcast addresses.
• Simple Network Management Protocol (SNMP) A protocol within the IP suite used for network device management activities through a structure of managers, agents, and Management Information Bases.
• Domain Name System (DNS) A hierarchical distributed naming system for computers, services, or any resource connected to an IP-based network. It associates various pieces of information with domain names assigned to each of the participating entities.
• DNS zone transfer The process of replicating the databases containing the DNS data across a set of DNS servers.
• DNSSEC A set of extensions to DNS that provide to DNS clients (resolvers) origin authentication of DNS data to reduce the threat of DNS poisoning, spoofing, and similar attack types.
The mail server is often referred to as an SMTP server. The most common SMTP
server software within the UNIX world is Sendmail, which is actually an e-mail server
application. This means that UNIX uses Sendmail software to store, maintain, and
route e-mail messages. Within the Microsoft world, Microsoft Exchange is mostly used,
and in Novell, GroupWise is the common SMTP server. SMTP works closely with two
mail server protocols, POP and IMAP, which are explained in the following sections.
POP
Post Office Protocol (POP) is an Internet mail server protocol that clients use to retrieve their incoming messages. A mail server that uses POP, apart from storing e-mail messages until users download them, works with SMTP to move messages between mail servers.
A smaller company may have one POP server that holds all employee mailboxes,
whereas larger companies may have several POP servers, one for each department with-
in the organization. There are also Internet POP servers that enable people all over the
world to exchange messages. This system is useful because the messages are held on the
mail server until users are ready to download their messages, instead of trying to push
messages right to a person’s computer, which may be down or offline.
The e-mail server can implement different authentication schemes to ensure an in-
dividual is authorized to access a particular mailbox, but this is usually handled through
usernames and passwords.
IMAP
Internet Message Access Protocol (IMAP) is also an Internet protocol that enables users
to access mail on a mail server. IMAP provides all the functionalities of POP, but has
more capabilities. If a user is using POP, when he accesses his mail server to see if he has
received any new messages, all messages are automatically downloaded to his computer.

Figure 6-37 SMTP works as a transfer agent for e-mail messages.

Once the messages are downloaded from the POP server, they are usually deleted
from that server, depending upon the configuration. POP can cause frustration for mo-
bile users because the messages are automatically pushed down to their computer or
device and they may not have the necessary space to hold all the messages. This is espe-
cially true for mobile devices that can be used to access e-mail servers. This is also in-
convenient for people checking their mail on other people’s computers. If Christina
checks her e-mail on Jessica’s computer, all of Christina’s new mail could be down-
loaded to Jessica’s computer.
NOTE POP is commonly used for Internet-based e-mail accounts (Gmail,
Yahoo!, etc.), while IMAP is commonly used for corporate e-mail accounts.
If a user uses IMAP instead of POP, she can download all the messages or leave
them on the mail server within her remote message folder, referred to as a mailbox. The
user can also manipulate the messages within this mailbox on the mail server as if the
messages resided on her local computer. She can create or delete messages, search for
specific messages, and set and clear flags. This gives the user much more freedom and
keeps the messages in a central repository until the user specifically chooses to down-
load all messages from the mail server.
IMAP is a store-and-forward mail server protocol that is considered POP’s successor.
IMAP also gives administrators more capabilities when it comes to administering and
maintaining the users’ messages.
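The behavioral difference between the two protocols can be reduced to a toy model: POP-style retrieval pulls messages down and (typically) removes the server copy, while IMAP-style access leaves the mailbox on the server. This sketch models only that contrast, not the real protocols:

```python
# Toy model contrasting the two retrieval styles described above.

def pop_fetch(mailbox):
    """POP style: download everything; server copy is typically deleted."""
    downloaded = list(mailbox)
    mailbox.clear()
    return downloaded

def imap_fetch(mailbox):
    """IMAP style: read messages; the server-side mailbox stays intact."""
    return list(mailbox)

server_mailbox = ["msg1", "msg2", "msg3"]
print(imap_fetch(server_mailbox), len(server_mailbox))  # 3 messages remain
print(pop_fetch(server_mailbox), len(server_mailbox))   # server copy now empty
```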
E-mail Authorization
POP has gone through a few version updates and is currently on POP3. POP3 has
the capability to integrate Simple Authentication and Security Layer (SASL). SASL
is a protocol-independent framework for performing authentication. This means
that any protocol that knows how to interact with SASL can use its various au-
thentication mechanisms without having to actually embed the authentication
mechanisms within its code.
To use SASL, a protocol includes a command for identifying and authenticat-
ing a user to an authentication server and for optionally negotiating protection of
subsequent protocol interactions. If its use is negotiated, a security layer is in-
serted between the protocol and the connection. The data security layer can pro-
vide data integrity, data confidentiality, and other services. SASL’s design is
intended to allow new protocols to reuse existing mechanisms without requiring
redesign of the mechanisms, and allows existing protocols to make use of new
mechanisms without redesign of protocols.
The use of SASL is not unique just to POP; other protocols, such as IMAP,
Internet Relay Chat (IRC), Lightweight Directory Access Protocol (LDAP), and
SMTP, can also use SASL and its functionality.

E-mail Relaying
Could you please pass on this irritating message that no one wants?
Response: Sure.
E-mail has changed drastically from the pure mainframe days. In that era, mail used
simple Systems Network Architecture (SNA) protocols and the ASCII format. Today,
several types of mail systems run on different operating systems and offer a wide range
of functionality. Sometimes companies need to implement different types of mail serv-
ers and services within the same network, which can become a bit overwhelming and a
challenge to secure.
Most companies have their public mail servers in their DMZ and may have one or
more mail servers within their internal LAN. The mail servers in the DMZ are in this
protected space because they are directly connected to the Internet. These servers should
be tightly locked down and their relaying mechanisms should be correctly configured.
Mail servers use a relay agent to send a message from one mail server to another. This
relay agent needs to be properly configured so a company’s mail server is not used by a
malicious entity for spamming activity.
Spamming is usually illegal, so the people doing the spamming do not want the traffic to appear to originate from their own equipment. They will find mail servers on
the Internet, or within company DMZs, that have loosely configured relaying mecha-
nisms and use these servers to send their spam. If relays are configured “wide open” on
a mail server, the mail server can be used to receive any mail message and send it on to
any intended recipients, as shown in Figure 6-38. This means that if a company does not
properly configure its mail relaying, its server can be used to distribute advertisements
for other companies, spam messages, and pornographic material. It is important that
mail servers have proper antispam features enabled, which are actually antirelaying fea-
tures. A company’s mail server should only accept mail destined for its domain and
should not forward messages to other mail servers and domains that may be suspicious.
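The anti-relaying rule described above reduces to a simple decision: accept a message only when the recipient is in one of the server's own domains or the sending client has authenticated. A minimal sketch with placeholder domains:

```python
# Sketch of the anti-relaying decision: deliver locally addressed mail,
# and relay outbound mail only for authenticated clients.

LOCAL_DOMAINS = {"example.com"}

def accept_for_delivery(rcpt_addr, client_authenticated):
    domain = rcpt_addr.rsplit("@", 1)[-1].lower()
    if domain in LOCAL_DOMAINS:
        return True                  # inbound mail for our own users
    return client_authenticated      # relay only for our own clients

print(accept_for_delivery("alice@example.com", False))   # True: local delivery
print(accept_for_delivery("bob@elsewhere.net", False))   # False: would be an open relay
print(accept_for_delivery("bob@elsewhere.net", True))    # True: authenticated user
```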
Many companies also employ antivirus and content-filtering applications on their
mail servers to try and stop the spread of malicious code and not allow unacceptable
messages through the e-mail gateway. It is important to filter both incoming and outgo-
ing messages. This helps ensure that inside employees are not spreading viruses or
sending out messages that are against company policy.
E-mail Threats
E-mail spoofing is a technique used by malicious users to forge an e-mail to make it ap-
pear to be from a legitimate source. Usually, such e-mails appear to be from known and
trusted e-mail addresses when they are actually generated from a malicious source. This
technique is widely used by attackers these days for spamming and phishing purposes.
An attacker tries to acquire the target’s sensitive information, such as username and
password or bank account credentials. Sometimes the e-mail messages contain a link that appears to point to a known web site but actually leads to a fake web site used to trick the user into revealing his information.
E-mail spoofing is done by modifying the fields of e-mail headers, such as the From,
Return-Path, and Reply-To fields, so the e-mail appears to be from a trusted source. This

results in an e-mail looking as though it is from a known e-mail address. Mostly the
From field is spoofed, but some scams have modified the Reply-To field to the attacker’s
e-mail address. E-mail spoofing is caused by the lack of security features in SMTP. When
SMTP technologies were developed, the concept of e-mail spoofing didn’t exist, so
countermeasures for this type of threat were not embedded into the protocol. A user
could use an SMTP server to send e-mail to anyone from any e-mail address.
Figure 6-38 Mail servers can be used for relaying spam if relay functionality is not properly
configured.
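The reason these header fields are so easy to forge is that they are just text supplied by the sending client; nothing in basic SMTP ties them to the actual sender, which is identified separately by the SMTP envelope. A sketch using Python's standard email library (all addresses are illustrative):

```python
from email.message import EmailMessage

# The From and Reply-To fields discussed above are client-supplied text,
# entirely separate from the SMTP envelope sender.

msg = EmailMessage()
msg["From"] = "ceo@trusted-bank.example"     # what the victim sees
msg["Reply-To"] = "attacker@evil.example"    # where replies actually go
msg["Subject"] = "Please verify your account"
msg.set_content("Click here to confirm your credentials.")

envelope_sender = "attacker@evil.example"    # the real SMTP MAIL FROM value

print(msg["From"])                       # ceo@trusted-bank.example
print(msg["Reply-To"])                   # attacker@evil.example
print(msg["From"] == envelope_sender)    # False: header and envelope differ
```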

SMTP authentication (SMTP-AUTH) was developed to provide an access control
mechanism. This extension comprises an authentication feature that allows clients to
authenticate to the mail server before an e-mail is sent. Servers using the SMTP-AUTH
extension are configured in such a manner that their clients are obliged to use the ex-
tension so that the sender can be authenticated.
E-mail spoofing can be mitigated in several ways. The SMTP server can be config-
ured to prevent unauthenticated users from sending e-mails. It is important to always
log all the connections to your mail servers so that unsolicited e-mails can be traced
and tracked. It’s also advised that you filter incoming and outgoing traffic toward mail
servers through a firewall to prevent generic network-level attacks, such as packet spoof-
ing, distributed denial of service (DDoS) attacks, and so on. Important e-mails can be
communicated over encrypted channels so that the sender and receiver are properly
authenticated.
Another way to deal with the problem of forged e-mail messages is to use Sender Policy Framework (SPF), an e-mail validation system designed to detect e-mail spoofing (and thereby curb spam) by verifying the sender’s IP address. SPF allows administrators to specify which hosts are allowed to send e-mail from a given domain by
creating a specific SPF record in DNS. Mail exchanges use the DNS to check that mail
from a given domain is being sent by a host sanctioned by that domain’s administrators.
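The check a receiving mail server performs can be sketched in a few lines: look up the sending domain's SPF policy and test the connecting client's IP against its mechanisms. This is a heavily simplified evaluator (only the ip4 mechanism, with the DNS lookup replaced by a hardcoded record):

```python
import ipaddress

# Minimal SPF-style check: does the sending IP match an ip4: mechanism
# in the domain's published policy? Real SPF handles many more
# mechanisms (include, mx, a, redirect) and qualifier results.

def check_spf(spf_record, sender_ip):
    """Return 'pass' if sender_ip matches an ip4: mechanism, else 'fail'."""
    addr = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if addr in ipaddress.ip_network(term[4:]):
                return "pass"
    return "fail"    # simplified stand-in for the trailing -all / ~all

record = "v=spf1 ip4:203.0.113.0/24 -all"   # published as a DNS TXT record
print(check_spf(record, "203.0.113.25"))    # pass: sanctioned mail host
print(check_spf(record, "198.51.100.9"))    # fail: not authorized to send
```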
As covered in Chapter 3, phishing is a social engineering attack that is commonly
carried out through maliciously crafted e-mail messages. The goal is to get someone to
click a malicious link or for the victim to send the attacker some confidential data (So-
cial Security number, account number, etc.). The attacker crafts an e-mail that seems to
originate from a trusted source and sends it out to many victims at one time. A spear
phishing attack zeros in on specific people. So if an attacker wants your specific informa-
tion because she wants to break into your bank account, she could gather information
about you via Facebook, LinkedIn, or through other resources and create an e-mail
from someone she thinks you will trust. A similar attack is called whaling. In a whaling
attack an attacker usually identifies some “big fish” in an organization (CEO, CFO,
COO, CSO) and targets them because they have access to some of the most sensitive
data in the organization. The attack is finely tuned to achieve the highest likelihood of
success.
E-mail is, of course, a critical communication tool, but it is also one of the most commonly misused channels for malicious activity.
Network Address Translation
I have one address I would like to share with everyone!
When computers need to communicate with each other, they must use the same
type of addressing scheme so everyone understands how to find and talk to one an-
other. The Internet uses the IP address scheme as discussed earlier in the chapter, and
any computer or network that wants to communicate with other users on the network
must conform to this scheme; otherwise, that computer will sit in a virtual room with
only itself to talk to.

However, IP addresses have become scarce (until the full adoption of IPv6) and
expensive. So some smart people came up with network address translation (NAT),
which enables a network that does not follow the Internet’s addressing scheme to com-
municate over the Internet.
Private IP addresses have been reserved for internal LAN address use, as outlined in
RFC 1918. These addresses can be used within the boundaries of a company, but they
cannot be used on the Internet because they will not be properly routed. NAT enables
a company to use these private addresses and still be able to communicate transpar-
ently with computers on the Internet.
The following lists current private IP address ranges:
• 10.0.0.0–10.255.255.255 Class A network
• 172.16.0.0–172.31.255.255 Class B networks
• 192.168.0.0–192.168.255.255 Class C networks
NAT is a gateway that lies between a network and the Internet (or another network) and performs transparent routing and address translation. Because IP addresses were depleting fast, IPv6 was developed in the late 1990s and was intended to be the long-term fix to the address shortage problem. NAT was developed as the short-term fix to enable more
and implementation, while NAT has caught on like wildfire. Many firewall vendors
have implemented NAT into their products, and it has been found that NAT actually
provides a great security benefit. When attackers want to hack a network, they first do
what they can to learn all about the network and its topology, services, and addresses.
Attackers cannot easily find out a company’s address scheme and its topology when
NAT is in place, because NAT acts like a large nightclub bouncer by standing in front of
the network and hiding the true IP scheme.
NAT hides internal addresses by centralizing them on one device, and any packets
that leave that network have only the source address of that device, not of the actual
internal computer that sends the message. So when a message comes from an internal
computer with the address of 10.10.10.2, for example, the message is stopped at the
device running NAT software, which happens to have the IP address of 1.2.3.4. NAT
changes the header of the packet from the internal address, 10.10.10.2, to the IP address
of the NAT device, 1.2.3.4. When a computer on the Internet replies to this message, it
replies to the address 1.2.3.4. The NAT device changes the header on this reply message
to 10.10.10.2 and puts it on the wire for the internal user to receive.
Three basic types of NAT implementations can be used:

• Static mapping The NAT software has a pool of public IP addresses
configured. Each private address is statically mapped to a specific public
address. So computer A always receives the public address x, computer B
always receives the public address y, and so on. This is generally used for
servers that need to keep the same public address at all times.
• Dynamic mapping The NAT software has a pool of public IP addresses, but
instead of statically mapping a public address to a specific private address, it
works on a first-come, first-served basis. So if Bob needs to communicate over
the Internet, his system makes a request to the NAT server. The NAT server
takes the first IP address on the list and maps it to Bob's private address. The
balancing act is to estimate how many computers will most likely need to
communicate outside the internal network at one time. This estimate is the
number of public addresses the company purchases, instead of purchasing
one public address for each computer.
• Port address translation (PAT) The company owns and uses only one
public IP address for all systems that need to communicate outside the
internal network. How in the world could all computers use the exact same IP
address? Good question. Here's an example: The NAT device has a public IP
address of 198.51.100.3. When computer A needs to communicate with a system on
the Internet, the NAT device documents this computer's private address and
source port number (10.10.44.3; port 43887). The NAT device changes the
IP address in the computer's packet header to 198.51.100.3, with the source
port 40000. When computer B also needs to communicate with a system on
the Internet, the NAT device documents the private address and source port
number (10.10.44.15; port 23398) and changes the header information to
198.51.100.3 with source port 40001. So when a system responds to computer
A, the packet first goes to the NAT device, which looks up the port number
40000 and sees that it maps to computer A's real information. So the NAT
device changes the header information to address 10.10.44.3 and port 43887
and sends it to computer A for processing. A company can save a lot more
money by using PAT, because the company needs to buy only a few public IP
addresses, which are used by all systems in the network.
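The PAT bookkeeping described in the last bullet can be sketched as a small translation table keyed by the assigned public source port. This is an illustration of the mapping logic only, not a working NAT device; the class name, the starting port of 40000, and the RFC 5737 documentation address used for the public side are this sketch's own choices:

```python
class PatTable:
    """Minimal sketch of PAT state: maps an assigned public source port
    back to the internal host's (address, port) pair."""

    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}  # public source port -> (private ip, private port)

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing packet: record the mapping and return the
        translated (source ip, source port) pair placed in the header."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return self.public_ip, public_port

    def inbound(self, public_port):
        """Rewrite a reply: look up which internal host owns this port."""
        return self.table[public_port]

# Two internal hosts share one public address, as in the example above.
nat = PatTable("198.51.100.3")
print(nat.outbound("10.10.44.3", 43887))   # ('198.51.100.3', 40000)
print(nat.outbound("10.10.44.15", 23398))  # ('198.51.100.3', 40001)
print(nat.inbound(40000))                  # ('10.10.44.3', 43887)
```

The `inbound` lookup is the stateful behavior the next paragraph describes: the device must remember each mapping until the session ends so replies can be delivered to the right internal host.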
Most NAT implementations are stateful, meaning they keep track of a communica-
tion between the internal host and an external host until that session is ended. The NAT
device needs to remember the internal IP address and port to send the reply messages
back. This stateful characteristic is similar to stateful-inspection firewalls, but NAT does
not perform scans on the incoming packets to look for malicious characteristics. In-
stead, NAT is a service usually performed on routers or gateway devices within a com-
pany’s screened subnet.
Although NAT was developed to provide a quick fix for the depleting IP address
problem, it has actually put the problem off for quite some time. The more companies
that implement private address schemes, the less pressure there is on the public address
space. This has been helpful to NAT and the vendors that implement this technology,
but it has pushed the acceptance and implementation of IPv6 much farther down
the road.

Key Terms
• Simple Mail Transfer Protocol (SMTP) An Internet standard protocol
for electronic mail (e-mail) transmission across IP-based networks.
• Post Office Protocol (POP) An Internet standard protocol used by
e-mail clients to retrieve e-mail from a remote server; it supports
simple download-and-delete requirements for access to remote
mailboxes.
• Internet Message Access Protocol (IMAP) An Internet standard
protocol used by e-mail clients to retrieve e-mail from a remote server.
E-mail clients using IMAP generally leave messages on the server until
the user explicitly deletes them.
• Simple Authentication and Security Layer (SASL) A framework
for authentication and data security in Internet protocols. It decouples
authentication mechanisms from application protocols and allows
any authentication mechanism supported by SASL to be used in any
application protocol that uses SASL.
• Open mail relay An SMTP server configured in such a way that it
allows anyone on the Internet to send e-mail through it, not just mail
destined to or originating from known users.
• E-mail spoofing Activity in which the sender address and other
parts of the e-mail header are altered to appear as though the e-mail
originated from a different source. Since SMTP does not provide any
authentication, it is easy to impersonate and forge e-mails.
• Sender Policy Framework (SPF) An e-mail validation system
designed to prevent e-mail spam by detecting e-mail spoofing, a
common vulnerability, by verifying sender IP addresses.
• Phishing A way of attempting to obtain data such as usernames,
passwords, credit card information, and other sensitive data by
masquerading as a trustworthy entity in an electronic communication.
Spear phishing targets specific individuals, and whaling targets people
with high authority (CEO, COO, CIO).
• Network address translation (NAT) The process of modifying IP
address information in packet headers while in transit across a traffic
routing device, with the goal of reducing the demand for public IP
addresses.

Routing Protocols
I have protocols that will tell you where to go.
Response: I would like to tell YOU where to go.
Individual networks on the Internet are referred to as autonomous systems (ASs).
These ASs are independently controlled by different service providers and organiza-
tions. An AS is made up of routers, which are administered by a single entity and use a
common Interior Gateway Protocol (IGP) within the boundaries of the AS. The bound-
aries of these ASs are delineated by border routers. These routers connect to the border
routers of other ASs and run interior and exterior routing protocols. Internal routers
connect to other routers within the same AS and run interior routing protocols. So, in
reality, the Internet is just a network made up of ASs and routing protocols.
NOTE As an analogy, just as the world is made up of different countries,
the Internet is made up of different ASs. Each AS has delineation boundaries
just as countries do. Countries can have their own languages (Spanish, Arabic,
Russian). Similarly, ASs have their own internal routing protocols. Countries
that speak different languages need to have a way of communicating to
each other, which could happen through interpreters. ASs need to have a
standardized method of communicating and working together, which is where
external routing protocols come into play.
The architecture of the Internet that supports these various ASs is created so that no
entity that needs to connect to a specific AS has to know or understand the interior
routing protocols that are being used. Instead, for ASs to communicate, they just have
to be using the same exterior routing protocols (see Figure 6-39). As an analogy, sup-
pose you want to deliver a package to a friend who lives in another state. You give the
package to your brother, who is going to take a train to the edge of the state and hand
it to the postal system at that junction. Thus, you know how your brother will arrive at
the edge of the state—by train. You do not know how the postal system will then de-
liver your package to your friend’s house (truck, car, bus), but that is not your concern.
It will get to its destination without your participation. Similarly, when one network
communicates with another network, the first network puts the data packet (package)
on an exterior protocol (train), and when the data packet gets to the border router
(edge of the state), the data are transferred to whatever interior protocol is being used
on the receiving network.
NOTE Routing protocols are used by routers to identify a path between the
source and destination systems.
Routing protocols can be dynamic or static. A dynamic routing protocol can discover
routes and build a routing table. Routers use these tables to make decisions on the best
route for the packets they receive. A dynamic routing protocol can change the entries in
the routing table based on changes that take place to the different routes. When a router
that is using a dynamic routing protocol finds out that a route has gone down or is
congested, it sends an update message to the other routers around it. The other routers
use this information to update their routing table, with the goal of providing efficient
routing functionality. A static routing protocol requires the administrator to manually
configure the router’s routing table. If a link goes down or there is network congestion,
the routers cannot tune themselves to use better routes.
NOTE Route flapping refers to the constant changes in the availability of
routes. Also, if a router does not receive an update that a link has gone down,
the router will continue to forward packets to that route, which is referred to
as a black hole.
Two main types of routing protocols are used: distance-vector and link-state routing.
Distance-vector routing protocols make their routing decisions based on the distance (or
number of hops) and a vector (a direction). The protocol takes these variables and uses
them with an algorithm to determine the best route for a packet. Link-state routing pro-
tocols build a more accurate routing table because they build a topology database of the
network. These protocols look at more variables than just the number of hops between
two destinations. They use packet size, link speed, delay, network load, and reliability as
the variables in their algorithms to determine the best routes for packets to take.
Figure 6-39 Autonomous systems

So, a distance-vector routing protocol only looks at the number of hops between
two destinations and considers each hop to be equal. A link-state routing protocol sees
more pieces to the puzzle than just the number of hops, but understands the status of
each of those hops and makes decisions based on these factors also. As you will see, RIP
is an example of a distance-vector routing protocol, and OSPF is an example of a link-
state routing protocol. OSPF is preferred and is used in large networks. RIP is still
around but should only be used in smaller networks.
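The difference can be made concrete with a toy topology in which the fewest-hop path is not the fastest path. The sketch below is an illustration only (real protocols such as RIP and OSPF involve far more): it runs Dijkstra's shortest-path search twice over an assumed three-router graph, once with every link costing one hop, the way a simple distance-vector protocol sees the network, and once with real link costs, the way a link-state protocol sees it.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: returns (total cost, path) over weighted links."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Assumed topology: A-B is one hop over a slow link (cost 10);
# A-C-B is two hops over fast links (cost 1 each).
net = {
    "A": {"B": 10, "C": 1},
    "B": {"A": 10, "C": 1},
    "C": {"A": 1, "B": 1},
}

# Hop-count view (every link cost = 1), as a distance-vector protocol sees it:
hops = {n: {m: 1 for m in net[n]} for n in net}
print(shortest_path(hops, "A", "B"))  # (1, ['A', 'B'])      fewest hops wins

# Weighted view, as a link-state protocol sees it:
print(shortest_path(net, "A", "B"))   # (2, ['A', 'C', 'B']) faster path wins
```

The two views pick different routes between the same pair of routers, which is exactly why link-state protocols produce better decisions on networks with links of unequal quality.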
Both de facto standard and proprietary interior routing protocols are in use today.
The following are just a few of them:
• Routing Information Protocol RIP is a standard that outlines how routers
exchange routing table data and is considered a distance-vector protocol,
which means it calculates the shortest distance between the source and
destination. It is considered a legacy protocol because of its slow performance
and lack of functionality. It should only be used in small networks. RIP
version 1 has no authentication, and RIP version 2 sends passwords in
cleartext or hashed with MD5.
• Open Shortest Path First OSPF uses link-state algorithms to send out
routing table information. The use of these algorithms allows for smaller,
more frequent routing table updates to take place. This provides a more stable
network than RIP, but requires more memory and CPU resources to support
this extra processing. OSPF allows for a hierarchical routing network that has
a backbone link connecting all subnets together. OSPF has replaced RIP in
many networks today. Authentication can take place with cleartext passwords
or hashed passwords, or you can choose to configure no authentication on the
routers using this protocol.
• Interior Gateway Routing Protocol IGRP is a distance-vector routing
protocol that was developed by, and is proprietary to, Cisco Systems.
Whereas RIP uses one criterion to find the best path between the source and
destination, IGRP uses five criteria to make a “best route” decision. A network
administrator can set weights on these different metrics so that the protocol
works best in that specific environment.
• Enhanced Interior Gateway Routing Protocol EIGRP is a Cisco proprietary
and advanced distance-vector routing protocol. It allows for faster router
table updates than its predecessor IGRP and minimizes routing instability,
which can occur after topology changes. Routers exchange messages that
contain information about bandwidth, delay, load, reliability, and maximum
transmission unit (MTU) of the path to each destination as known by the
advertising router.
• Virtual Router Redundancy Protocol VRRP is used in networks that require
high availability where routers as points of failure cannot be tolerated. It is
designed to increase the availability of the default gateway by advertising
a “virtual router” as a default gateway. Two physical routers (primary and
secondary) are mapped to one virtual router. If one of the physical routers
fails, the other router takes over the workload.

• Intermediate System to Intermediate System (IS-IS) Link-state protocol
that allows each router to independently build a database of a network’s
topology. Similar to the OSPF protocol, it computes the best path for traffic to
travel. It is a classless and hierarchical routing protocol that is vendor neutral.
NOTE Although most routing protocols have authentication functionality,
most routers do not have this functionality enabled.
The exterior routing protocols used by routers connecting different ASs are generi-
cally referred to as exterior gateway protocols (EGPs). The Border Gateway Protocol
(BGP) enables routers on different ASs to share routing information to ensure effective
and efficient routing between the different AS networks. BGP is commonly used by
Internet service providers to route data from one location to the next on the Internet.
NOTE There is an exterior routing protocol called Exterior Gateway
Protocol, but it has been widely replaced by BGP, and now the term “exterior
gateway protocol” and the acronym EGP are used to refer generically to a
type of protocol rather than to specify the outdated protocol.
BGP uses a combination of link-state and distance-vector routing algorithms. It
creates a network topology by using its link-state functionality and transmits updates
on a periodic basis rather than continuously, as distance-vector protocols do.
Network administrators can apply weights to the different variables used by link-state
routing protocols when determining the best routes. These configurations are collec-
tively called the routing policy.
Several types of attacks can take place on routers through their routing protocols. A
majority of the attacks have the goal of misdirecting traffic through the use of spoofed
ICMP messages. An attacker can masquerade as another router and submit routing ta-
ble information to the victim router. After the victim router integrates this new infor-
mation, it may be sending traffic to the wrong subnets or computers, or even to a
nonexistent address (black hole). These attacks are successful mainly when routing
protocol authentication is not enabled. When authentication is not required, a router
can accept routing updates without knowing whether or not the sender is a legitimate
router. An attacker could divert a company’s traffic to reveal confidential information
or to just disrupt traffic, which would be considered a DoS attack.
Other types of DoS attacks exist, such as flooding a router port, buffer overflows,
and SYN floods. Since there are many different types of attacks that can take place, there
are just as many countermeasures to be aware of to thwart these types of attacks. Most
of these countermeasures involve authentication and encryption of routing data as it is
transmitted back and forth through the use of shared keys or IPSec. For a good descrip-
tion of how these attacks can take place and their corresponding countermeasures, take
a look at the Cisco Systems whitepaper “SAFE: Best Practices for Securing Routing Pro-
tocols” (www.cisco.com/warp/public/cc/so/neso/vpn/prodlit/sfblp_wp.pdf).

Networking Devices
Several types of devices are used in LANs, MANs, and WANs to provide intercommuni-
cation among computers and networks. We need to have physical devices throughout
the network to actually use all the protocols and services we have covered up to this
point. The different networking devices vary according to their functionality, capabili-
ties, intelligence, and network placement. We will look at the following devices:
• Repeaters
• Bridges
• Routers
• Switches
Repeaters
A repeater provides the simplest type of connectivity, because it only repeats electrical
signals between cable segments, which enables it to extend a network. Repeaters work
at the physical layer and are add-on devices for extending a network connection over a
greater distance. The device amplifies signals because signals attenuate the farther they
have to travel.
Repeaters can also work as line conditioners by actually cleaning up the signals. This
works much better when amplifying digital signals than when amplifying analog sig-
nals, because digital signals are discrete units, which makes extraction of background
noise from them much easier for the amplifier. If the device is amplifying analog signals,
any accompanying noise often is amplified as well, which may further distort the signal.
A hub is a multiport repeater. A hub is often referred to as a concentrator because it is
the physical communication device that allows several computers and devices to
communicate with each other. A hub does not understand or work with IP or MAC
addresses. When one system sends a signal to go to another system connected to it, the
signal is broadcast to all the ports, and thus to all the systems connected to the
concentrator.

Wormhole Attack
An attacker can capture a packet at one location in the network and tunnel it to
another location in the network. In this type of attack, there are two attackers,
one at each end of the tunnel (referred to as a wormhole). Attacker A could cap-
ture an authentication token that is being sent to an authentication server, and
then send this token to the other attacker, who then uses it to gain unauthorized
access to a resource. This can take place on a wired or wireless network, but it is
easier to carry out on a wireless network because the attacker does not need to
actually penetrate a physical wire.
The countermeasure to this type of attack is to use a leash, which is just data
that are put into a header of the individual packets. The leash restricts the packet's
maximum allowed transmission distance. The leash can be either geographical,
which ensures that a packet stays within a certain distance of the sender, or tem-
poral, which limits the lifetime of the packet.
It is like the idea of using leashes for your pets. You put a collar (leash) on
your dog (packet) and it prevents him from leaving your yard (network segment).
Bridges
A bridge is a LAN device used to connect LAN segments. It works at the data link layer
and therefore works with MAC addresses. A repeater does not work with addresses; it
just forwards all signals it receives. When a frame arrives at a bridge, the bridge deter-
mines whether or not the MAC address is on the local network segment. If the MAC
address is not on the local network segment, the bridge forwards the frame to the neces-
sary network segment.
A bridge is used to divide overburdened networks into smaller segments to ensure
better use of bandwidth and traffic control. A bridge amplifies the electrical signal, as
does a repeater, but it has more intelligence than a repeater and is used to extend a LAN
and enable the administrator to filter frames so he can control which frames go where.
When using bridges, you have to watch carefully for broadcast storms. Because
bridges can forward all traffic, they forward all broadcast packets as well. This can over-
whelm the network and result in a broadcast storm, which degrades the network band-
width and performance.
Three main types of bridges are used: local, remote, and translation. A local bridge con-
nects two or more LAN segments within a local area, which is usually a building. A remote
bridge can connect two or more LAN segments over a MAN by using telecommunications
links. A remote bridge is equipped with telecommunications ports, which enable it to
connect two or more LANs that are separated by a long distance and brought together
via telephone or other types of transmission lines.
LANs being connected are different types and use different standards and protocols. For
example, consider a connection between a Token Ring network and an Ethernet network.
The frames on each network type are different sizes, the fields contain different protocol
information, and the two networks transmit at different speeds. If a regular bridge were
put into place, Ethernet frames would go to the Token Ring network, and vice versa, and
neither would be able to understand messages that came from the other network seg-
ment. A translation bridge does what its name implies—it translates between the two
network types.
The following list outlines the functions of a bridge:
• Segments a large network into smaller, more controllable pieces.
• Uses filtering based on MAC addresses.
• Joins different types of network links while retaining the same broadcast
domain.
• Isolates collision domains within the same broadcast domain.
• Bridging functionality can take place locally within a LAN or remotely to
connect two distant LANs.
• Can translate between protocol types.

NOTE Do not confuse routers with bridges. Routers work at the network
layer and filter packets based on IP addresses, whereas bridges work at the
data link layer and filter frames based on MAC addresses. Routers usually do
not pass broadcast information, but bridges do pass broadcast information.
Forwarding Tables
You go that way. And you—you go this way!
A bridge must know how to get a frame to its destination—that is, it must know to
which port the frame must be sent and where the destination host is located. Years ago,
network administrators had to type route paths into bridges so the bridges had static
paths indicating where to pass frames that were headed for different destinations. This
was a tedious task and prone to errors. Today, bridges use transparent bridging.
If transparent bridging is used, a bridge starts to learn about the network’s environ-
ment as soon as it is powered on and as the network changes. It does this by examining
frames and making entries in its forwarding tables. When a bridge receives a frame from
a new source computer, the bridge associates this new source address with the port on
which it arrived. It does this for all computers that send frames on the network. Eventu-
ally, the bridge knows the address of each computer on the various network segments
and to which port each is connected. If the bridge receives a request to send a frame to
a destination that is not in its forwarding table, it sends out a query frame on each net-
work segment except for the source segment. The destination host is the only one that
replies to this query. The bridge updates its table with this computer address and the
port to which it is connected, and forwards the frame.
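The learning process just described can be sketched in a few lines: record each frame's source MAC against its arrival port, forward out a single port when the destination is known, and flood all other ports when it is not. This is a simplified model (real bridges also age out stale entries and use the query/reply mechanism described above):

```python
class LearningBridge:
    """Simplified transparent bridge: learns source MACs, floods unknowns."""

    def __init__(self, ports):
        self.ports = ports
        self.forwarding_table = {}  # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: associate the frame's source address with the arrival port.
        self.forwarding_table[src_mac] = in_port
        # Forward: a known destination goes out one port; an unknown
        # destination is flooded out every port except the one it came in on.
        if dst_mac in self.forwarding_table:
            return [self.forwarding_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, "AA", "BB"))  # [2, 3]  BB unknown: flood all but port 1
print(bridge.receive(2, "BB", "AA"))  # [1]     AA was learned on port 1
print(bridge.receive(3, "CC", "BB"))  # [2]     BB is now known on port 2
```

After a few frames the table covers every active host, and traffic stops being flooded unnecessarily.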
Many bridges use the Spanning Tree Algorithm (STA), which adds more intelligence
to the bridges. STA ensures that frames do not circle networks forever, provides redun-
dant paths in case a bridge goes down, assigns unique identifiers to each bridge, assigns
priority values to these bridges, and calculates path costs. This creates much more effi-
cient frame-forwarding processes by each bridge. STA also enables an administrator to
indicate whether he wants traffic to travel certain paths instead of others.
If source routing is allowed, the packets contain the necessary information within
them to tell the bridge or router where they should go. The packets hold the forwarding
information so they can find their way to their destination without needing bridges and
routers to dictate their paths. If the computer wants to dictate its forwarding informa-
tion instead of depending on a bridge, how does it know the correct route to the desti-
nation computer? The source computer sends out explorer packets that arrive at the
destination computer. These packets contain the route information the packets had to
take to get to the destination, including what bridges and/or routers they had to pass
through. The destination computer then sends these packets back to the source com-
puter, and the source computer strips out the routing information, inserts it into the
packets, and sends them on to the destination.

CAUTION External devices and border routers should not accept packets
with source routing information within their headers, because that information
will override what is laid out in the forwarding and routing tables configured
on the intermediate devices. You want to control how traffic traverses your
network; you don’t want packets to have this type of control and be able to
go wherever they want. Source routing can be used by attackers to get around
certain bridge and router filtering rules.
Routers
We are going up the chain of the OSI layers while discussing various networking de-
vices. Repeaters work at the physical layer, bridges work at the data link layer, and rout-
ers work at the network layer. As we go up each layer, each corresponding device has
more intelligence and functionality because it can look deeper into the frame. A re-
peater looks at the electrical signal. The bridge can look at the MAC address within the
header. The router can peel back the first header information and look farther into the
frame and find out the IP address and other routing information. The farther a device
can look into a frame, the more decisions it can make based on the information within
the frame.
Routers are layer 3, or network layer, devices that are used to connect similar or dif-
ferent networks. (For example, they can connect two Ethernet LANs or an Ethernet LAN
to a Token Ring LAN.) A router is a device that has two or more interfaces and a routing
table so it knows how to get packets to their destinations. It can filter traffic based on
access control lists (ACLs), and it fragments packets when necessary. Because routers
have more network-level knowledge, they can perform higher-level functions, such as
calculating the shortest and most economical path between the sending and receiving
hosts.
Q&A
Question What is the difference between two LANs connected via a
bridge versus two LANs connected via a router?
Answer If two LANs are connected with a bridge, the LANs have
been extended, because they are both in the same broadcast domain.
A router can be configured not to forward broadcast information, so
if two LANs are connected with a router, an internetwork results. An
internetwork is a group of networks connected in a way that enables
any node on any network to communicate with any other node. The
Internet is an example of an internetwork.

A router discovers information about routes and changes that take place in a net-
work through its routing protocols (RIP, BGP, OSPF, and others). These protocols tell
routers if a link has gone down, if a route is congested, and if another route is more
economical. They also update routing tables and indicate if a router is having problems
or has gone down.
A bridge uses the same network address for all of its ports, but a router assigns a
different address per port, which enables it to connect different networks together.
The router may be a dedicated appliance or a computer running a networking op-
erating system that is dual-homed. When packets arrive at one of the interfaces, the
router compares those packets to its ACLs. This list indicates what packets are allowed
in and what packets are denied. Access decisions are based on source and destination
IP addresses, protocol type, and source and destination ports. An administrator may
block all packets coming from the 10.10.12.0 network, any FTP requests, or any packets
headed toward a specific port on a specific host, for example. This type of control is
provided by the ACLs, which the administrator must program and update as necessary.
What actually happens inside the router when it receives a packet? Let’s follow the
steps:
1. A packet is received on one of the interfaces of a router. The router views the
routing data.
2. The router retrieves the destination IP network address from the packet.
3. The router looks at its routing table to see which port matches the requested
destination IP network address.
4. If the router does not have information in its table about the destination
address, it sends out an ICMP error message to the sending computer
indicating that the message could not reach its destination.
5. If the router does have a route in its routing table for this destination, it
decrements the TTL value and sees whether the MTU is different for the
destination network. If the destination network requires a smaller MTU, the
router fragments the datagram.
6. The router changes header information in the packet so the packet can go
to the next correct router, or if the destination computer is on a connecting
network, the changes made enable the packet to go directly to the destination
computer.
7. The router sends the packet to its output queue for the necessary interface.
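Steps 2 through 5 above can be sketched as a forwarding function: match the destination against the routing table (the longest matching prefix wins when several routes cover the address), decrement the TTL, and compare the packet size against the outgoing interface's MTU. The routing table entries, interface names, and MTU values below are made up for illustration; a real router also resolves the next hop, rewrites headers, and queues the packet (steps 6 and 7).

```python
import ipaddress

# Hypothetical routing table: (prefix, outgoing interface, interface MTU)
routes = [
    (ipaddress.ip_network("10.10.12.0/24"), "eth1", 1500),
    (ipaddress.ip_network("10.10.0.0/16"),  "eth2", 1500),
    (ipaddress.ip_network("0.0.0.0/0"),     "eth0", 1400),  # default route
]

def forward(dst, ttl, packet_len):
    """Steps 2-5 from the list above, greatly simplified."""
    dst_ip = ipaddress.ip_address(dst)       # step 2: read destination address
    matches = [r for r in routes if dst_ip in r[0]]
    if not matches:
        return "ICMP destination unreachable"  # step 4: no route known
    # Step 3: the most specific (longest-prefix) route wins.
    net, iface, mtu = max(matches, key=lambda r: r[0].prefixlen)
    ttl -= 1                                 # step 5: decrement TTL
    if ttl == 0:
        return "ICMP time exceeded"
    fragment = packet_len > mtu              # step 5: fragment if MTU is smaller
    return iface, ttl, fragment

print(forward("10.10.12.7", ttl=64, packet_len=1500))  # ('eth1', 63, False)
print(forward("8.8.8.8", ttl=64, packet_len=1500))     # ('eth0', 63, True)
```

Note how 10.10.12.7 matches three routes but is sent out eth1, the most specific one, while 8.8.8.8 falls through to the default route, whose smaller MTU forces fragmentation of a 1,500-byte packet.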
Table 6-8 provides a quick review of the differences between routers and bridges.
When is it best to use a repeater, bridge, or router? A repeater is used if an adminis-
trator needs to expand a network and amplify signals so they do not weaken on longer
cables. However, a repeater will forward collision and broadcast information because it
does not have the intelligence to distinguish among different types of traffic.
Bridges work at the data link layer and have a bit more intelligence than a repeater.
Bridges can do simple filtering, and they separate collision domains, not broadcast
domains. A bridge should be used when an administrator wants to divide a network
into segments to reduce traffic congestion and excessive collisions.
A router splits up a network into collision domains and broadcast domains. A router
gives more of a clear-cut division between network segments than repeaters or bridges.
A router should be used if an administrator wants to have more defined control of where
the traffic goes, because more sophisticated filtering is available with routers, and when
a router is used to segment a network, the result is more controllable sections.
A router is used when an administrator wants to divide a network along the lines of
departments, workgroups, or other business-oriented divisions. A bridge divides seg-
ments based more on the traffic type and load.
Switches
I want to talk to you privately. Let’s talk through this switch.
Switches combine the functionality of a repeater and the functionality of a bridge. A
switch amplifies the electrical signal, like a repeater, and has the built-in circuitry and
intelligence of a bridge. It is a multiport connection device that provides connections
for individual computers or other hubs and switches. Any device connected to one port
can communicate with a device connected to another port with its own virtual private
link. How does this differ from the way in which devices communicate using a bridge
or a hub? When a frame comes to a hub, the hub sends the frame out through all of its
ports. When a frame comes to a bridge, the bridge sends the frame to the port to which
the destination network segment is connected. When a frame comes to a switch, the
switch sends the frame directly to the destination computer or network, which results
in a reduction of traffic. Figure 6-40 illustrates a network configuration that has com-
puters directly connected to their corresponding switches.
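The difference between a hub's flooding and a switch's learned, per-port forwarding can be sketched as a minimal transparent learning switch. The class and frame format below are illustrative assumptions, not a real switch implementation:

```python
# Sketch of a learning switch's forwarding logic (illustrative, not a real switch).
# The switch learns which port each source MAC lives on, then forwards frames
# only to the destination's port; unknown destinations are flooded like a hub.

class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, frame, in_port):
        self.mac_table[frame["src"]] = in_port         # learn the sender's port
        out = self.mac_table.get(frame["dst"])
        if out is None:                                # unknown destination: flood
            return [p for p in range(self.num_ports) if p != in_port]
        return [out]                                   # known: private virtual link

sw = LearningSwitch(4)
print(sw.receive({"src": "AA", "dst": "BB"}, in_port=0))  # BB unknown: flood ports 1-3
print(sw.receive({"src": "BB", "dst": "AA"}, in_port=2))  # AA learned: port 0 only
```

Once both hosts have been learned, every frame travels only between the two ports involved, which is the "virtual private link" described above.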
On Ethernet networks, computers have to compete for the same shared network
medium. Each computer must listen for activity on the network and transmit its data
when it thinks the coast is clear. This contention and the resulting collisions cause traf-
fic delays and use up precious bandwidth. When switches are used, contention and
collisions are not issues, which results in more efficient use of the network’s bandwidth
and decreased latency. Switches reduce or remove the sharing of the network medium
and the problems that come with it.
Bridge                                             Router
Reads header information, but does not alter it    Creates a new header for each packet
Builds forwarding tables based on MAC addresses    Builds routing tables based on IP addresses
Uses the same network address for all ports        Assigns a different network address per port
Filters traffic based on MAC addresses             Filters traffic based on IP addresses
Forwards broadcast packets                         Does not forward broadcast packets
Forwards traffic if a destination address is       Does not forward traffic that contains a
unknown to the bridge                              destination address unknown to the router
Table 6-8 Main Differences between Bridges and Routers

CISSP All-in-One Exam Guide
618
A switch is a multiport bridging device, and each port provides dedicated band-
width to the device attached to it. A port is bridged to another port so the two devices
have an end-to-end private link. The switch employs full-duplex communication, so
one wire pair is used for sending and another pair is used for receiving. This ensures the
two connected devices do not compete for the same bandwidth.
Basic switches work at the data link layer and forward traffic based on MAC ad-
dresses. However, today’s layer 3, layer 4, and other layer switches have more enhanced
functionality than layer 2 switches. These higher-level switches offer routing functional-
ity, packet inspection, traffic prioritization, and QoS functionality. These switches are
referred to as multilayered switches because they combine data link layer, network layer,
and other layer functionalities.
Multilayered switches use hardware-based processing power, which enables them
to look deeper within the packet, to make more decisions based on the information
found within the packet, and then to provide routing and traffic management tasks.
Usually this amount of work creates a lot of overhead and traffic delay, but multilayered
switches perform these activities within an application-specific integrated circuit
(ASIC). This means that most of the functions of the switch are performed at the
hardware and chip level rather than at the software level, making the switch much faster than routers.
NOTE While it is harder for attackers to sniff traffic on switched networks,
they should not be considered safe just because switches are involved.
Attackers commonly poison the ARP caches and MAC address tables used on
switched networks to divert traffic to their desired location.
Layer 3 and 4 Switches
I want my switch to do everything, even make muffins.
Layer 2 switches only have the intelligence to forward a frame based on its MAC
address and do not have a higher understanding of the network as a whole. A layer 3
switch has the intelligence of a router. It not only can route packets based on their IP
addresses, but also can choose routes based on availability and performance. A layer 3
switch is basically a router on steroids because it moves the route lookup functionality
to the more efficient switching hardware level.

Figure 6-40 Switches enable devices to communicate with each other via their own virtual link.
The basic distinction between layer 2, 3, and 4 switches is the header information
the device looks at to make forwarding or routing decisions (data link, network, or
transport OSI layers). But layer 3 and 4 switches can use tags, which are assigned to each
destination network or subnet. When a packet reaches the switch, the switch compares
the destination address with its tag information base, which is a list of all the subnets
and their corresponding tag numbers. The switch appends the tag to the packet and
sends it to the next switch. All the switches between this first switch and the destination
host just review this tag information to determine which route the packet needs to take,
instead of analyzing the full header. Once the packet reaches the last switch, this tag is
removed and the packet is sent to the destination. This process increases the speed of
routing of packets from one location to another.
The use of these types of tags, referred to as Multiprotocol Label Switching (MPLS),
not only allows for faster routing, but also addresses service requirements for the differ-
ent packet types. Some time-sensitive traffic (such as video conferencing) requires a
certain level of service (QoS) that guarantees a minimum rate of data delivery to meet
the requirements of a user or application. When MPLS is used, different priority infor-
mation is placed into the tags to help ensure that time-sensitive traffic has a higher
priority than less sensitive traffic, as shown in Figure 6-41.
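The tag-lookup idea can be sketched as a tiny label table. The label numbers, interfaces, and priority values below are made up for illustration; real MPLS label distribution and encapsulation are far more involved:

```python
# Sketch of MPLS-style label switching (illustrative only). Each switch keeps a
# small label table and forwards on the short label alone, never re-parsing the
# full IP header; a priority field in the entry models QoS treatment.

LABEL_TABLE = {
    # in_label: (out_label, out_interface, priority)
    17: (42, "if1", "high"),   # e.g., time-sensitive video conferencing traffic
    18: (43, "if2", "low"),    # e.g., bulk file transfer
}

def label_switch(in_label):
    """Swap the incoming label and pick the outgoing interface in one lookup."""
    out_label, interface, priority = LABEL_TABLE[in_label]
    return {"label": out_label, "interface": interface, "priority": priority}

print(label_switch(17))  # {'label': 42, 'interface': 'if1', 'priority': 'high'}
```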
Many enterprises today use a switched network in which computers are connected
to dedicated ports on Ethernet switches, Gigabit Ethernet switches, ATM switches, and
more. This evolution of switches, added services, and the capability to incorporate re-
peater, bridge, and router functionality have made switches an important part of to-
day’s networking world.
Figure 6-41 MPLS uses tags and tables for routing functions.

Because security requires control over who can access specific resources, more intel-
ligent devices can provide a higher level of protection because they can make more
detail-oriented decisions regarding who can access resources. When devices can look
deeper into the packets, they have access to more information to make access decisions,
which provides more granular access control.
As previously stated, switching makes it more difficult for intruders to sniff and
monitor network traffic because no broadcast and collision information is continually
traveling throughout the network. Switches provide a security service that other devices
cannot provide. Virtual LANs (VLANs) are an important part of switching networks,
because they enable administrators to have more control over their environment and
they can isolate users and groups into logical and manageable entities. VLANs are de-
scribed in the next section.
VLANs
The technology within switches has introduced the capability to use VLANs. VLANs en-
able administrators to separate and group computers logically based on resource re-
quirements, security, or business needs instead of the standard physical location of the
systems. When repeaters, bridges, and routers are used, systems and resources are
grouped in a manner dictated by their physical location. Figure 6-42 shows how com-
puters that are physically located next to each other can be grouped logically into dif-
ferent VLANs. Administrators can form these groups based on the users’ and company’s
needs instead of the physical location of systems and resources.
An administrator may want to place the computers of all users in the marketing
department in the same VLAN network, for example, so all users receive the same
broadcast messages and can access the same types of resources. This arrangement could
get tricky if a few of the users are located in another building or on another floor, but
VLANs provide the administrator with this type of flexibility. VLANs also enable an
administrator to apply particular security policies to respective logical groups. This way,
if tighter security is required for the payroll department, for example, the administrator
can develop a policy, add all payroll systems to a specific VLAN, and apply the security
policy only to the payroll VLAN.
A VLAN exists on top of the physical network, as shown in Figure 6-43. If worksta-
tion P1 wants to communicate with workstation D1, the message has to be routed—
even though the workstations are physically next to each other—because they are on
different logical networks.
NOTE The IEEE standard that defines how VLANs are to be constructed and
how tagging should take place to allow for interoperability is IEEE 802.1Q.
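As a rough illustration of how an 802.1Q tag carries VLAN membership and priority, here is a sketch that unpacks the 4-byte tag. The field layout (a 0x8100 TPID followed by a 3-bit priority, a 1-bit drop-eligible flag, and a 12-bit VLAN ID) follows the standard; the helper function itself is hypothetical:

```python
# Sketch of parsing an IEEE 802.1Q tag (illustrative). The 4-byte tag follows
# the source MAC in the Ethernet frame: a 0x8100 TPID, then a 16-bit TCI
# holding a 3-bit priority, a 1-bit drop-eligible flag, and a 12-bit VLAN ID.
import struct

def parse_dot1q(tag_bytes):
    tpid, tci = struct.unpack("!HH", tag_bytes)
    if tpid != 0x8100:
        raise ValueError("not an 802.1Q tag")
    return {
        "priority": tci >> 13,          # PCP: top 3 bits
        "drop_eligible": (tci >> 12) & 1,
        "vlan_id": tci & 0x0FFF,        # VID: low 12 bits
    }

# Tag for VLAN 100 with priority 5: TCI = (5 << 13) | 100
print(parse_dot1q(struct.pack("!HH", 0x8100, (5 << 13) | 100)))
# -> {'priority': 5, 'drop_eligible': 0, 'vlan_id': 100}
```

Because membership is just a field in the frame, an attacker who can inject tags of his own choosing can manipulate where traffic goes, which is the basis of the hopping attacks described next.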
While VLANs are used to segment traffic, attackers can still gain access to traffic that
is supposed to be “walled off” in another VLAN segment. VLAN hopping attacks allow
attackers to gain access to traffic in various VLAN segments. An attacker can have a
system act as though it is a switch. The system understands the tagging values being used
in the network and the trunking protocols and can insert itself between other VLAN
devices and gain access to the traffic going back and forth. Attackers can also insert tag-
ging values to manipulate the control of traffic at the data link layer.
Gateways
Gateway is a general term for software running on a device that connects two different
environments and that many times acts as a translator for them or somehow restricts
their interactions. Usually a gateway is needed when one environment speaks a differ-
ent language, meaning it uses a certain protocol that the other environment does not
understand. The gateway can translate Internetwork Packet Exchange (IPX) protocol
packets to IP packets, accept mail from one type of mail server and format it so another
type of mail server can accept and understand it, or connect and translate different data
link technologies such as FDDI to Ethernet.
Figure 6-42 VLANs enable administrators to manage logical networks.

Gateways perform much more complex tasks than connection devices such as rout-
ers and bridges. However, some people refer to routers as gateways when they connect
two unlike networks (Token Ring and Ethernet) because the router has to translate be-
tween the data link technologies. Figure 6-44 shows how a network access server (NAS)
functions as a gateway between telecommunications and network connections.
When networks connect to a backbone, a gateway can translate the different tech-
nologies and frame formats used on the backbone network versus the connecting LAN
protocol frame formats. If a bridge were set up between an FDDI backbone and an
Ethernet LAN, the computers on the LAN would not understand the FDDI protocols
and frame formats. In this case, a LAN gateway would be needed to translate the proto-
cols used between the different networks.
A popular type of gateway is an electronic mail gateway. Because several e-mail ven-
dors have their own syntax, message format, and way of dealing with message transmis-
sion, e-mail gateways are needed to convert messages between e-mail server software.
For example, suppose that David, whose corporate network uses Sendmail, writes an
e-mail message to Dan, whose corporate network uses Microsoft Exchange. The e-mail
gateway will convert the message into a standard that all mail servers understand—usu-
ally X.400—and pass it on to Dan’s mail server.
Figure 6-43 VLANs exist on a higher level than the physical network and are not bound to it.

Another example of a gateway is a voice and media gateway. Recently, there has
been a drive to combine voice and data networks. This provides for a lot of efficiency
because the same medium can be used for both types of data transfers. However, voice
is a streaming technology, whereas data are usually transferred in packets. So, this
shared medium eventually has to communicate with two different types of networks:
the telephone company’s PSTN, and routers that will take the packet-based data off
to the Internet. This means that a gateway must separate the combined voice and data
information and put it into a form that each of the networks can understand.
Table 6-9 lists the devices covered in this “Networking Devices” section and points
out their important characteristics.
Figure 6-44 Several types of gateways can be used in a network. A NAS is one example.
Device     OSI Layer     Functionality
Repeater   Physical      Amplifies the signal and extends networks.
Bridge     Data Link     Forwards frames and filters based on MAC addresses;
                         forwards broadcast traffic, but not collision traffic.
Router     Network       Separates and connects LANs, creating internetworks;
                         filters based on IP addresses.
Switch     Data Link     Provides a private virtual link between communicating
                         devices; allows for VLANs; reduces collisions; impedes
                         network sniffing.
Gateway    Application   Connects different types of networks; performs
                         protocol and format translations.
Table 6-9 Network Device Differences

PBXs
I have to dial a 9 to get an outside line.
Telephone companies use switching technologies to transmit phone calls to their
destinations. A telephone company’s central office houses the switches that connect
towns, cities, and metropolitan areas through the use of optical fiber rings. So, for ex-
ample, when Dusty makes a phone call from his house, the call first hits the local cen-
tral office of the telephone company that provides service to Dusty, and then the switch
within that office decides whether it is a local or long-distance call and where it needs
to go from there. A Private Branch Exchange (PBX) is a private telephone switch that is
located on a company’s property. This switch performs some of the same switching
tasks that take place at the telephone company’s central office. The PBX has a dedicated
connection to its local telephone company’s central office, where more intelligent
switching takes place.
A PBX can interface with several types of devices and provides a number of tele-
phone services. The voice data are multiplexed onto a dedicated line connected to the
telephone company’s central office. Figure 6-45 shows how data from different data
sources can be placed on one line at the PBX and sent to the telephone company’s
switching facility.
PBXs use digital switching devices that can control analog and digital signals. Older
PBXs may support only analog devices, but most PBXs have been updated to digital.
This move to digital systems and signals has reduced a number of the PBX and tele-
phone security vulnerabilities that used to exist. However, that in no way means PBX
fraud does not take place today. Many companies, for example, have modems hanging
off their PBX (or other transmission access methods) to enable the vendor to dial in
and perform maintenance on the system. These modems are usually unprotected
doorways into a company’s network. The modem should be activated only when a problem
requires the vendor to dial in, and it should be disabled otherwise.
Figure 6-45 A PBX combines different types of data on the same lines.

In addition, many PBX systems have default system administrator passwords that
are hardly ever changed. These passwords are set by default; therefore, if 100 companies
purchased and implemented 100 PBX systems from the PBX vendor ABC and they do
not reset the password, a phreaker (a phone hacker) who knows this default password
would now have access to 100 PBX systems. Once a phreaker breaks into a PBX system,
she can cause mayhem by rerouting calls, reconfiguring switches, or configuring the
system to provide her and her friends with free long-distance calls. This type of fraud
happens more often than most companies realize, because many companies do not
closely watch their phone bills.
PBX systems are also vulnerable to brute force and other types of attacks, in which
phreakers use scripts and dictionaries to guess the necessary credentials to gain access to
the system. In some cases, phreakers have listened to and changed people’s voice mes-
sages. So, for example, when people call to leave Bob a message, they might not hear his
usual boring message, but a new message that is screaming obscenities and insults.
CAUTION Unfortunately, many security people do not even think about a
PBX when they are assessing a network’s vulnerabilities and security level.
This is because telecommunication devices have historically been managed by
service providers and/or by someone on the staff who understands telephony.
The network administrator is usually not the person who manages the PBX,
so the PBX system commonly does not even get assessed. The PBX is just a
type of switch and it is directly connected to the company’s infrastructure;
thus, it is a doorway for the bad guys to exploit and enter. These systems need
to be assessed and monitored just like any other network device.
Network Diagramming
Our network diagram states that everything is perfect and secure.
Response: I am sure everything is fine then.
In reality you can never capture a full network in a diagram; networks are too com-
plex and too layered. Many organizations have a false sense of security when they have
a pretty network diagram that they can all look at and be proud of, but let’s dig deeper
into why this can be deceiving. From what perspective should you look at a network?
There can be a cabling diagram that shows you how everything is physically connected
(coaxial, UTP, fiber) and a wireless portion that describes the WLAN structure. There
can be a network diagram that illustrates the network in infrastructure layers of access,
aggregation, edge, and core. You can have a diagram that illustrates how the various
types of network routing take place (VLANs, MPLS connections, OSPF, IGRP, and BGP
links). You can have a diagram that shows you how different data flows take place (FTP,
IPSec, HTTP, SSL, L2TP, PPP, Ethernet, FDDI, ATM, etc.). You can have a diagram that
separates workstations and the core server types that almost every network uses (DNS,
DHCP, web farm, storage, print, SQL, PKI, mail, domain controllers, RADIUS, etc.). You
can look at a network based upon trust zones, which are enforced by filtering routers,
firewalls, and DMZ structures. You can look at a network based upon its IP subnet struc-
ture. But what if you look at a network diagram from a Microsoft perspective, which
illustrates many of these things but in forest, tree, domain, and OU containers? Then
you need to show remote access connections, VPN concentrators, extranets, and the
various MAN and WAN connections. How do we illustrate our IP telephony structure?
How do we integrate our mobile device administration servers into the diagram? How
do we document our new cloud computing infrastructure? How do we show the layers
of virtualization within our database? How are redundant lines and fault-tolerance so-
lutions marked? How does this network correlate and interact with our offsite location
that carries out parallel processing? And we have not even gotten to our security com-
ponents (firewalls, IDS, IPS, DLP, antimalware, content filters, etc.). And in the real
world whatever network diagrams a company does have are usually out of date because
they take a lot of effort to create and maintain.
[Figure: an example carrier-class network diagram spanning enterprise, home, and access networks (copper, optical, and wireless) connected through metro edge and optical core layers, with operation, security (NOC/SOC), application, and service platforms, PSTN gateways, and network service control components]
The point is that a network is a complex beast that cannot really be captured on one
piece of paper. Compare it to a human body. When you go into the doctor’s office you
see posters on the wall. One poster shows the circulatory system, one shows the
muscles, one shows bones, another shows organs, another shows tendons and ligaments; a
dentist office has a bunch of posters on teeth; if you are there for acupuncture there will
be a poster on acupuncture and reflexology points. And then there is a ton of stuff no
one makes posters for: hair follicles, skin, toenails, eyebrows, but these are all part of
one system.
So what does this mean to the security professional? You have to understand a net-
work from many different aspects if you are actually going to secure it. You start by
learning all this network stuff in a modular fashion, but you need to quickly under-
stand how it all works together under the covers. You can be a complete genius on how
everything works within your current environment but not fully understand that when
an employee connects her iPhone to her company laptop that is connected to the cor-
porate network and uses it as a modem, this is an unmonitored WAN connection that
can be used as a doorway by an attacker. Security is complex and demanding, so do not
ever get too cocky and remember that a diagram is just showing a perspective of a net-
work, not the whole network.
Key Terms
•Autonomous system (AS) A collection of connected IP routing prefixes
under the control of one or more network operators that presents
a common, clearly defined routing policy to the Internet. They are
uniquely identified as individual networks on the Internet.
•Distance-vector routing protocol A routing protocol that calculates
paths based on the distance (or number of hops) and a vector (a
direction).
•Link-state routing protocol A routing protocol used in packet-
switching networks where each router constructs a map of the
connectivity within the network and calculates the best logical paths,
which form its routing table.
•Border Gateway Protocol (BGP) The protocol that carries out core
routing decisions on the Internet. It maintains a table of IP networks,
or “prefixes,” which designate network reachability among autonomous
systems (ASs).
•Wormhole attack This takes place when an attacker captures packets
at one location in the network and tunnels them to another location in
the network for a second attacker to use against a target system.
•Spanning Tree Protocol (STP) A network protocol that ensures a
loop-free topology for any bridged Ethernet LAN and allows redundant
links to be available in case connection links go down.
•Source routing Allows a sender of a packet to specify the route the
packet takes through the network versus routers determining the path.
•Multiprotocol Label Switching (MPLS) A networking technology
that directs data from one network node to the next based on short
path labels rather than long network addresses, avoiding complex
lookups in a routing table.
•Virtual local area network (VLAN) A group of hosts that communicate
as if they were attached to the same broadcast domain, regardless of their
physical location. VLAN membership can be configured through software
instead of physically relocating devices or connections, which allows for
easier centralized management.
•VLAN hopping An exploit that allows an attacker on a VLAN to gain
access to traffic on other VLANs that would normally not be accessible.
•Private Branch Exchange (PBX) A telephone exchange that serves a
particular business, makes connections among the internal telephones,
and connects them to the public-switched telephone network (PSTN)
via trunk lines.

Firewalls
A wall that is on fire will stop anyone.
Response: That’s the idea.
Firewalls are used to restrict access to one network from another network. Most
companies use firewalls to restrict access to their networks from the Internet. They may
also use firewalls to restrict one internal network segment from accessing another
internal segment. For example, if the security administrator wants to make sure employees
cannot access the research and development network, he would place a firewall
between this network and all other networks and configure the firewall to allow only the
type of traffic he deems acceptable.
A firewall device supports and enforces the company’s network security policy. An
organizational security policy provides high-level directives on acceptable and
unacceptable actions as they pertain to protecting critical assets. The firewall has a more
defined and granular security policy that dictates what services are allowed to be
accessed, what IP addresses and ranges are to be restricted, and what ports can be
accessed. The firewall is described as a “choke point” in the network because all
communication should flow through it, and this is where traffic is inspected and restricted.
A firewall may be a server running a firewall software product or a specialized
hardware appliance. It monitors packets coming into and out of the network it is protecting.
It can discard packets, repackage them, or redirect them, depending upon the firewall
configuration. Packets are filtered based on their source and destination addresses and
ports, service, packet type, protocol type, header information, sequence bits, and
much more. Many times, companies set up firewalls to construct a demilitarized zone
(DMZ), which is a network segment located between the protected and unprotected
networks. The DMZ provides a buffer zone between the dangerous Internet and the
goodies within the internal network that the company is trying to protect. As shown in
Figure 6-46, two firewalls are usually installed to form the DMZ. The DMZ usually con-
tains web, mail, and DNS servers, which must be hardened systems because they would
be the first in line for attacks. Many DMZs also have an IDS sensor that listens for mali-
cious and suspicious behavior.
Many different types of firewalls are available, because each environment may have
unique requirements and security goals. Firewalls have gone through an evolution of
their own and have grown in sophistication and functionality. The following sections
describe the various types of firewalls.
The types of firewalls we will review are
• Packet filtering
• Stateful
• Proxy
• Dynamic packet filtering
• Kernel proxy
We will then dive into the three main firewall architectures, which are
• Screened host
• Multihomed
• Screened subnet
Then, we will look at honeypots and their uses, so please return all seats to their
upright position and lock your tray tables in front of you—we will be taking off shortly.
Figure 6-46 At least two firewalls, or firewall interfaces, are generally used to construct a DMZ.

Packet Filtering Firewalls
I don’t like this packet. Oh, but I like this packet. I don’t like this packet. This other packet is okay.
Packet filtering is a firewall technology that makes access decisions based upon
network-level protocol header values. The device that is carrying out packet filtering
processes is configured with ACLs, which dictate the type of traffic that is allowed into
and out of specific networks.
Packet filtering was the first generation of firewalls and it is the most rudimentary
type of all of the firewall technologies. The filters only have the capability of reviewing
protocol header information at the network and transport layers and carrying out
PERMIT or DENY actions on individual packets. This means the filters can make access
decisions based upon the following basic criteria:
• Source and destination IP addresses
• Source and destination port numbers
• Protocol types
• Inbound and outbound traffic direction
Packet filtering is built into a majority of the firewall products today and is a
capability that many routers also provide. The ACL filtering rules are enforced at the network
interface of the device, which is the doorway into or out of a network. As an analogy,
you could have a list of items you look for before allowing someone into your office
premises through your front door. Your list can indicate that a person must be 18 years
or older, have an access badge, and be wearing pants. When someone knocks on the
door, you grab your list, which you will use to decide if this person can or cannot come
inside. So your front door is one interface into your office premises. You can also have
a list that outlines who can exit your office premises through your backdoor, which is
another interface. As shown in Figure 6-47, a router has individual interfaces with their
own unique addresses, which provide doorways into and out of a network. Each inter-
face can have its own ACL values, which indicate what type of traffic is allowed in and
out of that specific interface.
We will cover some basic ACL rules to illustrate how packet filtering is implemented
and enforced. The following router configuration allows Telnet traffic to travel from
system 10.1.1.2 to system 172.16.1.1:
permit tcp host 10.1.1.2 host 172.16.1.1 eq telnet
This next rule permits UDP traffic from system 10.1.1.2 to 172.16.1.1:
permit udp host 10.1.1.2 host 172.16.1.1
If you want to ensure that no ICMP traffic enters through a certain interface, the
following ACL can be configured and deployed:
deny icmp any any

If you want to allow Internet-based (WWW) traffic from system 1.1.1.1 to system
5.5.5.5, you can use the following ACL:
permit tcp host 1.1.1.1 host 5.5.5.5 eq www
NOTE
NOTE Filtering inbound traffic is known as ingress filtering. Outgoing traffic
can also be filtered using a process referred to as egress filtering.
So when a packet arrives at a packet filtering device, the device starts at the top of its
ACL and compares the packet’s characteristics to each rule in turn. If a successful match
(permit or deny) is found, then the remaining rules are not processed. If no match is
found when the device reaches the end of the list, the traffic should be denied, but each
product is different. So if you are configuring a packet filtering device, make sure that if
no matches are identified, then the traffic is denied.
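The first-match evaluation and default-deny behavior described above can be sketched as follows. The rule tuple format is a made-up simplification; real ACL syntax and matching logic vary by vendor:

```python
# Sketch of first-match ACL evaluation with an explicit default deny
# (illustrative; real ACL syntax and semantics vary by vendor).

ACL = [
    # (action, protocol, src, dst, dst_port); None matches anything
    ("permit", "tcp",  "10.1.1.2", "172.16.1.1", 23),    # telnet
    ("permit", "udp",  "10.1.1.2", "172.16.1.1", None),
    ("deny",   "icmp", None,       None,         None),
]

def evaluate(packet):
    for action, proto, src, dst, port in ACL:
        if (proto == packet["proto"]
                and src in (None, packet["src"])
                and dst in (None, packet["dst"])
                and port in (None, packet["dst_port"])):
            return action          # first match wins; remaining rules are skipped
    return "deny"                  # end of list with no match: deny by default

print(evaluate({"proto": "tcp", "src": "10.1.1.2",
                "dst": "172.16.1.1", "dst_port": 23}))   # permit
print(evaluate({"proto": "icmp", "src": "1.2.3.4",
                "dst": "5.6.7.8", "dst_port": None}))    # deny
```

Note the explicit `return "deny"` at the end: since not every product denies unmatched traffic by default, the safe configuration makes the default deny explicit.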
Packet filtering is also known as stateless inspection because the device does not un-
derstand the context that the packets are working within. This means that the device
does not have the capability to understand the “full picture” of the communication that
is taking place between two systems, but can only focus on individual packet character-
istics. As we will see in a later section, stateful firewalls understand and keep track of a
full communication session, not just the individual packets that make it up. Stateless
firewalls make their decisions for each packet based solely on the data contained in that
individual packet. Stateful firewalls accumulate data about the packets they see and use
that data in an attempt to match incoming and outgoing packets to determine which
packets may be part of the same network communications session. By evaluating a
packet in the larger context of a network communications session, a stateful firewall has
much more complete information than a stateless firewall and can therefore more
readily recognize and reject packets that may be part of a network protocol–based attack.

Figure 6-47 ACLs are enforced at the network interface level.
Packet filtering devices can block many types of attacks at the network protocol
level, but they are not effective at protecting against attacks that exploit application-
specific vulnerabilities. That is because filtering only examines a packet’s header (i.e.,
delivery information) and not the data moving between the applications. Thus, a pack-
et filtering firewall cannot protect against packet content that could, for example, probe
for and exploit a buffer overflow in a given piece of software.
The lack of sophistication in packet filtering means that an organization should not
solely depend upon this type of firewall to protect its infrastructure and assets, but it
does not mean that this technology should not be used at all. Packet filtering is com-
monly carried out at the edge of a network to strip out all of the obvious “junk” traffic.
Since the rules are simple and only header information is analyzed, this type of filtering
can take place quickly and efficiently. After traffic is passed through a packet filtering
device, it is usually then processed by a more sophisticated firewall, which digs deeper
into the packet contents and can identify application-based attacks.
Some of the weaknesses of packet filtering firewalls are as follows:
• They cannot prevent attacks that employ application-specific vulnerabilities or
functions.
• The logging functionality present in packet filtering firewalls is limited.
• Most packet filtering firewalls do not support advanced user authentication
schemes.
• Many packet filtering firewalls cannot detect spoofed addresses.
• They may not be able to detect packet fragmentation attacks.
The advantages to using packet filtering firewalls are that they are scalable, they are
not application dependent, and they have high performance because they do not carry
out extensive processing on the packets. They are commonly used as the first line of
defense to strip out all the network traffic that is obviously malicious or unintended for
a specific network. The network traffic usually then has to be processed by more sophis-
ticated firewalls that will identify the not-so-obvious security risks.
Stateful Firewalls
This packet came from Texas, so it is okay.
Response: That’s a different type of state.
When packet filtering is used, a packet arrives at the firewall, and it runs through its
ACLs to determine whether this packet should be allowed or denied. If the packet is
allowed, it is passed on to the destination host, or to another network device, and the
packet filtering device forgets about the packet. This is different from stateful inspection, which remembers and keeps track of what packets went where until each particular connection is closed.
A stateful firewall is like a nosy neighbor who gets into people’s business and con-
versations. She keeps track of the suspicious cars that come into the neighborhood,
who is out of town for the week, and the postman who stays a little too long at the
neighbor lady’s house. This can be annoying until your house is burglarized. Then you
and the police will want to talk to the nosy neighbor, because she knows everything
going on in the neighborhood and would be the one most likely to know something
unusual happened. A stateful inspection firewall is nosier than a regular filtering device
because it keeps track of what computers say to each other. This requires that the fire-
wall maintain a state table, which is like a score sheet of who said what to whom.
Keeping track of the state of a protocol connection requires keeping track of many
variables. Most people understand the three-step handshake a TCP connection goes
through (SYN, SYN/ACK, ACK), but what does this really mean? If my system wants to
communicate with your system using TCP, it will send your system a packet and in the
TCP header the SYN flag value will be set to 1. This makes this packet a SYN packet. If
your system accepts my connection request, it will send back a packet that has both the
SYN and ACK flags within the packet header set to 1. This is a SYN/ACK packet. While
many people know about these three steps of setting up a TCP connection, they are not
always familiar with all of the other items that are being negotiated at this time. For
example, our systems will agree upon sequence numbers, how much data to send at a
time (window size), how potential transmission errors will be identified (CRC values),
etc. Figure 6-48 shows all of the values that make up a TCP header. So there is a lot of
information going back and forth between our systems just in this one protocol—TCP.
There are other protocols that are involved with networking that a stateful firewall has
to be aware of and keep track of.
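Since all of those negotiated values live in the TCP header, a firewall has to pull them out of the raw bytes before it can track them. This sketch unpacks the 20-byte fixed header layout defined in RFC 793 (the field names in the returned dictionary are illustrative):

```python
import struct

# Unpack the 20-byte fixed TCP header: source port, destination port,
# sequence number, acknowledgment number, data offset + flags, window
# size, checksum, and urgent pointer, all in network byte order.
def parse_tcp_header(raw):
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "data_offset": (offset_flags >> 12) * 4,   # header length in bytes
        "flags": offset_flags & 0x01FF,            # the 9 flag bits
        "window": window,
    }

# A hand-built SYN segment: client port 11111 to server port 80,
# sequence number 1000, SYN flag (0x002), window size 65535.
syn = struct.pack("!HHIIHHHH", 11111, 80, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
hdr = parse_tcp_header(syn)
print(hdr["flags"] & 0x002 != 0)   # True: the SYN bit is set
```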
So “keeping state of a connection” means to keep a scorecard of all the various pro-
tocol header values as packets go back and forth between systems. The values not only
have to be correct—they have to happen in the right sequence. For example, if a stateful
firewall receives a packet that has all TCP flag values turned to 1, something malicious
is taking place. Under no circumstances during a legitimate TCP connection should all
of these values be turned on like this. Attackers send packets with all of these values
turned to 1 with the hopes that the firewall does not understand or check these values
and just forwards the packets onto the target system. The target system will not know
how to process a TCP packet with all header values set to 1 because it is against the
protocol rules. The target system may freeze or reboot; thus, this is a type of DoS attack.
This is referred to as an XMAS attack because all the flags are “turned on” and the
packet is lit up like a Christmas tree.
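A firewall can reject such impossible flag combinations with a simple check. The sketch below uses the standard single-byte flag values; the set of "illegal" combinations shown here is illustrative, not an exhaustive list of what any product checks:

```python
# TCP flag bit values within the low byte of the flags field.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def is_illegal_flag_combo(flags):
    """Return True for flag combinations that cannot occur in a
    legitimate TCP exchange."""
    if flags == 0:                       # "null" packet: no flags at all
        return True
    if flags & SYN and flags & FIN:      # SYN and FIN together never happen
        return True
    if flags & 0x3F == 0x3F:             # every flag lit up: an XMAS packet
        return True
    return False

print(is_illegal_flag_combo(SYN))        # False: normal connection request
print(is_illegal_flag_combo(SYN | ACK))  # False: normal handshake reply
print(is_illegal_flag_combo(0x3F))       # True: all flags set (XMAS)
```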
In another situation, if my system sends your system a SYN/ACK packet and your
system did not first send me a SYN packet, this, too, is against the protocol rules. The
protocol communication steps have to follow the proper sequence. Attackers send SYN/
ACK packets to target systems hoping that the firewall interprets this as an already es-
tablished connection and just allows the packets to go to the destination system with-
out inspection. A stateful firewall will not be fooled by such actions because it keeps
track of each step of the communication. It knows how protocols are supposed to work,
and if something is out of order (incorrect flag values, incorrect sequence, etc.) it does
not allow the traffic to pass through.
When a connection begins between two systems, the firewall investigates all layers
of the packet (all headers, payload, and trailers). All of the necessary information about
the specific connection is stored in the state table (source and destination IP addresses,
source and destination ports, protocol type, header flags, sequence numbers, time-
stamps, etc.). Once the initial packets go through this in-depth inspection and every-
thing is deemed safe, the firewall then just reviews the network and transport header
portions for the rest of the session. The values of each header for each packet are com-
pared to what is in the current state table, and the table is updated to reflect the progres-
sion of the communication process. Scaling down the inspection of the full packet to
just the headers for each packet is done to increase performance.
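The bookkeeping described above can be sketched as a dictionary keyed by the connection tuple. This is a minimal illustration, not how any real firewall is implemented; the deep-inspection step is stubbed out as a callable, and only a bare SYN is allowed to open a connection in this sketch:

```python
# A minimal state table: the first packet of a connection gets a full
# inspection; later packets (in either direction) only need their headers
# checked against the recorded entry, which is updated as the
# conversation progresses.

state_table = {}

def handle_packet(src, dst, sport, dport, proto, flags, deep_inspect):
    key = (src, dst, sport, dport, proto)
    reverse = (dst, src, dport, sport, proto)    # reply direction
    if key in state_table or reverse in state_table:
        entry = state_table.get(key) or state_table[reverse]
        entry["packets"] += 1          # header-only check from here on
        entry["last_flags"] = flags
        return "pass"
    if not deep_inspect(flags):        # full inspection for the first packet
        return "drop"
    state_table[key] = {"packets": 1, "last_flags": flags}
    return "pass"

# Only connections that start with a bare SYN are admitted in this sketch.
starts_with_syn = lambda flags: flags == 0x02

print(handle_packet("10.0.0.5", "1.2.3.4", 40000, 80, "tcp", 0x02, starts_with_syn))  # pass
print(handle_packet("1.2.3.4", "10.0.0.5", 80, 40000, "tcp", 0x12, starts_with_syn))  # pass: known reply
print(handle_packet("6.6.6.6", "10.0.0.5", 5555, 80, "tcp", 0x12, starts_with_syn))   # drop: unsolicited SYN/ACK
```

The third call is the SYN/ACK trick described above: because no matching entry exists in the state table, the packet gets the full first-packet inspection and is dropped.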
TCP is considered a connection-oriented protocol, and the various steps and states
this protocol operates within are very well defined. A connection progresses through a
series of states during its lifetime. The states are LISTEN, SYN-SENT, SYN-RECEIVED,
ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-
WAIT, and the fictional state CLOSED. A stateful firewall keeps track of each of these
states for each packet that passes through, along with the corresponding acknowledgment and sequence numbers. If the acknowledgment and/or sequence numbers are out of order, this could imply that a replay attack is underway and the firewall will protect the internal systems from this activity.

Figure 6-48 TCP header. [Diagram of the TCP header layout: source and destination ports, sequence number, acknowledgment number, data offset, flags (CWR, ECE, URG, ACK, PSH, RST, SYN, FIN), window, checksum, urgent pointer, and options.]
Nothing is ever simple in life, including the standardization of network protocol
communication. While the previous statements are true pertaining to the states of a
TCP connection, in some situations an application layer protocol has to change these
basic steps. For example, FTP uses an unusual communication exchange when initial-
izing its data channel compared to all of the other application layer protocols. FTP basi-
cally sets up two sessions just for one communication exchange between two computers.
The states of the two individual TCP connections that make up an FTP session can be
tracked in the normal fashion, but the state of the FTP connection follows different
rules. For a stateful device to be able to properly monitor the traffic of an FTP session,
it must be able to take into account the way that FTP uses one outbound connection for
the control channel and one inbound connection for the data channel. If you were
configuring a stateful firewall, you would need to understand the particulars of some
specific protocols to ensure that each is being properly inspected and controlled.
Since TCP is a connection-oriented protocol, it has clearly defined states during the
connection establishment, maintenance, and tearing-down stages. UDP is a connec-
tionless protocol, which means that none of these steps take place. UDP holds no state,
which makes it harder for a stateful firewall to keep track of. For connectionless proto-
cols a stateful firewall keeps track of source and destination addresses, UDP header
values, and some ACL rules. This connection information is also stored in the state ta-
ble and tracked. Since the protocol does not have a specific tear-down stage, the firewall
will just time out the connection after a period of inactivity and remove the data being
kept pertaining to that connection from the state table.
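The idle-timeout behavior for connectionless traffic can be sketched as a table of flows with timestamps. The timeout value and function names here are illustrative; real products use their own defaults and eviction policies:

```python
import time

# Pseudo-connections for a connectionless protocol: each UDP flow is
# remembered with a timestamp, and entries that stay idle longer than
# the timeout are purged, since UDP has no tear-down handshake to
# signal that the conversation is over.

UDP_IDLE_TIMEOUT = 30.0   # seconds; an illustrative value

udp_table = {}

def note_udp_packet(src, dst, sport, dport, now=None):
    now = time.monotonic() if now is None else now
    udp_table[(src, dst, sport, dport)] = now   # refresh the idle timer

def expire_idle_flows(now=None):
    now = time.monotonic() if now is None else now
    for key in [k for k, last in udp_table.items()
                if now - last > UDP_IDLE_TIMEOUT]:
        del udp_table[key]

note_udp_packet("10.0.0.5", "8.8.8.8", 40001, 53, now=100.0)
note_udp_packet("10.0.0.6", "8.8.8.8", 40002, 53, now=125.0)
expire_idle_flows(now=131.0)    # first flow has been idle 31s: removed
print(len(udp_table))           # 1
```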
An interesting complexity of stateful firewalls and UDP connections is how the
ICMP protocol comes into play. Since UDP is connectionless, it does not provide a
mechanism to allow the receiving computer to tell the sending computer that data is
coming too fast. In TCP, the receiving computer can alter the window value in its head-
er, which tells the sending computer to reduce the amount of data that is being sent.
The message is basically, “You are overwhelming me and I cannot process the amount
of data you are sending me. Slow down.” UDP does not have a window value in its
header, so instead the receiving computer sends an ICMP packet that provides the same
function. But now this means that the stateful firewall must keep track of and allow
associated ICMP packets with specific UDP connections. If the firewall does not allow
the ICMP packets to get to the sending system, the receiving system could get over-
whelmed and crash. This is just one example of the complexity that comes into play
when a firewall has to do more than just packet filtering. Although stateful inspection
provides an extra step of protection, it also adds more complexity because this device
must now keep a dynamic state table and remember connections.
Stateful inspection firewalls unfortunately have been the victims of many types of
DoS attacks. Several types of attacks are aimed at flooding the state table with bogus
information. The state table is a resource, similar to a system’s hard drive space, mem-
ory, and CPU. When the state table is stuffed full of bogus information, the device may
either freeze or reboot. In addition, if this firewall must be rebooted for some reason, it
will lose its information on all recent connections; thus, it may deny legitimate packets.
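One common way to keep a flood from exhausting the state table is to cap its size and evict old entries once the cap is reached. The sketch below evicts the oldest entry; this is one illustrative policy, not the behavior of any particular product (some instead refuse new connections or evict the longest-idle entry):

```python
from collections import OrderedDict

# Cap the state table so a flood of bogus connection attempts cannot
# consume unbounded memory; here the oldest entry is evicted when the
# cap is hit.

MAX_ENTRIES = 3   # tiny cap for demonstration

table = OrderedDict()

def add_connection(key):
    if key in table:
        table.move_to_end(key)          # refresh an existing entry
        return "refreshed"
    if len(table) >= MAX_ENTRIES:
        table.popitem(last=False)       # evict the oldest entry
    table[key] = True
    return "added"

for port in (1, 2, 3, 4):               # the 4th insert evicts the 1st
    add_connection(("6.6.6.6", "10.0.0.5", port, 80))

print(len(table))                                 # 3: capped
print(("6.6.6.6", "10.0.0.5", 1, 80) in table)    # False: oldest evicted
```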

Proxy Firewalls
Meet my proxy. He will be our middleman.
A proxy is a middleman. It intercepts and inspects messages before delivering them
to the intended recipients. Suppose you need to give a box and a message to the presi-
dent of the United States. You couldn’t just walk up to the president and hand over these
items. Instead, you would have to go through a middleman, likely the Secret Service,
who would accept the box and message and thoroughly inspect the box to ensure noth-
ing dangerous was inside. This is what a proxy firewall does—it accepts messages either
entering or leaving a network, inspects them for malicious information, and, when it
decides the messages are okay, passes the data on to the destination computer.
A proxy firewall stands between a trusted and untrusted network and makes the con-
nection, each way, on behalf of the source. What is important is that a proxy firewall
breaks the communication channel; there is no direct connection between the two com-
municating devices. Where a packet filtering device just monitors traffic as it is travers-
ing a network connection, a proxy ends the communication session and restarts it on
behalf of the sending system. Figure 6-49 illustrates the steps of a proxy-based firewall.
Notice that the firewall is not just applying ACL rules to the traffic, but stops the user
connection at the internal interface of the firewall itself and then starts a new session
on behalf of this user on the external interface. When the external web server replies to
the request, this reply goes to the external interface of the proxy firewall and ends. The
proxy firewall examines the reply information and if it is deemed safe, the firewall starts
a new session from itself to the internal system. This is just like our analogy of what the Secret Service does between you and the president.
Now a proxy technology can actually work at different layers of a network stack. A
proxy-based firewall that works at the lower layers of the OSI model is referred to as a
circuit-level proxy. A proxy-based firewall that works at the application layer is, strange-
ly enough, called an application-level proxy.
A circuit-level proxy creates a connection (circuit) between the two communicating
systems. It works at the session layer of the OSI model and monitors traffic from a
network-based view. This type of proxy cannot “look into” the contents of a packet;
thus, it does not carry out deep-packet inspection. It can only make access decisions
based upon protocol header and session information that is available to it. While this
means that it cannot provide as much protection as an application-level proxy, because
it does not have to understand application layer protocols, it is considered application independent. So it cannot provide the detail-oriented protection that a proxy that works at a higher level can, but this allows it to provide a broader range of protection where application layer proxies may not be appropriate or available.

Stateful-Inspection Firewall Characteristics
The following lists some important characteristics of a stateful-inspection firewall:
• Maintains a state table that tracks each and every communication session
• Provides a high degree of security and does not introduce the performance hit that application proxy firewalls introduce
• Is scalable and transparent to users
• Provides data for tracking connectionless protocols such as UDP and ICMP
• Stores and updates the state and context of the data within the packets
NOTE Traffic sent to the receiving computer through a circuit-level proxy
appears to have originated from the firewall instead of the sending system. This
is useful for hiding information about the internal computers on the network
the firewall is protecting.
Application-level proxies inspect the packet up through the application layer. Where
a circuit-level proxy only has insight up to the session layer, an application-level proxy
understands the packet as a whole and can make access decisions based on the content
of the packets. They understand various services and protocols and the commands that
are used by them. An application-level proxy can distinguish between an FTP GET com-
mand and an FTP PUT command, for example, and make access decisions based on
this granular level of information; on the other hand, packet filtering firewalls and cir-
cuit-level proxies can allow or deny FTP requests only as a whole, not by the commands
used within the FTP protocol.
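This command-level awareness can be illustrated with a tiny FTP filter. Note that the GET and PUT operations the text mentions appear on the wire as the RETR and STOR control-channel commands; the policy below (downloads allowed, uploads blocked) is an invented example, not a recommendation:

```python
# An application-aware filter: unlike a packet filter, which could only
# allow or block TCP port 21 wholesale, this inspects the FTP control
# command itself and can permit downloads (RETR) while denying
# uploads (STOR).

ALLOWED_FTP_COMMANDS = {"USER", "PASS", "RETR", "LIST", "QUIT"}

def filter_ftp_command(line):
    command = line.strip().split(" ", 1)[0].upper()
    return "pass" if command in ALLOWED_FTP_COMMANDS else "block"

print(filter_ftp_command("RETR report.pdf"))   # pass: download allowed
print(filter_ftp_command("STOR secrets.zip"))  # block: upload denied
```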
An application-level proxy firewall has one proxy per protocol. A computer can
have many types of protocols (FTP, NTP, SMTP, Telnet, HTTP, and so on). Thus, one
application-level proxy per protocol is required. This does not mean one proxy firewall
per service is required, but rather that one portion of the firewall product is dedicated
to understanding how a specific protocol works and how to properly filter it for suspi-
cious data.
Figure 6-49 Proxy firewall breaks connection. [Diagram: (1) A user with a web browser configured to use the firewall proxy server requests a web page from an Internet website. (2) The firewall/HTTP proxy accepts the connection request. (3) The firewall/HTTP proxy requests the web page on behalf of the end user. (4) The web server responds to the HTTP request from the proxy server, unaware the request is coming from a user behind a proxy. (5) Once the page is cached, the proxy server sends the user the requested web page.]

CISSP All-in-One Exam Guide
638
Providing application-level proxy protection can be a tricky undertaking. The proxy
must totally understand how specific protocols work and what commands within that
protocol are legitimate. This is a lot to know and look at during the transmission of
data. As an analogy, picture a screening station at an airport that is made up of many
employees, all with the job of interviewing people before they are allowed into the
airport and onto an airplane. These employees have been trained to ask specific ques-
tions and detect suspicious answers and activities, and have the skill set and authority
to detain suspicious individuals. Now, suppose each of these employees speaks a differ-
ent language because the people they interview come from different parts of the world.
So, one employee who speaks German could not understand and identify suspicious
answers of a person from Italy because they do not speak the same language. This is the
same for an application-level proxy firewall. Each proxy is a piece of software that has
been designed to understand how a specific protocol “talks” and how to identify suspi-
cious data within a transmission using that protocol.
NOTE If the application-level proxy firewall does not understand a certain
protocol or service, it cannot protect this type of communication. In this
scenario, a circuit-level proxy is useful because it does not deal with such
complex issues. An advantage of a circuit-level proxy is that it can handle a
wider variety of protocols and services than an application-level proxy can,
but the downfall is that the circuit-level proxy cannot provide the degree of
granular control that an application-level proxy provides. Life is just full of
compromises.
A circuit-level proxy works similar to a packet filter in that it makes access decisions
based on address, port, and protocol type header values. It looks at the data within the
packet header rather than the data at the application layer of the packet. It does not
know whether the contents within the packet are safe or unsafe; it only understands the
traffic from a network-based view.
An application-level proxy, on the other hand, is dedicated to a particular protocol
or service. At least one proxy is used per protocol because one proxy could not properly
interpret all the commands of all the protocols coming its way. A circuit-level proxy
works at a lower layer of the OSI model and does not require one proxy per protocol
because it does not look at such detailed information.
SOCKS is an example of a circuit-level proxy gateway that provides a secure channel
between two computers. When a SOCKS-enabled client sends a request to access a com-
puter on the Internet, this request actually goes to the network’s SOCKS proxy firewall,
as shown in Figure 6-50, which inspects the packets for malicious information and
checks its policy rules to see whether this type of connection is allowed. If the packet is
acceptable and this type of connection is allowed, the SOCKS firewall sends the mes-
sage to the destination computer on the Internet. When the computer on the Internet
responds, it sends its packets to the SOCKS firewall, which again inspects the data and
then passes the packets on to the client computer.
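The SOCKS exchange begins with a fixed byte layout. As a sketch, this builds the SOCKS version 5 (RFC 1928) greeting and CONNECT request a client would send to the proxy; no network I/O happens here, only the message bytes are constructed:

```python
import socket
import struct

# SOCKS5 client greeting: version 5, one authentication method offered,
# method 0x00 ("no authentication required").
def socks5_greeting():
    return bytes([0x05, 0x01, 0x00])

# SOCKS5 CONNECT request for an IPv4 destination: version, command
# (0x01 = CONNECT), reserved byte, address type (0x01 = IPv4), the four
# address bytes, then the destination port in network byte order.
def socks5_connect_request(ip, port):
    return (bytes([0x05, 0x01, 0x00, 0x01])
            + socket.inet_aton(ip)
            + struct.pack("!H", port))

req = socks5_connect_request("93.184.216.34", 80)
print(req.hex())   # 050100015db8d8220050
```

A SOCKS-enabled client would send the greeting, read the proxy's chosen method, then send the CONNECT request; from then on it reads and writes application data through the proxy as if it were talking to the destination directly.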

Application-Level Proxy Firewalls
Application-level proxy firewalls, like all technologies, have their pros and cons. It is important to fully understand the characteristics of this type of firewall before purchasing and deploying this type of solution.
Characteristics of application-level proxy firewalls:
• They have extensive logging capabilities due to the firewall being able
to examine the entire network packet rather than just the network
addresses and ports.
• Application layer proxy firewalls are capable of authenticating users
directly, as opposed to packet filtering firewalls and stateful-inspection
firewalls, which can usually only carry out system authentication.
• Since application layer proxy firewalls are not simply layer 3 devices,
they can address spoofing attacks and other sophisticated attacks.
Disadvantages of using application-level proxy firewalls:
• Are not generally well suited to high-bandwidth or real-time
applications.
• Tend to be limited in terms of support for new network applications
and protocols.
• Create performance issues because of the necessary per-packet
processing requirements.
Figure 6-50 Circuit-level proxy firewall

The SOCKS firewall can screen, filter, audit, log, and control data flowing in and out
of a protected network. Because of its popularity, many applications and protocols have
been configured to work with SOCKS in a manner that takes less configuration on the
administrator’s part, and various firewall products have integrated SOCKS software to
provide circuit-based protection.
NOTE Remember that whether an application- or circuit-level proxy firewall
is being used, it is still acting as a proxy. Both types of proxy firewalls deny
actual end-to-end connectivity between the source and destination systems.
In attempting a remote connection, the client connects to and communicates
with the proxy; the proxy, in turn, establishes a connection to the destination
system and makes requests to it on the client’s behalf. The proxy maintains
two independent connections for every one network transmission. It essentially
turns a two-party session into a four-party session, with the middle process
emulating the two real systems.
Dynamic Packet Filtering
When an internal system needs to communicate to an entity outside its trusted net-
work, it must choose a source port so the receiving system knows how to respond prop-
erly. Ports up to 1023 are called well-known ports and are reserved for server-side services.
The sending system must choose a dynamic port higher than 1023 when it sets up a
connection with another entity. The dynamic packet-filtering firewall then creates an
ACL that allows the external entity to communicate with the internal system via this high port. If this were not an available option for your dynamic packet-filtering firewall, you would have to “punch holes” in your firewalls for all ports above 1023, because the client side chooses these ports dynamically and the firewall would never know exactly on which port to allow or disallow traffic.

Application-Level vs. Circuit-Level Proxy Firewall Characteristics
Characteristics of application-level proxy firewalls:
• Each protocol that is to be monitored must have a unique proxy.
• Provide more protection than circuit-level proxy firewalls.
• Require more processing per packet and thus are slower than a circuit-level proxy firewall.
Characteristics of circuit-level proxy firewalls:
• Do not require a proxy for each and every protocol.
• Do not provide the deep-inspection capabilities of an application layer proxy.
• Provide security for a wider range of protocols.
NOTE The standard port for HTTP is 80, which means a server will have
a service listening on port 80 for HTTP traffic. HTTP (and most other
protocols) works in a type of client/server model. The server portion uses
the well-known ports (FTP uses 20 and 21; SMTP uses 25) so everyone knows
how to connect to those services. A client will not use one of these well-
known port numbers for itself, but will choose a random, higher port number.
An internal system could choose a source port of 11,111 for its message to the out-
side system. This frame goes to the dynamic packet-filtering firewall, which builds an
ACL, as illustrated in Figure 6-51, that indicates a response from the destination com-
puter to this internal system’s IP address and port 11,111 is to be allowed. When the
destination system sends a response, the firewall allows it. These ACLs are dynamic in
nature, so once the connection is finished (either a FIN or RST packet is received), the
ACL is removed from the list. On connectionless protocols, such as UDP, the connec-
tion times out and then the ACL is removed.
The benefit of a dynamic packet-filtering firewall is that it gives you the option of
allowing any type of traffic outbound and permitting only response traffic inbound.
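The dynamic behavior described above can be sketched as a rule table that gains a return-path entry when an outbound connection is seen and loses it when the connection closes. The function names are illustrative:

```python
# Dynamic packet filtering: an outbound connection from a high client
# port causes a temporary inbound rule to be created for the response
# traffic; the rule disappears when a FIN or RST closes the connection.

dynamic_rules = set()

def outbound(src, sport, dst, dport):
    dynamic_rules.add((dst, dport, src, sport))   # allow the reply path

def inbound_allowed(src, sport, dst, dport):
    return (src, sport, dst, dport) in dynamic_rules

def connection_closed(src, sport, dst, dport):
    dynamic_rules.discard((dst, dport, src, sport))

outbound("10.0.0.5", 11111, "5.5.5.5", 80)
print(inbound_allowed("5.5.5.5", 80, "10.0.0.5", 11111))   # True: reply permitted
print(inbound_allowed("6.6.6.6", 80, "10.0.0.5", 11111))   # False: unsolicited
connection_closed("10.0.0.5", 11111, "5.5.5.5", 80)
print(inbound_allowed("5.5.5.5", 80, "10.0.0.5", 11111))   # False: rule removed
```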
Figure 6-51 Dynamic packet filtering adds ACLs when connections are created.

Kernel Proxy Firewalls
This firewall is made from kernels of corn.
Response: Why are you here?
A kernel proxy firewall is considered a fifth-generation firewall. It differs from all the
previously discussed firewall technologies because it creates dynamic, customized net-
work stacks when a packet needs to be evaluated.
When a packet arrives at a kernel proxy firewall, a new virtual network stack is cre-
ated, which is made up of only the protocol proxies necessary to examine this specific
packet properly. If it is an FTP packet, then the FTP proxy is loaded in the stack. The
packet is scrutinized at every layer of the stack. This means the data link header will be
evaluated along with the network header, transport header, session layer information,
and the application layer data. If anything is deemed unsafe at any of these layers, the
packet is discarded.
Kernel proxy firewalls are faster than application layer proxy firewalls because all of
the inspection and processing takes place in the kernel and does not need to be passed
up to a higher software layer in the operating system. It is still a proxy-based system, so
the connection between the internal and external entity is broken by the proxy acting
as a middleman, and it can perform NAT by changing the source address, as do the
preceding proxy-based firewalls.
Table 6-10 lists the important concepts and characteristics of the firewall types dis-
cussed in the preceding sections. Although various firewall products can provide a mix
of these services and work at different layers of the OSI model, it is important you un-
derstand the basic definitions and functionalities of these firewall types.
Packet filtering (network layer): Looks at destination and source addresses, ports, and services requested. Routers use ACLs to monitor network traffic.
Application-level proxy (application layer): Looks deep into packets and makes granular access control decisions. Requires one proxy per protocol.
Circuit-level proxy (session layer): Looks only at the packet header information. Protects a wider range of protocols and services than an application-level proxy, but does not provide the detailed level of control available with an application-level proxy.
Stateful (network layer): Looks at the state and context of packets. Keeps track of each conversation using a state table.
Kernel proxy (application layer): Faster because processing is done in the kernel. One network stack is created for each packet.
Table 6-10 Comparison of Different Types of Firewalls

Today’s Firewalls
What do today’s firewalls do?
Response: Everything.
In school and in books you learn things in a modular fashion. You need to under-
stand how packet filtering works and how it is different from stateful inspection. You
need to understand how a proxy works and what it means if it works in kernel mode.
But technology, like life, is much more complicated when you get out of school because
so many of the things you learn individually are actually very entwined with each other.
Learning the basics of networking is like studying individual threads and strings. They
are foundational components that are later interwoven to make different kinds of fab-
rics. Every network in the world is different in some way, but each is made up of the
same foundational ingredients. The same can be said about firewall products. Today’s
firewall products usually provide a combination of the various technologies we de-
scribed earlier (filtering, proxy, stateful) in their own proprietary ways. Some people
learn the products directly (PIX, CheckPoint, etc.) and while they have intimate knowl-
edge of how to configure and maintain these products, they may not have a clear un-
derstanding of all types of firewall technologies available and how they work together.
Today’s firewalls not only use a combination of the modular technologies we cov-
ered earlier, each product type commonly has its own specific focus. So we have per-
sonal, enterprise, web-based, mobile firewall types and more. Many enterprise gateway
firewalls have content-filtering capabilities, which allow an administrator to develop
keywords that can be used to detect unwanted traffic. The keywords can be used to
identify spam, malicious code, pornographic material, or attempts by employees to transmit sensitive data outside of the network. Content filtering can be more sophisticated, with logic rules developed to identify malicious behavior.
Appliances
A firewall may take the form of either software installed on a regular computer
using a regular operating system or a dedicated hardware appliance that has its
own operating system. The second choice is usually more secure, because the
vendor uses a stripped-down version of an operating system (usually Linux or
BSD Unix). Operating systems are full of code and functionality that are not nec-
essary for a firewall. This extra complexity opens the doors for vulnerabilities. If a
hacker can exploit and bring down a company’s firewall, then the company is
very exposed and in danger.
In today’s jargon, dedicated hardware devices that have stripped-down oper-
ating systems and limited and focused software capabilities are called appliances.
Where an operating system has to provide a vast array of functionality, an appli-
ance provides very focused functionality—as in just being a firewall.
If a software-based firewall is going to run on a regular system, then the un-
necessary user accounts should be disabled, unnecessary services deactivated, un-
used subsystems disabled, unneeded ports closed, etc. If firewall software is going
to run on a regular system and not a dedicated appliance, then the system needs to
be fully locked down.

CISSP All-in-One Exam Guide
644
Bastion Host
This guy is going to get hit first; he’d better be tough.
A system is considered a bastion host if it is a highly exposed device that is
most likely to be targeted by attackers. The closer any system is to an untrusted
network, as in the Internet, the more it is considered a target candidate since it
has a smaller number of layers of protection guarding it. If a system is on the
public side of a DMZ or is directly connected to an untrusted network, it is con-
sidered a bastion host; thus, it needs to be extremely locked down.
The system should have all unnecessary services disabled, unnecessary ac-
counts disabled, unneeded ports closed, unused applications removed, unused
subsystems and administrative tools removed, etc. The attack surface of the sys-
tem needs to be reduced, which means the number of potential vulnerabilities
need to be reduced as much as possible.
In many enterprise environments, basic packet filtering is carried out by routers that
work on the outer edges of the network. Different types of traffic are routed to where
they need to go, so SMTP traffic goes to the mail server, HTTP traffic goes to the web
server, DNS traffic goes to the DNS server, etc. The more targeted firewall products are
deployed closest to the types of technologies that they were developed to protect. So
web traffic passes through a web-based firewall, e-mail passes through a product that
can carry out e-mail content filtering, traffic that needs to communicate with an inter-
nal database will pass through an application-level proxy firewall that carries out in-
spection that can identify things like SQL injection attacks, and individual workstations
and mobile devices will have host-based firewalls installed.
Organizations need to ensure that the correct firewall technology is in place to
monitor specific network traffic types and protect unique resource types. The firewalls
also have to be properly placed; we will cover this topic in the next section.
NOTE Firewall technology has evolved as attack types have evolved. The first-
generation firewalls could only monitor network traffic. As attackers moved
from just carrying out network-based attacks (DoS, fragmentation, spoofing,
etc.) to software-based attacks (buffer overflows, injections, malware, etc.),
new generations of firewalls were developed to monitor for these types of
attacks.
Firewall Architecture
Firewalls are great, but where do we put them?
Firewalls can be placed in a number of areas on a network to meet particular needs.
They can protect an internal network from an external network and act as a choke point
for all traffic. A firewall can be used to segment and partition network sections and
enforce access controls between two or more subnets. Firewalls can also be used to
provide a DMZ architecture. And as covered in the previous section, the right firewall
type needs to be placed in the right location. Organizations have common needs for
firewalls; hence, they keep them in similar places on their networks. We will see more
on this topic in the following sections.

Dual-Homed Firewall Dual-homed refers to a device that has two interfaces: one
facing the external network and the other facing the internal network. If firewall soft-
ware is installed on a dual-homed device, and it usually is, the underlying operating
system should have packet forwarding and routing turned off for security reasons. If
they are enabled, the computer may not apply the necessary ACLs, rules, or other re-
strictions required of a firewall. When a packet comes to the external NIC from an un-
trusted network on a dual-homed firewall and the operating system has forwarding
enabled, the operating system will forward the traffic instead of passing it up to the
firewall software for inspection.
Many network devices today are multihomed, which just means they have several
NICs that are used to connect several different networks. Multihomed devices are com-
monly used to house firewall software, since the job of a firewall is to control the traffic
as it goes from one network to another. A common multihomed firewall architecture
allows a company to have several DMZs. One DMZ may hold devices that are shared
between companies in an extranet, another DMZ may house the company’s DNS and
mail servers, and yet another DMZ may hold the company’s web servers. Different
DMZs are used for two reasons: to control the different traffic types (for example, to
make sure HTTP traffic only goes toward the web servers and ensure DNS requests go
toward the DNS server), and to ensure that if one system on one DMZ is compromised,
the other systems in the rest of the DMZs are not accessible to this attacker.
If a company depends solely upon a multihomed firewall with no redundancy, this
system could prove to be a single point of failure. If it goes down, then all traffic flow
stops. Some firewall products have embedded redundancy or fault tolerance capabili-
ties. If a company uses a firewall product that does not have these capabilities, then the
network should have redundancy built into it.
Along with potentially being a single point of failure, another security issue that
should be understood is the lack of defense in depth. If the company depends upon
just one firewall, no matter what architecture is being used or how many interfaces the
device has, there is only one layer of protection. If an attacker can compromise the one
firewall, then she can gain direct access to company network resources.
Screened Host A screened host is a firewall that communicates directly with a pe-
rimeter router and the internal network. Figure 6-52 shows this type of architecture.
Traffic received from the Internet is first filtered via packet filtering on the outer
router. The traffic that makes it past this phase is sent to the screened-host firewall,
which applies more rules to the traffic and drops the denied packets. Then the traffic
moves to the internal destination hosts. The screened host (the firewall) is the only
device that receives traffic directly from the router. No traffic goes directly from the In-
ternet, through the router, and to the internal network. The screened host is always part
of this equation.
A bastion host does not have to be a firewall—the term just relates to the po-
sition of the system in relation to an untrusted environment and its threat of at-
tack. Different systems can be considered bastion hosts (mail, FTP, DNS) since
many of these are placed on the outer edges of networks.

If the firewall is an application-based system, protection is provided at the network
layer by the router through packet filtering, and at the application layer by the firewall.
This arrangement offers a high degree of security, because for an attacker to be success-
ful, she would have to compromise two systems.
What does the word “screening” mean in this context? As shown in Figure 6-52, the
router is a screening device and the firewall is the screened host. This just means there
is a layer that scans the traffic and gets rid of a lot of the “junk” before it is directed to-
ward the firewall. A screened host is different from a screened subnet, which is de-
scribed next.
Screened Subnet A screened-subnet architecture adds another layer of security to
the screened-host architecture. The external firewall screens the traffic entering the DMZ
network. However, instead of the firewall then redirecting the traffic to the internal
network, an interior firewall also filters the traffic. The use of these two physical fire-
walls creates a DMZ.
In an environment with only a screened host, if an attacker successfully breaks
through the firewall, nothing lies in her way to prevent her from having full access to
the internal network. In an environment using a screened subnet, the attacker would
have to hack through another firewall to gain access. In this layered approach to secu-
rity, the more layers provided, the better the protection. Figure 6-53 shows a simple
example of a screened subnet.
The examples shown in the figures are simple in nature. Often, more complex net-
works and DMZs are implemented in real-world systems. Figures 6-54 and 6-55 show
some other possible architectures of screened subnets and their configurations.
Figure 6-52 A screened host is a firewall that is screened by a router.

Figure 6-53 With a screened subnet, two firewalls are used to create a DMZ.
Figure 6-54 A screened subnet can have different networks within it and different firewalls that
filter for specific threats.

The screened-subnet approach provides more protection than a stand-alone fire-
wall or a screened-host firewall because three devices are working together and all three
devices must be compromised before an attacker can gain access to the internal net-
work. This architecture also sets up a DMZ between the two firewalls, which functions
as a small network isolated among the trusted internal and untrusted external net-
works. The internal users usually have limited access to the servers within this area.
Web, e-mail, and other public servers often are placed within the DMZ. Although this
solution provides the highest security, it also is the most complex. Configuration and
maintenance can prove to be difficult in this setup, and when new services need to be
added, three systems may need to be reconfigured instead of just one.
Figure 6-55 Some architectures have separate screened subnets with different server types in each.

NOTE Sometimes a screened-host architecture is referred to as a single-
tiered configuration and a screened subnet is referred to as a two-tiered
configuration. If three firewalls create two separate DMZs, this may be called a
three-tiered configuration.
Virtualized Firewalls
Even virtualized environments need protection.
A lot of the network functionality we have covered up to this point can take place in
virtual environments. Most people understand that a host system can have virtual guest
systems running on it, which allow for multiple operating systems to run on the same
hardware platform simultaneously. But the industry has advanced much further than
this when it comes to virtualized technology. Routers and switches can be virtualized,
which means you do not actually purchase a piece of hardware and plug it into your
network, but instead you can deploy software products that carry out routing and
switching functionality.
We used to deploy a piece of hardware for every network function needed (DNS,
mail, routers, switches, storage, Web), but today many of these items run within virtual
machines on a smaller number of hardware machines. This reduces software and hardware
costs and allows for more centralized administration, but these components still
need to be protected from each other and from external malicious entities. As an analogy,
let’s say that 15 years ago each person lived in their own house and you had policemen
placed between each house so that the people in the houses could not attack each
Firewall Architecture Characteristics
It is important to understand the following characteristics of these firewall archi-
tecture types:
Dual-homed:
• A single computer with separate NICs connected to each network.
• Used to divide an internal trusted network from an external untrusted
network.
• Must disable a computer’s forwarding and routing functionality so the
two networks are truly segregated.
Screened host:
• Router filters (screens) traffic before it is passed to the firewall.
Screened subnet:
• External router filters (screens) traffic before it enters the subnet. Traffic
headed toward the internal network then goes through two firewalls.

other. Then last year, many of these people moved in together so at least five people live
in the same physical house. These people still need to be protected from each other, so
you had to move some of the policemen inside the houses to enforce the laws and keep
the peace. This is the same thing that virtualized firewalls do—they have “moved into”
the virtualized environments to provide the necessary protection between virtualized
entities.
As illustrated in Figure 6-56, a network can have a traditional physical firewall on
the physical network and virtual firewalls within the individual virtual environments.
Virtual firewalls can provide bridge-type functionality in which individual traffic
links are monitored between virtual machines, or they can be integrated within the
hypervisor. The hypervisor is the software component that carries out virtual machine
management and oversees guest system software execution. If the firewall is embedded
within the hypervisor, then it can “see” and monitor all the activities taking place with-
in the system.
Figure 6-56 Virtual firewalls

The “Shoulds” of Firewalls
Look both ways before crossing the street, and always floss.
Response: Wrong rule set.
The default action of any firewall should be to implicitly deny any packets not ex-
plicitly allowed. This means that if no rule states that the packet can be accepted, that
packet should be denied, no questions asked. Any packet entering the network that has
a source address of an internal host should be denied. Masquerading, or spoofing, is a
popular attacking trick in which the attacker modifies a packet header to have the
source address of a host inside the network she wants to attack. This packet is spoofed
and illegitimate. There is no reason a packet coming from the Internet should have an
internal source network address, so the firewall should deny it. The same is true for
outbound traffic. No traffic should be allowed to leave a network that does not have an
internal source address. If this occurs, it means someone, or some program, on the in-
ternal network is spoofing traffic. This is how zombies work—the agents used in distrib-
uted DoS (DDoS) attacks. If packets are leaving a network with different source
addresses, these packets are spoofed and the network is most likely being used as an
accomplice in a DDoS attack.
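These ingress/egress anti-spoofing checks can be sketched as a small filter function: deny inbound packets claiming an internal source address, deny outbound packets without one. The internal address range and function name are hypothetical:

```python
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")  # hypothetical internal address space

def spoof_check(direction: str, src: str) -> str:
    """Apply the ingress/egress anti-spoofing rules described above."""
    internal_src = ip_address(src) in INTERNAL
    if direction == "inbound" and internal_src:
        return "deny"   # Internet packet claims an internal source: spoofed
    if direction == "outbound" and not internal_src:
        return "deny"   # traffic leaving must carry an internal source
    return "allow"      # passes anti-spoofing; other rules still apply
```

Packets that pass this check would still be evaluated against the rest of the rule base, with implicit deny as the default.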
Firewalls should reassemble fragmented packets before sending them on to their
destination. In some types of attacks, the hackers alter the packets and make them seem
to be something they are not. When a fragmented packet comes to a firewall, the fire-
wall is seeing only part of the picture. It will make its best guess as to whether this piece
of a packet is malicious or not. Because these fragments contain only a part of the full
packet, the firewall is making a decision without having all the facts. Once all fragments
are allowed through to a host computer, they can be reassembled into malicious pack-
ages that can cause a lot of damage. A firewall should accept each fragment, assemble
the fragments into a complete packet, and then make an access decision based on the
whole packet. The drawback to this, however, is that firewalls that do reassemble pack-
et fragments before allowing them to go on to their destination computer cause traffic
delay and more overhead. It is up to the organization to decide whether this configura-
tion is necessary and whether the added traffic delay is acceptable.
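The reassemble-then-inspect behavior can be sketched as follows. This is a simplified illustration with hypothetical function names; real firewalls also handle overlapping fragments, gaps, and reassembly timeouts:

```python
def reassemble(fragments: list[tuple[int, bytes]]) -> bytes:
    """Reassemble (offset, data) fragments into the full payload."""
    buf = bytearray()
    for offset, data in sorted(fragments):
        buf[offset:offset + len(data)] = data
    return bytes(buf)

def inspect_whole_packet(fragments, is_malicious) -> str:
    """Make the access decision on the complete packet, not per fragment."""
    payload = reassemble(fragments)
    return "deny" if is_malicious(payload) else "allow"
```

Because a malicious pattern can be split across fragments, inspecting only individual pieces would miss it; the decision here is deferred until the whole payload is available.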
Fragmentation Attacks
Attackers have constructed several exploits that take advantage of some of the
packet fragmentation steps within networking protocols. The following are three
such examples:
•IP fragmentation Exploitation of fragmentation and reassembly flaws
within IP, which causes DoS.
•Teardrop attack Malformed fragments are created by the attacker, and
once they are reassembled, they could cause the victim system to become
unstable.
•Overlapping fragment attack Used to subvert packet filters that
do not reassemble packet fragments before inspection. A malicious
fragment overwrites a previously approved fragment and executes an
attack on the victim’s system.

Many companies choose to deny network entrance to packets that contain source
routing information, which was mentioned earlier. Source routing means the packet
decides how to get to its destination, not the routers in between the source and destina-
tion computer. Source routing moves a packet throughout a network on a predeter-
mined path. The sending computer must know about the topology of the network and
how to route data properly. This is easier for the routers and connection mechanisms in
between, because they do not need to make any decisions on how to route the packet.
However, it can also pose a security risk. When a router receives a packet that contains
source routing information, it figures the packet knows what needs to be done and
passes it on. In some cases, not all filters may be applied to the packet, and a network
administrator may want packets to be routed only through a certain path and not the
route a particular packet dictates. To make sure none of this misrouting happens, many
firewalls are configured to check for source routing information within the packet and
deny it if it is present.
Some common firewall rules that should be implemented are as follows:
•Silent rule Drops “noisy” traffic without logging it. This reduces log sizes by
not responding to packets that are deemed unimportant.
•Stealth rule Disallows access to firewall software from unauthorized systems.
•Cleanup rule The last rule in the rule base, which drops and logs any traffic
that does not meet the preceding rules.
•Negate rule Used instead of the broad and permissive “any” rules. Negate
rules provide tighter permission rights by specifying which systems can be
accessed and how.
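A first-match rule base ending in a cleanup rule can be sketched as follows. The packet representation, rule predicates, and names are illustrative, not a real firewall engine:

```python
# A first-match rule base: each rule is a (predicate, action, log?) triple.
def evaluate(rule_base, packet):
    for matches, action, log in rule_base:
        if matches(packet):
            return action, log
    return "deny", True  # implicit default deny if no rule matches

RULES = [
    (lambda p: p["dst_port"] == 137, "deny", False),          # silent rule
    (lambda p: p["dst"] == "firewall", "deny", True),         # stealth rule
    (lambda p: p["dst"] == "web" and p["dst_port"] == 443,
     "allow", True),                                          # negate-style rule
    (lambda p: True, "deny", True),                           # cleanup rule
]
```

Rule order matters in a first-match scheme: the silent rule must precede the cleanup rule, or the “noisy” traffic would be logged anyway.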
Firewalls are not effective “right out of the box.” You really need to understand the
type of firewall being implemented and its configuration ramifications. For example, a
firewall may have implied rules, which are used before the rules you configure. These
implied rules might contradict your rules and override them. In this case you think a
certain traffic type is being restricted, but the firewall may allow that type of traffic into
your network by default.
Unfortunately, once a company erects a firewall, it may have a false sense of secu-
rity. Firewalls are only one piece of the puzzle, and security has a lot of pieces.
The following list addresses some of the issues that need to be understood as they
pertain to firewalls:
• Most of the time a distributed approach needs to be used to control all network
access points, which cannot happen through the use of just one firewall.
• Firewalls can present a potential bottleneck to the flow of traffic and a single
point of failure threat.
• Most firewalls do not provide protection from malware and can be fooled by
the more sophisticated attack types.
• Firewalls do not protect against sniffers or rogue wireless access points, and
provide little protection against insider attacks.

The role of firewalls is becoming more and more complex as they evolve and take
on more functionality and responsibility. At times, this complexity works against the
security professional because it requires them to understand and properly implement
additional functionality. Without an understanding of the different types of firewalls
and architectures available, many more security holes can be introduced, which lays out
the welcome mat for attackers.
Proxy Servers
Earlier we covered two types of proxy-based firewalls, which are different from proxy
servers. Proxy servers act as an intermediary between the clients that want access to cer-
tain services and the servers that provide those services. As a security administrator, you
do not want internal systems to directly connect to external servers without some type
of control taking place. For example, if users on your network could connect directly to
web sites without some type of filtering and rules in place, the users could allow mali-
cious traffic into the network or the user could be surfing web sites your company
deems inappropriate. In this situation, all internal web browsers would be configured
to send their web requests to a web proxy server. The proxy server validates that the re-
quest is safe and then sends an independent request to the web site on behalf of the
user. A very basic proxy server architecture is shown in Figure 6-57.
The proxy server may cache the response it receives from the server so when other
clients make the same request, a connection does not have to go out to the actual web
server again, but the necessary data is served up directly from the proxy server. This
drastically reduces latency and allows the clients to get the data they need much more
quickly.
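This caching behavior can be sketched in miniature as follows. The class and names are hypothetical; a real proxy would also honor cache-control headers and expiry:

```python
# Minimal caching forward-proxy sketch: fetch once, serve repeats from cache.
# `fetch` stands in for the real outbound request to the web server.
class CachingProxy:
    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = {}
        self.misses = 0

    def get(self, url: str) -> bytes:
        if url not in self._cache:
            self.misses += 1                 # only a miss goes upstream
            self._cache[url] = self._fetch(url)
        return self._cache[url]
```

The second request for the same URL never leaves the proxy, which is the source of the latency reduction described above.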
Figure 6-57 Proxy servers control traffic between clients and servers.

There are different types of proxies that provide specific services. A forwarding proxy
is one that allows the client to specify the server it wants to communicate with, as in our
scenario earlier. An open proxy is a forwarding proxy that is open for anyone to use. An
anonymous open proxy allows users to conceal their IP address while browsing web
sites or using other Internet services. A reverse proxy appears to the clients as the original
server. The client sends a request to what it thinks is the original server, but in reality
this reverse proxy makes a request to the actual server and provides the client with the
response. The forwarding and reverse proxy functionality seems similar, but as Figure
6-58 illustrates, a forwarding proxy server is commonly on an internal network control-
ling traffic that is exiting the network. A reverse proxy server is commonly on the net-
work that fulfills clients’ requests; thus, it is handling traffic that is entering its network.
The reverse proxy can carry out load balancing, encryption acceleration, security, and
caching.
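The load-balancing role of a reverse proxy can be sketched as follows. The backend names and the simple round-robin choice are illustrative assumptions:

```python
from itertools import cycle

# Reverse-proxy sketch: clients address the proxy as if it were the origin
# server; the proxy round-robins each request across the real backends.
class ReverseProxy:
    def __init__(self, backends):
        self._pool = cycle(backends)

    def handle(self, request: str) -> tuple[str, str]:
        backend = next(self._pool)           # simple load balancing
        return backend, f"response to {request!r} from {backend}"
```

From the client's point of view there is only one server; the distribution across backends happens entirely behind the proxy.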
Web proxy servers are commonly used to carry out content filtering to ensure that
Internet use conforms to the organization’s acceptable-use policy. These types of prox-
ies can block unacceptable web traffic, provide logs with detailed information pertain-
ing to the sites specific users visited, monitor bandwidth usage statistics, block
restricted web site usage, and screen traffic for specific keywords (porn, confidential,
Social Security numbers).
Figure 6-58 Forward versus reverse proxy services
The proxy servers can be configured to act mainly as caching
servers, which keep local copies of frequently requested resources, allowing organiza-
tions to significantly reduce their upstream bandwidth usage and costs, while signifi-
cantly increasing performance.
While it is most common to use proxy servers for web-based traffic, they can be
used for other network functionality and capabilities, as in DNS proxy servers. Proxy
servers are a critical component of almost every network today. They need to be prop-
erly placed, configured, and monitored.
NOTE The use of proxy servers to allow for online anonymity has increased
over the years. Some people use it to protect their browsing behaviors from
others, with the goal of providing personal freedom and privacy. Attackers use
the same functionality to help ensure their activities cannot be tracked back
to their local systems.
Honeypot
Hey! Here is a vulnerable system to attack!
A honeypot system is a computer that usually sits in the screened subnet, or DMZ,
and attempts to lure attackers to it instead of to actual production computers. To make
a honeypot system lure attackers, administrators may enable services and ports that are
popular to exploit. Some honeypot systems have services emulated, meaning the actual
service is not running but software that acts like those services is available. Honeypot
systems can get an attacker’s attention by advertising themselves as easy targets to com-
promise. They are configured to look like regular company systems so that attackers
will be drawn to them like bears are to honey.
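A service-emulation honeypot can be sketched in miniature: it returns a plausible banner but runs no real daemon, and every connection attempt generates an alert. The class, banner text, and alert structure are illustrative, not a specific honeypot product:

```python
# Sketch of an emulated honeypot service: looks like an exploitable
# daemon, but only records who touched it.
class EmulatedService:
    def __init__(self, port: int, banner: str):
        self.port = port
        self.banner = banner
        self.alerts = []          # would feed an IDS/alerting pipeline

    def connect(self, src_ip: str) -> str:
        self.alerts.append((src_ip, self.port))   # early-detection signal
        return self.banner        # convincing response, no real service

honeypot = EmulatedService(21, "220 FTP Server ready.\r\n")
```

Because no legitimate user has a reason to touch the honeypot, every entry in its alert list is worth investigating.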
Honeypots can work as early detection mechanisms, meaning that the network staff
can be alerted that an intruder is attacking a honeypot system, and they can quickly go
into action to make sure no production systems are vulnerable to that specific attack
type. If two or more honeypot systems are used together, this is referred to as a honeynet.
Organizations use these systems to identify, quantify, and qualify specific traffic
types to help determine their danger levels. The systems can gather network traffic sta-
tistics and return them to a centralized location for better analysis. So as the systems are
being attacked, they gather intelligence information that can help the network staff
better understand what is taking place within their environment.
It is important to make sure that the honeypot systems are not connected to pro-
duction systems and do not provide any “jumping off” points for the attacker. There
have been instances where companies improperly implemented honeypots and after
they were exploited the attackers were able to move from those systems to the compa-
ny’s internal systems. The honeypots need to be properly segmented from any other live
systems on the network.
On a smaller scale, companies may choose to implement tarpits, which are similar
to honeypots in that they appear to be easy targets for exploitation. A tarpit can be con-
figured to appear as a vulnerable service that attackers will commonly attempt to ex-
ploit. Once the attackers start to send packets to this “service,” the connection to the

victim system seems to be live and ongoing, but the response from the victim system is
slow and the connection may time out. Most attacks and scanning activities take place
through automated tools that require quick responses from their victim systems. If the
victim systems do not reply or are very slow to reply, the automated tools may not be
successful because the protocol connection times out.
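The tarpit behavior can be sketched as a reply routine that trickles its response out byte by byte. The function name and delay values are illustrative; real tarpits operate at the socket level with much longer per-byte delays:

```python
import time

# Tarpit sketch: accept the connection but trickle the reply so an
# automated scanner's protocol timeout expires before the exchange ends.
def tarpit_reply(send, banner: bytes, delay: float = 0.01) -> int:
    """Send `banner` one byte at a time, sleeping between bytes."""
    sent = 0
    for i in range(len(banner)):
        time.sleep(delay)        # real tarpits may wait seconds per byte
        send(banner[i:i + 1])
        sent += 1
    return sent
```

The connection stays technically alive, so the automated tool keeps waiting instead of moving on quickly to its next target.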
Unified Threat Management
Can’t we just shove everything into one box?
It has become very challenging to manage the long laundry list of security solutions
almost every network needs to have in place. The list includes, but is not limited to,
firewalls, antimalware, antispam, IDS/IPS, content filtering, data leak prevention, VPN
capabilities, and continuous monitoring and reporting. Unified Threat Management
(UTM) appliance products have been developed that provide all (or many) of these
functionalities in a single network appliance. The goals of UTM are simplicity, stream-
lined installation and maintenance, centralized control, and the ability to understand
a network’s security from a holistic point of view. Figure 6-59 illustrates how all of these
security functions are applied to traffic as it enters this type of dedicated device.
These products are considered all-in-one devices, and the actual type of functionality
that is provided varies between vendors. Some products may be able to carry out this type
of security for wired, wireless, and Voice over Internet Protocol (VoIP) types of traffic.
Some issues with implementing UTM products are
•Single point of failure for traffic Some type of redundancy should be put
into place.
Figure 6-59 Unified Threat Management (gateway antivirus, intrusion prevention, spyware protection, spam protection, URL filtering, deep application inspection, stateful firewall, VPN, QoS, DoS protection)
•Single point of compromise If the UTM is successfully hacked, there may
not be other layers of protection deployed behind it.
•Performance issues Latency and bandwidth issues can arise since this is a
“choke point” device that requires a lot of processing.
Cloud Computing
We went from centralized, to distributed, and back to centralized computing resources.
Response: Everything comes back in fashion. Even bellbottoms.
In the 1960s all of our computing and processing took place on centralized main-
frames. In the 1980s we started distributing processing capabilities, which brought
about the personal computers. In the late 1990s we started to combine processing ca-
pabilities on individual systems through virtualization. Then around 2005 we started
harnessing distributed computing capabilities and centrally managing these individual
systems as one, which introduced cloud computing.
We have centrally harnessed other capabilities and served them up to the masses
over many years in several different industries. We have a highway and interstate system
that allows millions of people to get from one place to another. We do not have a
unique highway for each and every person, but we have one infrastructure for all to use.
Each company and household does not have their own energy processing plant, but
instead we have an electrical grid system that provides this one resource to millions of
users. People do not have their own Internet, but instead it is a shared structure. In a
sense, we have done the same thing with computing capabilities.
For many years every single company had to have its own data center. This is very
expensive and time consuming. Every company had to have the necessary software,
hardware, and staff to maintain these environments. Software had to be patched and
updated. Hardware had to be refreshed as processing demands increased. The skill set
of the staff had to increase as the complexity of the networked environments increased.
But in reality, while the companies were different, their underlying infrastructure and
computing processing needs were the same. So just as we would not maintain our own
power plant for our individual companies, maybe we don’t need to maintain huge,
expensive data centers for every company.
As an industry we combined load balancing techniques, virtualization, service-ori-
ented architectures, application service providing, network convergence, and distribut-
ed computing and came up with cloud computing. Computing can now be provided as
a service instead of a product.
NOTE Network convergence means the combining of server, storage,
and network capabilities into a single framework. This helps to decrease
the costs and complexity of running data centers and has accelerated the
evolution of cloud computing. Converged infrastructures provide the
ability to pool resources, automate resource provisioning, and increase and
decrease processing capacity quickly to meet the needs of dynamic computing
workloads.

CISSP All-in-One Exam Guide
658
Looking at computing as a service that can be purchased, rather than as a physical
box, can offer the following advantages:
• Organizations have more flexibility and agility in IT growth and functionality.
• Cost of computing can be reduced since it is a shared delivery model.
(This includes reduction of real estate, electrical, operational, and personnel costs.)
• Location independence can be achieved because the computing is not
centralized and tied to a physical data center.
• Applications and functionality can be more easily migrated from one physical
server to another because environments are virtualized.
• Improved reliability can be achieved for business continuity and disaster
recovery without the need of dedicated backup site locations.
• Scalability and elasticity of resources can be accomplished in near real time
through automation.
• Performance can increase as processing is shifted to available systems during
peak loads.
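The last two bullets describe automated elasticity: capacity follows load without human intervention. As a rough illustration, the scaling decision can be sketched as a threshold rule; the thresholds, instance limits, and load numbers below are invented for the example and are not from the text:

```python
# Toy autoscaler sketch: scale out when average load per instance is high,
# scale in when it is low. All thresholds and limits are illustrative.

def scale(instances, load_pct, min_instances=1, max_instances=10):
    """Return the new instance count for the observed average load."""
    if load_pct > 80 and instances < max_instances:
        return instances + 1      # heavy load: add capacity
    if load_pct < 20 and instances > min_instances:
        return instances - 1      # mostly idle: release capacity
    return instances              # within the comfort band: no change

# Simulate a traffic burst followed by a quiet period.
instances = 2
for load in [85, 90, 88, 50, 15, 10]:
    instances = scale(instances, load)
print(instances)  # 3
```

In real clouds this decision is driven by monitored metrics and provider provisioning APIs; the point here is only that pooled, virtualized resources let capacity track demand in near real time.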
There is controversy pertaining to whether cloud computing hurts or helps security. On one hand, data is centralized and security resources are focused, but on the other hand, direct control of sensitive data is lost and the complexity of securing dynamic and distributed environments can be overwhelming. A core part of the term
“cloud computing” is cloud. If you pay for cloud computing, you do not know where
your data is actually being held and processed because it is happening dynamically on
different systems that could be at any location in the world. This is wonderful because
all of the magic is happening in the background and you do not have to worry about
any of the details, but also terrifying because you do not really know who is doing what
with your data.
The most common cloud service models are
• Infrastructure as a Service (IaaS) Cloud providers offer the infrastructure environment of a traditional data center in an on-demand delivery method. Companies deploy their own operating systems, applications, and software onto this provided infrastructure and are responsible for maintaining them.
• Platform as a Service (PaaS) Cloud providers deliver a computing platform, which can include an operating system, database, and web server as a holistic execution environment. Where IaaS is the “raw IT network,” PaaS is the software environment that runs on top of the IT network.
• Software as a Service (SaaS) The provider gives users access to specific application software (CRM, e-mail, games). The provider gives customers network-based access to a single copy of an application created specifically for SaaS distribution and use.

Chapter 6: Telecommunications and Network Security
659
NOTE IaaS and PaaS services are typically billed on a utility computing basis; thus, as more resources are allocated and consumed, the cost increases. SaaS is typically a monthly or annual flat fee per user. The cloud infrastructure can be public and thus shared by anyone and everyone (e.g., Google), or private and thus owned and maintained by one company, or a hybrid, which is a mix of the two.
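The billing difference in the note can be made concrete with a little arithmetic. The hourly rate, per-user fee, and quantities below are hypothetical, not actual provider prices:

```python
# Utility billing (IaaS/PaaS style): cost follows resource consumption.
def utility_cost(instance_hours, rate_per_hour):
    return instance_hours * rate_per_hour

# Flat-fee billing (SaaS style): cost follows the number of users.
def flat_fee_cost(users, fee_per_user_per_month):
    return users * fee_per_user_per_month

# Four virtual machines running a full month (~730 hours each) at a
# hypothetical $0.10/hour, versus 50 users at a hypothetical $15/user/month.
iaas_monthly = utility_cost(instance_hours=4 * 730, rate_per_hour=0.10)
saas_monthly = flat_fee_cost(users=50, fee_per_user_per_month=15)

print(round(iaas_monthly, 2))  # 292.0
print(saas_monthly)            # 750
```

Under utility billing the bill drops when the virtual machines are released; under the flat fee it stays constant regardless of how heavily the application is used.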
While combining the use of these shared resources (infrastructure, platforms, applications) reduces administration costs and overhead in a straightforward manner, the privacy, security, and compliance concerns involved are far from straightforward. Do you really want all of your personal data in one location, accessible to entities with lawful or unlawful access rights? How do you know that your cloud provider is actually securing
all of your data as it bounces around in their cloud? Legal and regulatory compliance
gets very complicated when you are no longer the one securing and maintaining the
environment that contains the data you are responsible for protecting.
The technical practicality of cloud computing is muddied by the privacy, security,
and compliance issues that surround it. The industry is working through each of these
issues, and the evolution of cloud computing is continuing.
[Figure: The cloud service stack. SaaS delivers applications (CRM, e-mail, virtual desktop, communication, games); PaaS delivers the execution environment (execution runtime, database, web server, development tools); IaaS delivers the underlying infrastructure (virtual machines, servers, storage, load balancers, network).]
Key Terms
• Bastion host A highly exposed device that will most likely be targeted for attacks, and thus should be properly locked down.
• Dual-homed firewall This device has two interfaces and sits between an untrusted network and a trusted network to provide secure access. A multihomed device just means it has multiple interfaces. Firewalls that have multiple interfaces allow for networks to be segmented based upon security zone, with unique security configurations.

• Screened host A firewall that communicates directly with a perimeter router and the internal network. The router carries out filtering activities on the traffic before it reaches the firewall.
• Screened subnet architecture When two filtering devices are used to create a DMZ. The external device screens the traffic entering the DMZ network, and the internal filtering device screens the traffic before it enters the internal network.
• Virtual firewall A firewall that runs within a virtualized environment and monitors and controls traffic as it passes through virtual machines. The firewall can be a traditional firewall running within a guest virtual machine or a component of a hypervisor.
• Proxy server A system that acts as an intermediary for requests from clients seeking resources from other sources. A client connects to the proxy server, requesting some service, and the proxy server evaluates the request according to its filtering rules and makes the connection on behalf of the client. Proxies can be open or carry out forwarding or reverse forwarding capabilities.
• Honeypots Systems that entice attackers, with the goal of protecting critical production systems. If two or more honeypots are used together, this is considered a honeynet.
• Network convergence The combining of server, storage, and network capabilities into a single framework, which decreases the costs and complexity of data centers. Converged infrastructures provide the ability to pool resources, automate resource provisioning, and increase and decrease processing capacity quickly to meet the needs of dynamic computing workloads.
• Cloud computing The delivery of computer processing capabilities as a service rather than as a product, whereby shared resources, software, and information are provided to end users as a utility. Offerings are usually bundled as an infrastructure, platform, or software.

Intranets and Extranets
We kind of trust you, but not really. We’re going to put you on the extranet.
Web technologies and their uses have exploded with functionality, capability, and popularity. Companies set up internal web sites for centralized business information such as employee phone numbers, policies, events, news, and operations instructions. Many companies have also implemented web-based terminals that enable employees to perform their daily tasks, access centralized databases, make transactions, collaborate on projects, access global calendars, use videoconferencing tools and whiteboard applications, and obtain often-used technical or marketing data.

Web-based clients are different from workstations that log into a network and have their own desktop. Web-based clients limit a user’s ability to access the computer’s system files, resources, and hard drive space; access back-end systems; and perform other tasks. The web-based client can be configured to provide a GUI with only the buttons, fields, and pages necessary for the users to perform tasks. This gives all users a standard universal interface with similar capabilities.
When a company uses web-based technologies inside its networks, it is using an
intranet, a “private” network. The company has web servers and client machines using
web browsers, and it uses the TCP/IP protocol suite. The web pages are written in HTML
or XML and are accessed via HTTP.
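Under the hood, every one of those intranet page requests is an HTTP message carried over TCP/IP. The sketch below composes the raw request a browser sends for a page; the host and path are hypothetical internal names, not taken from the text:

```python
# Compose the raw HTTP/1.1 request a web browser sends when a user
# clicks a link to an intranet page. Host and path are made up.

def build_get_request(host, path):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n")

request = build_get_request("intranet.example.local", "/policies/index.html")
print(request.splitlines()[0])  # GET /policies/index.html HTTP/1.1
```

The server answers with an HTML or XML document, which the browser renders; nothing about the exchange differs from the public web except that it stays on the private network.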
Using web-based technologies has many pluses. They have been around for quite
some time, they are easy to implement, no major interoperability issues occur, and with
just the click of a link, a user can be taken to the location of the requested resource.
Web-based technologies are not platform dependent, meaning all web sites and pages
may be maintained on various platforms and different flavors of client workstations
can access them—they only need a web browser.
An extranet extends outside the bounds of the company’s network to enable two or more companies to share common information and resources. Business partners commonly set up extranets to accommodate business-to-business communication. An extranet enables business partners to work on projects together; share marketing information; communicate and work collaboratively on issues; post orders; and share catalogs, pricing structures, and information on upcoming events. Trading partners often use electronic data interchange (EDI), which provides structure and organization to electronic documents, orders, invoices, purchase orders, and data flows. EDI has evolved into web-based technologies to provide easy access and easier methods of communication.
For many businesses, an extranet can create a weakness or hole in their security if the extranet is not implemented and maintained properly. Properly configured firewalls need to be in place to control who can use the extranet communication channels.
Extranets used to be based mainly on dedicated transmission lines, which are more
difficult for attackers to infiltrate, but today many extranets are set up over the Internet,
which requires properly configured VPNs and security policies.
Value-Added Networks
Many different types of companies use EDI for internal communication and for communication with other companies. A very common implementation is between a company and its supplier. For example, some supplier companies provide inventory to many different companies, such as Target, Wal-Mart, and Kmart. Many of these supplies are made in China and then shipped to a warehouse in a specific country, such as the United States. When Wal-Mart needs to order more inventory, it sends its request through an EDI network, which is basically an electronic form of our paper-based world. Instead of using paper purchase orders, receipts, and forms, EDI provides all of this digitally.

A value-added network (VAN) is an EDI infrastructure developed and maintained by a service bureau. A Wal-Mart store tracks its inventory by having employees scan bar codes on individual items. When the inventory of an item
becomes low, a Wal-Mart employee sends a request for more of that specific item.
This request goes to a mailbox at a VAN that Wal-Mart pays to use, and the request
is then pushed out to a supplier that provides this type of inventory for Wal-Mart.
Because Wal-Mart (and other stores) deals with thousands of suppliers, using a
VAN simplifies the ordering process: instead of an employee having to track
down the right supplier and submit a purchase order, this all happens in the
background through an automated EDI network, which is managed by a VAN
company for use by other companies.
EDI is moving away from proprietary VAN EDI structures to standardized communication structures to allow more interoperability and easier maintenance. This means that XML, SOAP, and web services are being used. These are the communication structures used in supply chain infrastructures, as illustrated here.
[Figure: A supply chain infrastructure. Your company makes one connection to a value-added network (VAN), which in turn connects suppliers, a marketplace, business customers, distributors, strategic partners, financial services, logistics providers, a corporate division, and an eStore.]
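The XML-based message style that EDI is evolving toward can be sketched as a tiny purchase-order document. The element names and values below are hypothetical, not drawn from any real EDI or supply-chain standard:

```python
# Build a minimal XML purchase order of the kind exchanged over
# web-service-based supply chain infrastructures. Schema is invented.
import xml.etree.ElementTree as ET

def build_purchase_order(buyer, sku, quantity):
    po = ET.Element("PurchaseOrder")
    ET.SubElement(po, "Buyer").text = buyer
    item = ET.SubElement(po, "Item")
    ET.SubElement(item, "SKU").text = sku
    ET.SubElement(item, "Quantity").text = str(quantity)
    return ET.tostring(po, encoding="unicode")

print(build_purchase_order("ExampleRetailer", "SKU-12345", 200))
```

Because the document is plain XML, any trading partner with a web service endpoint can parse it, which is exactly the interoperability gain over proprietary VAN formats.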

Metropolitan Area Networks
Behind every good man…
Response: Wrong MAN.
A metropolitan area network (MAN) is usually a backbone that connects LANs to each other and LANs to WANs, the Internet, and telecommunications and cable networks. The majority of today’s MANs are Synchronous Optical Networks (SONETs) or FDDI rings and Metro Ethernet provided by the telecommunications service providers. (FDDI technology was discussed earlier in the chapter.) The SONET and FDDI rings cover a large area, and businesses can connect to the rings via T1, fractional T1, and T3 lines. Figure 6-60 illustrates two companies connected via a SONET ring and the devices usually necessary to make this type of communication possible. This is a simplified example of a MAN. In reality, several businesses are usually connected to one ring.
SONET is actually a standard for telecommunications transmissions over fiber-optic cables. Carriers and telephone companies have deployed SONET networks across North America, and if they follow the SONET standards properly, these various networks can communicate with little difficulty.
SONET is self-healing, meaning that if a break in the line occurs, it can use a backup redundant ring to ensure that transmission continues. All SONET lines and rings are fully redundant. The redundant line waits in the wings in case anything happens to the primary ring.
SONET networks can transmit voice, video, and data over optical networks. Slower-
speed SONET networks often feed into larger, faster SONET networks, as shown in
Figure 6-61. This enables businesses in different cities and regions to communicate.
Figure 6-60 A MAN covers a large area and enables businesses to connect to each other, to the
Internet, or to other WAN connections.

MANs can be made up of wireless infrastructures, optical fiber, or Ethernet connections. Ethernet has evolved from just being a LAN technology to being used in MAN environments. Due to its prevalent use within organizations’ networks, it is easily extended and interfaced into MAN networks. A service provider commonly uses layer 2 and 3 switches to connect optical fibers, which can be constructed in a ring, star, or partial mesh topology.
VLANs are commonly implemented to differentiate between the various logical network connections that run over the same physical network connection. The VLANs allow for the isolation of the different customers’ traffic from each other and from the core network internal signaling traffic.
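One common way this separation is carried on the wire is the IEEE 802.1Q VLAN tag, a 4-byte field inserted into the Ethernet header whose 12-bit VLAN ID tells switches which logical network a frame belongs to. The text above does not name the standard, so treat the sketch below as illustrative; the MAC addresses and VLAN number are made up:

```python
# Sketch of 802.1Q tagging: insert TPID 0x8100 plus a tag control field
# (3-bit priority, 1-bit DEI, 12-bit VLAN ID) after the MAC addresses.
import struct

def tag_frame(dst_mac, src_mac, vlan_id, ethertype, payload, priority=0):
    tci = (priority << 13) | (vlan_id & 0x0FFF)     # tag control information
    return (dst_mac + src_mac
            + struct.pack("!HH", 0x8100, tci)       # 802.1Q tag
            + struct.pack("!H", ethertype)          # original EtherType
            + payload)

frame = tag_frame(bytes(6), bytes(6), vlan_id=42, ethertype=0x0800, payload=b"")
vid = struct.unpack("!H", frame[14:16])[0] & 0x0FFF  # recover the VLAN ID
print(vid)  # 42
```

A switch reading this header forwards the frame only to ports assigned to VLAN 42, which is how one physical wire safely carries many customers’ logically separate traffic.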
Metro Ethernet
Can we just stretch Ethernet farther?
Ethernet has been around for many years and is embedded in almost every LAN. Ethernet LANs can connect to the previously mentioned MAN technologies, or they can be extended to cover a metropolitan area, which is called Metro Ethernet.
Figure 6-61 Smaller SONET rings connect to larger SONET rings to construct individual MANs.

Ethernet on the MAN can be used as pure Ethernet or Ethernet integrated with other networking technologies, as in MPLS. Pure Ethernet is less expensive, but less reliable and scalable. MPLS-based deployments are more expensive but highly reliable and scalable, and are typically used by large service providers.
MAN architectures are commonly built upon the following layers: access, aggregation/distribution, metro, and core, as illustrated in Figure 6-62.
Access devices exist at a customer’s premises and connect the customer’s equipment to the service provider’s network. The service provider’s distribution network aggregates the traffic and sends it to the provider’s core network. From there, the traffic is moved to the next aggregation network that is closest to the destination. This is similar to how smaller highways are connected to larger interstates with on and off ramps that allow people to quickly travel from one location to another.
NOTE A Virtual Private LAN Service (VPLS) is a multipoint, layer 2 virtual private network that connects two or more customer devices using Ethernet bridging techniques. In other words, VPLS emulates a LAN over a managed IP/MPLS network.
Wide Area Networks
LAN technologies provide communication capabilities over a small geographic area,
whereas wide area network (WAN) technologies are used when communication needs
to travel over a larger geographical area. LAN technologies encompass how a computer
puts its data onto a network cable, the rules and protocols of how that data are formatted and transmitted, how errors are handled, and how the destination computer picks
up this data from the cable. When a computer on one network needs to communicate
with a network on the other side of the country or in a different country altogether,
WAN technologies kick in.
[Figure content: customers (government, enterprise, content hosting) connect over GigE and 10 GigE links through layer 2/3 switches into aggregation networks, which perform standards-based, intelligent packet-to-circuit mapping into the core.]
Figure 6-62 MAN architecture

The network must have some avenue to other networks, which is most likely a router that communicates with the company’s service provider’s switches or telephone company facilities. Just as several types of technologies lie within the LAN arena, several technologies lie within the WAN arena. This section touches on many of these WAN technologies.
Telecommunications Evolution
On the eighth day, God created the tel