CCNA Cyber Ops SECFND 210-250 Official Cert Guide
- About This E-Book
- Title Page
- Copyright Page
- About the Authors
- About the Technical Reviewers
- Dedications
- Acknowledgments
- Contents at a Glance
- Contents
- Command Syntax Conventions
- Introduction
- Part I: Network Concepts
- Part II: Security Concepts
- Chapter 3. Security Principles
- “Do I Know This Already?” Quiz
- Foundation Topics
- The Principles of the Defense-in-Depth Strategy
- What Are Threats, Vulnerabilities, and Exploits?
- Confidentiality, Integrity, and Availability: The CIA Triad
- Risk and Risk Analysis
- Personally Identifiable Information and Protected Health Information
- Principle of Least Privilege and Separation of Duties
- Security Operation Centers
- Forensics
- Exam Preparation Tasks
- Chapter 4. Introduction to Access Controls
- Chapter 5. Introduction to Security Operations Management
- Part III: Cryptography
- Part IV: Host-Based Analysis
- Part V: Security Monitoring and Attack Methods
- Chapter 11. Network and Host Telemetry
- Chapter 12. Security Monitoring Operational Challenges
- Chapter 13. Types of Attacks and Vulnerabilities
- Chapter 14. Security Evasion Techniques
- Part VI: Final Preparation
- Part VII: Appendixes
- Glossary
- Index
- Elements Available on the Book Website
- Inside Back Cover
- Inside Front Cover
- Access Card
- Where are the companion content files?
- Code Snippets
About This E-Book
EPUB is an open, industry-standard format for e-books. However, support for EPUB
and its many features varies across reading devices and applications. Use your device
or app settings to customize the presentation to your liking. Settings that you can
customize often include font, font size, single or double column, landscape or portrait
mode, and figures that you can click or tap to enlarge. For additional information about
the settings and features on your reading device or app, visit the device manufacturer’s
Web site.
Many titles include programming code or configuration examples. To optimize the
presentation of these elements, view the e-book in single-column, landscape mode and
adjust the font size to the smallest setting. In addition to presenting code and
configurations in the reflowable text format, we have included images of the code that
mimic the presentation found in the print book; therefore, where the reflowable format
may compromise the presentation of the code listing, you will see a “Click here to view
code image” link. Click the link to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your device or app.
www.hellodigi.ir
CCNA Cyber Ops SECFND 210-250 Official Cert Guide
Omar Santos
Joseph Muniz
Stefano De Crescenzo
Copyright © 2017 Pearson Education, Inc.
Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher, except for the inclusion of brief quotations in a review.
Printed in the United States of America
1 17
Library of Congress Control Number: 2017931952
ISBN-10: 1-58714-702-5
ISBN-13: 978-1-58714-702-9
Warning and Disclaimer
This book is designed to provide information about the CCNA Cyber Ops SECFND
210-250 exam. Every effort has been made to make this book as complete and accurate
as possible, but no warranty or fitness is implied.
The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco
Systems, Inc., shall have neither liability nor responsibility to any person or entity with
respect to any loss or damages arising from the information contained in this book or
from the use of the discs or programs that may accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those
of Cisco Systems, Inc.
Editor-in-Chief: Mark Taub
Product Line Manager: Brett Bartow
Managing Editor: Sandra Schroeder
Development Editor: Christopher Cleveland
Project Editor: Mandie Frank
Composition: Tricia Bronkella
Indexer: Ken Johnson
Alliances Manager, Cisco Press: Ron Fligge
Executive Editor: Mary Beth Ray
Technical Editors: Pavan Reddy, Ron Taylor
Copy Editor: Bart Reed
Designer: Chuti Prasertsith
Editorial Assistant: Vanessa Evans
Proofreader: The Wordsmithery LLC
Trademark Acknowledgments
All terms mentioned in this book that are known to be trademarks or service marks have
been appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the
accuracy of this information. Use of a term in this book should not be regarded as
affecting the validity of any trademark or service mark.
Special Sales
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs; and
content particular to your business, training goals, marketing focus, or branding
interests), please contact our corporate sales department at corpsales@pearsoned.com
or (800) 382-3419.
For government sales inquiries, please contact governmentsales@pearsoned.com.
For questions about sales outside the United States, please contact intlcs@pearson.com.
Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and
value. Each book is crafted with care and precision, undergoing rigorous development
that involves the unique expertise of members from the professional technical
community.
Readers’ feedback is a natural continuation of this process. If you have any comments
regarding how we could improve the quality of this book, or otherwise alter it to better
suit your needs, you can contact us through email at feedback@ciscopress.com. Please
make sure to include the book title and ISBN in your message.
We greatly appreciate your assistance.
Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Asia Pacific Headquarters
Cisco Systems (USA) Pte. Ltd.
Singapore
Europe Headquarters
Cisco Systems International BV
Amsterdam, The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax
numbers are listed on the Cisco Website at www.cisco.com/go/offices.
CCDE, CCENT, Cisco Eos, Cisco HealthPresence, the Cisco logo, Cisco Lumin, Cisco
Nexus, Cisco StadiumVision, Cisco Telepresence, Cisco WebEx, DCE, and Welcome to
the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn
and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing
the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP,
CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press,
Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step,
Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS,
iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone,
MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy,
Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase,
SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet
Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco
Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their
respective owners. The use of the word partner does not imply a partnership
relationship between Cisco and any other company. (0812R)
About the Authors
Omar Santos is an active member of the cyber security community, where he leads
several industry-wide initiatives and standards bodies. His active role helps
businesses, academic institutions, state and local law enforcement agencies, and other
participants dedicated to increasing the security of their critical infrastructures.
Omar is the author of over a dozen books and video courses, as well as numerous white
papers, articles, and security configuration guidelines and best practices. Omar is a
principal engineer of the Cisco Product Security Incident Response Team (PSIRT),
where he mentors and leads engineers and incident managers during the investigation
and resolution of cyber security vulnerabilities. Additional information about Omar’s
current projects can be found at omarsantos.io, and you can follow Omar on Twitter
@santosomar.
Joseph Muniz is an architect at Cisco Systems and security researcher. He has
extensive experience in designing security solutions and architectures for the top
Fortune 500 corporations and the U.S. government. Joseph’s current role gives him
visibility into the latest trends in cyber security, from both leading vendors and
customers. Examples of Joseph’s research include his RSA talk titled “Social Media
Deception,” which has been quoted by many sources (search for “Emily Williams
Social Engineering”), as well as his articles in PenTest Magazine regarding various
security topics.
Joseph runs The Security Blogger website, a popular resource for security, hacking, and
product implementation. He is the author of and contributor to several publications
covering various penetration testing and security topics. You can follow Joseph at
www.thesecurityblogger.com and @SecureBlogger.
Stefano De Crescenzo is a senior incident manager with the Cisco Product Security
Incident Response Team (PSIRT), where he focuses on product vulnerability
management and Cisco products forensics. He is the author of several blog posts and
white papers about security best practices and forensics. He is an active member of the
security community and has been a speaker at several security conferences.
Stefano specializes in malware detection and integrity assurance in critical
infrastructure devices, and he is the author of integrity assurance guidelines for Cisco
IOS, IOS-XE, and ASA.
Stefano holds a B.Sc. and M.Sc. in telecommunication engineering from Politecnico di
Milano, Italy, and an M.Sc. in telecommunication from Danish Technical University,
Denmark. He is currently pursuing an Executive MBA at Vlerick Business School in
Belgium. He also holds a CCIE in Security #26025 and is CISSP and CISM certified.
About the Technical Reviewers
Pavan Reddy serves as a Security Principal in Cisco Security Services. Pavan has 20+
years of security and network consulting experience in Financial Services, Healthcare,
Service Provider, and Retail arenas. Recent projects cover Technical Security Strategy
and Architecture, Network Segmentation Strategy, Threat Intelligence Analytics,
Distributed Denial-of-Service Mitigation Architectures, and DNS Architecture and
Security. Pavan holds multiple CCIEs and a BS in Computer Engineering.
Ron Taylor has been in the Information Security field for almost 20 years. Ten of those
years were spent in consulting where he gained experience in many areas. In 2008, he
joined the Cisco Global Certification Team as an SME in Information Assurance. In
2012, he moved into a position with the Security Research & Operations group
(PSIRT), where his focus was mostly on penetration testing of Cisco products and
services. He was also involved in developing and presenting security training to
internal development and test teams globally. Additionally, he provided consulting
support to many product teams as an SME on product security testing. In his current
role, he is a Consulting Systems Engineer specializing in Cisco’s security product line.
Certifications include GPEN, GWEB, GCIA, GCIH, GWAPT, RHCE, CCSP, CCNA,
CISSP, and MCSE. Ron is also a Cisco Security Blackbelt, SANS mentor, Cofounder
and President of the Raleigh BSides Security Conference, and a member of the Packet
Hacking Village team at Defcon.
Dedications
I would like to dedicate this book to my lovely wife, Jeannette, and my two beautiful
children, Hannah and Derek, who have inspired and supported me throughout the
development of this book.
I also dedicate this book to my father, Jose, and to the memory of my mother, Generosa.
Without their knowledge, wisdom, and guidance, I would not have the goals that I strive
to achieve today.
—Omar Santos
I would like to dedicate this book to the memory of my father, Raymond Muniz. He
never saw me graduate from college or accomplish great things, such as writing this
book. I would also like to apologize to him for dropping out of soccer in high school. I
picked it back up later in life, and today play in at least two competitive matches a
week. Your hard work paid off. Hopefully you somehow know that.
—Joseph Muniz
This book is dedicated to my wife, Nevena, and my beautiful daughters, Sara and Tea,
who supported and inspired me during the development of this book. Specifically, Tea
was born a few weeks before I started writing my first chapter, so she is especially
connected with this book.
I would also like to mention my whole family: my mother, Mariagrazia, and my sister,
Francesca, who supported my family and me while I was away writing. I also dedicate
this book to the memory of my father, Cataldo.
—Stefano De Crescenzo
Acknowledgments
I would like to thank the technical editors, Pavan Reddy and Ron Taylor, for their time
and technical expertise. They verified our work and contributed to the success of this
book. I would also like to thank the Cisco Press team, especially Mary Beth Ray,
Denise Lincoln, and Christopher Cleveland, for their patience, guidance, and
consideration. Their efforts are greatly appreciated. Finally, I would like to
acknowledge the Cisco Security Research and Operations teams, Cisco Advanced
Threat Analytics, and Cisco Talos. Several leaders in the network security industry
work there, supporting our Cisco customers, often under very stressful conditions, and
working miracles daily. They are truly unsung heroes, and I am honored to have had the
privilege of working side by side with them in the trenches while protecting customers
and Cisco.
—Omar Santos
I would first like to thank Omar and Stefano for including me on this project. I really
enjoyed working with these guys and hope we can do more in the future. I also would
like to thank the Cisco Press team and technical editors, Pavan Reddy and Ron Taylor,
for their fantastic support in making the writing process top quality and easy for
everybody. Hey, Ron, you got this and the CTR comic. 2016 was great for you, Mr.
Green.
I would also like to thank all the great people in my life who make me who I am.
Finally, a message for Raylin Muniz (age 7): Hopefully one day you can accomplish
your dreams like I have with this book.
—Joseph Muniz
I would like to thank Omar and Joey for being fantastic mates in the development of this
book. A special mention goes to my wife as well, for supporting me throughout this
journey and for helping me by reviewing my work.
Additionally, this book wouldn’t have been possible without the help of the Cisco Press
team, and in particular Chris Cleveland. His guidance has been invaluable. A big
thanks goes to the technical reviewers, Pavan and Ron. Thanks for keeping me honest
and to the point! A big thanks also to Eric Vyncke for his numerous suggestions.
—Stefano De Crescenzo
Contents at a Glance
Introduction
Part I Network Concepts
Chapter 1 Fundamentals of Networking Protocols and Networking Devices
Chapter 2 Network Security Devices and Cloud Services
Part II Security Concepts
Chapter 3 Security Principles
Chapter 4 Introduction to Access Controls
Chapter 5 Introduction to Security Operations Management
Part III Cryptography
Chapter 6 Fundamentals of Cryptography and Public Key Infrastructure (PKI)
Chapter 7 Introduction to Virtual Private Networks (VPNs)
Part IV Host-Based Analysis
Chapter 8 Windows-Based Analysis
Chapter 9 Linux- and Mac OS X–Based Analysis
Chapter 10 Endpoint Security Technologies
Part V Security Monitoring and Attack Methods
Chapter 11 Network and Host Telemetry
Chapter 12 Security Monitoring Operational Challenges
Chapter 13 Types of Attacks and Vulnerabilities
Chapter 14 Security Evasion Techniques
Part VI Final Preparation
Chapter 15 Final Preparation
Part VII Appendixes
Appendix A Answers to the “Do I Know This Already?” Quizzes and Q&A Questions
Glossary
Index
Elements Available on the Book Website
Contents
Introduction
Part I Network Concepts
Chapter 1 Fundamentals of Networking Protocols and Networking Devices
“Do I Know This Already?” Quiz
Foundation Topics
TCP/IP and OSI Model
TCP/IP Model
TCP/IP Model Encapsulation
Networking Communication with the TCP/IP Model
Open System Interconnection Model
Layer 2 Fundamentals and Technologies
Ethernet LAN Fundamentals and Technologies
Ethernet Physical Layer
Ethernet Medium Access Control
Ethernet Frame
Ethernet Addresses
Ethernet Devices and Frame-Forwarding Behavior
LAN Hubs and Bridges
LAN Switches
Link Layer Loop and Spanning Tree Protocols
Virtual LAN (VLAN) and VLAN Trunking
Cisco VLAN Trunking Protocol
Inter-VLAN Traffic and Multilayer Switches
Wireless LAN Fundamentals and Technologies
802.11 Architecture and Basic Concepts
802.11 Frame
WLAN Access Point Types and Management
Internet Protocol and Layer 3 Technologies
IPv4 Header
IPv4 Fragmentation
IPv4 Addresses and Addressing Architecture
IP Network Subnetting and Classless Interdomain Routing (CIDR)
Variable-Length Subnet Mask (VLSM)
Public and Private IP Addresses
Special and Reserved IPv4 Addresses
IP Addresses Assignment and DHCP
IP Communication Within a Subnet and Address Resolution Protocol (ARP)
Intersubnet IP Packet Routing
Routing Tables and IP Routing Protocols
Distance Vector
Advanced Distance Vector or Hybrid
Link-State
Using Multiple Routing Protocols
Internet Control Message Protocol (ICMP)
Domain Name System (DNS)
IPv6 Fundamentals
IPv6 Header
IPv6 Addressing and Subnets
Special and Reserved IPv6 Addresses
IPv6 Addresses Assignment, Neighbor Discovery Protocol, and DHCPv6
Transport Layer Technologies and Protocols
Transmission Control Protocol (TCP)
TCP Header
TCP Connection Establishment and Termination
TCP Socket
TCP Error Detection and Recovery
TCP Flow Control
User Datagram Protocol (UDP)
UDP Header
UDP Socket and Known UDP Application
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
References and Further Reading
Chapter 2 Network Security Devices and Cloud Services
“Do I Know This Already?” Quiz
Foundation Topics
Network Security Systems
Traditional Firewalls
Packet-Filtering Techniques
Application Proxies
Network Address Translation
Port Address Translation
Static Translation
Stateful Inspection Firewalls
Demilitarized Zones
Firewalls Provide Network Segmentation
High Availability
Firewalls in the Data Center
Virtual Firewalls
Deep Packet Inspection
Next-Generation Firewalls
Cisco Firepower Threat Defense
Personal Firewalls
Intrusion Detection Systems and Intrusion Prevention Systems
Pattern Matching and Stateful Pattern-Matching Recognition
Protocol Analysis
Heuristic-Based Analysis
Anomaly-Based Analysis
Global Threat Correlation Capabilities
Next-Generation Intrusion Prevention Systems
Firepower Management Center
Advanced Malware Protection
AMP for Endpoints
AMP for Networks
Web Security Appliance
Email Security Appliance
Cisco Security Management Appliance
Cisco Identity Services Engine
Security Cloud-based Solutions
Cisco Cloud Web Security
Cisco Cloud Email Security
Cisco AMP Threat Grid
Cisco Threat Awareness Service
OpenDNS
CloudLock
Cisco NetFlow
What Is the Flow in NetFlow?
NetFlow vs. Full Packet Capture
The NetFlow Cache
Data Loss Prevention
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
Part II Security Concepts
Chapter 3 Security Principles
“Do I Know This Already?” Quiz
Foundation Topics
The Principles of the Defense-in-Depth Strategy
What Are Threats, Vulnerabilities, and Exploits?
Vulnerabilities
Threats
Threat Actors
Threat Intelligence
Exploits
Confidentiality, Integrity, and Availability: The CIA Triad
Confidentiality
Integrity
Availability
Risk and Risk Analysis
Personally Identifiable Information and Protected Health Information
PII
PHI
Principle of Least Privilege and Separation of Duties
Principle of Least Privilege
Separation of Duties
Security Operation Centers
Runbook Automation
Forensics
Evidentiary Chain of Custody
Reverse Engineering
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Q&A
Chapter 4 Introduction to Access Controls
“Do I Know This Already?” Quiz
Foundation Topics
Information Security Principles
Subject and Object Definition
Access Control Fundamentals
Identification
Authentication
Authentication by Knowledge
Authentication by Ownership
Authentication by Characteristic
Multifactor Authentication
Authorization
Accounting
Access Control Fundamentals: Summary
Access Control Process
Asset Classification
Asset Marking
Access Control Policy
Data Disposal
Information Security Roles and Responsibilities
Access Control Types
Access Control Models
Discretionary Access Control
Mandatory Access Control
Role-Based Access Control
Attribute-Based Access Control
Access Control Mechanisms
Identity and Access Control Implementation
Authentication, Authorization, and Accounting Protocols
RADIUS
TACACS+
Diameter
Port-Based Access Control
Port Security
802.1x
Network Access Control List and Firewalling
VLAN Map
Security Group–Based ACL
Downloadable ACL
Firewalling
Identity Management and Profiling
Network Segmentation
Network Segmentation Through VLAN
Firewall DMZ
Cisco TrustSec
Intrusion Detection and Prevention
Network-Based Intrusion Detection and Protection System
Host-Based Intrusion Detection and Prevention
Antivirus and Antimalware
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
References and Additional Reading
Chapter 5 Introduction to Security Operations Management
“Do I Know This Already?” Quiz
Foundation Topics
Introduction to Identity and Access Management
Phases of the Identity and Access Lifecycle
Registration and Identity Validation
Privileges Provisioning
Access Review
Access Revocation
Password Management
Password Creation
Password Storage and Transmission
Password Reset
Password Synchronization
Directory Management
Single Sign-On
Kerberos
Federated SSO
Security Assertion Markup Language
OAuth
OpenID Connect
Security Events and Logs Management
Logs Collection, Analysis, and Disposal
Syslog
Security Information and Event Manager
Assets Management
Assets Inventory
Assets Ownership
Assets Acceptable Use and Return Policies
Assets Classification
Assets Labeling
Assets and Information Handling
Media Management
Introduction to Enterprise Mobility Management
Mobile Device Management
Cisco BYOD Architecture
Cisco ISE and MDM Integration
Cisco Meraki Enterprise Mobility Management
Configuration and Change Management
Configuration Management
Change Management
Vulnerability Management
Vulnerability Identification
Finding Information about a Vulnerability
Vulnerability Scan
Penetration Assessment
Product Vulnerability Management
Vulnerability Analysis and Prioritization
Vulnerability Remediation
Patch Management
References and Additional Readings
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
Part III Cryptography
Chapter 6 Fundamentals of Cryptography and Public Key Infrastructure (PKI)
“Do I Know This Already?” Quiz
Foundation Topics
Cryptography
Ciphers and Keys
Ciphers
Keys
Block and Stream Ciphers
Symmetric and Asymmetric Algorithms
Symmetric Algorithms
Asymmetric Algorithms
Hashes
Hashed Message Authentication Code
Digital Signatures
Digital Signatures in Action
Key Management
Next-Generation Encryption Protocols
IPsec and SSL
IPsec
SSL
Fundamentals of PKI
Public and Private Key Pairs
RSA Algorithm, the Keys, and Digital Certificates
Certificate Authorities
Root and Identity Certificates
Root Certificate
Identity Certificate
X.500 and X.509v3 Certificates
Authenticating and Enrolling with the CA
Public Key Cryptography Standards
Simple Certificate Enrollment Protocol
Revoking Digital Certificates
Using Digital Certificates
PKI Topologies
Single Root CA
Hierarchical CA with Subordinate CAs
Cross-certifying CAs
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
Chapter 7 Introduction to Virtual Private Networks (VPNs)
“Do I Know This Already?” Quiz
Foundation Topics
What Are VPNs?
Site-to-site vs. Remote-Access VPNs
An Overview of IPsec
IKEv1 Phase 1
IKEv1 Phase 2
IKEv2
SSL VPNs
SSL VPN Design Considerations
User Connectivity
VPN Device Feature Set
Infrastructure Planning
Implementation Scope
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
Part IV Host-Based Analysis
Chapter 8 Windows-Based Analysis
“Do I Know This Already?” Quiz
Foundation Topics
Process and Threads
Memory Allocation
Windows Registration
Windows Management Instrumentation
Handles
Services
Windows Event Logs
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Q&A
References and Further Reading
Chapter 9 Linux- and Mac OS X–Based Analysis
“Do I Know This Already?” Quiz
Foundation Topics
Processes
Forks
Permissions
Symlinks
Daemons
UNIX-Based Syslog
Apache Access Logs
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
References and Further Reading
Chapter 10 Endpoint Security Technologies
“Do I Know This Already?” Quiz
Foundation Topics
Antimalware and Antivirus Software
Host-Based Firewalls and Host-Based Intrusion Prevention
Application-Level Whitelisting and Blacklisting
System-Based Sandboxing
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
Part V Security Monitoring and Attack Methods
Chapter 11 Network and Host Telemetry
“Do I Know This Already?” Quiz
Foundation Topics
Network Telemetry
Network Infrastructure Logs
Network Time Protocol and Why It Is Important
Configuring Syslog in a Cisco Router or Switch
Traditional Firewall Logs
Console Logging
Terminal Logging
ASDM Logging
Email Logging
Syslog Server Logging
SNMP Trap Logging
Buffered Logging
Configuring Logging on the Cisco ASA
Syslog in Large Scale Environments
Splunk
Graylog
Elasticsearch, Logstash, and Kibana (ELK) Stack
Next-Generation Firewall and Next-Generation IPS Logs
NetFlow Analysis
Commercial NetFlow Analysis Tools
Open Source NetFlow Analysis Tools
Counting, Grouping, and Mating NetFlow Records with SiLK
Big Data Analytics for Cyber Security Network Telemetry
Configuring Flexible NetFlow in Cisco IOS and Cisco IOS-XE Devices
Cisco Application Visibility and Control (AVC)
Network Packet Capture
tcpdump
Wireshark
Cisco Prime Infrastructure
Host Telemetry
Logs from User Endpoints
Logs from Servers
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
Chapter 12 Security Monitoring Operational Challenges
“Do I Know This Already?” Quiz
Foundation Topics
Security Monitoring and Encryption
Security Monitoring and Network Address Translation
Security Monitoring and Event Correlation Time Synchronization
DNS Tunneling and Other Exfiltration Methods
Security Monitoring and Tor
Security Monitoring and Peer-to-Peer Communication
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Q&A
Chapter 13 Types of Attacks and Vulnerabilities
“Do I Know This Already?” Quiz
Foundation Topics
Types of Attacks
Reconnaissance Attacks
Social Engineering
Privilege Escalation Attacks
Backdoors
Code Execution
Man-in-the-Middle Attacks
Denial-of-Service Attacks
Direct DDoS
Botnets Participating in DDoS Attacks
Reflected DDoS Attacks
Attack Methods for Data Exfiltration
ARP Cache Poisoning
Spoofing Attacks
Route Manipulation Attacks
Password Attacks
Wireless Attacks
Types of Vulnerabilities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Q&A
Chapter 14 Security Evasion Techniques
“Do I Know This Already?” Quiz
Foundation Topics
Encryption and Tunneling
Key Encryption and Tunneling Concepts
Resource Exhaustion
Traffic Fragmentation
Protocol-Level Misinterpretation
Traffic Timing, Substitution, and Insertion
Pivoting
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Q&A
References and Further Reading
Part VI Final Preparation
Chapter 15 Final Preparation
Tools for Final Preparation
Pearson Cert Practice Test Engine and Questions on the Website
Accessing the Pearson Test Prep Software Online
Accessing the Pearson Test Prep Software Offline
Customizing Your Exams
Updating Your Exams
Premium Edition
The Cisco Learning Network
Memory Tables
Chapter-Ending Review Tools
Suggested Plan for Final Review/Study
Summary
Part VII Appendixes
Appendix A Answers to the “Do I Know This Already?” Quizzes and Q&A Questions
Glossary
Index
Elements Available on the Book Website
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner
Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions
used in the IOS Command Reference. The Command Reference describes these
conventions as follows:
- Bold indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), bold indicates commands that are manually input by the user (such as a show command).
- Italic indicates arguments for which you supply actual values.
- Vertical bars (|) separate alternative, mutually exclusive elements.
- Square brackets ([ ]) indicate an optional element.
- Braces ({ }) indicate a required choice.
- Braces within brackets ([{ }]) indicate a required choice within an optional element.
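To see how these conventions combine, consider the following illustrative command entry. It is a simplified sketch in the spirit of the notation, not a verbatim entry from the IOS Command Reference:

```
ping [protocol] {host-name | system-address}
```

Here, ping is a keyword entered literally (set in bold in the book); protocol, host-name, and system-address are placeholders for values you supply (italic); the square brackets mark protocol as optional; and the braces with a vertical bar mean you must supply exactly one of host-name or system-address.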
Introduction
Congratulations! If you are reading this, you have in your possession a powerful tool
that can help you to:
- Improve your awareness and knowledge of cyber security fundamentals
- Increase your skill level related to the implementation of that security
- Prepare for the CCNA Cyber Ops SECFND certification exam
Whether you are preparing for the CCNA Cyber Ops certification or just changing
careers to cyber security, this book will help you gain the knowledge you need to get
started and prepared. We wrote this book with you in mind, and together
we will discover the critical ingredients that make up the recipe for a secure network
and how to succeed in cyber security operations. By focusing on covering the objectives
for the CCNA Cyber Ops SECFND exam and integrating that with real-world best
practices and examples, we created this content with the intention of being your
personal tour guides as we take you on a journey through the world of network security.
The CCNA Cyber Ops: Understanding Cisco Cybersecurity Fundamentals (SECFND)
210-250 exam is required for the CCNA Cyber Ops certification. This book covers all
the topics listed in Cisco’s exam blueprint, and each chapter includes key topics and
preparation tasks to assist you in mastering this information. Reviewing tables and
practicing test questions will help you reinforce your knowledge in all subject areas.
About the 210-250 CCNA Cyber Ops SECFND Exam
The CCNA Cyber Ops: Understanding Cisco Cybersecurity Fundamentals (SECFND)
210-250 exam is the first of the two required exams to achieve the CCNA Cyber Ops
certification and is aligned with the job role of associate-level security operations
center (SOC) security analyst. The SECFND exam tests candidates’ understanding of
cyber security’s basic principles, foundational knowledge, and core skills needed to
grasp the more advanced associate-level materials in the second required exam:
Implementing Cisco Cybersecurity Operations (SECOPS).
The CCNA Cyber Ops: Understanding Cisco Cybersecurity Fundamentals (SECFND)
210-250 exam is a computer-based test that has 55 to 60 questions and a 90-minute time
limit. Because all exam information is managed by Cisco Systems and is therefore
subject to change, candidates should continually monitor the Cisco Systems site for
exam updates at http://www.cisco.com/c/en/us/training-events/training-certifications/exams/current-list/secfnd.html.
You can take the exam at Pearson VUE testing centers. You can register with VUE at
www.vue.com/cisco.
Table I-1 210-250 SECFND Exam Topics
About the CCNA Cyber Ops SECFND 210-250 Official Cert Guide
This book maps to the topic areas of the 210-250 SECFND exam and uses a number of
features to help you understand the topics and prepare for the exam.
Objectives and Methods
This book uses several key methodologies to help you discover the exam topics on
which you need more review, to help you fully understand and remember those details,
and to help you prove to yourself that you have retained your knowledge of those topics.
So, this book does not try to help you pass the exams only by memorization, but by truly
learning and understanding the topics. This book is designed to help you pass the
SECFND exam by using the following methods:
Helping you discover which exam topics you have not mastered
Providing explanations and information to fill in your knowledge gaps
Supplying exercises that enhance your ability to recall and deduce the answers to
test questions
Providing practice exercises on the topics and the testing process via test questions
on the companion website
Book Features
To help you customize your study time using this book, the core chapters have several
features that help you make the best use of your time:
“Do I Know This Already?” quiz: Each chapter begins with a quiz that helps you
determine how much time you need to spend studying that chapter.
Foundation Topics: These are the core sections of each chapter. They explain the
concepts for the topics in that chapter.
Exam Preparation Tasks: After the “Foundation Topics” section of each chapter,
the “Exam Preparation Tasks” section lists a series of study activities that you
should do at the end of the chapter. Each chapter includes the activities that make
the most sense for studying the topics in that chapter:
Review All the Key Topics: The Key Topic icon appears next to the most
important items in the “Foundation Topics” section of the chapter. The “Review
All the Key Topics” activity lists the key topics from the chapter, along with
their page numbers. Although the contents of the entire chapter could be on the
exam, you should definitely know the information listed in each key topic, so you
should review these.
Complete the Tables and Lists from Memory: To help you memorize some
lists of facts, many of the more important lists and tables from the chapter are
included in a document on the companion website. This document lists only
partial information, allowing you to complete the table or list.
Define Key Terms: Although the exam is unlikely to ask you to define a term,
the CCNA Cyber Ops exams do require that you learn and know a lot of
networking terminology. This section lists the most important terms from the
chapter, asking you to write a short definition and compare your answer to the
glossary at the end of the book.
Q&A: Confirm that you understand the content you just covered.
Web-based practice exam: The companion website includes the Pearson Cert
Practice Test engine, which allows you to take practice exam questions. Use it to
prepare with a sample exam and to pinpoint topics where you need more study.
How This Book Is Organized
This book contains 14 core chapters—Chapters 1 through 14. Chapter 15 includes some
preparation tips and suggestions for how to approach the exam. Each core chapter
covers a subset of the topics on the CCNA Cyber Ops SECFND exam. The core
chapters are organized into parts. They cover the following topics:
Part I: Network Concepts
Chapter 1: Fundamentals of Networking Protocols and Networking Devices
covers the networking technology fundamentals such as the OSI model and different
protocols, including IP, TCP, UDP, ICMP, DNS, DHCP, ARP, and others. It also
covers the basic operations of network infrastructure devices such as routers,
switches, hubs, wireless access points, and wireless LAN controllers.
Chapter 2: Network Security Devices and Cloud Services covers the
fundamentals of firewalls, intrusion prevention systems (IPSs), Advanced Malware
Protection (AMP), and fundamentals of the Cisco Web Security Appliance (WSA),
Cisco Cloud Web Security (CWS), Cisco Email Security Appliance (ESA), and the
Cisco Cloud Email Security (CES) service. This chapter also describes the
operation of access control lists applied as packet filters on the interfaces of
network devices and compares and contrasts deep packet inspection with packet
filtering and stateful firewall operations. It provides details about inline traffic
interrogation and taps or traffic mirroring. This chapter compares and contrasts the
characteristics of data obtained from taps or traffic mirroring and NetFlow in the
analysis of network traffic.
Part II: Security Concepts
Chapter 3: Security Principles covers the principles of the defense-in-depth
strategy and compares and contrasts the concepts of risks, threats, vulnerabilities,
and exploits. This chapter also defines threat actor, runbook automation (RBA),
chain of custody (evidentiary), reverse engineering, sliding window anomaly
detection, personally identifiable information (PII), protected health information
(PHI), as well as the principle of least privilege and how to perform separation of
duties. It also covers the concepts of risk scoring, risk weighting, risk reduction,
and how to perform overall risk assessments.
Chapter 4: Introduction to Access Controls covers the foundation of access
control and management. It provides an overview of authentication, authorization,
and accounting principles, and introduces some of the most used access control
models, including discretionary access control (DAC), mandatory access control
(MAC), role-based access control (RBAC), and attribute-based access control
(ABAC). Also, this chapter covers the actual implementation of access control,
such as AAA protocols, port security, 802.1x, Cisco TrustSec, intrusion prevention
and detection, and antimalware.
Chapter 5: Introduction to Security Operations Management covers the
foundation of security operations management. Specifically, it provides an
overview of identity management, protocol and technologies, asset security
management, change and configuration management, mobile device management,
event and logging management, including Security Information and Event
Management (SIEM) technologies, vulnerability management, and patch
management.
Part III: Cryptography
Chapter 6: Fundamentals of Cryptography and Public Key Infrastructure (PKI)
covers the different hashing and encryption algorithms in the industry. It provides a
comparison of symmetric and asymmetric encryption algorithms and an introduction
of public key infrastructure (PKI), the operations of a PKI, and an overview of the
IPsec, SSL, and TLS protocols.
Chapter 7: Introduction to Virtual Private Networks (VPNs) provides an
introduction to remote access and site-to-site VPNs, different deployment
scenarios, and the VPN solutions provided by Cisco.
Part IV: Host-based Analysis
Chapter 8: Windows-Based Analysis covers the basics of how a system running
Windows handles applications. This includes details about how memory is used as
well as how resources are processed by the operating system. These skills are
essential for maximizing performance and securing a Windows system.
Chapter 9: Linux- and Mac OS X–Based Analysis covers how things work inside
a UNIX environment. This includes process execution and event logging. Learning
how the environment functions will not only improve your technical skills but can
also be used to build a strategy for securing these systems.
Chapter 10: Endpoint Security Technologies covers the functionality of endpoint
security technologies, including host-based intrusion detection, host-based
firewalls, application-level whitelisting and blacklisting, as well as systems-based
sandboxing.
Part V: Security Monitoring and Attack Methods
Chapter 11: Network and Host Telemetry covers the different types of data
provided by network and host-based telemetry technologies, including NetFlow,
traditional and next-generation firewalls, packet captures, application visibility and
control, and web and email content filtering. It also provides an overview of how
full packet captures, session data, transaction logs, and security alert data are used
in security operations and security monitoring.
Chapter 12: Security Monitoring Operational Challenges covers the different
operational challenges, including Tor, access control lists, tunneling, peer-to-peer
(P2P) communication, encapsulation, load balancing, and other technologies.
Chapter 13: Types of Attacks and Vulnerabilities covers the different types of
cyber security attacks and vulnerabilities and how they are carried out by threat
actors today.
Chapter 14: Security Evasion Techniques covers how attackers achieve stealth as
well as the techniques used to undermine detection and forensic technologies.
Topics include encryption, exhausting resources, fragmenting traffic, manipulating
protocols, and pivoting within a compromised environment.
Part VI: Final Preparation
Chapter 15: Final Preparation identifies the tools for final exam preparation and
helps you develop an effective study plan. It contains tips on how to best use the
web-based material to study.
Part VII: Appendixes
Appendix A: Answers to the “Do I Know This Already?” Quizzes and Q&A
Questions includes the answers to all the questions from Chapters 1 through 14.
Appendix B: Memory Tables (a website-only appendix) contains the key tables
and lists from each chapter, with some of the contents removed. You can print this
appendix and, as a memory exercise, complete the tables and lists. The goal is to
help you memorize facts that can be useful on the exam. This appendix is available
in PDF format at the book website; it is not in the printed book.
Appendix C: Memory Tables Answer Key (a website-only appendix) contains the
answer key for the memory tables in Appendix B. This appendix is available in
PDF format at the book website; it is not in the printed book.
Appendix D: Study Planner is a spreadsheet, available from the book website,
with major study milestones, where you can track your progress throughout your
study.
Companion Website
Register this book to get access to the Pearson Test Prep practice test software and other
study materials, plus additional bonus content. Check this site regularly for new and
updated postings written by the authors that provide further insight into the more
troublesome topics on the exam. Be sure to check the box that you would like to hear
from us to receive updates and exclusive discounts on future editions of this product or
related products.
To access this companion website, follow these steps:
1. Go to www.pearsonITcertification.com/register and log in or create a new
account.
2. Enter the ISBN 9781587147029.
3. Answer the challenge question as proof of purchase.
4. Click the “Access Bonus Content” link in the Registered Products section of your
account page, to be taken to the page where your downloadable content is
available.
Please note that many of our companion content files can be very large, especially
image and video files.
If you are unable to locate the files for this title by following the steps, please visit
www.pearsonITcertification.com/contact and select the “Site Problems/Comments”
option. Our customer service representatives will assist you.
Pearson Test Prep Practice Test Software
As noted previously, this book comes complete with the Pearson Test Prep practice test
software containing two full exams. These practice tests are available to you either
online or as an offline Windows application. To access the practice exams that were
developed with this book, please see the instructions in the card inserted in the sleeve in
the back of the book. This card includes a unique access code that enables you to
activate your exams in the Pearson Test Prep software.
Accessing the Pearson Test Prep Software Online
The online version of this software can be used on any device with a browser and
connectivity to the Internet, including desktop machines, tablets, and smartphones. To
start using your practice exams online, simply follow these steps:
1. Go to http://www.PearsonTestPrep.com.
2. Select Pearson IT Certification as your product group.
3. Enter your email/password for your account. If you don’t have an account on
PearsonITCertification.com or CiscoPress.com, you will need to establish one by
going to PearsonITCertification.com/join.
4. In the My Products tab, click the Activate New Product button.
5. Enter the access code printed on the insert card in the back of your book to
activate your product.
6. The product will now be listed in your My Products page. Click the Exams
button to launch the exam settings screen and start your exam.
Accessing the Pearson Test Prep Software Offline
If you wish to study offline, you can download and install the Windows version of the
Pearson Test Prep software. There is a download link for this software on the book’s
companion website, or you can just enter the following link in your browser:
http://www.pearsonitcertification.com/content/downloads/pcpt/engine.zip
To access the book’s companion website and the software, simply follow these steps:
1. Register your book by going to PearsonITCertification.com/register and entering
the ISBN 9781587147029.
2. Respond to the challenge questions.
3. Go to your account page and select the Registered Products tab.
4. Click the Access Bonus Content link under the product listing.
5. Click the Install Pearson Test Prep Desktop Version link under the Practice
Exams section of the page to download the software.
6. Once the software finishes downloading, unzip all the files on your computer.
7. Double-click the application file to start the installation, and follow the onscreen
instructions to complete the registration.
8. Once the installation is complete, launch the application and select the Activate
Exam button on the My Products tab.
9. Click the Activate a Product button in the Activate Product Wizard.
10. Enter the unique access code found on the card in the sleeve in the back of your
book and click the Activate button.
11. Click Next and then the Finish button to download the exam data to your
application.
12. You can now start using the practice exams by selecting the product and clicking
the Open Exam button to open the exam settings screen.
Note that the offline and online versions will sync together, so saved exams and grade
results recorded on one version will be available to you on the other as well.
Customizing Your Exams
Once you are in the exam settings screen, you can choose to take exams in one of three
modes:
Study mode
Practice Exam mode
Flash Card mode
Study mode allows you to fully customize your exams and review answers as you are
taking the exam. This is typically the mode you would use first to assess your
knowledge and identify information gaps. Practice Exam mode locks certain
customization options, as it is presenting a realistic exam experience. Use this mode
when you are preparing to test your exam readiness. Flash Card mode strips out the
answers and presents you with only the question stem. This mode is great for late-stage
preparation when you really want to challenge yourself to provide answers without the
benefit of seeing multiple-choice options. This mode will not provide the detailed score
reports that the other two modes will, so it should not be used if you are trying to
identify knowledge gaps.
In addition to these three modes, you will be able to select the source of your questions.
You can choose to take exams that cover all of the chapters or you can narrow your
selection to just a single chapter or the chapters that make up a specific part in the book.
All chapters are selected by default. If you want to narrow your focus to individual
chapters, simply deselect all the chapters then select only those on which you wish to
focus in the Objectives area.
You can also select the exam banks on which to focus. Each exam bank comes complete
with a full exam of questions that cover topics in every chapter. The two exams printed
in the book are available to you as well as two additional exams of unique questions.
You can have the test engine serve up exams from all four banks or just from one
individual bank by selecting the desired banks in the exam bank area.
There are several other customizations you can make to your exam from the exam
settings screen, such as the time of the exam, the number of questions served up, whether
to randomize questions and answers, whether to show the number of correct answers for
multiple-answer questions, and whether to serve up only specific types of questions.
You can also create custom test banks by selecting only questions that you have marked
or questions on which you have added notes.
Updating Your Exams
If you are using the online version of the Pearson Test Prep software, you should always
have access to the latest version of the software as well as the exam data. If you are
using the Windows desktop version, every time you launch the software, it will check to
see if there are any updates to your exam data and automatically download any changes
that were made since the last time you used the software. This requires that you are
connected to the Internet at the time you launch the software.
Sometimes, due to many factors, the exam data may not fully download when you
activate your exam. If you find that figures or exhibits are missing, you may need to
manually update your exam.
To update a particular exam you have already activated and downloaded, simply select
the Tools tab and select the Update Products button. Again, this is only an issue with
the desktop Windows application.
If you wish to check for updates to the Pearson Test Prep software, Windows desktop
version, simply select the Tools tab and select the Update Application button. This will
ensure you are running the latest version of the software engine.
Part I: Network Concepts
Chapter 1. Fundamentals of Networking Protocols and
Networking Devices
This chapter covers the following topics:
Introduction to TCP/IP and OSI models
Wired LAN and Ethernet
Frame switching
Hub, switch, and router
Wireless LAN and technologies
Wireless LAN controller and access point
IPv4 and IPv6 addressing
IP routing
ARP, DHCP, ICMP, and DNS
Transport layer protocols
Welcome to the first chapter of the CCNA Cyber Ops SECFND #210-250 Official Cert
Guide. In this chapter, we go through the fundamentals of networking protocols and
explore how devices such as switches and routers work to allow two hosts to
communicate with each other, even if they are separated by many miles.
If you are already familiar with these topics—for example, if you already have a CCNA
Routing and Switching certification—this chapter will serve as a refresher on protocols
and device operations. If, on the other hand, you are approaching these topics for the
first time, you’ll learn about the fundamental protocols and devices at the base of
Internet communication and how they work.
This chapter begins with an introduction to the TCP/IP and OSI models and then
explores link layer technologies and protocols—specifically the Ethernet and Wireless
LAN technologies. We then discuss how the Internet Protocol (IP) works and how a
router uses IP to move packets from one site to another. Finally, we look into the two
most used transport layer protocols: Transmission Control Protocol (TCP) and User
Datagram Protocol (UDP).
“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz helps you identify your strengths and deficiencies
in this chapter’s topics. The 13-question quiz, derived from the major sections in the
“Foundation Topics” portion of the chapter, helps you determine how to spend your
limited study time. You can find the answers in Appendix A, "Answers to the 'Do I Know
This Already?' Quizzes and Q&A Questions."
Table 1-1 outlines the major topics discussed in this chapter and the “Do I Know This
Already?” quiz questions that correspond to those topics.
Table 1-1 “Do I Know This Already?” Section-to-Question Mapping
1. Which layer of the TCP/IP model is concerned with end-to-end communication
and offers multiplexing service?
a. Transport
b. Internet
c. Link layer
d. Application
2. Which statement is true concerning a link working in Ethernet half-duplex mode?
a. A collision cannot happen.
b. When a collision happens, the two stations immediately retransmit.
c. When a collision happens, the two stations wait for a random time before
retransmitting.
d. To avoid a collision, stations wait a random time before transmitting.
3. What is the main characteristic of a hub?
a. It regenerates the signal and retransmits on all ports.
b. It uses a MAC address table to switch frames.
c. When a packet arrives, the hub looks up the routing table before forwarding
the packet.
d. It supports full-duplex mode of transmission.
4. Where is the information about ports and device Layer 2 addresses kept in a
switch?
a. MAC address table
b. Routing table
c. L2 address table
d. Port table
5. Which of the following features are implemented by a wireless LAN controller?
(Select all that apply.)
a. Wireless station authentication
b. Quality of Service
c. Channel encryption
d. Transmission and reception of frames
6. Which IP header field is used to recognize fragments from the same packet?
a. Identification
b. Fragment Offset
c. Flags
d. Destination Address
7. Which protocol is used to request a host MAC address given a known IP
address?
a. ARP
b. DHCP
c. ARPv6
d. DNS
8. Which type of query is sent from a DNS resolver to a DNS server?
a. Recursive
b. Iterative
c. Simple
d. Type Q query
9. How many host IPv4 addresses are possible in a /25 network?
a. 126
b. 128
c. 254
d. 192
10. How many bits can be used for host IPv6 address assignment in the 2345::/64
network?
a. 48
b. 64
c. 16
d. 2^64
11. What is SLAAC used for?
a. To provide an IPv6 address to a client
b. To route IPv6 packets
c. To assign a DNS server
d. To provide a MAC address given an IP address
12. Which one of these protocols requires a connection to be established before
transmitting data?
a. TCP
b. UDP
c. IP
d. OSPF
13. What is the TCP window field used for?
a. Error detection
b. Flow control
c. Fragmentation
d. Multiplexing
Foundation Topics
TCP/IP and OSI Model
Two main models are currently used to explain the operation of an IP-based network.
These are the TCP/IP model and the Open Systems Interconnection (OSI) model. This
section provides an overview of these two models.
TCP/IP Model
The TCP/IP model is the foundation for most of the modern communication networks.
Every day, each of us uses some application based on the TCP/IP model to
communicate. Think, for example, about a task we consider simple: browsing a web
page. That simple action would not be possible without the TCP/IP model.
The TCP/IP model’s name includes the two main protocols we will discuss in the
course of this chapter: Transmission Control Protocol (TCP) and Internet Protocol (IP).
However, the model goes beyond these two protocols and defines a layered approach
that can map nearly any protocol used in today’s communication.
In its original definition, the TCP/IP model included four layers, where each of the
layers would provide transmission and other services for the level above it. These are
the link layer, internet layer, transport layer, and application layer.
In its most modern definition, the link layer is split into two additional layers to clearly
demarcate the physical services and protocols from the data link services and protocols
included in this layer. The Internet layer is also sometimes called the networking layer,
a term borrowed from another well-known model, the OSI model, which is described in
the next section. Figure 1-1 shows the TCP/IP stack model.
Figure 1-1 TCP/IP Stack Model
The TCP/IP model works on two main concepts that define how the layers interact:
On the same host, each layer works by providing services for the layer above it on
the TCP/IP stack.
On different hosts, communication at a given layer is established by using the same
protocol at that layer.
For example, on your personal computer, the TCP/IP stack is implemented to allow
networking communication. The link layer provides services for the IP layer (for
example, encapsulation of an IP packet in an Ethernet frame). The IP layer provides
services to the transport layer (for example, IP routing and IP addressing), and so on.
These are all examples of services provided to the layer above it within the host.
Now imagine that your personal computer wants to connect to a web server (for
example, to browse a web page). The web server will also implement the TCP/IP stack.
In this case, the IP layer of your personal computer and the IP layer of the web server
will use a common protocol, IP, for the communication. The same thing will happen
with the transport protocol, where the two devices will use TCP, and so on. These are
examples of the same layer protocol used on different hosts to communicate.
Later in this chapter, the "Networking Communication with the TCP/IP Model" section
provides more detail about how the communication works between two hosts and how
the TCP/IP stack is used on the same host.
The list that follows analyzes each layer in a bit more detail:
Link layer: The link layer provides physical transmission support and includes the
protocols used to transmit information over a link between two devices. In simple
terms, the link layer includes the hardware and protocol necessary to send
information between two hosts that are connected by a physical link (for example, a
cable) or over the air (for example, via radio waves). It also includes the notion of
and mechanisms for information being replicated and retransmitted over several
ports or links by dedicated devices such as switches and bridges.
Because different physical means are used to transmit information, there are several
protocols that work at the link layer. One of the most popular is the Ethernet
protocol. As mentioned earlier, nowadays the link layer is usually split further into
the physical layer, which is concerned with physical bit transmission, and the data
link layer, which provides encapsulation and addressing facilities as well as
abstraction for the upper layers.
At the link layer, the message unit is called a frame.
Internet layer: Of course, not all devices can be directly connected to each other,
so there is a need to transmit the information across multiple devices. The Internet
layer provides networking services and includes protocols that allow for the
transmission of information through multiple hops. To do that, each host is identified
by an Internet Protocol (IP) address (or a different type of address if another
network-layer protocol is used). Each hop device between two hosts, called a
networking node, knows how to reach the destination IP address and transmits the
information to the best next node toward the destination. The nodes are said to
perform the routing of the information, and the way each node, also called a
router, determines the best next node toward the destination is called the routing
protocol.
At the Internet layer, the message unit is called a packet.
Transport layer: When transmitting information, the sending host knows when the
information is sent, but has no way to know whether it actually made it to the
destination. The transport layer provides services to successfully transfer
information between two endpoints. It abstracts the lower-level layers and is
concerned with the end-to-end process. For example, it is used to detect whether
any part of the information went missing. It also provides information about which
type of information is being transmitted. For example, a host may want to request a
web page and also start an FTP transaction. How do we distinguish between these
two actions? The transport layer helps to separate the two requests by using the
concept of a transport layer port. Each service is enabled on a different transport
layer port—for example, port 80 for a web request or port 21 for an FTP
transaction. So when the destination host receives a request on port 80, it knows
that this needs to be passed to the application layer handling web requests. This
type of service provided by the transport layer is called multiplexing.
At this layer, the message unit is called a segment.
Application layer: The application layer is the top layer and is the one most
familiar to end users. For example, at the application layer, a user may use the
email client to send an email message or use a web browser to browse a website.
Both of these actions map to a specific application, which uses a protocol to fulfill
the service.
In this example, the Simple Mail Transfer Protocol (SMTP) is used to handle
the email transfer, whereas the Hypertext Transfer Protocol (HTTP) is used to
request a web page within a browser. At this level, the protocols are not concerned
with how the information will reach the destination, but only work on defining the
content of the information being transmitted.
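The multiplexing service described for the transport layer can be sketched in a few lines of code. This is an illustrative model only; the port-to-service mapping reflects the well-known ports mentioned above, and the function name is our own invention, not anything from the exam blueprint:

```python
# Illustrative sketch of transport-layer multiplexing: the destination
# port carried in each segment tells the receiving host which
# application-layer service should process the payload.
WELL_KNOWN_PORTS = {
    80: "HTTP (web request)",
    21: "FTP (file transfer)",
    25: "SMTP (email transfer)",
}

def demultiplex(destination_port: int) -> str:
    """Map a segment's destination port to the service that handles it."""
    service = WELL_KNOWN_PORTS.get(destination_port)
    if service is None:
        raise ValueError(f"no service listening on port {destination_port}")
    return service

print(demultiplex(80))  # prints "HTTP (web request)"
```

A real host keeps a similar mapping internally: when a segment arrives with destination port 80, the operating system hands its payload to the process listening on that port.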
Table 1-2 shows examples of protocols working at each layer of the TCP/IP model.
Table 1-2 Protocols at Each Layer of the TCP/IP Model
Table 1-3 summarizes what message units are referred to as at each layer.
Table 1-3 Message Unit Naming at Each Layer of the TCP/IP Model
TCP/IP Model Encapsulation
In the TCP/IP model, each layer provides services for the level above it. Protocols at
each layer add a protocol header, and in some cases a trailer, to the information
provided by the upper layer. The protocol header includes enough information for the
protocol to work toward the delivery of the information. This process is called
encapsulation.
When the information arrives at the destination, the inverse process is used. Each layer
reads the information present in the header of the protocol working at that specific layer,
performs an action based on that information, and, if needed, passes the remaining
information to the next layer in the stack. This process is called decapsulation.
Figure 1-2 shows an example of encapsulation.
Figure 1-2 Encapsulation
Referring to Figure 1-2, let’s assume that this represents the TCP/IP stack of a host, for
example Host A, trying to request a web page using HTTP. Let’s see how the
encapsulation works, step by step:
Step 1. In this example, the host has requested a web page using the HTTP
application layer protocol. The HTTP application generates the information,
represented as HTTP “data” in this example.
Step 2. On the host, the TCP/IP implementation would detect that HTTP uses TCP at
the transport layer and will send the HTTP data to the transport layer for further
handling. The protocol at the transport layer, TCP, will create a TCP header,
which includes information such as the service port (TCP port 80 for a web
page request), and will send it to the next layer, the Internet layer, for further
processing. The TCP header plus the payload forms a TCP segment.
Step 3. The Internet layer receives the TCP information, attaches an IP header, and
encapsulates it in an IP packet. The IP header will contain information to handle
the packet at the Internet layer. This includes, for example, the IP addresses of
the source and destination.
Step 4. The IP packet is then passed to the link layer for further processing. The
TCP/IP stack detects that it needs to use Ethernet to transmit the frame to the
next device. It will add an Ethernet header and trailer and transmit the frame to
the physical network interface card (NIC), which will take care of the physical
transmission of the frame.
When the information arrives at the destination, the receiving host will start from the
bottom of the TCP/IP stack by receiving an Ethernet frame. The link layer of the
destination host will read and process the header and trailer, and then pass the IP packet
to the Internet layer for further processing.
The same process happens at the Internet layer, and the TCP segment is passed to the
transport layer, which will again process the TCP header information and pass the
HTTP data for final processing to the HTTP application.
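The encapsulation and decapsulation just described can be condensed into a few lines of Python. This is only an illustration of the layering idea; the header strings and the port and address values are placeholders, not real protocol encodings:

```python
# Illustrative sketch of TCP/IP encapsulation and decapsulation; the header
# strings are placeholders, not real protocol formats.

def encapsulate(http_data):
    """Wrap application data layer by layer, outermost header last."""
    segment = ("TCP|dport=80", http_data)        # transport layer: TCP header + payload
    packet = ("IP|dst=198.51.100.10", segment)   # Internet layer: IP header + segment
    frame = ("ETH|type=0800", packet, "FCS")     # link layer: Ethernet header + trailer
    return frame

def decapsulate(frame):
    """Process each header in reverse order and recover the application data."""
    eth_header, packet, eth_trailer = frame      # link layer strips header and trailer
    ip_header, segment = packet                  # Internet layer strips the IP header
    tcp_header, http_data = segment              # transport layer strips the TCP header
    return http_data

frame = encapsulate("GET / HTTP/1.1")
assert decapsulate(frame) == "GET / HTTP/1.1"
```

Each layer on the sending side adds its own header; each layer on the receiving side removes exactly one header and passes the rest up, mirroring steps 1 through 4 above.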
Networking Communication with the TCP/IP Model
Let’s look back at the example of browsing a web page and see how the TCP/IP model
is used to transmit and receive information through a networking connection path.
A networking device is a device that implements the TCP/IP model. The model may be
fully implemented (for example, in the case of a user computer or a server) or partially
implemented (for example, a router might implement the TCP/IP stack only up to the
Internet layer).
Figure 1-3 shows the logical topology. It includes two hosts: Host A, which is
requesting a web page, and Server B, which is the destination of the request. The
network connectivity is provided by two routers: R1 and R2, which are connected via
an optical link. The host and server are directly connected to R1 and R2, respectively,
with a physical cable.
Figure 1-3 Logical Topology Demonstrating Networking Communication with
TCP/IP Model
Figure 1-4 shows how each TCP/IP model layer interacts in this case.
Figure 1-4 Interaction of the TCP/IP Model Layers
Referring to Figure 1-4, let’s see how the steps are executed:
Step 1. The HTTP application on Host A will create an HTTP Application message
that includes an HTTP header and the contents of the request in the payload.
This will be encapsulated up to the link layer, as described in Figure 1-2, and
transmitted over the cable to R1.
Step 2. The R1 link layer will receive the frame, extract the IP packet, and send it to
the IP layer. Because the main function of the router is to forward the IP packet,
it will not further decapsulate the packet. It will use the information in the IP
header to forward the packet to the best next router, R2. To do that, it will
encapsulate the IP packet in a new link layer frame—for example, Point-to-Point
Protocol over ATM (PPPoA)—and send the frame on the physical link toward R2.
Step 3. R2 will follow the same process that R1 followed in step 2 and will send the
IP packet encapsulated in a new Ethernet frame to Server B.
Step 4. Server B’s link layer will decapsulate the frame and send it to the Internet
layer.
Step 5. The Internet layer detects that the packet is destined for Server B itself by
looking into the IP header information (more specifically the value of the
destination IP address). It strips the IP header and passes the TCP segment to
the transport layer.
Step 6. The transport layer uses the port information included in the TCP header to
determine to which application to pass the data (in this case, the web service
application).
Step 7. The application layer, the web service, finally receives the request and may
decide to respond (for example, by providing the web page to Host A). The
process will start again, with the web service creating some data and passing it
to the HTTP application layer protocol for handling.
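The hop-by-hop handling in steps 2 and 3 can be summarized in a short sketch: the routers replace the link layer framing at every hop, while the IP packet inside travels unchanged. The frame labels below are purely illustrative:

```python
# Each hop swaps the link layer framing; the encapsulated IP packet is untouched.
def router_forward(frame, new_link_header):
    old_link_header, ip_packet = frame     # decapsulate only up to the Internet layer
    return (new_link_header, ip_packet)    # re-encapsulate for the next link

frame_from_host_a = ("ETH|HostA->R1", "IP|HostA->ServerB|data")
frame_r1_to_r2 = router_forward(frame_from_host_a, "PPP|R1->R2")
frame_r2_to_b = router_forward(frame_r1_to_r2, "ETH|R2->ServerB")
assert frame_r2_to_b[1] == "IP|HostA->ServerB|data"   # IP packet unchanged end to end
```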
The example in Figure 1-4 is intentionally simplified; for instance, TCP requires a
connection to be established before any data is transmitted. However, the main idea
behind the TCP/IP model should now be clear as a basis for understanding how the
various protocols work.
Open System Interconnection Model
The Open System Interconnection (OSI) reference model is another model that uses
abstraction layers to represent the operation of communication systems. The idea behind
the design of the OSI model is to be comprehensive enough to take into account
advancement in network communications and to be general enough to allow several
existing models for communication systems to transition to the OSI model.
The OSI model presents several similarities with the TCP/IP model described in the
previous section. One of the most important similarities is the use of abstraction layers.
As with TCP/IP, each layer provides service for the layer above it within the same
computing device, while it interacts at the same layer with other computing devices.
The OSI model includes seven abstract layers, each representing a different function and
service within a communication network:
Physical layer—Layer 1 (L1): Provides services for the transmission of bits over
the data link.
Data link layer—Layer 2 (L2): Includes protocols and functions to transmit
information over a link between two connected devices. For example, it provides
flow control and L1 error detection.
Network layer—Layer 3 (L3): This layer includes the function necessary to
transmit information across a network and provides abstraction on the underlying
means of connection. It defines L3 addressing, routing, and packet forwarding.
Transport layer—Layer 4 (L4): This layer includes services for end-to-end
connection establishment and information delivery. For example, it includes error
detection, retransmission capabilities, and multiplexing.
Session layer—Layer 5 (L5): This layer provides services to the presentation
layer to establish a session and exchange presentation layer data.
Presentation layer—Layer 6 (L6): This layer provides services to the
application layer to deal with specific syntax, which is how data is presented to the
end user.
Application layer—Layer 7 (L7): This is the last (or first) layer of the OSI model
(depending on how you see it). It includes all the services of a user application,
including the interaction with the end user.
The functionalities of the OSI layers can be mapped to similar functionalities provided
by the TCP/IP model. It is common to use OSI layer terminology to indicate a
protocol operating at a specific layer, even if the communication device implements the
TCP/IP model rather than the OSI model.
Figure 1-5 shows how each layer of the OSI model maps to the corresponding TCP/IP
layer.
Figure 1-5 Mapping the OSI Reference Model to the TCP/IP Model
The physical and data link layers of the OSI model provide the same functions as the
link layer in the TCP/IP model. The network layer can be mapped to the Internet layer,
and the transport layer in OSI provides similar services as the transport layer in TCP/IP.
The OSI session, presentation, and application layers map to the TCP/IP application
layer.
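This mapping can be captured as a simple lookup table. A short sketch:

```python
# OSI layer number -> (OSI layer name, corresponding TCP/IP layer),
# as described in the mapping above.
OSI_TO_TCPIP = {
    1: ("physical", "link"),
    2: ("data link", "link"),
    3: ("network", "Internet"),
    4: ("transport", "transport"),
    5: ("session", "application"),
    6: ("presentation", "application"),
    7: ("application", "application"),
}

# Layers 1-2 collapse into the link layer and layers 5-7 into the
# application layer, so the TCP/IP side has only four distinct layers.
assert len({tcpip for _, tcpip in OSI_TO_TCPIP.values()}) == 4
```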
Within the same host, each layer interacts with the adjacent layer in a way that is similar
to the encapsulation performed in the TCP/IP model. The encapsulation is formalized in
the OSI model as follows:
Protocol control information (PCI) for a layer (N) is the information added by the
protocol.
A protocol data unit (PDU) for a layer (N) is composed of the data produced at that
layer plus the PCI for that layer.
A service data unit (SDU) for a layer (N) is the (N+1) layer PDU.
Figure 1-6 shows the relationship between PCI, PDU, and SDU.
Figure 1-6 Relationship Between PCI, PDU, and SDU
For example, a TCP segment includes the TCP header, which maps to the L4PCI, and a
TCP payload, which includes the data to transmit. Together, they form the L4PDU. When
the L4PDU is passed to the network layer (for example, to be processed by IP), the
L4PDU is the same as the L3SDU. IP will add an IP header, the L3PCI. The L3PCI plus
the L3SDU form the L3PDU, and so on.
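The relationship between PCI, SDU, and PDU can be written down directly. A minimal sketch, with placeholder header strings:

```python
# At layer N: PDU(N) = PCI(N) + SDU(N), and SDU(N) = PDU(N+1).
def build_pdu(pci, sdu):
    return pci + sdu

l4_pdu = build_pdu(pci="TCPHDR|", sdu="data")   # the TCP segment (L4PDU)
l3_sdu = l4_pdu                                 # the L3SDU is the L4PDU
l3_pdu = build_pdu(pci="IPHDR|", sdu=l3_sdu)    # the IP packet (L3PDU)
assert l3_pdu == "IPHDR|TCPHDR|data"
```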
The encapsulation process works in a similar way to the TCP/IP model. Each layer
protocol adds its own protocol header and passes the information to the lower-layer
protocol.
Figure 1-7 shows an example of encapsulation in the OSI model.
Figure 1-7 Encapsulation in the OSI Model
Table 1-4 shows examples of protocols and devices that work at a specific OSI layer.
Note that each device is mapped to a level related to its main function capability. For
example, a router’s main function is forwarding packets based on L3 information, so it
is usually referred to as an L3 device; however, it also needs to incorporate L2 and L1
functionalities. Furthermore, a router may implement the full OSI model (for example,
because it implements some additional features such as firewalling or VPN). The same
rationale could be applied to firewalls. They are usually classified as L4 devices;
however, most of the time they are able to inspect traffic up to the application layer.
Table 1-4 Protocols and Devices Mapping to the OSI Layer Model and the TCP/IP
Model
The flow of information through a network in the OSI model is similar to what’s
described in Figure 1-4 for the TCP/IP model. This is not by chance, because the OSI
model has been designed to offer compatibility and enable the transition to the OSI
model from multiple other communication models (for example, from TCP/IP).
Figure 1-8 shows a network implementing the OSI model.
Figure 1-8 Flow of Information Through a Network Implementing the OSI Model
In the rest of this book, we will use the OSI model and TCP/IP model layer names
interchangeably.
Layer 2 Fundamentals and Technologies
This section goes through the fundamentals of the link layer (or Layer 2). Although it is
not required to know specific implementations and configurations, the CCNA Cyber
Ops SECFND exam requires candidates to understand the various link layer
technologies, such as hubs, bridges, and switches, and their behavior. Candidates also
need to understand the protocols that enable the link layer communication. Readers
interested in learning more about Layer 2 technologies and protocols can refer to CCNA
Routing and Switching materials for more comprehensive information on the topic.
Two very well-known concepts used to describe communication networks at Layer 2
are the local area network (LAN) and the wide area network (WAN). As the names
suggest, a LAN is a collection of devices, protocols, and technologies operating near
each other, whereas a WAN typically deals with devices, protocols, and technologies
used to transmit information over long distances.
The next sections introduce two of the most widely used LAN types: wired LANs (specifically
Ethernet-based LANs) and wireless LANs.
Ethernet LAN Fundamentals and Technologies
Ethernet is a protocol used to provide transmission and services for the physical and
data link layers, and it is described in the IEEE 802.3 standards collection. Ethernet is
part of the larger IEEE 802 standards for LAN communication. Another example of the
IEEE 802 standards is 802.11, which covers wireless LAN.
The Ethernet collection includes standards specifying the functionality at the physical
layer and data link layer. The Ethernet physical layer includes several standards,
depending on the physical means used to transmit the information. The data link layer
functionality is provided by the Ethernet Medium Access Control (MAC) described in
IEEE 802.3, together with the Logical Link Control (LLC) described in IEEE 802.2.
Note that MAC is sometimes referred to as Media Access Control instead of Medium
Access Control. Both terms are correct according to IEEE 802. In the rest of this
document, we will use Medium Access Control or simply MAC.
LLC was initially used to allow several types of Layer 3 protocols to work with the
MAC. However, in most networks in use today, there is only one type of Layer 3
protocol, which is the Internet Protocol (IP), so LLC is seldom used because IP can be
directly encapsulated using MAC.
The following sections provide an overview of the Ethernet physical layer and MAC
layer standards.
Ethernet Physical Layer
The physical layer includes several standards to account for the various physical means
possibly encountered in a LAN deployment. For example, the transmission can happen
over an optical fiber, copper, and so on.
Examples of Ethernet standards are 10BASE-T and 1000BASE-LX. Each Ethernet
standard is characterized by the maximum transmission speed and maximum distance
between two connected stations. Specifically, the transmission speed has seen (and is
currently seeing) the biggest evolution.
Table 1-5 shows examples of popular Ethernet physical layer standards.
Table 1-5 Popular Ethernet Physical Layer Standards
The Ethernet nomenclature is easy to understand. Each standard name follows this
format:
sTYPE-M
where:
s: The speed (for example, 1000).
TYPE: The modulation type (for example, baseband [BASE]).
M: The information about the medium. Examples include T for twisted pair, F for
fiber, L for long wavelength, and X for external sourced coding.
For example, with 1000BASE-T, the speed is 1000, the modulation is baseband, and the
medium (T) is twisted-pair cable (copper).
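The sTYPE-M format lends itself to simple parsing. The sketch below handles only the medium letters mentioned above, passing any others through unchanged; note that this simplified pattern does not cover names such as 10GBASE-LR, where the speed field itself contains a letter:

```python
import re

# Medium letters from the examples in the text; others are passed through as-is.
MEDIUM = {"T": "twisted pair", "F": "fiber", "LX": "long wavelength"}

def parse_ethernet_name(name):
    """Split a name like 1000BASE-T into (speed, modulation, medium)."""
    match = re.fullmatch(r"(\d+)(BASE)-([A-Z]+)", name)
    if match is None:
        raise ValueError(f"unrecognized standard name: {name}")
    speed, modulation, medium = match.groups()
    return int(speed), modulation, MEDIUM.get(medium, medium)

assert parse_ethernet_name("1000BASE-T") == (1000, "BASE", "twisted pair")
assert parse_ethernet_name("1000BASE-LX") == (1000, "BASE", "long wavelength")
```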
An additional characteristic of a physical Ethernet standard is the type of cable and
connector used to connect two stations. For example, 1000BASE-T requires a
Category 5e (CAT 5e) or better unshielded twisted-pair (UTP) cable and RJ-45 connectors.
Ethernet Medium Access Control
Ethernet MAC deals with the means used to transfer information between two Ethernet
devices, also called stations, and it is independent from the physical means used for
transmission.
The standard describes two modes of medium access:
Half duplex: In half-duplex mode, two Ethernet devices share a common
transmission medium. The access is controlled by implementing Carrier Sense
Multiple Access with Collision Detection (CSMA/CD). In CSMA/CD, a device
has the ability to detect whether there is a transmission occurring over the shared
medium. When there is no transmission, a device can start sending. It can happen
that two devices send nearly at the same time. In that case, there is a message
collision. When a collision occurs, it is detected by CSMA/CD-enabled devices,
which will then stop transmitting and will delay the transmission for a certain
amount of time, called the backoff time. The jam signal is used by the station to
signal that a collision occurred. All stations that can sense a collision are said to be
in the same collision domain.
Half-duplex mode was used in early implementations of Ethernet; however, due to
several limitations, including transmission performance, it is rarely seen nowadays.
A network hub is an example of a device that can be used to share a common
transmission medium across multiple Ethernet stations. You’ll learn more about
hubs later in this chapter in the “LAN Hubs and Bridges” section.
Figure 1-9 shows an example of CSMA/CD access.
Figure 1-9 CSMA/CD Access
Full duplex: In full-duplex mode, two devices can transmit simultaneously because
there is a dedicated channel allocated for the transmission. Because of that, there is
no need to detect collisions or to wait before transmitting. Full duplex is called
“collision free” because collisions cannot happen.
A switch is an example of a device that provides a collision-free domain and
dedicated transmission channel. You’ll learn more about switches later in this
chapter in the “LAN Switches” section.
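The backoff behavior used in half-duplex CSMA/CD is a truncated binary exponential backoff: after the nth collision, a station picks a random delay between 0 and 2^min(n, 10) − 1 slot times. A minimal sketch:

```python
import random

def backoff_slots(collision_count, rng=random):
    """Truncated binary exponential backoff: pick 0 .. 2**min(n, 10) - 1 slots."""
    exponent = min(collision_count, 10)        # the window stops growing at 10
    return rng.randrange(2 ** exponent)

# After the first collision, a station waits either 0 or 1 slot times.
assert backoff_slots(1) in (0, 1)
# Even after many collisions, the wait never exceeds 1023 slot times.
assert backoff_slots(15) <= 1023
```

Because each station picks its delay independently at random, the chance that two colliding stations retransmit at the same moment shrinks as the window doubles.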
Ethernet Frame
Figure 1-10 shows an example of an Ethernet frame.
Figure 1-10 Ethernet Frame
The Ethernet frame includes the following fields:
Preamble: Used by the two stations for synchronization purposes.
Start Frame Delimiter (SFD): Indicates the start of the Ethernet frame. This is
always set to 10101011.
Destination Address: Contains the recipient address of the frame.
Source Address: Contains the source of the frame.
Length/Type: This field can contain either the length of the MAC Client Data
(length interpretation) or the type code of the Layer 3 protocol transported in the
frame payload (type interpretation). The latter is the most common. For example,
code 0800 indicates IPv4, and code 86DD indicates IPv6.
MAC Client Data and Pad: This field contains information being encapsulated at
the Ethernet layer (for example, an LLC PDU or an IP packet). The minimum length
is 46 bytes; the maximum length depends on the type of Ethernet frame:
1500 bytes for basic frames. This is the most common Ethernet frame.
1504 bytes for Q-tagged frames.
1982 bytes for envelope frames.
Frame Check Sequence (FCS): This field is used by the receiving device to
detect errors in transmission. This is usually called the Ethernet trailer. Optionally,
an additional extension may be present.
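Assuming the common type interpretation of the Length/Type field, the header portion of a frame can be unpacked in a few lines. The preamble, SFD, and FCS are normally handled by the NIC hardware, so this sketch starts at the destination address:

```python
import struct

def parse_ethernet_header(frame_bytes):
    """Unpack destination, source, and Length/Type, and return the payload."""
    # "!6s6sH": network byte order, two 6-byte addresses, one 2-byte type field
    dst, src, eth_type = struct.unpack("!6s6sH", frame_bytes[:14])
    return dst.hex(), src.hex(), eth_type, frame_bytes[14:]

raw = (bytes.fromhex("ffffffffffff")      # broadcast destination address
       + bytes.fromhex("020011111111")    # source address
       + b"\x08\x00"                      # type 0800: IPv4
       + b"payload")
dst, src, eth_type, payload = parse_ethernet_header(raw)
assert dst == "ffffffffffff" and eth_type == 0x0800 and payload == b"payload"
```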
Ethernet Addresses
To transmit a frame, Ethernet uses source and destination addresses. The Ethernet
addresses are called MAC addresses, or Extended Unique Identifier (EUI) in the new
terminology, and they are either 48 bits (MAC-48 or EUI-48) or 64 bits (MAC-64 or
EUI-64), if we consider all MAC addresses for the larger IEEE 802 standard.
The MAC address is usually expressed in hexadecimal. There are a few ways it can be
written for easier reading. The following two are used the most:
01-23-45-67-89-ab (IEEE 802)
0123.4567.89ab (Cisco notation)
There are three types of MAC addresses:
Broadcast: A broadcast MAC address is obtained by setting all 1s in the MAC
address field. This results in an address like FFFF.FFFF.FFFF. A frame with a
broadcast destination address is transmitted to all the devices within a LAN.
Multicast: A frame with a multicast destination MAC address is delivered to all
devices belonging to the specific multicast group.
Unicast: A unicast address is associated with a particular device’s NIC or port. It
is composed of two sections. The first 24 bits contain the Organizational Unique
Identifier (OUI) assigned to an organization. Although this is unique for an
organization, the same organization can request several OUIs. For example, Cisco
has multiple registered OUIs. The other portion of the MAC address (for example,
the remaining 24 bits in the case of MAC-48) can be assigned by the vendor itself.
Figure 1-11 shows the two portions of a MAC address.
Figure 1-11 MAC Address Portions
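Splitting a MAC-48 address into its two portions, and classifying it as broadcast, multicast, or unicast, takes only a few lines. In the sketch below, the multicast check uses the least significant bit of the first octet (the group bit), which is how multicast addresses are marked on the wire:

```python
def _octets(mac):
    """Accept IEEE (01-23-45-67-89-ab) or Cisco (0123.4567.89ab) notation."""
    return bytes.fromhex(mac.replace("-", "").replace(".", "").replace(":", ""))

def split_mac(mac):
    """Return the (OUI, vendor-assigned) portions of a MAC-48 address."""
    octets = _octets(mac)
    return octets[:3].hex("-"), octets[3:].hex("-")

def mac_kind(mac):
    octets = _octets(mac)
    if octets == b"\xff" * 6:
        return "broadcast"                 # all bits set to 1
    if octets[0] & 0x01:
        return "multicast"                 # group bit set in the first octet
    return "unicast"

assert split_mac("0123.4567.89ab") == ("01-23-45", "67-89-ab")
assert mac_kind("FFFF.FFFF.FFFF") == "broadcast"
assert mac_kind("0200.1111.1111") == "unicast"
```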
Ethernet Devices and Frame-Forwarding Behavior
So far we have discussed the basic concepts of Ethernet, such as frame formats and
addresses. It is now time to see how all this works in practice. We will start with the
most basic case and progress toward a more complicated frame forwarding behavior
and topology.
LAN Hubs and Bridges
As discussed previously, a collision domain is defined as two or more stations needing
to share the same medium. This setup requires some algorithm to avoid two frames
being sent at nearly the same time and thus colliding. When a collision occurs, the
information is lost. CSMA/CD has been used to resolve the collision problem by
allowing an Ethernet station to detect a collision and avoid retransmitting at the same
time.
The simplest example of a collision domain is an Ethernet bus where all the stations are
connected as shown in Figure 1-12.
Figure 1-12 Ethernet Bus
Because the Ethernet signal degrades over the distance between stations, the
same topology could be obtained by using a central LAN hub to which all the stations
connect. The role of the LAN hub, or repeater, is simply to regenerate the signal and
transmit it out all of its ports. This topology typically uses half-duplex transmission
mode and, as in the case of an Ethernet bus, defines a single collision domain.
Figure 1-13 shows how the information sent by Host A is repeated over all the hub’s
ports.
Figure 1-13 A Network Hub Where the Electrical Signal of a Frame Is Regenerated
and the Information Sent Out to All the Device Ports
Before transmitting, a station senses the medium (also called carrier) to see if any frame
is being transmitted. If the medium is empty, the station can start transmitting. If two
stations start at nearly the same time, as is the case in this example, a collision occurs.
All stations in the collision domain detect the collision and adopt a backoff algorithm to
delay the transmission.
Figure 1-14 shows an example of a collision happening on a hub network. Note that B
will also receive a copy of the frame sent from C, and C will receive a copy of the
frame sent from B, although this is not shown in the figure for simplicity.
Figure 1-14 Collision Domain with a Hub or Repeater
Collision domains are highly inefficient because two stations cannot transmit at the
same time. The performance degrades further as the number of stations connected to
the same hub increases. To partially overcome this situation, network bridges are used.
A bridge is a device that allows the separation of collision domains.
Unlike a LAN hub, which just regenerates the signal, a LAN bridge makes a
frame-forwarding decision based on whether or not a frame needs to
reach a device on the other side of the bridge.
Figure 1-15 shows an example of a network with hubs and bridges. The bridges
partition the network into two collision domains, thus allowing the size of the network
to scale.
Figure 1-15 A Bridge Creating Two Collision Domains
LAN Switches
In modern networks, half-duplex mode has been replaced by full-duplex mode. Full-
duplex mode allows two stations to transmit simultaneously because the transmission
and receiver channels are separated. Because of that, in full duplex, CSMA/CD is not
used because collisions cannot occur.
A LAN switch is a device that allows multiple stations to connect in full-duplex mode.
This creates a separate collision domain for each of the ports, so collisions cannot
happen. For example, Figure 1-16 shows four hosts connected to a switch. Each host
has a separate channel to transmit and receive, so each port actually identifies a
collision domain. Note that usually in this kind of scenario it does not make sense to
refer to a port as a collision domain, and it is usually more practical to assume that there
is no collision domain—because no collisions can occur.
Figure 1-16 A Switch Creating Several Collision Domains in Full-Duplex Mode
How does a switch forward a frame? Whereas a hub would just replicate the same
information on all the ports, a switch tries to do something a bit more intelligent and use
the destination MAC address to forward the frame to the right station.
Figure 1-17 shows a simple example of frame forwarding.
Figure 1-17 Frame Forwarding with a Switch
How does a switch know to which port to forward a frame? Before this forwarding
mechanism can be explained, we need to discuss three concepts:
MAC address table: This table holds the link between a MAC address and the
physical port of the switch where frames for that MAC address should be
forwarded.
Figure 1-18 shows an example of a simplified MAC address table.
Figure 1-18 Simple MAC Address Table
Dynamic MAC address learning: It is possible to populate the MAC address table
manually, but that is probably not the best use of anyone’s time. Dynamic learning is
a mechanism that helps with populating the MAC address table. When a switch
receives an Ethernet frame on a port, it notes the source MAC address and inserts
an entry in the MAC address table, marking that MAC address as reachable from
that port.
Ethernet Broadcast domain: A broadcast domain is formed by all devices
connected to the same LAN switches. Broadcast domains are separated by network
layer devices such as routers. An Ethernet broadcast domain is sometimes also
called a subnet.
Figure 1-19 shows an example of a network with two broadcast domains separated
by a router.
Now that you have been introduced to the concepts of a MAC address table, dynamic
MAC address learning, and broadcast domain, we can look at a few examples that
explain how the forwarding is done.
The forwarding decision is uniquely done based on the destination MAC address. In this
example, Host A with MAC address 0200.1111.1111, connected to switch port F0/1, is
sending traffic (Ethernet frames) to Host C with MAC address 0200.3333.3333,
connected to port F0/3.
Figure 1-19 A Router Dividing the Network into Two Broadcast Domains
At the beginning, the MAC address table of the switch is empty. When the first frame is
received on port F0/1, the switch does two things:
It looks up the MAC address table. Because the table is empty, it forwards the
frame to all its ports except the one where the frame was received. This is usually
called flooding.
It uses dynamic MAC address learning to update the MAC address table with the
information that 0200.1111.1111 is reachable through port F0/1.
Figure 1-20 shows the frame flooding and the MAC address table updated with the
information about Host A.
Figure 1-20 Example of a MAC Address Table Being Updated as the Frame Is
Received and Forwarded by the Switch
Host B receives a copy of the frame; however, because the destination MAC address is
not its own, it discards the frame. Host C receives the frame and may decide to respond.
When Host C responds, the switch will look up the MAC address table. This time, it
will find an entry for Host A and will just forward the frame on port F0/1 toward Host
A. Like in the previous case, it will update the MAC address table to indicate that
0200.3333.3333 (Host C) is reachable through port F0/3, as shown in Figure 1-21.
Figure 1-21 Dynamic Learning of the Host C MAC Address
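The learn-and-flood behavior walked through above can be condensed into a short sketch. The port names and addresses follow the example in the figures:

```python
class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                  # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port    # dynamic MAC address learning
        out_port = self.mac_table.get(dst_mac)
        if out_port is None or dst_mac == "ffff.ffff.ffff":
            # unknown unicast or broadcast: flood on all ports but the ingress
            return [p for p in self.ports if p != in_port]
        return [out_port]

sw = Switch(["F0/1", "F0/2", "F0/3"])
# The first frame from Host A to Host C is flooded...
assert sw.receive("F0/1", "0200.1111.1111", "0200.3333.3333") == ["F0/2", "F0/3"]
# ...but the reply from Host C is forwarded only toward Host A.
assert sw.receive("F0/3", "0200.3333.3333", "0200.1111.1111") == ["F0/1"]
```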
The flooding mechanism is also used when a frame has a broadcast destination MAC
address. In that case, the frame will be forwarded to all ports in the Ethernet broadcast
domain. In a more complex topology, switches may be connected to each other,
sometimes with multiple ports to ensure redundancy; however, the basic forwarding
principles do not change. All MAC addresses that are reachable via other switches will
be marked in the MAC address table as reachable via the port where the switches are
connected.
Figure 1-22 shows an example of Host A connected to port F0/1 of Switch 1 and
sending traffic to Host E, connected to F0/1 of Switch 2. Switch 1 and Switch 2 are
connected via port F0/10 on both sides.
Figure 1-22 Frame Forwarding and MAC Address Table Updates with Multiple
Switches. Host A sends a frame for Host E.
When Host A sends the first frame, Switch 1 will flood it on all ports, including on port
F0/10 toward Switch 2. Switch 2 will also flood on all its ports because it does not
know where Host E is located. Both Switch 1 and Switch 2 will use dynamic learning
to update their own MAC address tables. Switch 1 will mark Host A as reachable via
F0/1, while Switch 2 will mark Host A as reachable via F0/10.
If Host E responds to Host A, the same steps will be repeated, as shown in Figure 1-23.
Figure 1-23 Frame Forwarding and MAC Address Table Updates with Multiple
Switches. Host E replies to a frame sent by Host A.
Link Layer Loop and Spanning Tree Protocols
Let’s now consider another example, shown in Figure 1-24, where three switches
(SW1, SW2, and SW3) are interconnected.
Figure 1-24 Example of a Broadcast Storm Caused in a Network with Redundant
Links
Assume that Host A, connected to SW1, sends a broadcast frame. SW1 will forward the
frame to SW2 and SW3 on ports G0/2 and G0/3. SW2 will receive the frame and
forward it to SW3 and Host E. SW3 will do the same and forward the frame to SW2.
SW3 will again receive the frame from SW2 and will forward it to SW1, and so on.
As you can see, the frame will loop indefinitely within the LAN, thus causing
degradation of the network performance due to the useless forwarding of frames. This is
called a broadcast storm. Other types of loops can happen—for example, if Host A
had sent a frame to a host that never replies (hence, no switch knows where the
host is). In general, link layer (or Layer 2) loops can happen any time there is a
redundant link within the Layer 2 topology.
The second undesirable effect of Layer 2 loops is MAC table instability. SW1 in the
preceding example will keep (incorrectly) updating the MAC address table, marking
Host A on port G0/2 and G0/3 as it receives the looping frames with the source address
of Host A on these two ports. So, whenever SW1 receives frames for Host A, it will
incorrectly send them to the wrong port, making the problem worse.
The third effect of a Layer 2 loop is that a host (for example, Host E) will keep
receiving a copy of the same frame that’s circulating within the network. This can
confuse the host and may result in higher-layer protocol failure.
Spanning Tree Protocols (STPs) are used to avoid Layer 2 loops. This section
describes the fundamental concepts of STPs. Over the years, the concept has been
enhanced to improve performance and to take into consideration the evolution of
network complexity. In its basic function, the STP creates a logical Layer 2 topology
that is loop free. This is done by allowing traffic on certain ports and blocking traffic on
others. If the topology changes (for example, if a link fails), STP will recalculate the
new logical topology (it is said to “reconverge”) and unblock certain ports to adapt to
the new topology.
Figure 1-25 shows STP applied to the previous example. Port G0/2 on SW3 is marked
as blocked, and it will not forward traffic. This avoids frames looping. If the link
between SW1 and SW3 goes down, STP will unblock the link between SW3 and SW2
to allow traffic to pass and provide redundancy.
Figure 1-25 Example of Layer 2 with STP Enabled
STP uses a spanning tree algorithm (STA) to create a tree-like, loop-free logical
topology. To understand how a basic STP works, we need to explore a few concepts:
Bridge ID (BID): An 8-byte ID that is independently calculated on each switch.
The first 2 bytes of the BID contain the priority, while the remaining 6 bytes
include the MAC address of the switch (of one of its ports).
Bridge PDU (BPDU): Represents the STP protocol messages. The BPDU is sent to
a multicast MAC address. The address may depend on the specific STP protocol in
use.
Root switch: Represents the root of the spanning tree. The spanning tree root is
identified through a process called root election. The root switch BID is called the
root BID.
Port cost: A numerical value associated with each spanning tree port. Usually this
value depends on the speed of the port: the higher the speed, the lower the cost.
Table 1-6 reports the recommended values from IEEE (in IEEE 802.1Q-2014).
Table 1-6 Spanning Tree Port Costs
Root cost: Represents the cost to reach the root switch. The root cost is given by
summing all the costs of the ports on the shortest path to the root switch. The root
cost value of the root switch is 0.
At initialization, an STP root switch needs to be identified. The root switch will be the
switch with the lowest BID. The BID priority field is used first to determine the lowest
BID; if two switches have the same priority, then the MAC address is used to determine
the root.
The process to identify the switch with the lowest BID is called root election. At the
beginning, each switch tries to become the root and sends out a Hello BPDU to
announce its presence in the network to the rest of the switches. The initial Hello BPDU
includes its own switch BID as the root BID in the BPDU field.
When a switch receives a Hello BPDU with a better root BID (lower BID), it will stop
sending its own Hello BPDU and will forward the Hello BPDU generated from the root
switch. It will also update the root cost and add the cost of the port where the BPDU
was received. The process continues until the root election is over and a root switch is
identified. At this point, all switches on the network know which switch is the root and
what the root cost is to that switch. Figure 1-26 shows an example of root election in
our sample topology.
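The election logic can be sketched in a few lines of Python (a simplified model: the switch names and MAC addresses are illustrative, and real switches learn this information from exchanged BPDUs rather than from a shared table):

```python
# Sketch of STP root election: the switch with the numerically lowest BID
# (priority first, then MAC address as a tiebreaker) becomes the root.

def bridge_id(priority: int, mac: str) -> tuple:
    """A BID compares by priority first, then by MAC; lower wins on both."""
    return (priority, int(mac.replace(":", ""), 16))

def elect_root(switches: dict) -> str:
    """Return the name of the switch with the lowest BID."""
    return min(switches, key=lambda name: bridge_id(*switches[name]))

# All three switches use the same priority, so the MAC address breaks the tie.
switches = {
    "SW1": (32768, "00:00:00:00:00:01"),
    "SW2": (32768, "00:00:00:00:00:02"),
    "SW3": (32768, "00:00:00:00:00:03"),
}
```

With equal priorities, SW1's lower MAC address makes it the root, matching the outcome in Figure 1-26.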
Figure 1-26 STP Root Election
SW1 will send a BPDU to SW2 and SW3. When SW2 receives the BPDU from SW1, it
will see that the BID for SW1 is lower than its own BID, so it will update the Root BID
entry to include the BID of SW1. SW2 will then forward the BPDU to SW3 with a root
cost of 4.
SW3 has also received the BPDU from SW1 and already updated the Root BID entry
with SW1’s BID because it is lower than its own BID. It will then forward the BPDU to
SW2 with a root cost of 5. At the end, SW1 becomes the root within this topology.
As stated at the beginning of this section, the spanning tree is created by blocking
certain ports. Once the root switch is elected, the tree can start to be built. At this point,
we need to discuss the concepts of port role and port state:
Port role: Depending on the specific STP protocol, ports can have a few different
names and roles; however, three main roles are important for understanding how STP
works. Once these are clear, the nuances of the various STP protocols can be easily
understood.
Root port (RP) is the port that offers the lowest path cost (root cost) to the root
on non-root switches.
Designated port (DP) is the port that offers the lowest path to the root for a
given LAN segment. For example, if a switch has a host attached to a port, that
port becomes a DP because it’s the closest port to the root for that LAN segment.
The switch is said to be the designated switch for that LAN segment. All ports
on a root switch are DPs.
Non-designated ports are all the other ports that are not either the RP or DP.
Depending on the specific STP standards, they can assume various names, and
the standard can define additional port categories.
Let’s look again at our topology, but in a bit different way. Referring to Figure 1-26,
we can identify three segments. On the root switch, SW1, all ports are DPs because
they offer the shortest path to the root for Segments 1 and 2. What is the DP for
Segment 3? Port G0/3 on SW2 will become the DP because its cost to the root is 4,
whereas Port G0/2 on SW3 would have a cost of 5.
The RP identification is a bit easier. On each non-root switch, we select the
port with the lowest cost path to the root. In this case, G0/1 on SW2 and G0/1 on
SW3 become the RP. All remaining ports will be non-designated ports.
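The root-cost calculation behind these role assignments is a shortest-path computation over the port costs. The following sketch uses link costs of 4 and 5 taken from the chapter's example (real switches derive costs from port speed, per Table 1-6) to reproduce the DP selection for Segment 3:

```python
import heapq

# Root cost = sum of the port costs along the shortest path to the root
# switch. Computed here with Dijkstra's algorithm over the sample topology.

def root_costs(links: dict, root: str) -> dict:
    """Return the root cost of every switch relative to the given root."""
    costs = {root: 0}
    queue = [(0, root)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > costs.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, link_cost in links[node]:
            new_cost = cost + link_cost
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return costs

# Link costs follow the chapter's example values.
links = {
    "SW1": [("SW2", 4), ("SW3", 5)],
    "SW2": [("SW1", 4), ("SW3", 4)],
    "SW3": [("SW1", 5), ("SW2", 4)],
}
```

`root_costs(links, "SW1")` yields a cost of 4 for SW2 and 5 for SW3, which is why SW2's G0/3 wins the DP role on Segment 3.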
Port state: The port state is related to the specific action a port can take while in
that state. As in the port role definition, the name of the state depends on the STP
protocol being used. Here are some common examples of port states:
Blocking: In this state, a port blocks all received frames except Layer 2
management frames (for example, BPDUs).
Listening: A port transitions to this state from the blocking state when STP
determines that the port needs to participate in forwarding. At this stage,
however, the port is not fully functional. It can process BPDUs and respond to
Layer 2 management messages, but it does not forward data frames.
Learning: The port transitions to learning after the listening phase. In this phase,
the port still does not forward frames; however, it learns the MAC addresses via
dynamic learning and fills in the MAC address table.
Forwarding: In this state, the port is fully operational and receives and
forwards frames.
Disabled: A port in the disabled state neither forwards nor receives frames and does
not participate in the STP process, so it does not process BPDUs.
When the STP protocol has converged, which means the RPs and DPs are identified,
each port transitions to a terminal state. Every RP and DP will be in the forwarding
state, while all the other ports will be in the blocking state. Figure 1-27 shows the
terminal state of the ports in our topology.
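The state progression described above can be summarized as a small state machine (a simplified 802.1D-style model; the exact state names and timers vary by STP variant):

```python
# Allowed STP port state transitions in a simplified 802.1D-style model.
STP_TRANSITIONS = {
    "blocking":   ["listening"],             # port selected as RP or DP
    "listening":  ["learning", "blocking"],  # proceed, or lose the role
    "learning":   ["forwarding", "blocking"],
    "forwarding": ["blocking"],              # topology change demotes the port
    "disabled":   [],                        # administratively out of STP
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a port may move directly from one state to another."""
    return target in STP_TRANSITIONS[current]
```

Note that a port cannot jump straight from blocking to forwarding; it must pass through listening and learning first, which is what makes classic STP convergence slow.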
Figure 1-27 STP Terminal State Applied to the Network Topology
STP provides a critical function within communication networks, so a wrong design or
implementation of the Spanning Tree Protocol (for example, an incorrect selection of the
root switch) could lead to poor performance or even catastrophic failure in some cases.
Through the years, Spanning Tree Protocols have seen several updates, and new
standards have emerged. The most common versions of Spanning Tree Protocols in use
today are Rapid STP (RSTP), Per-VLAN Spanning Tree Plus (PVST+), and Multiple
Spanning Tree (MST).
Virtual LAN (VLAN) and VLAN Trunking
So far, we have assumed that everything happens within a single LAN. In simple terms,
a LAN can be identified as a part of the network within a single broadcast domain.
LANs (and broadcast domains) are separated by Layer 3 devices such as routers.
As the network grows and becomes more complex, operating within a single broadcast
domain degrades the network performance and adds complexity to management
protocols, such as to the STP.
The concept of a virtual LAN (VLAN) has been introduced to overcome the issues
created by a very large single LAN. A VLAN can exist within a switch, and each switch
port can be assigned to a specific VLAN.
Figure 1-28 shows four hosts connected to the same switch. Host A and Host E are
assigned to VLAN 101 whereas Host B and Host D are assigned to VLAN 102. The
switch treats each VLAN as a separate broadcast domain. A frame from
one VLAN cannot be forwarded to a different VLAN at Layer 2. As such, a VLAN
provides Layer 2 network separation.
Figure 1-28 Two Different VLANs Used to Separate Broadcast Domains within the
Same Switch
Here are some common benefits of using a VLAN:
Reduces the number of devices receiving the broadcast frame and the related
overhead
Creates Layer 2 network separation
Reduces the load and complexity of management protocols
Segments troubleshooting and failure areas, as a failure in one VLAN will not
propagate to the rest of the network
How does frame forwarding work in VLANs? The same process we described for a
single LAN applies for each VLAN. The switch knows which port is linked to which
VLAN and will forward the frame accordingly. In the case of multiple switches, the
VLAN concept can still work. Figure 1-29 shows the VLAN concept across two
switches.
Figure 1-29 Example of a VLAN and VLAN Trunk Used on a Topology with
Multiple Switches
In this case, Host A and Host E, although attached to two different switches, can still be
configured within the same VLAN (for example, VLAN 101). The link between SW1
and SW2 is called a trunk, and it is a special link because it can transport frames
belonging to several VLANs.
VLAN tagging is used to enable the forwarding between Host A and Host E within the
same VLAN as well as across multiple switches. Referring to Figure 1-29, when Host
A sends a frame to Host E, SW1 does not know where Host E is, so it will forward the
frame to all ports in VLAN 101, including the trunk port to SW2.
As you can see, SW1 will not forward the frame to Host B because it is in a different
VLAN. SW1, before sending the frame on the trunk link to SW2, will add a VLAN tag to
the frame that carries the VLAN ID, VLAN 101. This tells SW2 that this frame should
be forwarded to ports in VLAN 101 only.
SW2 receives the frame over the trunk link, strips the VLAN tagging, and forwards the
frame to all its ports in VLAN 101 (in this case, only to F0/1). If Host E responds, the
same process applies, except that SW2 sends the frame only over the trunk link
(because SW2 now knows how to reach Host A) and tags it with VLAN 101.
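The per-VLAN forwarding behavior described above can be sketched as a toy learning switch (a simplified model: the MAC table is scoped per VLAN, and the trunk is modeled as a single-VLAN port for brevity):

```python
from collections import defaultdict

# A per-VLAN learning switch: frames are only flooded to ports in the same
# VLAN, and the MAC address table is kept separately for each VLAN.

class VlanSwitch:
    def __init__(self, port_vlans: dict):
        self.port_vlans = port_vlans        # port name -> VLAN ID
        self.mac_table = defaultdict(dict)  # VLAN ID -> {MAC: port}

    def forward(self, in_port: str, src_mac: str, dst_mac: str) -> list:
        """Return the list of egress ports for a received frame."""
        vlan = self.port_vlans[in_port]
        self.mac_table[vlan][src_mac] = in_port       # learn the source MAC
        if dst_mac in self.mac_table[vlan]:
            return [self.mac_table[vlan][dst_mac]]    # known unicast
        # Unknown destination: flood to all ports in the VLAN except ingress.
        return [p for p, v in self.port_vlans.items()
                if v == vlan and p != in_port]

# SW1 from Figure 1-29: Host A on F0/1 (VLAN 101), Host B on F0/2 (VLAN 102).
sw1 = VlanSwitch({"F0/1": 101, "F0/2": 102, "Trunk": 101})
```

The first frame from Host A to Host E is flooded only within VLAN 101 (so it reaches the trunk but never Host B), and the reply comes back as a known unicast once the MAC is learned.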
The VLAN information is added to the Ethernet frame. The way this is done depends
on the protocol used for trunking. The best-known and most widely used trunking
protocol today is defined in IEEE 802.1Q (dot1q). Another protocol is Inter-Switch
Link (ISL), a Cisco proprietary protocol that was used in the past.
In IEEE 802.1Q, the VLAN tagging is obtained by adding an IEEE 802.1Q tag between
the source MAC address and the Type field in the Ethernet frame.
Figure 1-30 shows an example of an IEEE 802.1Q tag. The tag includes the VLAN ID.
Figure 1-30 IEEE 802.1Q Tag
IEEE 802.1Q introduces the concept of a native VLAN. The difference between a native
and non-native VLAN is that frames in the native VLAN are sent untagged over the
trunk link. When a trunk is configured for IEEE 802.1Q and a switch receives an
untagged frame over the trunk link, it interprets the frame as belonging to the native
VLAN and forwards it accordingly.
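As a sketch, here is how the 4-byte 802.1Q tag is inserted into a frame and how an untagged frame received on a trunk is mapped to the native VLAN (assuming PCP and DEI are zero; the tag layout follows IEEE 802.1Q):

```python
import struct

# The 802.1Q tag (TPID 0x8100 followed by a 16-bit TCI carrying the 12-bit
# VLAN ID) is inserted between the source MAC address and the EtherType.

def add_dot1q_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC addresses."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)   # PCP(3) | DEI(1) | VID(12)
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]     # dst MAC (6) + src MAC (6) first

def frame_vlan(frame: bytes, native_vlan: int) -> int:
    """An untagged frame received on a dot1q trunk belongs to the native VLAN."""
    if frame[12:14] == b"\x81\x00":
        return struct.unpack("!H", frame[14:16])[0] & 0x0FFF
    return native_vlan
```

Running `add_dot1q_tag(frame, 101)` grows the frame by exactly 4 bytes, which is why 802.1Q trunk links must accept slightly larger frames than access links.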
Cisco VLAN Trunking Protocol
Cisco VLAN Trunking Protocol (VTP) is a Cisco proprietary protocol used to manage
VLAN distribution across switches. VTP should not be confused with protocols that
actually handle the tagging of frames with VLAN information when being sent over a
trunk link. VTP is used to distribute information about existing VLANs to all switches in
a VTP domain so that VLANs do not have to be manually configured, thus reducing the
burden of the administrator.
For example, when a new VLAN is created on one switch, the same VLAN may need to
be created on all switches to enable VLAN trunking and consistent use of VLAN IDs.
VTP facilitates the process by sending automatic advertisements about the state of the
VLAN databases across the VTP domain. Switches that receive advertisements keep
their VLAN databases synchronized based on the information found in the VTP
messages.
VTP relies on protocols such as 802.1Q to transmit information. VTP defines three
modes of operation:
Server mode: In VTP server mode, the administrator can configure or remove a
VLAN. VTP takes care of distributing the information to the other switches in the
VTP domain.
Client mode: In VTP client mode, a switch receives VLAN updates and advertises
the VLANs already configured; however, VLANs cannot be added or removed on the
switch itself.
Transparent mode: In transparent mode, the switch does not participate in VTP, so
it does not update its VLAN database and does not generate VTP advertisements;
however, it forwards VTP advertisements received from other switches.
Inter-VLAN Traffic and Multilayer Switches
As described in the previous section, VLANs provide a convenient way to separate
broadcast domains. This means, however, that a Layer 3 device is needed to forward
traffic between two VLANs even if they are on the same switch. We have defined
switches as Layer 2 devices, so a switch by itself would not be able to forward traffic
from one VLAN to the other, even if the source and destination host reside physically on
the same switch.
Figure 1-31 shows an example of inter-VLAN traffic. Host A in VLAN 101 is sending
traffic to Host B in VLAN 102. Both hosts are connected to SW1. Because SW1 is a
switch operating at Layer 2, a Layer 3 device (for example, a router, R1) is needed to
forward the traffic. In the figure, the router uses two different interfaces connected to the
switch, where G0/1 is in VLAN 101 and G0/2 is in VLAN 102.
Figure 1-32 Router on a Stick (ROAS)
In both of the preceding examples, there is a waste of resources. For example, a packet
needs to travel to the first router in the path and then come back to the same switch,
creating additional load on the links. Additionally, there is a loss of performance due to
the encapsulation and upper-layer processing of the frame.
The solution is to integrate Layer 3 function within a classic Layer 2 switch. This type
of switch is called a Layer 3 switch or sometimes a multilayer switch. Figure 1-33
shows an example of inter-VLAN flow with a multilayer switch.
Figure 1-33 Inter-VLAN Flow with a Multilayer Switch
Wireless LAN Fundamentals and Technologies
Together with Ethernet, which is defined as wired access to a LAN, wireless LAN
(WLAN) is one of the most used technologies for LAN access. This book covers the
basics of WLAN fundamentals and technologies. Interested readers can refer to the
CCNA Wireless 200-355 Official Cert Guide book for additional information.
Wireless LAN is defined within the IEEE 802.11 standards. While in some aspects
WLANs resemble classic Ethernet technology, there are several significant differences.
The first and most notable difference is the medium. Here are several other
characteristics that distinguish a wireless medium from a wire medium:
There is no defined boundary.
It is more prone to interference by other signals on the same medium.
It is less reliable.
The signal can propagate in asymmetric ways (for example, due to reflection).
The way stations access the medium is also different. In the previous section, you
learned that Ethernet defines two operational modes: half duplex, where stations can
transmit only one at a time, and full duplex, where stations can transmit simultaneously. In
WLANs, network stations can only use half-duplex mode because they are not able to
transmit and receive at the same time due to the limitation of the medium.
This means that two stations need to implement a way to detect if the medium (in this
case, the radio frequency channel) is being used to avoid transmitting at the same time.
This functionality is provided by Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA). Note that this is different from the CSMA/CD used in Ethernet.
The main difference is in how a collision is handled. Wired devices can detect
collisions over the medium, whereas wireless devices cannot.
As we have seen for Ethernet, a wireless station senses the medium to determine
whether it is possible to transmit. However, the way this is done differs from wired
devices. In a wired technology, the device can sense an electrical signal on the wire and
determine whether someone else is transmitting. This cannot happen in the case of
wireless devices. There are mainly two methods for carrier sense:
Physical carrier sense: When the station is not transmitting, it can sense the
channel for the presence of other frames. This is sometimes referred to as Clear
Channel Assessment (CCA).
Virtual carrier sense: When transmitting a frame, stations include in the frame
header an estimated time for the transmission of the frame. This value can be used to
estimate how long the channel will be busy.
Collision detection is not possible for similar reasons. Wireless clients thus need to
avoid collisions. To do that, they use a mechanism called Collision Avoidance. The
mechanism works by using backoff timers. Each station waits a backoff period before
transmitting. In addition to the backoff period, a station may need to wait for an
additional time, called interframe space, which is used to reduce the likelihood of a
collision and to allow an extra cushion of time between two frames.
802.11 defines several interframe space timers. The standard interframe timer is called
Distributed Interframe Space (DIFS).
The basic process of transmitting frames includes three steps:
Step 1. Sense the channel to see whether it is busy.
Step 2. Select a delay based on the backoff timer. If, in the meantime, the channel gets
busy, the backoff timer is stopped. When the channel is clear again, the backoff
timer is restarted.
Step 3. Wait for an additional DIFS time.
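The three steps can be sketched as follows (a simplified timing model in abstract slot units rather than real 802.11 slot times; the DIFS and contention window values are illustrative):

```python
import random

# Simplified CSMA/CA transmission timing: wait for a clear channel, pick a
# random backoff, and add the DIFS interframe space before transmitting.

DIFS = 2  # arbitrary slot units, for illustration only

def time_to_transmit(busy_until: int, now: int, cw: int = 15) -> int:
    """Return the slot at which a station may start transmitting."""
    start = max(now, busy_until)        # Step 1: sense; defer while busy
    backoff = random.randint(0, cw)     # Step 2: random backoff delay
    return start + backoff + DIFS       # Step 3: plus the DIFS interval
```

In the real protocol the backoff counter is paused whenever the channel becomes busy and resumed afterward, as the scenario in Figure 1-34 shows; this sketch only models the timing of a single uncontested attempt.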
Figure 1-34 illustrates the process of transmitting frames in a WLAN. Client A is ready
to transmit, it senses the medium, selects a backoff time, and then transmits. The
duration of the frame is included in the frame header. Client B and Client C wait until
the frame from Client A has been transmitted plus the DIFS, and then start the backoff
timer. Client C’s backoff timer expires before Client B’s, so Client C transmits before
Client B. Client B finds the channel busy, so it stops the backoff timer. Client B waits
for the new transmission time, the DIFS period and the remaining backoff timer, and
then it transmits.
Figure 1-34 Transmitting Frames in a WLAN
One particularity of WLANs compared to wired networks is that a WLAN requires the
other party to send an acknowledgement so that the sender knows the frame has been
received.
802.11 Architecture and Basic Concepts
Unlike wired connections, where a station needs a physical connection to be able to
transmit, the wireless medium is open, so any station can start transmitting. The IEEE
802.11 standards define the concept of Basic Service Set (BSS), which identifies a set
of devices that share some common parameters and can communicate through a wireless
connection. The most basic type of BSS is called Independent BSS (IBSS), and it is
formed by two or more wireless stations communicating directly. An IBSS is sometimes
called an ad hoc wireless network.
Figure 1-35 shows an example of IBSS.
Figure 1-35 Independent BSS
Another type of BSS is called infrastructure BSS. The core of an infrastructure BSS is a
wireless access point, or simply an access point (AP). Each station will associate to the
AP, and each frame is sent to the AP, which will then forward it to the receiving station.
The access point advertises a Service Set Identifier (SSID), which is used by each
station to recognize a particular network.
To communicate with other stations that are not in the same BSS (for example, a server
station in the organization’s data center), access points can be connected in uplink with
the rest of the organization’s network (for example, with a wired connection). The
uplink wired network is called a Distribution System (DS). The AP creates a boundary
point between the BSS and the DS.
Figure 1-36 shows an example of infrastructure BSS with four wireless stations and an
access point connected upstream with a DS.
Figure 1-36 Infrastructure BSS
An access point has limited spatial coverage due to wireless signal degradation. To
extend the wireless coverage of a specific network (that is, a network identified by a
single SSID), multiple BSSs can be linked together to form an Extended Service Set
(ESS). A client can move from one AP to the other in a seamless way. The method to
release a client from one AP and associate to the other AP is called roaming.
Figure 1-37 shows an example of an ESS with two APs connected to a DS and a user
roaming between two BSSs.
Figure 1-37 Extended Service Set (ESS) Example
802.11 Frame
An 802.11 frame is a bit different from the Ethernet frame, although there are some
commonalities. Figure 1-38 shows an example of an 802.11 frame.
Figure 1-38 802.11 Frame
The 802.11 frame includes the following elements:
Frame control: Includes some additional sub-elements, as indicated in Figure 1-38.
It provides information on the frame type and whether this frame is directed
toward the DS or is coming from the DS toward the wireless network.
Duration field: Can have different meanings depending on the frame type.
However, one common value is the expected time the frame will be traveling on the
channel for the Virtual Carrier Sense functionality.
Address fields: Contain addresses in 802 MAC format (for example, MAC-48).
The following are the typical addresses included:
Transmitter address (TA) is the MAC address of the transmitter of the frame (for
example, a wireless client).
Receiver address (RA) is the MAC address of the receiver of the frame (for
example, the AP).
Source address (SA) is the MAC address of the source of the frame, if it is
different from the TA. For example, if a frame is coming from the DS toward a
wireless station, the SA would be the original Ethernet source address whereas
the TA would be the AP.
Destination address (DA) is the MAC address of the final destination if different
from the RA (for example, for a frame destined to the DS).
Sequence Control field: This is used for sequence and fragmentation numbering.
Frame body: Includes the upper-layer PDU, as in the case of Ethernet.
Frame Check Sequence (FCS) field: Used by the receiving device to detect an
error in transmission.
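As a sketch, the To-DS/From-DS bits of the Frame Control field can be decoded from a raw frame as follows (assuming the little-endian Frame Control layout of the 802.11 wire format; the frame bytes in the test are illustrative):

```python
import struct

# The To-DS and From-DS bits of the 802.11 Frame Control field indicate
# whether a frame is heading toward or coming from the Distribution System.

def ds_direction(frame: bytes) -> str:
    """Classify a frame by its To-DS/From-DS bits."""
    frame_control = struct.unpack("<H", frame[0:2])[0]  # little-endian on the wire
    to_ds = bool(frame_control & 0x0100)
    from_ds = bool(frame_control & 0x0200)
    return {
        (False, False): "station to station (IBSS)",
        (True, False): "toward the DS (station to AP)",
        (False, True): "from the DS (AP to station)",
        (True, True): "DS to DS (e.g., wireless bridge)",
    }[(to_ds, from_ds)]
```

These same two bits also determine how the address fields are interpreted, which is why the TA/RA and SA/DA pairs can differ depending on the frame's direction.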
WLAN Access Point Types and Management
In the previous sections you learned about the wireless access point (AP). The main
functionality of an AP is to bridge frames from the wireless interface to the wired
interfaces so that a wireless station can communicate with the rest of the wired network.
This means, for example, extracting the payload of an 802.11 frame and re-
encapsulating it in an Ethernet frame.
The AP provides additional functions that are equally important for the correct
operation of a wireless network. For example, an AP needs to manage the
association or roaming of wireless stations, implement authentication and security
features, manage the radio frequency (RF), and so on.
The functionality provided by an access point can be classified into two categories:
Real-time functions include all the functionality needed to actually transmit and
receive frames and to encrypt the information over the channel.
Management functions include functions such as RF management, security
management, QoS, and so on.
The access points also can be categorized based on the type of functionality provided:
Autonomous APs are access points that implement both real-time and management
functions. They work in a standalone mode, and each AP
needs to be configured individually.
Lightweight APs (LAPs) only implement the real-time functions and work together
with a management device called a wireless LAN controller (WLC), which
provides the management functions. The communication between LAPs and the
WLC uses the Control and Provisioning of Wireless Access Points (CAPWAP)
protocol.
Figure 1-39 shows the difference between the two types of APs.
Figure 1-39 Comparison Between an Autonomous Access Point and a Lightweight
Access Point
Depending on the type of AP, the network architecture and packet flow may change. In a
network using autonomous APs, the packet flow is similar to a network with a switch, as
seen in previous sections. Each wireless client is associated with a VLAN, and the
AP is configured with a trunk on its DS interface. The AP can participate in STP
and behaves much like a switch.
Autonomous APs can be managed individually or through centralized management
software. For example, Cisco Prime Infrastructure can be used to manage several
autonomous access points. This type of architecture is called autonomous architecture.
Another option is to use autonomous access points that are managed from the cloud.
This is called cloud-based architecture. An example of such a deployment is the Cisco
Meraki cloud-based wireless network architecture.
A third option is to use LAPs and a WLC. This type of deployment is called split MAC
due to the splitting of functionality between the LAPs and the WLC. The CAPWAP
protocol is used for communication between the LAPs and the WLC. CAPWAP is a
tunneling protocol described in RFC 5415. It is used to tunnel 802.11 frames from a
LAP to the WLC for additional forwarding. The encapsulation is needed because the
WLC can reside anywhere in the DS (for example, in a different VLAN than the LAP).
CAPWAP encapsulates the 802.11 frame in an IP packet that can be used to reach the
WLC regardless of its logical position. CAPWAP uses UDP to provide end-to-end
connectivity between the LAP and WLC, and it uses DTLS to protect the tunnels.
CAPWAP consists of two logical tunnels:
CAPWAP control messages, which transport management frames
CAPWAP data, which transports the actual data to and from the LAP
When a LAP is added to the network, it establishes a tunnel to the WLC. After that, the
WLC can push configuration and other management information.
In a split-MAC deployment, when a wireless station sends information, the AP will
encapsulate the information using the CAPWAP specification and send it to the WLC.
For example, in the case of a WLAN, it will use the CAPWAP protocol binding for
802.11 described in RFC 5416, which also specifies how the 802.11 frame should be
encapsulated in a CAPWAP tunnel.
The WLC will then decapsulate the information and send it to the correct recipient.
When the recipient responds, the information will flow in the reverse direction—first to
the WLC and then through the CAPWAP data tunnel to the AP, which will finally
forward the information to the wireless station.
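The layering described above can be modeled conceptually as follows (a sketch using nested dictionaries rather than real packet bytes; see RFC 5415 and RFC 5416 for the actual CAPWAP header layout; the well-known UDP ports are 5246 for the control channel and 5247 for the data channel):

```python
# Conceptual model of split-MAC encapsulation: an 802.11 frame from a
# wireless station is tunneled inside UDP/IP from the LAP to the WLC.

CAPWAP_DATA_PORT = 5247  # CAPWAP data channel (control uses 5246)

def capwap_encapsulate(dot11_frame: bytes, lap_ip: str, wlc_ip: str) -> dict:
    """Model the tunnel layering as nested dicts, outermost layer first."""
    return {
        "ip": {"src": lap_ip, "dst": wlc_ip},   # reaches the WLC anywhere in the DS
        "udp": {"dst_port": CAPWAP_DATA_PORT},
        "capwap": {"payload": dot11_frame},     # the original 802.11 frame
    }
```

Because the outer IP header carries the WLC's address, the tunnel works regardless of the WLC's logical position in the network, which is exactly the property the text describes.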
There are two types of split-MAC architectures:
Centralized architecture: This architecture places the WLC in a central location
(for example, closer to the core) so that the number of LAPs covered is maximized.
One advantage of centralized architecture is that roaming between LAPs is
simplified because one WLC controls all the LAPs a user is traversing. However,
traffic between two wireless stations associated to the same LAP may need to
travel through several links in order to reach the WLC and then back to the same
LAP. This may reduce the efficiency of the network.
Figure 1-40 shows an example of a centralized WLC architecture and the frame
path for a wireless-station-to-wireless-station transmission.
Figure 1-40 Centralized WLC Architecture
Converged architecture: With this architecture, the WLC is moved closer to the
LAPs, typically at the access layer. In this case, one WLC covers fewer LAPs,
so the various WLCs need to work together in a distributed fashion. In a converged
architecture, the WLC may be integrated into the access layer switch, which then also
provides WLC functionality. This type of architecture increases the performance of
wireless-station-to-wireless-station communication but makes roaming more
complicated because a roaming user may traverse several WLCs. Figure 1-41 shows
an example of a converged architecture.
Figure 1-41 Converged WLC Architecture
Internet Protocol and Layer 3 Technologies
In previous sections, you learned how information is sent at the link layer, or Layer 2. In
this section, we discuss how information is transmitted at Layer 3—that is, how a
packet travels through a network, across several broadcast domains, to reach its
destination.
Layer 3 protocols are used to enable communication without being concerned about the
specific transportation medium or other Layer 2 properties (for example, whether the
information needs to be transported on a wired network or using a wireless connection).
The most widely used Layer 3 protocol is the Internet Protocol (IP). As a security professional,
it is fundamental that you master how IP works in communication networks.
IP comes in two different versions: IP version 4 (IPv4) and IP version 6 (IPv6).
Although some of the concepts remain the same between the two versions, IPv6 could
be seen as a completely different protocol rather than an update of IPv4. In this section,
we mainly discuss IPv4. In the next section, we will discuss the fundamentals of IPv6
and highlight the differences between IPv4 and IPv6.
Before digging into more detail, let’s look at the basic transmission of an IP packet, also
referred to as Layer 3 forwarding. Figure 1-42 shows a simple topology where Host A
is connected to a switch that provides LAN access to the host at Site A. Host B is also
connected to an access switch at Site B. In the middle, two routers (R1 and R2) provide
connectivity between the two sites.
Figure 1-42 Example of a Basic Network Topology
Here are a few concepts you should be familiar with:
An IP address is the means by which a device is identified by the IP protocol. An IP
address can be assigned to a host or to a router interface.
In the example in Figure 1-42, Host A is identified by IPv4 address 10.0.1.1, and
Host B is identified by IPv4 address 10.0.2.2. IPv4 and IPv6 are different; we will
look into the details of IPv4 and IPv6 addresses later in this section.
The routing table or routing database is somewhat similar to the MAC address table
discussed in the previous section. The routing table contains two main pieces of
information: the destination IP or network and the next-hop IP address, which is the
IP address of the next device where the IP packet should be sent.
A default route is a special entry in the routing table that says to forward all
packets, regardless of the destination, to a specific next hop.
Packet routing refers to the action performed by a Layer 3 device to transmit a
packet. When a packet reaches one interface of the device, the device looks up
the destination in the routing table to see where the packet should be sent. If a
match is found, the packet is sent to the next-hop device.
The router or IP gateway is a Layer 3 device that performs packet routing. It has
two or more interfaces connected to a network segment—either a LAN segment or a
WAN segment. Although a router is usually classified as a Layer 3 device, most modern
routers implement all layers of the TCP/IP model; however, their main function is to
route packets at Layer 3. R1 and R2 in Figure 1-42 are examples of routers.
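The routing table lookup, including the default route, can be sketched with Python's ipaddress module (the addresses follow the chapter's example; a real router performs the same longest-prefix match, shown here explicitly):

```python
import ipaddress

# A toy routing table for R1 in Figure 1-42: Site B's network is reachable
# via R2, Site A is directly connected, and a default route catches the rest.
routes = [
    (ipaddress.ip_network("10.0.2.0/24"), "R2"),        # Site B via R2
    (ipaddress.ip_network("10.0.1.0/24"), "directly"),  # Site A, connected
    (ipaddress.ip_network("0.0.0.0/0"), "R2"),          # default route
]

def next_hop(destination: str) -> str:
    """Return the next hop using longest-prefix match over the table."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if dest in net]
    # The most specific (longest prefix) matching network wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

Note that a destination such as 10.0.2.2 matches both the /24 entry and the default route; the /24 wins because its prefix is longer.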
Referring to Figure 1-43, let’s see how Host A can send information to Host B.
Figure 1-43 Example of IP Packet Routing and a Routing Table
Step 1. Host A will encapsulate the data through the various TCP/IP layers up to the
IP layer. The IP layer adds the IP header and sends it down to the link layer to
encapsulate it in an Ethernet frame. After that, the frame is sent to R1.
Step 2. R1 strips the Ethernet header and trailer and processes the IP packet header. It
sees that this packet has Host B as its destination, so it looks in its routing table
to find the next-hop device. In the routing table, Host B can be reached via R2,
so R1 re-encapsulates the packet in a new link layer frame (for example, a new
Ethernet frame) and sends it to R2.
Step 3. R2 performs the same operation as R1. It strips the link layer information,
processes the IP packet header, and looks in its routing table for Host B. R2
sees that Host B is directly connected (that is, in the same broadcast
domain as its F0/2 interface), so it encapsulates the packet in an Ethernet frame
and sends it directly to Host B.
Step 4. Host B receives the Ethernet frame, strips the information, and reads the IP
packet header. Because Host B is the recipient of the packet, it will further
process the IP packet to access the payload.
This process is broadly similar for IPv4 and IPv6. We will continue explaining the
routing process using IPv4; IPv6 is discussed later in this chapter.
IPv4 Header
An IP packet is formed by an IP header, which includes information on how the IP
protocol should handle the packet, and by the IP payload, which includes the Layer 4
PDU (for example, a TCP segment). The IP header is between 20 and 60 bytes long,
depending on which IP header options are present.
Figure 1-44 shows an example of an IPv4 header.
Figure 1-44 IPv4 Header, Organized as 4 Bytes Wide, for a Total of 20 Bytes
The IP header fields are as follows:
Version: Indicates the IP protocol version (for example, IP version 4).
Internet Header Length: Indicates the length of the header. A standard header,
without options, is 20 bytes long.
Differentiated Services Code Point (DSCP) and Explicit Congestion
Notification (ECN): Include information about flow prioritization to implement
quality of service (QoS) and congestion control.
Total Length: The length of the IP packet, which is the IP header plus the payload.
The minimum length is 20 bytes, which is an IP packet that includes the basic IP
header only.
Identification: This field is mainly used when an IP packet needs to be fragmented
due to constraints at the Layer 2 protocol. For example, Ethernet can transport, at a
maximum, a 1500-byte IP packet.
Flags and Fragment Offset: Fields to handle IP packet fragmentation.
Time to Live (TTL): A field that’s used to prevent IP packets from looping
indefinitely. The TTL field is set when the IP packet is created, and each router on
the path decrements it by one. If the TTL reaches zero, the router discards the
packet and sends a message back to the sender to indicate that the packet was dropped.
Protocol: Indicates the type of protocol transported within the IP payload. For
example, if TCP is transported, the value is 6; if UDP is transported, the value is
17.
Table 1-7 lists the common IP protocol codes. The protocol numbers are registered at
IANA (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml).
Table 1-7 Common IP Protocol Codes
Header Checksum: This is the checksum of the header. Every time a router
modifies the header (for example, to reduce the TTL field), the header checksum
needs to be recalculated.
Source Address: This is the IP address of the sender of the IP packet.
Destination Address: This is the IP address of the destination of the IP packet.
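As an illustration of the layout above, the fixed 20-byte portion of the header can be unpacked field by field. A minimal Python sketch follows; the sample bytes are hand-built for illustration, not captured traffic:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte portion of an IPv4 header."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,                 # high nibble of the first byte
        "ihl_bytes": (ver_ihl & 0x0F) * 4,       # IHL is counted in 32-bit words
        "total_length": total_len,
        "identification": ident,
        "ttl": ttl,
        "protocol": proto,                       # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: version 4, IHL 5 (20 bytes), TTL 64, protocol TCP (6)
sample = bytes([0x45, 0x00, 0x00, 0x28, 0x00, 0x01, 0x00, 0x00,
                0x40, 0x06, 0x00, 0x00, 10, 0, 0, 1, 10, 0, 0, 3])
hdr = parse_ipv4_header(sample)
print(hdr["version"], hdr["protocol"], hdr["src"])  # 4 6 10.0.0.1
```

The `!` in the format string selects network (big-endian) byte order, which is how all IP header fields appear on the wire.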
IPv4 Fragmentation
IP fragmentation is the process of splitting an IP packet into several fragments to allow
the transmission by a Layer 2 protocol. In fact, the maximum length of a payload for a
Layer 2 protocol depends on the physical medium used for transmission and on other
factors. For example, Ethernet allows a maximum payload for the frame, also called the
maximum transmission unit (MTU), of 1500 bytes in its basic frame, as you saw earlier.
So what happens if a host sends an IP packet that is larger than that size? The packet
needs to be fragmented.
Figure 1-45 shows an example of fragmentation. Host A sends an IP packet that is 2000
bytes, including 20 bytes of IP header. Before being transmitted via Ethernet, the packet
needs to be split in two: one fragment will be 1500 bytes, and the other will be 520
bytes (500 bytes are due to the remaining payload, plus 20 bytes for the new IP header,
which is added to the second fragment).
Figure 1-45 Example of IPv4 Fragmentation
The receiving host reassembles the original packet once all the fragments arrive. Two or
more fragments of the same IP packet can be recognized because they will have the
same value in the Identification field. The IP flags include a bit called More Fragments
(MF), which indicates whether more fragments are expected. The last fragment will
have this bit unset to indicate that no more fragments are expected. The Fragment Offset
field is used to indicate at which point of the original unfragmented IP packet this
fragment should start.
In the example in Figure 1-45, the first fragment would have the following fields set:
Identification = 20
IP Flags MF = 1
Fragment Offset = 0
The second fragment would have these fields set:
Identification = 20 (which indicates that this is a fragment of the previous packet)
IP Flags MF = 0 (which indicates that this is the last fragment)
Fragment Offset = 1480 (to indicate that this fragment should start after 1480 bytes
of the original packet)
NOTE
In reality, the fragment offset is expressed in multiples of 8. Therefore, the
real value would be 185 (that is, 1480 / 8).
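The fragment sizes, offsets, and MF flags from Figure 1-45 can be reproduced with a short calculation. A sketch, assuming a 20-byte header and a 1500-byte Ethernet MTU:

```python
def fragment(total_len: int, mtu: int = 1500, header: int = 20):
    """Split an IP packet into (fragment_len, offset_in_8_byte_units, MF) tuples."""
    payload = total_len - header
    # Each fragment's payload must be a multiple of 8, except the last one
    max_payload = (mtu - header) // 8 * 8
    frags, offset = [], 0
    while payload > 0:
        chunk = min(payload, max_payload)
        payload -= chunk
        # MF = 1 on every fragment except the last
        frags.append((chunk + header, offset // 8, 1 if payload else 0))
        offset += chunk
    return frags

# The 2000-byte packet from Figure 1-45
print(fragment(2000))  # [(1500, 0, 1), (520, 185, 0)]
```

The second tuple matches the text: a 520-byte fragment, Fragment Offset 185 (1480 / 8), and MF cleared.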
IPv4 Addresses and Addressing Architecture
An IPv4 address is a 32-bit-long number used to identify a device at Layer 3 (for
example, a host or a router interface). In human-readable form, an IPv4 address is
usually written in dotted decimal notation. The address is split in four parts of 8 bits
each, and each part is represented in decimal form.
For example, an IPv4 address of 00000001000000010000000111111110 would be split
into 00000001.00000001.00000001.11111110, and each octet is converted to decimal.
Therefore, this address is written as 1.1.1.254.
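The conversion can be sketched in a few lines of Python:

```python
def to_dotted_decimal(addr32: int) -> str:
    """Render a 32-bit IPv4 address as dotted decimal notation."""
    # Extract each octet by shifting and masking, most significant first
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# The example address from the text
addr = int("00000001000000010000000111111110", 2)
print(to_dotted_decimal(addr))  # 1.1.1.254
```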
You may be wondering how IP addresses are assigned. For example, who decided that
10.0.1.1 should be the IP address of Host A? Creating the IP address architecture is one
of the most delicate tasks when designing an IP-based communication network. This
section starts with a description of the basics of IP addressing and then delves into how
the concept evolved and how it is commonly performed today.
One of the first architectures, called classful addressing, was based on IPv4 address
classes, where the IPv4 address is logically divided into two components: a network
part and a host part. The network prefix identifies the network (for example, an
organization), while the host number identifies a host within that network.
The IPv4 address range was divided into five classes, as shown in Table 1-8.
Table 1-8 IPv4 Address Classes
Class A, B, and C IP addresses can be assigned to hosts or interfaces for normal IP
unicast usage; Class D IP addresses can be used as multicast addresses; Class E is
reserved and cannot be used for IP routing. The network prefix length and host
numbering length vary depending on the class.
Class A allots the first 8 bits for the network prefix and the remaining 24 bits for host
addresses. This means Class A includes 256 (2^8) distinct networks, each capable of
providing an address to 16,777,216 (2^24) hosts. For example, address 1.1.1.1 and
address 2.2.2.2 would be in two different networks, whereas address 1.1.1.1 and
address 1.4.1.1 would be in the same 1.x.x.x Class A network.
Class B allots the first 16 bits for the network prefix and the remaining 16 for host
addresses. Class B includes 65,536 (2^16) distinct networks and 65,536 (2^16) host
addresses within a single network.
Class C allots the first 24 bits for the network prefix and the remaining 8 for host
addresses. Class C includes 16,777,216 (2^24) distinct networks and 256 (2^8) host
addresses within one network.
Figure 1-46 summarizes the network and host portions for each class.
Figure 1-46 Network and Host Portion for IPv4 Address Classes
For each network, there are two special addresses that are usually not assigned to a
single host:
Network address: An address where the host portion is set to all 0s. This address
is used to identify the whole network.
Broadcast network address: An address where the host portion is set to all 1s in
binary notation, which corresponds to 255 in decimal notation.
For example, in the network 1.x.x.x, the network address would be 1.0.0.0 and the
broadcast address would be 1.255.255.255. To indicate the bits used for the network
portion and the bits used for the host portion, each IP address is followed by a network
mask.
A network mask is a binary number that has the same length as an IP address: 32 bits. In
a network mask, the network portion is indicated with all 1s and the host portion with
all 0s. The network mask can also be read in dotted decimal format like an IP address.
For example, the network mask for a Class A network would be
11111111000000000000000000000000, or 255.0.0.0.
The network mask is sometimes abbreviated as a slash character (/) followed by
the number of bits in the network portion of the IP address. For example, the same Class
A network mask can be written as /8. This is sometimes called Classless Interdomain
Routing (CIDR) notation. Although it may seem that a network mask is unnecessary
because the IP address range already provides the same information (for example, 3.3.3.3
would fall under the Class A address range, which would imply a network prefix of 8
bits), network masks are important to the concept of subnets, which we discuss in the
next section.
Table 1-9 shows the default network mask for Classes A, B, and C. Classes D and E do
not have any predefined mask because they are not used for unicast traffic.
Table 1-9 Default Network Masks for IPv4 Classes A, B, and C
Keep in mind that two hosts are subtracted from the totals in this table because we need
to remove the host address reserved for the network address as well as the address
reserved for the broadcast network address.
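The default masks and host counts in Table 1-9 can be reproduced with Python's ipaddress module. A quick sketch; 10.0.0.0 is just a placeholder base address:

```python
import ipaddress

# Default classful masks; usable host counts subtract the network
# address and the broadcast address from the total
for name, prefix in (("A", 8), ("B", 16), ("C", 24)):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}", strict=False)
    hosts = net.num_addresses - 2
    print(f"Class {name}: /{prefix} = {net.netmask}, {hosts} hosts")
```

This prints 255.0.0.0 with 16,777,214 hosts for Class A, 255.255.0.0 with 65,534 for Class B, and 255.255.255.0 with 254 for Class C.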
IP Network Subnetting and Classless Interdomain Routing (CIDR)
In the classful addressing model, an organization would need to send a request to an
Internet registry authority for a network within one of the classes, depending on the
number of hosts needed. However, this method is highly inefficient because
organizations receive more addresses than they actually need due to the structure of the
classes. For example, an organization that only needs to assign an address to 20 hosts
would get a Class C network, thus wasting 234 addresses (that is, 256 – 20 – 2). A
more intelligent approach is introduced with Classless Interdomain Routing (CIDR).
CIDR moves away from the concept of class and introduces the concept of a network
mask or prefix, as mentioned in the previous section. By using CIDR, the IANA or any
local registry can assign to an organization a smaller number of IP addresses instead of
having to assign a full class range. With this method, IP addresses can be saved because
an organization can request an IP address range that actually fits its requirements, which
means other addresses can be allocated to a different organization.
In the previous example, the organization would receive a /27 network mask instead of
a full Class C network (/24). In the following pages, we explore how an organization
can further partition the received address space to adapt to organizational needs using
the concept of subnets.
You were already introduced to the term subnet or network segment when we discussed
Layer 2 technologies. A subnet can be identified with a broadcast domain. In Figure 1-
47, we can identify three subnets, each representing a separate broadcast domain. Each
subnet includes a number of IP addresses that are assigned to the hosts and interfaces
within that subnet. In this example, Subnet 1 would need a minimum of three IP
addresses (Host A, Host B, and the R1 interface), and Subnet 2 at least two IP
addresses (one for each router interface). Subnet 3 also would need at least two IP
addresses (one for Host C and one for the R2 interface). Remember that on each subnet,
we also need to reserve one address for the network ID and one for the broadcast
network address.
Figure 1-47 Example of Addressing in a Topology with Three Subnets
When subnets are used, an IP address is logically split into three parts: the network
prefix, the subnet ID, and the host portion, as shown in Figure 1-48. The network prefix
is assigned by the IANA (or by any other assignment authority) and cannot be changed.
Network administrators, however, can use the subnet prefix to split the address space
into various smaller groups.
Figure 1-48 IP Address Format with Subnet
For example, an organization receiving a Class B range of IP addresses, 172.1.0.0/16,
could use subnets to further split the address range. Using 8 bits for the subnet ID, for
example, it could create 256 subnets (172.1.0.0/24, 172.1.1.0/24, 172.1.2.0/24, and so
on), as shown in Figure 1-49, each with 254 (256 – 2) IP addresses that could be
assigned to hosts within the subnet.
Figure 1-49 Example of IP Address and Subnet
There are two fundamental rules when using subnets in the IP address architecture:
Hosts within the same subnet should be assigned only IP addresses provided by the
host portion of that subnet.
Traffic between subnets needs a router or a Layer 3 device to flow. This is because
each subnet represents a broadcast domain.
So how do you know how a network has been subnetted? You use network masks. In the
case of subnets, the network mask would set all 1s for the network part plus the subnet
prefix, while the host part would be all 0s. For example, each subnet derived from the
Class B network in Figure 1-49 would get a network mask of 255.255.255.0, or /24.
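This split can be checked with Python's ipaddress module; a sketch using the same 172.1.0.0/16 range:

```python
import ipaddress

# Split the Class B range 172.1.0.0/16 into /24 subnets (8-bit subnet ID)
block = ipaddress.ip_network("172.1.0.0/16")
subnets = list(block.subnets(new_prefix=24))

print(len(subnets))                  # 256 subnets
print(subnets[1])                    # 172.1.1.0/24
print(subnets[1].num_addresses - 2)  # 254 usable host addresses per subnet
```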
Variable-Length Subnet Mask (VLSM)
Classic subnetting splits a network into equal parts. This might not be completely
efficient because, for example, one subnet may require fewer IP addresses than others.
Let’s suppose we have three subnets: SubA, SubB, and SubC. Each subnet has a
different number of devices that require an IP address, as shown in Figure 1-50.
Figure 1-50 Example of Three Subnets with Different Requirements for IP Addresses
Let’s assume that the subnets have the following requirements in terms of IP addresses:
SubA requires 30 IP addresses.
SubB requires 14 IP addresses.
SubC requires eight IP addresses.
Because of the requirement of SubA, in classic subnetting, we would use a subnet mask
of /27 so that 30 hosts can be assigned an IP address. However, all the other subnets
will also receive a /27 address because of the fixed way a subnet is split. For example,
we would create and assign the addresses and subnets as detailed in Table 1-10.
Table 1-10 Classic Subnetting
The first subnet, SubA, will use all 30 of its IP addresses; however, SubB will use only
14 out of the 30 provided, SubC will use only 8 out of 30, and SubD through SubG will
be unused, thus wasting 30 IP addresses each.
The variable-length subnet mask (VLSM) method allows you to subnet a network with
subnets of different sizes. The size will be calculated based on the actual need for IP
addresses in each subnet. Table 1-11 shows how the VLSM approach can be used in our
example. SubA will still need 30 hosts, so it will keep the former subnet mask. SubB
only needs 14 IP addresses, so it can use a /28 subnet mask, which allows for up to 14
IP addresses. SubC needs eight IP addresses, so it will also use a /28 subnet mask,
because a /29 subnet mask would allow only six IP addresses—that is, 8 – 2 (for the
network and broadcast addresses). There is no need to create other subnets, which
further saves IP addresses.
Table 1-11 Subnetting with VLSM
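The per-subnet mask choices above follow from the host counts; a small sketch of the calculation (the helper name is ours):

```python
# Smallest subnet that fits n hosts: a /x prefix provides 2^(32-x) - 2
# usable addresses (network and broadcast addresses are reserved)
def prefix_for_hosts(n: int) -> int:
    bits = 0
    while (1 << bits) - 2 < n:
        bits += 1
    return 32 - bits

for name, need in (("SubA", 30), ("SubB", 14), ("SubC", 8)):
    print(name, "/%d" % prefix_for_hosts(need))
# SubA /27, SubB /28, SubC /28
```

Note that SubC lands on /28 rather than /29, exactly as the text explains: a /29 would leave only 6 usable addresses.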
Public and Private IP Addresses
Based on the discussion so far, it is probably clear that IP addresses are scarce
resources and that reducing the number of unused IP addresses is a priority due to the
exponential growth of the use of TCP/IP and the Internet. CIDR, subnets, and VLSM
have greatly helped with optimizing the IP addressing architecture, but by themselves
have not been enough to handle the number of requests for IP addresses.
In most organizations, probably not all the devices need to be reachable from the
Internet. Some or even most of them just need to be reached within the organization. For
example, an internal database might need to be reached by applications within the
organization boundaries, but there is no need to make it accessible for everyone on the
Internet.
A private IP address range is a range that can be used by any organization without
requiring a specific assignment from an IP address assignment authority. The rule is,
however, that these ranges can be used only within the organization and should never be
used to send traffic over the Internet.
Figure 1-51 shows two organizations using IP address ranges. RFC 1918 defines three
IP address ranges for private use:
10.0.0.0/8 network
172.16.0.0/12 network
192.168.0.0/16 network
Figure 1-51 IP Address Ranges for Private Use
Be careful not to confuse these address ranges with Class A, B, or C because the
network masks are different.
Organizations can pick one of these ranges and assign IP addresses internally (for
example, using classic subnetting or VLSM). You may have noticed that when you
connect to your home router (for example, over Wi-Fi), you may get an IP address that
looks like 192.168.x.x. This is because your home router is using the 192.168.0.0/16
network to provide addresses for the local LAN.
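Whether an address falls in one of these ranges can be tested with Python's ipaddress module. Note that is_private matches the RFC 1918 ranges but also other reserved blocks, such as loopback and link-local:

```python
import ipaddress

# RFC 1918 private ranges vs. public addresses
for addr in ("10.1.2.3", "172.16.0.5", "192.168.1.1", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

The first three addresses report private (one from each RFC 1918 range); 8.8.8.8 reports public.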
Because two organizations can use the same network range, there could be two devices
with the same IP address within these two organizations. What if these two devices
want to send and receive traffic to and from each other? Recall that we said that private
IP addresses should never be used on the Internet. So how can a host with a private IP
address browse a web server on the Internet?
The method that is used to solve this problem is network address translation (NAT).
NAT uses the concept of a local IP address and a global (or public) IP address. The
local IP address is the IP address assigned to a host within the organization, and it is
usually a private address. Other devices within the organization will use this address to
communicate with that device. The global IP address is the IP address used outside the
organization, and it is a public IP address.
NOTE
Two hosts are not permitted to have the same IP address within a subnet. If,
within an organization, two hosts have the same IP address, then NAT
needs to be performed within the organization to allow traffic.
The following example shows how NAT is used to allow communication between two
hosts with the same IP address belonging to two different organizations (see Figure 1-
52):
Step 1. Host A initiates the traffic with the source IP address 192.168.1.1, which is
the local IP address, and the destination 2.2.2.2, which is the global IP address
of Host B.
Step 2. When the packet reaches the Internet gateway of Organization A, the router
notices that Host A needs to reach a device on the Internet. Therefore, it will
perform an address translation and change the source IP address of the packet
with the global IP address of Host A (for example, to 1.1.1.1). This is needed
because the 192.168.1.1 address is only locally significant and cannot be
routed over the Internet.
Step 3. The Internet gateway of Organization B receives a packet for Host B. It
notices that this is the global IP address of Host B, so it will perform an
address translation and change the destination IP address to 192.168.1.1, which
is the local IP address of Host B.
Step 4. If Host B replies, it will send a packet with the source IP address of its local
IP address, 192.168.1.1, and a destination of the global IP address of Host A
(1.1.1.1). The Internet gateway at Organization B would follow a similar
process and translate the source IP address of the packet to match the global IP
address of Host B.
Figure 1-52 Using NAT to Allow Communication Between Two Hosts with the Same
IP Addresses Belonging to Two Different Organizations
How do Internet gateways know about the link between global and local IP addresses?
The information is included in a table, which is called the NAT table. This is a simple
example of how NAT works. NAT is described in more detail in Chapter 2, “Network
Security Devices and Cloud Services.”
Special and Reserved IPv4 Addresses
Besides the private addresses, additional IPv4 addresses have been reserved and
cannot be used to route traffic over the Internet. Table 1-12 provides a summary of IPv4
unicast special addresses based on RFC 6890. For example, 169.254.0.0/16 is used as
the link-local address range and can be used to communicate only within a subnet (that
is, it cannot be routed).
Table 1-12 IPv4 Unicast Special Addresses
IP Address Assignment and DHCP
So far you have learned that each device in a subnet must receive an IP address so it can
send and receive IP packets. How do we assign an IP address to a device or interface?
Two methods are available for assigning IP addresses:
Static address assignment: With this method, someone needs to log in to the
device and statically assign an IP address and network mask. The advantage of this
method is that the IP address will not change because it is statically configured on
the device. The disadvantage is that this is a manual configuration. This is typically
used on networking devices or on a server where it is important that the IP address
is always the same. For example, the following commands can be used to assign an
IP address to the F0/0 interface of a Cisco IOS router:
interface FastEthernet0/0
 ip address 10.0.0.2 255.255.255.0
Dynamic address assignment: If there are hundreds or thousands of devices,
configuring each of them manually is probably not the best use of anyone’s time.
Additionally, if for some reason the network administrator changes something in the
network mask, network topology, and so on, all devices might need to be
reconfigured. Dynamic address assignment allows automatic IP address assignment
for networking devices. The Dynamic Host Configuration Protocol (DHCP) is used
to provide dynamic address assignment and to provision additional configuration to
networking devices. An older protocol that provided similar services but is no longer
in use is BOOTP.
Let’s explore how DHCP works.
DHCP, which is described in RFC 2131, is a client-server protocol that allows for the
automatic provisioning of network configurations to a client device. The DHCP server
is configured with a pool of IP addresses that can be assigned to devices. The IP
address is not statically assigned to a client; rather, the DHCP server “leases” the
address for a certain amount of time. When the lease period is close to expiring, the
client can request to renew the lease. Together with the IP address, the DHCP server
can provide other configuration parameters.
Here are some examples of network configurations that can be provisioned via DHCP:
IP address
Network mask
Default gateway address
DNS server address
Domain name
DHCP uses UDP as the transport protocol on port 67 for the server and port 68 for the
client. DHCP defines several types of messages:
DHCPDISCOVER: Used by a client to discover DHCP servers within a LAN. It
can include some preferences for addresses or the lease period. It is sent to the
network broadcast address or to the limited broadcast address 255.255.255.255 and
usually carries a source IP address of 0.0.0.0.
DHCPOFFER: Sent by a DHCP server to a client. It includes a proposed IP
address, called YIADDR, and a network mask. It must also include the server ID,
which is the IP address of the server (also called SIADDR). There could be
multiple DHCP servers within a LAN, so multiple DHCPOFFER messages can be
sent in response to a DHCPDISCOVER.
DHCPREQUEST: Sent from the client as a network broadcast. This message
confirms the offer from a particular server and includes the SIADDR of the DHCP
server that has been selected. It is broadcast rather than unicast so that the DHCP
servers that were not chosen learn about the client’s choice.
DHCP ACKNOWLEDGEMENT (DHCPACK): Sent from the server to the client
to confirm the proposed IP address and other information.
DHCP Not Acknowledged (DHCPNAK): Sent from the server to the client when
issues with the IP address assignment arise after the DHCPOFFER.
DHCPDECLINE: Sent from the client to the server to indicate that the assigned IP
address is already in use.
DHCPRELEASE: Sent from the client to the server to release the allocation of an
IP address and to end the lease.
DHCPINFORM: Sent from the client to the server to request additional network
configuration when the client already has an IP address assigned.
The following steps provide an example of a basic DHCP IP address request (see
Figure 1-53):
Figure 1-53 Basic DHCP IP Address Assignment Process
Step 1. When a host first connects to a LAN, it does not have an IP address. It will
send a DHCPDISCOVER packet to discover the DHCP servers within the
LAN. In one LAN there could be more than one DHCP server.
Step 2. Each DHCP server responds with a DHCPOFFER message.
Step 3. The client receives several offers, picks one of them, and responds with a
DHCPREQUEST.
Step 4. The DHCP server that has been selected responds to the client with a
DHCPACK to confirm the leasing of the IP address.
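The four-step exchange can be modeled as a toy simulation. Server IDs, pool addresses, and the "pick the first offer" policy are illustrative assumptions (a real client may select differently, and real messages carry many more options):

```python
# A toy model of the Discover/Offer/Request/Ack exchange described above
def dhcp_exchange(servers):
    # DHCPDISCOVER reaches all servers; each with a free address sends a DHCPOFFER
    offers = [(sid, pool.pop(0)) for sid, pool in servers.items() if pool]
    # The client picks one offer (here: simply the first one received) and
    # broadcasts a DHCPREQUEST naming that server's SIADDR
    chosen_sid, addr = offers[0]
    # The chosen server replies with a DHCPACK confirming the lease
    return {"request_to": chosen_sid, "ack_addr": addr}

# Two hypothetical DHCP servers, each with its own address pool
servers = {"10.0.1.5": ["10.0.1.100", "10.0.1.101"],
           "10.0.1.6": ["10.0.1.200"]}
result = dhcp_exchange(servers)
print(result)  # {'request_to': '10.0.1.5', 'ack_addr': '10.0.1.100'}
```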
What happens if there is no DHCP server within a subnet? To make it work, the Layer 3
device needs to be configured as a DHCP relay (also called a DHCP helper). In that
case, the router takes the broadcast requests (for example, DHCPDISCOVER and
DHCPREQUEST) and unicasts them to the DHCP server configured in the relay, as
shown in Figure 1-54. When the DHCP server replies, the router forwards the reply to
the client.
Figure 1-54 Example of DHCP Relay
Figure 1-54 shows an example of DHCP relay. The host sends a DHCPDISCOVER
broadcast in the network segment where it is directly connected, 10.0.1.0/24. The router
R1 has a helper address configured on its interface in that subnet, 10.0.1.1. Because of
that, R1 picks up the DHCPDISCOVER and forwards it to the configured DHCP server.
The server answers the DHCPDISCOVER with a DHCPOFFER, which is sent directly
to the relay address of R1. When R1 receives the answer from the DHCP server, it
forwards the answer to the host.
IP Communication Within a Subnet and Address Resolution Protocol (ARP)
In the previous section, you learned how each device in a subnet gets its own IP
address. So let’s see how devices communicate in a subnet first, and then in the next
section we will discuss how devices communicate across multiple subnets. Let’s
imagine Host A with IP address 10.0.0.1 wants to communicate with Host B in the same
subnet with IP address 10.0.0.3. At this point, Host A knows the IP address of Host B;
however, Layer 2 still requires the MAC destination address for Host B. How can Host
A get this information? Host A will use the Address Resolution Protocol (ARP) to get
the MAC address of Host B.
ARP includes two messages:
ARP request: This is used to request the MAC address given an IP address. It
includes the IP address and MAC address of the device sending the request and
only the IP address of the destination.
ARP reply: This is used to provide information about a MAC address. It includes
the IP address and MAC address of the device responding to the ARP request and
the IP address and MAC address of the device that sent the ARP request.
When Host A needs to send a message to Host B for the first time, it will send an ARP
request message using the Layer 2 broadcast address so that all devices within the
broadcast domain receive the request. Host B will see the request and recognize that the
request is looking for its IP address. It will respond with an ARP reply indicating its
own MAC address. Host A stores this information in an ARP table so that the next time
it does not have to go through the ARP exchange.
Figure 1-55 shows an example of an ARP message exchange.
Figure 1-55 ARP Message Exchange
Once the MAC address of the destination is known, Host A can send packets directly to
Host B by encapsulating the IP packet within an Ethernet frame, as discussed in the
previous sections.
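The caching behavior described above can be sketched as a small class. The MAC value and the send_request callback are hypothetical stand-ins for the on-the-wire broadcast/reply exchange:

```python
# A minimal sketch of ARP-cache behavior: resolve an address once via a
# broadcast request, then answer from the table on subsequent lookups.
class ArpCache:
    def __init__(self):
        self.table = {}            # ip -> mac
        self.requests_sent = 0

    def resolve(self, ip, send_request):
        if ip not in self.table:   # cache miss: broadcast an ARP request
            self.requests_sent += 1
            self.table[ip] = send_request(ip)
        return self.table[ip]      # cache hit: no traffic needed

# send_request stands in for the ARP request/reply exchange on the wire
cache = ArpCache()
mac1 = cache.resolve("10.0.0.3", lambda ip: "aa:bb:cc:00:00:03")
mac2 = cache.resolve("10.0.0.3", lambda ip: "aa:bb:cc:00:00:03")
print(mac1, cache.requests_sent)  # only one request was broadcast
```

Real ARP tables also age out entries after a timeout, which this sketch omits.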
Intersubnet IP Packet Routing
In the previous sections, you learned how IP communication works within a subnet. In
this section, we analyze how packets move across subnets. As stated in the previous
sections, subnets are separated by a Layer 3 device (for example, a router). Figure 1-
56 shows two hosts, Host A and Host B, which belong to different subnets, and Host C,
which is in the same subnet as Host A. The two routers, R1 and R2, provide Layer 3
connectivity, and R3 is the gateway to the rest of the network. The table shown in this
figure includes the IP addresses for the relevant interfaces and hosts.
Figure 1-56 Example of a Network Topology with Three Routers
When Host A needs to send a packet, it must make a decision on where to send the
packet. The logic implemented by the host is simple:
If the destination IP address is in the same subnet as the interface IP address, the
packet is sent directly to the device.
If the destination IP address is in a different subnet, it is sent to the default gateway.
The default gateway for a host is the router that allows the packet to exit the host subnet
(in this example, R1). The logic is implemented in Host A’s routing table. Host A will
see network 10.0.1.0/24 as directly connected and will have an entry saying that packets
for any other IP addresses go to the default gateway.
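The host's forwarding decision can be sketched with the ipaddress module. The gateway 10.0.1.3 matches R1's interface in this topology; 10.0.1.2 is an assumed address for a host in the same subnet:

```python
import ipaddress

# The host's decision logic: same subnet -> send directly to the destination,
# different subnet -> send to the default gateway
def next_hop(src_iface: str, destination: str, gateway: str = "10.0.1.3") -> str:
    net = ipaddress.ip_interface(src_iface).network
    dest = ipaddress.ip_address(destination)
    return str(dest) if dest in net else gateway

print(next_hop("10.0.1.1/24", "10.0.1.2"))  # 10.0.1.2 (direct, same subnet)
print(next_hop("10.0.1.1/24", "10.0.3.3"))  # 10.0.1.3 (via default gateway R1)
```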
Figure 1-57 shows the routing table for Host A.
Figure 1-57 Host A’s Routing Table
Let’s assume Host A needs to send a packet to Host B; it will check its routing table and
decide that the packet’s next hop (which means the next Layer 3 device to handle this
packet) is R1 F0/1, with an IP address of 10.0.1.3/24. If Host A does not know the
Layer 2 address of R1, it will send an ARP request, as discussed in the previous
section.
R1 receives the packets from Host A on the F0/1 interface. At this point, R1 will do a
routing table lookup to check where packets with the destination 10.0.3.3 should be
sent. Table 1-13 shows what the R1 routing table might look like.
Table 1-13 Example of the R1 Routing Table
Networks 10.0.1.0/24, 10.0.2.0/24, and 10.0.4.0/24 are directly connected to the router.
Network 10.0.3.0/24, which is the network of the destination IP address, has a next hop
of R2. The last network, 0.0.0.0/0, is called the default network. This means that, if
there is no better match, R1 will send the packet to 10.0.4.2, which is the F0/1 interface
of R3. R1 is said to have a default route via R3.
When looking up the routing table, the router will use the interface with the best
matching network, which is the network with the longest prefix match. For example,
imagine that the router includes the two entries in its routing table outlined in Table 1-
14.
Table 1-14 Example of the Longest Prefix Match to Decide the Next Hop
Where would a packet with a destination IP of 10.0.3.3 be sent? In this case,
10.0.3.0/24 is a closer match than 10.0.0.0/16 (longest prefix match), so the router will
select 10.0.2.2 via the F0/2 interface.
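The longest-prefix-match rule can be sketched in a few lines; the next hop for the /16 entry is assumed for illustration:

```python
import ipaddress

def lookup(routes, destination):
    """Pick the route whose network matches the destination with the longest prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes
               if dest in ipaddress.ip_network(net)]
    # Among all matching entries, prefer the most specific (longest) prefix
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)

# The two entries from Table 1-14; the /16 next hop is an assumption
routes = [("10.0.0.0/16", "10.0.4.2"), ("10.0.3.0/24", "10.0.2.2")]
print(lookup(routes, "10.0.3.3"))  # ('10.0.3.0/24', '10.0.2.2')
```

A destination such as 10.0.5.1 matches only the /16 entry, so it would be forwarded to that entry's next hop instead.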
Let’s go back to our example. R1 identified R2 as the next hop for this packet. R1 will
update the IP header information (for example, it will reduce the TTL field by one and
recalculate the checksum). After that, it will encapsulate the packet in an Ethernet frame
and send it to R2. Remember that R1 does not modify the IP addresses of the packet.
When R2 receives the IP packet on F0/1, it will again perform a routing table lookup to
understand what to do with the packet. The R2 routing table might look something like
Table 1-15.
Table 1-15 Example of the R2 Routing Table
Because the destination IP address matches a directly connected network, R2 can send
the packet directly to Host B via the F0/0 interface. If Host B replies to Host A, it will
send an IP packet with a destination IP of 10.0.1.1 to R2, which is the default gateway
for Host B.
R2 does not have a match for the 10.0.1.1 address; however, it is configured to send
anything for which it does not have a match to 10.0.2.1 (R1) via the F0/1 interface. R2
has a default route via R1. R2 will send the packet to R1, which will then deliver it to
Host A.
Routing Tables and IP Routing Protocols
The routing table is a key component of the forwarding decision. How is this table
populated? The connected network will be automatically added when the interface is
configured. In fact, the device can determine the connected network from the interface IP
address and network mask. The host default gateway can also be configured statically
or, as you saw in the “IP Address Assignment and DHCP” section, dynamically
assigned via DHCP.
For the other entries, there are two options:
Static routes: Routes that have been manually added by the device administrator.
Static routes are used when the organization does not use an IP routing protocol or
when the device cannot participate in an IP routing protocol.
Dynamic routes: Routes that are dynamically learned using an IP routing protocol
exchange.
An IP routing protocol is a protocol that allows the exchange of information among
Layer 3 devices (for example, among routers) in order to build up the routing table and
thus allow the routing of IP packets across the network. A routed protocol is the
protocol that actually transports the information and allows for packet forwarding. For
example, IPv4 and IPv6 are routed protocols.
Each routing protocol has two major characteristics that need to be defined by the
protocol itself:
How and which type of information is exchanged, and when it should be exchanged
What algorithm is used by each device to calculate the best path to destination
This book does not go into the details of all the routing protocols available; however, it
is important that you are familiar with at least the basics of how an IP routing
protocol works.
The first classification of a routing protocol is based on where it operates in a network:
Interior gateway protocols (IGPs) operate within the organization boundaries. Here
are some examples of IGPs:
Open Shortest Path First (OSPF)
Intermediate System to Intermediate System (IS-IS)
Enhanced Interior Gateway Routing Protocol (EIGRP)
Routing Information Protocol Version 2 (RIPv2)
Exterior gateway protocols (EGPs) operate between service providers or very
large organizations. An example of an EGP is the Border Gateway Protocol (BGP).
An autonomous system (AS) is a collection of routing information under the
administration of a single organizational entity. Usually the concept coincides with a
single organization. Each AS is identified by an AS number (ASN). IGPs run within an
autonomous system, whereas EGPs run across autonomous systems.
Figure 1-58 shows an example of autonomous systems interconnected with EGPs and
running IGPs inside.
Figure 1-58 Autonomous Systems Interconnected with EGPs and IGPs Running
Inside
The other common way of classifying IP routing protocols is based on the algorithm
used to learn routes from other devices and choose the best path to a destination. The
most common algorithms for IGP protocols are distance vector (used in RIPv2), link-
state (used in OSPF or IS-IS), and advanced distance vector (also called hybrid, used in
EIGRP).
Distance Vector
Distance vector (DV) is one of the first algorithms used for exchanging routing
information, and it is usually based on the Bellman-Ford algorithm. The most well-
known IP routing protocol using DV is RIPv2. To better understand how DV works, let’s
introduce two concepts:
Neighbors are two routers or Layer 3 devices that are directly connected.
Hop count is a number that represents the distance (that is, the number of routers on
the path) between a router and a specific network.
A device running a DV protocol will send a “vector of distances” to its neighbors: a
routing protocol message that contains information about all the networks the device
can reach and the cost to reach each one.
In Figure 1-59, R2 will send a message to R1 saying that it can reach NetB 10.0.3.0/24
with a cost of 0, because it is directly connected, while it can reach NetC 10.0.5.0/24
with a cost of 1. R3 also sends a message to R1 saying that it can reach NetC
10.0.5.0/24 with a cost of 2 and NetB 10.0.3.0/24 with a cost of 1. R1 receives the
information and updates its routing table. It will add both NetB and NetC as reachable
via R2 because it has the lowest hop count to the destinations.
Figure 1-59 Example of a Distance Vector Exchange
The exchange continues until all routers have a stable routing table. At this point, the
routing protocol has converged. Neighbor routers also exchange periodic messages. If
the link to a neighbor goes down, both routers will detect the situation and inform the
other neighbors about the situation. Each neighbor will inform its own neighbors, and
the routing tables will be updated accordingly until the protocol converges again.
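The update step described above can be sketched in a Bellman-Ford style, assuming a cost of one hop per neighbor link and using the network numbers from Figure 1-59:

```python
# Minimal distance-vector update sketch (Bellman-Ford style).

def receive_vector(table, neighbor, vector):
    """Merge a neighbor's advertised (network -> cost) vector into table."""
    for network, cost in vector.items():
        new_cost = cost + 1  # one extra hop to reach the advertising neighbor
        if network not in table or new_cost < table[network][0]:
            table[network] = (new_cost, neighbor)

r1_table = {}
# R2 advertises: NetB directly connected (0), NetC one hop away (1).
receive_vector(r1_table, "R2", {"10.0.3.0/24": 0, "10.0.5.0/24": 1})
# R3 advertises the same networks at higher cost.
receive_vector(r1_table, "R3", {"10.0.3.0/24": 1, "10.0.5.0/24": 2})

print(r1_table)  # both networks reachable via R2, the lower hop count
```

After both vectors are processed, R1 keeps R2 as the next hop for both networks, matching the example in the text.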
There are several issues with DV protocols:
Using hop count as the cost to determine the best path to a destination is not always
the best method. For example, a path that crosses three routers connected by 1-Gbps
links is probably better than a path that crosses two routers connected by 1-Mbps
links, even though it has a higher hop count.
Routers do not have full visibility into the network topology (they know only what
the neighbor routers tell them), so the path they calculate might not be optimal.
Each update includes an exchange of the full list of networks and costs, which can
consume bandwidth.
It is not loop free. Because of how the algorithm works, in some scenarios packets
might start looping in the network. This problem is known as count to infinity. To
solve this issue, routing protocols based on DV implement split-horizon and
poison-reverse techniques. These techniques, however, increase the time the routing
protocol takes to converge to a stable situation.
Advanced Distance Vector or Hybrid
To overcome most of the downsides of legacy DV protocols such as RIPv2, there is a
class of protocols that are based on DV but implement several structural
modifications to the protocol behavior. These are sometimes called advanced distance
vector or hybrid protocols, and one of the best known is Cisco's EIGRP.
Figure 1-60 shows an example of an EIGRP message exchange between two neighbors.
At the beginning, the two routers discover each other with Neighbor Discovery hello
packets. Once neighborship is established, the two routers exchange the full routing
information, in a similar way as in DV. When an update is due (for example, because of
a topology change), only specific information is sent rather than the full update.
Figure 1-60 Example of EIGRP Message Exchange
Here are the main enhancements of these types of protocols:
They do not use hop count as a metric to determine the best path to a network.
Bandwidth and delay are typically used to determine the best path; however, other
metrics can be used in combination, such as load on the link and the reliability of
the link.
The full database update is only sent at initialization, and partial updates are sent in
the event of topology changes. This reduces the bandwidth consumed by the
protocol.
They include a more robust method to avoid loops and reduce the convergence
time. For example, EIGRP routers maintain a partial topology table and include an
algorithm called the Diffusing Update Algorithm (DUAL), which is used to calculate the
best path to a destination and provides a mechanism to avoid loops.
Link-State
Link-state algorithms operate in a totally different way than DV, and the fundamental
difference is that devices that participate in an IP routing protocol based on a link-state
algorithm will have a full view of the network topology; therefore, they can use an
algorithm such as Dijkstra or Shortest Path First (SPF) to calculate the best path to each
network. The most well-known IP routing protocols using link-state are OSPF and IS-
IS.
This section describes the basic functioning of link-state by using OSPF as the basis for
the examples. In link-state routing protocols, the concept of router neighbors is
maintained while the cost to reach a specific network is based on several parameters.
For example, in OSPF, the higher the bandwidth, the lower the cost.
During the initiation phase, each router will send a link-state advertisement (LSA) to the
neighbors, which will then forward it to all other neighbors. In Figure 1-61, R2 will
send an LSA containing information about its directly connected network and the cost to
R1, R3, and R5. Both R3 and R5 will forward this information to their neighbor routers
(in this case, R1 and R4). This process is called LSA flooding.
Figure 1-61 Example of a Link-State Advertisement Exchange
Each router will collect all the LSAs and store them in a database called a link-state
database (LSDB).
In this example, R1 receives the same LSA from both R2 and R3. Because there is
already one LSA present in the R1 LSDB from R2, the one received from R3 is
discarded. At the end of the flooding process, each router should have an identical view
of the network topology.
A router can now use an SPF algorithm to calculate the best way to reach each of the
networks. Once that is done, the information is added to the router’s routing table. When
a link goes down, the neighbor routers that detect it will again flood an LSA with the
updated information. Each router will receive the LSA, update the LSDB with that
information, recalculate the best path, and update the routing table accordingly.
Advantages of a link-state algorithm include the following:
A better way to calculate the cost to a destination
Less protocol overhead compared to DV because updates do not require sending
the full topology
Better best-path calculation because each router has a view of the full topology
Loop-free
Using Multiple Routing Protocols
An organization can run more than one routing protocol within a network; for example,
they can use a combination of static routes and dynamic routes learned via a routing
protocol. What happens if the same destination is provided by two routing protocols
with a different next hop?
Routers assign a value to each route source, called the administrative distance on
Cisco routers, that is used to determine precedence based on how the router learned
about a specific network. For example, we may want the router to use the route
information provided by OSPF instead of the one provided by RIPv2.
Table 1-16 summarizes the default administrative distance of a Cisco IOS router. These
values can be modified to tweak the route selection if needed.
Table 1-16 Cisco IOS Router Default Administrative Distances
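The selection logic can be sketched as follows, using a few well-known Cisco IOS default AD values (connected 0, static 1, EIGRP internal 90, OSPF 110, RIP 120); the next-hop addresses are made up for illustration:

```python
# Sketch of route-source selection by administrative distance (AD).
DEFAULT_AD = {
    "connected": 0,
    "static": 1,
    "eigrp": 90,
    "ospf": 110,
    "rip": 120,
}

def best_route(candidates):
    """candidates: list of (source, next_hop) tuples; lowest AD wins."""
    return min(candidates, key=lambda c: DEFAULT_AD[c[0]])

# The same destination learned from both OSPF and RIPv2: OSPF (AD 110) wins.
print(best_route([("rip", "10.0.2.2"), ("ospf", "10.0.9.9")]))
```

Tweaking the values in the dictionary mimics modifying the default administrative distances to change route selection.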
Internet Control Message Protocol (ICMP)
The Internet Control Message Protocol (ICMP) is part of the Internet Protocol suite, and
its main purpose is to provide a way to communicate that an error occurred during the
routing of IP packets.
ICMP packets are encapsulated directly within the IP payload. An IP packet transporting
an ICMP message in its payload sets the Protocol field in the header to 1. The ICMP
packet starts with an ICMP header that always includes the Type and Code fields,
which together define what the message is used for. ICMP defines several message
types, and each type can include several codes.
Table 1-17 provides a summary of the most used values for ICMP Type and Code fields.
A full list can be found at http://www.iana.org/assignments/icmp-parameters/icmp-
parameters.xhtml.
Table 1-17 Most Used ICMP Types and Codes
Probably the best-known use of an ICMP message is ping, a utility implemented in
operating systems that use TCP/IP to confirm the reachability of a remote host at
Layer 3. Ping uses ICMP to perform the task. When you ping a remote
destination, an ICMP Echo Request (type 8 code 0) is sent to the destination. If the
packet arrives at the destination, the destination sends an ICMP Echo Reply (type 0
code 0) back to the host. This confirms connectivity at Layer 3.
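As a sketch of what ping puts on the wire, the following builds an ICMP Echo Request message by hand (the standard Internet checksum is the one's-complement sum of 16-bit words). Actually sending it would require a raw socket and elevated privileges, so this example only constructs and verifies the message.

```python
# Sketch of building an ICMP Echo Request (type 8, code 0) by hand.
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # Type=8, Code=0, checksum computed over the message with checksum=0.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(0x1234, 1)
# A correctly checksummed ICMP message re-verifies to zero.
print(inet_checksum(pkt))
```

The Echo Reply uses the same layout with type 0, which is how the replying host echoes the identifier and sequence number back.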
Figure 1-62 shows an example of an ICMP Echo Request and Echo Reply exchange.
Figure 1-62 ICMP Echo Request and Echo Reply Exchange
Another very popular ICMP message is Destination Unreachable. This is used for a
number of cases, as you can see by the large number of codes for this type. For example,
if Host A pings a remote host but its default gateway does not have information on
how to route the packet to that destination, the gateway will send an ICMP Destination
Unreachable – Network Unreachable message (type 3 code 0) back to Host A to
communicate that the packet was dropped and could not be delivered.
An ICMP Time Exceeded message is instead generated when a router receives an IP
packet with an expired TTL value. The router will drop the packet and send back to the
IP packet source an ICMP Time Exceeded – TTL Exceeded in Transit message (type 11
code 0).
Domain Name System (DNS)
In all the examples so far, we always had Host A sending a packet to Host B using its IP
address. However, having to remember IP addresses is not very convenient. Imagine if
you had to remember 72.163.4.161 instead of www.cisco.com when you wanted to
browse resources on the Cisco web server.
The solution is called the Domain Name System (DNS). DNS is a hierarchical and
distributed database that is used to provide a mapping between device names and the
IP addresses assigned to them.
This section introduces DNS and describes its basic functionalities. DNS works at
TCP/IP application layer; however, it is included in this section to complete the
overview of how two hosts communicate.
DNS is based on a hierarchical architecture called domain namespace. The hierarchy is
organized in a tree structure, where each leaf represents a specific resource and is
uniquely identified by its fully qualified domain name (FQDN). The FQDN is formed by
linking together the names in the hierarchy, starting from the leaf name up to the root of
the tree.
Figure 1-63 shows an example of a DNS domain namespace. The FQDN of the host
www.cisco.com is composed, starting from the root, by its top-level domain (TLD),
which is com, then the second-level domain, cisco, and finally by the resource name or
host name, www, which is the name for a server used to provide world-wide web
services. Another resource within the same second-level domain could be, for example,
a server called tools, in which case the FQDN would be tools.cisco.com.
Figure 1-63 DNS Domain Namespace
Table 1-18 summarizes the types of domain names.
Table 1-18 Domain Names
Each entry in the DNS database is called a resource record (RR) and includes several
fields. Figure 1-64 shows an example of a resource record structure.
Figure 1-64 RR Structure
The Type field of the RR indicates which type of resources are included in the RDATA
field. For example, the RR type “A” refers to the address record and includes the
hostname and the associated IP address. This RR is used for the main functionality of
DNS, which is to provide an IP address based on an FQDN.
Table 1-19 summarizes other common RRs.
Table 1-19 Common RRs
The DNS database is divided into DNS zones. A zone is a portion of the DNS database
that is managed by an entity. Each zone must have an SOA RR that includes information
about the management of the zone and the primary authoritative name server. Each DNS
zone must have an authoritative name server. This server is the one that has the
information about the resources present in the DNS zone and can respond to queries
concerning those resources.
So how then does Host A get to know the IP address of the www.cisco.com server? The
process is very simple. Host A will ask its configured DNS server about the IP address
of www.cisco.com. If its DNS server knows the answer, it will reply. Otherwise, it will reach
the authoritative DNS server for www.cisco.com to get the answer. Let’s see the
process in a bit more detail.
Host A needs to query the DNS database to find the answer. In the context of DNS, Host
A, or in general any entity that requests a DNS service, is called a DNS resolver. The
DNS resolver sends queries to its own DNS server, which is configured (for example, via
DHCP), as described in the previous section.
There are two types of DNS queries, sometimes called lookups:
Recursive queries
Iterative queries
Recursive queries are sent from the DNS resolver to its own DNS server. Iterative
queries are sent from the DNS server to other DNS servers in case the initial DNS
server does not have the answer to the recursive query.
Figure 1-65 shows an example of the DNS resolution process, as detailed in the
following steps:
Figure 1-65 DNS Resolution
Step 1. Host A sends a recursive DNS query for a type A record (remember, a type A
RR maps an FQDN to an IPv4 address) to resolve www.cisco.com to
its own DNS server, DNS A.
Step 2. DNS A checks its DNS cache but does not find the information, so it sends an
iterative DNS query to a root DNS server, which is authoritative for the root zone of
the DNS hierarchy.
Step 3. The root DNS server is not authoritative for that host, so it sends back a
referral to the .com DNS server, which is the authoritative server for the .com
domain.
Steps 4 and 5. The .com DNS server performs a similar process and sends a referral
to the cisco.com DNS server.
Steps 6 and 7. The cisco.com DNS server is the DNS authoritative server for
www.cisco.com, so it can reply to DNS A with the information.
Step 8. DNS A receives the information and stores it in its DNS cache for future use.
The information is stored in the cache for a finite time, which is indicated by
the Time To Live (TTL) value in the response from the cisco.com DNS server.
DNS A can now reply to the recursive DNS query from Host A.
Host A receives the information from DNS A and can start sending packets to
www.cisco.com using the correct IP address. Additionally, it will store the
information in its own DNS cache for a time indicated in the TTL field.
The DNS protocol, described in RFC 1035, uses one message format for both queries
and replies. A DNS message includes five sections: Header, Question, Answer,
Authority, and Additional.
The DNS protocol can use UDP or TCP as the transport protocol, and the DNS server is
typically listening on port 53 for both UDP and TCP. According to RFC 1035, UDP port
53 is recommended for standard queries, whereas TCP is used for DNS zone transfer.
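The message format can be sketched by building a minimal RFC 1035 query for an A record by hand (the transaction ID below is arbitrary). Sending these bytes over UDP to port 53 of a DNS server would return an answer in the same message format.

```python
# Sketch of a minimal RFC 1035 DNS query message for an A record.
import struct

def build_query(name: str, txid: int = 0x1234) -> bytes:
    # Header: ID, flags (RD=1 for a recursive query), QDCOUNT=1,
    # ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels ending in a zero byte,
    # then QTYPE=1 (A) and QCLASS=1 (IN).
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", 1, 1)

query = build_query("www.cisco.com")
print(query.hex())
```

Note how the dots of the FQDN disappear on the wire: each label is prefixed by its length, so www.cisco.com becomes `3www5cisco3com0`.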
IPv6 Fundamentals
So far we have analyzed how two or more hosts can communicate using a routed
protocol (for example, IP), mainly using IPv4. In this section, we cover the newer
version of the IP protocol: IPv6.
With the growth of the Internet and communication networks based on TCP/IP, the
number of IPv4 addresses quickly became a scarce resource. Using private addressing
with NAT or CIDR has been fundamental to limiting the impact of the issue; however, a
long-term solution was needed. IPv6 has been designed with that in mind, and its main
purpose is to provide a larger IP address space to support the growth of the number of
devices needing to communicate using the TCP/IP model.
Most of the concepts we have discussed in the sections on the Internet Protocol and
Layer 3 technologies, such as the routing of a packet and routing protocols, work in a
similar way with IPv6. Of course, some modifications need to be taken into account due
to structural differences with IPv4 (for example, the IP address length).
This book will not go into detail on the IPv6 protocol; however, it is important that
security professionals and candidates for the CCNA Cyber Ops SECFND certification
have a basic understanding of IPv6 addressing, how IPv6 works, and its differences and
commonalities with IPv4.
Table 1-20 summarizes the main differences and commonalities between IPv6 and IPv4.
Table 1-20 Comparing IPv6 and IPv4
Figure 1-66 shows an example of communications between Host A and Host B using
IPv6. Similar to the example we saw in the IPv4 section, Host A and Host B would
have an IP address that can identify the device at Layer 3. Each router interface would
also have an IPv6 address.
Figure 1-66 Communication Between Hosts Using IPv6
Host A will send the IPv6 packet encapsulated in an Ethernet frame to its default
gateway, R1 (step 1).
R1 decapsulates the IPv6 packet, looks up the routing table, and finds that the next hop
is R2. It encapsulates the packet in a new Layer 2 frame and sends it to R2 (step 2). R2
will follow a similar process and finally deliver the packet to Host B.
In the example in Figure 1-66, probably the most notable difference is the format of the
IPv6 address. However, there are additional differences that are not visible. For
example, how does an IPv6 host know about the default gateway? Is ARP needed to find
out the MAC address given an IP address for intra-subnet traffic?
As discussed at the beginning of this section, several protocols that work for IPv4 could
work with IPv6 with just a few modifications. Some others are not necessary with IPv6,
and some new protocols had to be created. For example, ICMP and DHCP could not be
used “as is,” so new versions have been created: ICMPv6 and DHCPv6. The
functionality of ARP has been replaced with a new protocol called IPv6 Neighbor
Discovery. OSPF, EIGRP, and other routing protocols have been modified to work with
IPv6, and new versions have been proposed, such as OSPFv3, EIGRPv6, and RIPng.
IPv6 Header
IPv6 has been designed to provide similar functionality to IPv4; however, it is actually
a separate and new protocol rather than an improvement to IPv4. As such, RFC 2460
defines a new header for IPv6 packets.
Figure 1-67 shows an IPv6 header.
Figure 1-67 IPv6 Header
Most of the fields serve the same purpose as their counterparts in IPv4.
With IPv6, one of the core differences with IPv4 is the introduction of extension
headers. Besides the fixed header, shown in Figure 1-67, IPv6 allows additional
headers to carry information for Layer 3 protocols. The extension header is positioned
just after the fixed header and before the IPv6 packet payload. The Next Header field in
the IPv6 header is used to determine what the next header in the packet is. If no
extension headers are present, the field will point to the Layer 4 header that is being
transported (for example, the TCP header). This is similar to the IP protocol field in
IPv4. If an extension header is present, it will indicate which type of extension header
will follow.
IPv6 allows the use of multiple extension headers in a chained fashion. Each extension
header contains a Next Header field that is used to determine whether an additional
extension header follows. The last extension header in the chain indicates the Layer 4
header type being transported (for example, TCP).
Figure 1-68 shows examples of chained extension headers. The first shows an IPv6
header without any extension headers. This is indicated by the Next Header field set to
TCP. In the third example of Figure 1-68, the IPv6 header is instead followed by two
extension headers: the Routing extension header and the Fragmentation extension header.
The Fragmentation header’s Next Header field indicates that a TCP header will
follow.
Figure 1-68 Chained Extension Header
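The chaining described above starts from the fixed header, so a sketch of reading the Next Header field from the 40-byte fixed header illustrates the first step; the packet below is fabricated, and the type-number table covers only a few common values:

```python
# Sketch of parsing the 40-byte IPv6 fixed header and reading the
# Next Header field, which chains to an extension header or Layer 4.
import struct

NEXT_HEADER = {6: "TCP", 17: "UDP", 43: "Routing", 44: "Fragment", 58: "ICMPv6"}

def parse_fixed_header(packet: bytes):
    # First 8 bytes: version/traffic class/flow label (4 bytes),
    # payload length (2), next header (1), hop limit (1).
    vtf, payload_len, next_hdr, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": vtf >> 28,
        "payload_length": payload_len,
        "next_header": NEXT_HEADER.get(next_hdr, next_hdr),
        "hop_limit": hop_limit,
        "src": packet[8:24],
        "dst": packet[24:40],
    }

# A fabricated header: version 6, 20-byte payload, Next Header = TCP (6).
hdr = struct.pack("!IHBB", 6 << 28, 20, 6, 64) + b"\x00" * 32
print(parse_fixed_header(hdr))
```

Walking a chain of extension headers would repeat the same idea: read each header's Next Header field until a Layer 4 value such as TCP (6) is found.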
IPv6 Addressing and Subnets
The most notable difference between IPv4 and IPv6 is the IP address and specifically
the IP address length. The IPv6 address is 128 bits long, whereas the IPv4 address is
only 32 bits. This is because IPv6 is aimed at increasing the IP address space to resolve
the IPv4 address exhaustion issue and cope with the growth in demand for IP addresses.
Similar to IPv4, writing an IPv6 address in binary is not convenient. IPv6 uses a
different convention than IPv4 when it comes to writing down the IP address.
IPv6 addresses are written as eight groups of four hexadecimal digits, each group
representing 16 bits, separated by colons (:). An example of an IPv6 address is as follows:
2340:1111:AAAA:0001:1234:5678:9ABC:1234
Some additional simplification can be done to reduce the complexity of writing down an
IPv6 address:
For each block of four digits, the leading zeros can be omitted.
If two or more consecutive blocks of four digits are 0000, they can be substituted
with two colons (::). This, however, can happen only once within an IPv6
address.
Let’s use FE00:0000:0000:0001:0000:0000:0000:0056 as an example. The first rule
will transform it as follows:
FE00:0:0:1:0:0:0:56
The second rule can be applied either to the second and third blocks or to the fifth, sixth,
and seventh blocks, but not to both. The shortest form would be to apply it to the fifth,
sixth, and seventh blocks, which results in the following:
FE00:0:0:1::56
Like IPv4, IPv6 supports prefix length notation to identify subnets. For example, an
address could be written as 2222:1111:0:1:A:B:C:D/64, where the /64 indicates the
prefix length. To find the network ID, you can use the same process we used for IPv4;
that is, you can take the first n bits (in this case, 64) from the IPv6 address and set the
remaining bits to zeros. Figure 1-69 illustrates the process.
Figure 1-69 Finding the Network ID of an IPv6 Address
The resulting IPv6 address indicates the prefix or network for that IPv6 address. In our
example, this would be 2222:1111:0:1:0:0:0:0 or 2222:1111:0:1::.
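Both shorthand rules and the network-ID computation can be checked with Python's standard ipaddress module (note that it prints addresses in lowercase):

```python
# Verifying IPv6 address compression and the network ID with ipaddress.
import ipaddress

# Rules 1 and 2: drop leading zeros, collapse the longest run of
# all-zero blocks into "::".
addr = ipaddress.ip_address("FE00:0000:0000:0001:0000:0000:0000:0056")
print(addr.compressed)  # fe00:0:0:1::56

# Network ID: keep the first 64 bits and set the remaining bits to zero.
iface = ipaddress.ip_interface("2222:1111:0:1:A:B:C:D/64")
print(iface.network)  # 2222:1111:0:1::/64
```

The module applies the "::" shorthand to the longest run of zero blocks, which matches the shortest form derived by hand in the text.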
IPv6 also defines three types of addresses:
Unicast: Used to identify one specific interface.
Anycast: Used to identify a set of interfaces (for example, on multiple nodes).
When this address is used, packets are usually delivered to the nearest interface
with that address.
Multicast: Used to identify a set of interfaces. When this address is used, packets
are usually delivered to all interfaces identified by that identifier.
In IPv6, there is no concept of a broadcast address as there is in IPv4. Where IPv4
would use a broadcast, IPv6 uses a multicast address. Several types of addresses are
defined within these three main classes. In this book, we will not analyze all types of
addresses and instead will focus on two particular types defined within the Unicast
class: global unicast and link-local unicast addresses (LLA).
In very simple terms, the difference between global unicast and link-local unicast is that
the former can be routed over the Internet whereas the latter is only locally significant
within the local link, and it is used for specific operations such as the Neighbor
Discovery Protocol process.
One concept that is unique for IPv6 is that one interface can have multiple IPv6
addresses. For example, the same interface can have a link-local and a global unicast
address. Actually, this is one of the most common cases. In fact, IPv6 mandates that all
interfaces have at least one link-local address.
The global unicast address is very similar to a public IPv4 address. A global unicast
IPv6 address can be split in three parts (or prefixes), as shown in Figure 1-70.
Figure 1-70 Global Unicast IPv6 Address
The first part is called the global routing prefix and identifies the address block
assigned to an organization; the second is the subnet ID, used to identify a subnet
within that block; and the third is the interface ID, which identifies an interface
within that subnet.
The assignment of the global routing prefix is handled by IANA or by any of its
delegates, such as a regional Internet registry (RIR). The subnet part is decided
within the organization and is based on the IP addressing scheme adopted.
The link-local address (LLA) is a special class of unicast address that is only locally
significant within a link or subnet. In IPv6, at least one LLA needs to be configured per
interface. The LLA is used for a number of functions, such as by the Neighbor Discovery
Protocol or as the next-hop address instead of the global unicast address. Any IPv6
packet that includes an LLA should not be forwarded by a router outside of the subnet.
An LLA always starts with the first 10 bits set to 1111111010 (FE80::/10),
followed by 54 bits set to all 0s. This means that an LLA always starts with
FE80:0000:0000:0000 for the first 64 bits, and the interface ID is typically
determined by the EUI-64 method, which we discuss in the next section.
Figure 1-71 shows an example of an IPv6 LLA.
Figure 1-71 IPv6 LLA
IPv6 multicast addresses are also very important for the correct functioning of IPv6 (for
example, because they replace the network broadcast address and are used in a number
of protocols to reach other devices). An IPv6 multicast address always starts with the
first 8 bits set to 1s, which is equivalent to FF00::/8.
Figure 1-72 shows the format of an IPv6 multicast address.
Figure 1-72 IPv6 Multicast Address Format
The FLGS and SCOP fields are used to communicate whether the address is
permanently assigned (and thus well known) or not, and for which scope the address
can be used (for example, only for the local link).
Table 1-21 summarizes some of the most common IPv6 multicast addresses. A list of
reserved IPv6 multicast addresses can be found at
http://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-
addresses.xhtml.
Table 1-21 Common IPv6 Multicast Addresses
Special and Reserved IPv6 Addresses
Like IPv4, IPv6 includes some reserved addresses that should not be used for interface
assignment. Table 1-22 provides a summary of the special and reserved unicast
addresses and prefixes for IPv6 based on RFC 6890.
Table 1-22 Special and Reserved Unicast Addresses and Prefixes for IPv6
IPv6 Addresses Assignment, Neighbor Discovery Protocol, and DHCPv6
IPv6 supports several methods for assigning an IP address to an interface:
Static
Static prefix with EUI-64 method
Stateless address auto-configuration (SLAAC)
Stateful DHCPv6
With static assignment, the IP address and prefix are configured by the device
administrator. In some devices, such as Cisco IOS routers, it is possible just to
configure the IPv6 prefix, the first 64 bits, and let the router automatically calculate the
interface ID portion of the address, the last 64 bits. The method to calculate the interface
ID is called the EUI-64 method.
The EUI-64 method, described in RFC 4291, uses the following rules to build the
interface ID:
1. Split the interface MAC address in two.
2. Add FFFE in between. This makes the address 64 bits long.
3. Invert the 7th bit (for example, if the bit is 1, write 0, and vice versa).
Figure 1-73 shows an example of the EUI-64 method to calculate the interface ID
portion of an IPv6 address. In this example, the MAC address of the interface is
0200.1111.1111. We first split the MAC address and add FFFE in the middle. We then
flip the 7th bit from 1 to 0. This results in an interface ID of 0000.11FF.FE11.1111.
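The three steps can be sketched as follows, using the dotted MAC notation from the example:

```python
# Sketch of the EUI-64 interface-ID derivation described above.
def eui64_interface_id(mac: str) -> str:
    """mac in dotted form, e.g. '0200.1111.1111'."""
    octets = bytearray.fromhex(mac.replace(".", ""))
    octets[0] ^= 0x02                            # invert the 7th (U/L) bit
    eui = octets[:3] + b"\xff\xfe" + octets[3:]  # split the MAC, insert FFFE
    hexstr = eui.hex().upper()
    return ".".join(hexstr[i:i + 4] for i in range(0, 16, 4))

print(eui64_interface_id("0200.1111.1111"))  # 0000.11FF.FE11.1111
```

Because inverting the 7th bit is an XOR with 0x02, the same function also reverses the transformation, which is handy when recovering a MAC address from an EUI-64 interface ID.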
Figure 1-73 Calculating the Interface ID Portion of an IPv6 Address with EUI-64
The EUI-64 method is also used to calculate the interface ID for an LLA address, as
explained in the previous section.
The third method, SLAAC, allows for automatic address assignment when the IPv6
network prefix and prefix length are not known (for example, if they are not manually
configured). To understand how SLAAC works, we need to look at a new protocol that
is specific for IPv6: the Neighbor Discovery Protocol (NDP).
NDP is used for several functionalities:
Router discovery: Used to discover routers within a subnet.
Prefix discovery: Used to find out the IPv6 network prefix in a given link.
Address auto-configuration: Supports SLAAC to provide automatic address
configuration.
Address resolution: Similar to ARP for IPv4, address resolution is used to
determine the link layer address, given an IPv6 address.
Next-hop determination: Used to determine the next hop for a specific destination.
Neighbor unreachability detection (NUD): Used to determine whether a neighbor
is reachable. It is useful, for example, to determine whether the next-hop router is
still available or an alternative router should be used.
Duplicate address detection (DAD): Used to determine whether the address a
node decided to use is already in use by some other node.
Redirect: Used to inform nodes about a better first-hop node for a destination.
NDP uses ICMP version 6 (ICMPv6) to provide these functionalities. As part of the
NDP specification, five new ICMPv6 messages are defined:
Router Solicitation (RS): This message is sent from hosts to routers and is used to request a Router Advertisement message. The source IP address of this message is either the host-assigned IP address or the unspecified address ::/128 if an IP address is not assigned yet. The destination IP address is the all-routers multicast address FF02::2.
Router Advertisement (RA): This message is sent from routers to all hosts, and it
is used to communicate information such as the IP address of the router and
information about network prefix and prefix length, or the allowed MTU. This can
be sent at regular intervals or to respond to an RS message.
The source IP of this message is the link-local IPv6 address of the router interface, and the destination is either the all-nodes multicast address FF02::1 or the address of the host that sent the RS message.
Neighbor Solicitation (NS): This message is used to request the link-layer address from a neighbor node. It is also used for the NUD and DAD functionality. The source IP address is the IPv6 address of the interface, if already assigned, or the unspecified address ::/128.
Neighbor Advertisement (NA): This message is sent in response to an NS or can be sent unsolicited to flag a change in the link-layer address. The source IP address is the interface IP, while the destination is either the IP address of the node that sent the NS or the all-nodes address FF02::1.
Redirect: This message is used to inform hosts about a better first hop. The source IP address is the link-local IP of the router, and the destination IP address is the source IP address of the packet that triggered the redirect.
Figure 1-74 shows an example of an RS/RA exchange to get information about the
router. In this example, Host A sends a Router Solicitation to all routers in the subnet to
get the network prefix and prefix length.
Figure 1-74 RS/RA Exchange
Figure 1-75 shows an example of an NS/NA exchange to get information about the link-
layer address. This process replaces the ARP process in IPv4. Host A needs to have the
MAC address of Host B so it can send frames. It sends an NS asking who has 2345::2,
and Host B responds with an NA, indicating its MAC address.
Figure 1-75 NS/NA Exchange to Get Link-Layer Address Information
Due to the criticality of the NDP operation, RFC 3971 describes the Secure Neighbor
Discovery (SeND) protocol to improve the security of NDP. SeND defines two ND
messages—Certification Path Solicitation (CPS) and Certification Path Answer (CPA)
—an additional ND option, and an additional auto-configuration mechanism.
Now that you know how NDP works, you can better understand the SLAAC process. In
the following example, we assume the host uses the EUI-64 method to generate an LLA.
At the start, the host generates an LLA address. This provides link-local connectivity to
neighbors.
At this point, the host can receive RAs from the neighbor’s routers, or, optionally, it can
solicit an RA by sending an RS message. The RA message contains the network prefix
and prefix length information that can be used by the host to create a global unicast IP
address.
The prefix part of the address is provided by the information included in the RA. The
interface ID, instead, is provided by using EUI-64 or other methods (for example,
randomly). This depends on how the host has implemented SLAAC. For example, a host
may implement a privacy extension (described in RFC 4941) or a cryptographically
generated address (CGA) when SeND is used. Before the address can be finally
assigned to the interface, the host can use the DAD functionality of NDP to find out
whether any other host is using the same IP.
The following steps detail address assignment via SLAAC. In Figure 1-76, Host A has a
MAC address of 0200.2211.1111.
Figure 1-76 Address Assignment via SLAAC
Step 1. The SLAAC process starts by calculating the LLA. This is done by using the EUI-64 method. This will result in an LLA of FE80::22FF:FE11:1111.
Step 2. At this point, Host A has link-local connectivity and can send an RS message
to get information from the local routers.
Step 3. R1 responds with information about the prefix and prefix length, 2345::/64.
Step 4. Host A uses this information to calculate its global unicast address
2345::22FF:FE11:1111. Before using this address, Host A uses DAD to check
whether any other device is using the same address. It sends an NS message
asking whether anyone is using this address.
Step 5. Since no one responded to the NS message, Host A assumes it is the only one
with that address. This terminates the SLAAC configuration.
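The address arithmetic behind Steps 1 through 4 can be sketched with Python's standard ipaddress module; this is an illustrative sketch (the function name is ours), using the prefix and MAC address from Figure 1-76:

```python
import ipaddress

def slaac_global_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine an advertised /64 prefix with an EUI-64 interface ID."""
    network = ipaddress.IPv6Network(prefix)       # e.g., learned from an RA
    octets = bytearray.fromhex(mac.replace(".", ""))
    octets[0] ^= 0x02                             # flip the U/L bit
    interface_id = int.from_bytes(octets[:3] + b"\xff\xfe" + octets[3:], "big")
    return network.network_address + interface_id

print(slaac_global_address("2345::/64", "0200.2211.1111"))  # 2345::22ff:fe11:1111
```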
The fourth method we look at in this section is stateful DHCPv6. As with many other
protocols, a new version of DHCP has been defined to make it work with IPv6. DHCP
version 6 uses UDP as the transport protocol with port 546 for clients and 547 for
servers or relays.
Two modes of operation have been defined:
Stateful DHCPv6: Works pretty much like DHCPv4, where a server assigns IP
addresses to clients and can provide additional network configuration. The server
keeps track of which IP addresses have been leased and to which clients. The
difference is that stateful DHCPv6 does not provide information about the default
route; that functionality is provided by NDP.
Stateless DHCPv6: Used to provide network configuration only. It is not used to
provide IP address assignment. The term stateless comes from the fact that the
DHCPv6 server does not need to keep the state of the leasing of an IPv6 address.
Stateless DHCPv6 can be used in combination with static or SLAAC IPv6
assignments to provide additional network configuration such as for a DNS server
or NTP server.
DHCPv6 defines several new messages as well, and some of the messages present in
DHCPv4 have been renamed.
The following steps show a basic stateful DHCPv6 exchange for IPv6 address
assignment (see Figure 1-77):
Step 1. The client sends a DHCPv6 Solicit message to the IPv6 multicast address
All_DHCP_Relay_Agents_and_Servers FF02::1:2 and uses its link-local
address as the source.
Step 2. The DHCPv6 servers reply with a DHCPv6 Advertise message back to the
client.
Step 3. The client picks a DHCPv6 server and sends a DHCPv6 Request message to
request the IP address and additional configuration.
Step 4. The DHCPv6 server sends a DHCPv6 Reply message with the information.
Figure 1-77 Stateful DHCPv6 Exchange for IPv6 Address Assignment
If an IP address has been assigned using a different method, a host can use stateless
DHCPv6 to receive additional configuration information. This involves only two
messages instead of four, as shown here (see Figure 1-78):
Figure 1-78 Stateless DHCPv6
Step 1. The client sends a DHCPv6 Information Request message to the IPv6
multicast address All_DHCP_Relay_Agents_and_Servers FF02::1:2.
Step 2. The server sends a DHCPv6 Reply with the information.
Just like DHCPv4, DHCPv6 includes the relay functionality to allow clients to access
DHCPv6 servers outside of a subnet.
Transport Layer Technologies and Protocols
The last concept to discuss in this chapter is how two hosts (Host A and Host B) can
establish end-to-end communication. The end-to-end communication service is
provided by the transport layer or Layer 4 protocols. These protocols are the focus of
this section.
Several protocols work at the transport layer and offer different functionalities. In this
section, we focus on two of the most used protocols: User Datagram Protocol (UDP)
and Transmission Control Protocol (TCP).
Before we get into the protocol details, we need to discuss the concept of multiplexing,
which underlies the functionality of both UDP and TCP. On a single host, there may be
multiple applications that want to use the transport layer protocols (that is, TCP and
UDP) to communicate with remote hosts. In Figure 1-79, for example, Host B supports a
web server and an FTP server. Let's imagine that Host A would like to use both the web and
the FTP services from Host B. It will send two TCP requests to Host B. The question is,
how does Host B differentiate between the two requests and forward the packets to the
correct application?
Figure 1-79 Example of TCP Multiplexing
The solution to this problem is provided by multiplexing, which relies on the concept of
a socket. A socket is a combination of three pieces of information:
The host IP address
A port number
The transport layer protocol
The first two items are sometimes grouped together under the notion of a socket address.
A socket (in the case of this example, a TCP socket) is formed by the IP address of the
host and a port number, which is used by the host to identify the connection. The pair of
sockets on the two hosts, Host A and Host B, uniquely identify a transport layer
connection.
For example, the Host A socket for the FTP connection would be (10.0.1.1, 1026),
where 10.0.1.1 is the IP address of Host A and 1026 is the TCP port used for the
communication. The Host B socket for the same connection would be (10.0.2.2, 21),
where 21 is the standard port assigned to FTP services.
Similarly, the Host A socket for the HTTP connection (web service) would be
(10.0.1.1, 1027), whereas the Host B socket would be (10.0.2.2, 80), where 80 is the
standard port assigned to HTTP services.
The preceding example illustrates the concepts of multiplexing and sockets as applied to
a TCP connection, but the same holds for UDP. For example, when a DNS query is
made to a DNS server, as detailed earlier in the section “Domain Name System (DNS)”
of this chapter, a UDP socket is used on the DNS resolver and on the DNS server.
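To make the idea concrete, here is a minimal, illustrative Python sketch (not from the book) in which two listeners on the loopback interface stand in for the web and FTP servers of Figure 1-79; ephemeral ports replace 80 and 21 so the example runs without privileges:

```python
import socket
import threading

def serve_once(listener: socket.socket, banner: bytes) -> None:
    """Accept one connection, send a banner identifying the service, and close."""
    conn, _ = listener.accept()
    conn.sendall(banner)
    conn.close()

def connect_and_read(server_addr) -> tuple:
    """Open a TCP connection; the OS picks the client's ephemeral port."""
    client = socket.create_connection(server_addr)
    local_socket = client.getsockname()      # (client IP, ephemeral port)
    reply = client.recv(16)
    client.close()
    return local_socket, reply

web = socket.create_server(("127.0.0.1", 0))   # stand-in for port 80
ftp = socket.create_server(("127.0.0.1", 0))   # stand-in for port 21
threading.Thread(target=serve_once, args=(web, b"HTTP")).start()
threading.Thread(target=serve_once, args=(ftp, b"FTP")).start()

web_socket, web_reply = connect_and_read(web.getsockname())
ftp_socket, ftp_reply = connect_and_read(ftp.getsockname())
# Both connections come from the same client IP; the destination port
# (together with the client's ephemeral port) is what tells them apart.
print(web_socket, web_reply)
print(ftp_socket, ftp_reply)
```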
An additional concept that’s generally used to describe protocols at the transport layer
is whether a formal connection needs to be established before a device can send data.
On this basis, the protocols can be classified as follows:
Connection oriented: In this case, the protocol requires that a formal connection
be established before data can be sent. TCP is a connection-oriented protocol and
provides connection establishment by using three packets prior to sending data.
Generally, connection-oriented protocols have a mechanism to terminate a
connection. Connection-oriented protocols are more reliable because the
connection establishment allows the exchange of settings and ensures the receiving
party is able to receive packets. The drawback is that it adds additional overhead
and delay to the transmission of information.
Connectionless: In this case, the protocol allows packets to be sent without any
need for a connection. UDP is an example of a connectionless protocol.
We will now examine how TCP and UDP work in a bit more detail.
Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP) is a reliable, connection-oriented protocol
for communicating over the Internet. Connection oriented means that TCP requires a
connection between two hosts to be established through a specific packet exchange
before any data packets can be sent. This is the opposite of connectionless protocols
(such as UDP), which don’t require any exchange prior to data transmission.
As mentioned in RFC 793, which specifies the TCP protocol, TCP assumes it can obtain
simple and potentially unreliable datagrams (IP packets) from lower-level protocols.
TCP provides most of the services expected by a transport layer protocol. This section
explains the following services and features provided by TCP:
Multiplexing
Connection establishment and termination
Reliability (error detection and recovery)
Flow control
You may wonder why we don’t use TCP for all applications due to these important
features. The reason is that the reliability offered by TCP comes at the cost of lower speed and the increased bandwidth needed to manage this process. For this
reason, some applications that require fast speed but don’t necessarily need to have all
the data packets received to provide the requested level of quality (such as voice/video
over IP) rely on UDP instead of TCP.
Table 1-23 summarizes the services provided by TCP.
Table 1-23 TCP Services
TCP Header
Application data is encapsulated in TCP segments by adding a TCP header to the
application data. These segments are then passed to IP for further encapsulation, thus
ensuring that the packets can be routed on the network, as shown in Figure 1-80.
Figure 1-80 Application Data Encapsulated in TCP Segments
The TCP header is more extensive compared to the UDP header; this is because it needs
additional fields to provide additional services and features. Figure 1-81 shows the
TCP header structure.
Figure 1-81 TCP Header Structure
The main TCP header fields are as follows:
Source and Destination Port: These are used to include the source and destination
port for a given TCP packet. They are probably the most important fields within the
TCP header and are used to correctly identify a TCP connection and TCP socket.
Sequence Number (32 bits): When the SYN flag bit is set to 1, this is the initial
sequence number (ISN) and the first data byte is ISN+1. When the SYN flag bit is
set to 0, this is the sequence number of the first data byte in this segment.
Acknowledgment Number (32 bits): Once the connection is established, the ACK
flag bit is set to 1, and the acknowledgment number provides the sequence number
of the next data payload the sender of the packet is expecting to receive.
Control Flags (9 bits, 1 bit per flag): This field is used for congestion notification
and to carry TCP flags.
ECN (Explicit Congestion Notification) Flags (3 bits): The first three flags
(NS, CWR, ECE) are related to the congestion notification feature that has been
recently defined in RFC 3168 and RFC 3540 (following RFC 793 about the TCP
protocol in general). This feature supports end-to-end network congestion
notification, in order to avoid dropping packets as a sign of network congestion.
TCP flags include the following:
URG: The Urgent flag signifies that Urgent Pointer data should be reviewed.
ACK: The Acknowledgment bit flag should be set to 1 after the connection
has been established.
PSH: The Push flag signifies that the data should be pushed directly to an
application.
RST: The Reset flag resets the connection.
SYN: The Synchronize (sequence numbers) flag is relevant for connection
establishment, and should only be set within the first packets from both of the
hosts.
FIN: This flag signifies that there is no more data from the sender.
Window (16 bits): This field indicates the number of data bytes the sender of the
segment is able to receive. This field enables flow control.
Urgent Pointer (16 bits): When the URG flag is set to 1, this field points to the first byte of data following the urgent data. The TCP protocol doesn't define what the application will do with the urgent data; it only provides notification that urgent data is pending processing.
TCP Connection Establishment and Termination
As mentioned at the beginning of this section, the fact that the TCP protocol is
connection oriented means that before any data is exchanged, the two hosts need to go
through a process of establishing a connection. This process is often referred to as
“three-way handshake” because it involves three packets and the main goal is to
synchronize the sequence numbers so that the hosts can exchange data, as illustrated in
Figure 1-82.
Figure 1-82 TCP Three-way Handshake
Let’s examine the packet exchange in more detail:
First packet (SYN): The client starts the process of establishing a connection with
a server by sending a TCP segment that has the SYN bit set to 1, in order to signal
to the peer that it wants to synchronize the sequence numbers and establish the
connection. The client also sends its initial sequence number (here X), which is a
random number chosen by a client.
Second packet (SYN-ACK): The server responds with a SYN-ACK packet where
it sends its own request for synchronization and its initial sequence number (another
random number; here Y). Within the same packet, the server also sends the
acknowledgment number X+1, acknowledging the receipt of a packet with the
sequence number X and requesting the next packet with the sequence number X+1.
Third packet (ACK): The client responds with a final acknowledgment, requesting
the next packet with the sequence number Y+1.
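The sequence-number bookkeeping of these three packets can be modeled in a few lines of Python; this is a toy illustration (no real packets are exchanged), with dictionaries standing in for TCP segments:

```python
import random

# Toy model of the three-way handshake's sequence-number arithmetic.
# Each side picks a random 32-bit initial sequence number (ISN) and
# acknowledges the peer's ISN + 1.
MOD = 2 ** 32
client_isn = random.randrange(MOD)                     # X
server_isn = random.randrange(MOD)                     # Y

syn     = {"flags": {"SYN"}, "seq": client_isn}
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn,
           "ack": (syn["seq"] + 1) % MOD}              # expects X + 1
ack     = {"flags": {"ACK"}, "seq": (client_isn + 1) % MOD,
           "ack": (syn_ack["seq"] + 1) % MOD}          # expects Y + 1
```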
In order to terminate a connection, peers go through a similar packet exchange, as shown
in Figure 1-83.
Figure 1-83 TCP Connection Termination
The process starts with the client’s application notifying the TCP layer on the client side
that it wants to terminate the connection. The client sends a packet with the FIN bit set,
to which the server responds with an acknowledgment, acknowledging the receipt of the
packet. At that point, the server notifies the application on its side that the other peer
wishes to terminate the connection. During this time, the client will still be able to
receive traffic from the server, but will not be sending any traffic to the server. Once the
application on the server side is ready to close down the connection, it signals to the
TCP layer that the connection is ready to be closed, and the server sends a FIN packet
as well, to which the client responds with an acknowledgment. At that point, the
connection is terminated.
TCP Socket
The concepts of multiplexing and sockets have already been introduced: multiplexing enables multiple applications to run on the same host, and a socket uniquely identifies a connection with an IP address, transport protocol, and port number.
There are some “well-known” applications that use designated port numbers (for
example, WWW uses TCP port 80). This means that the web server will keep its socket
for TCP port 80 open, listening to requests from various hosts. When a host tries to open
a connection to a web server, it will use TCP port 80 as a destination port, and it will
choose an ephemeral port number (greater than 1023) as a source port. These port numbers need to be greater than 1023 because ports 0 through 1023 are reserved for well-known applications.
Table 1-24 shows a list of some of the most used applications and their port numbers. A
full list of ports used by known services can be found at http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml.
Table 1-24 Commonly Used TCP Applications and Associated Port Numbers
FTP (File Transfer Protocol) uses TCP port 20 for transferring the data and a separate
connection on port 21 for exchanging control information (for example, FTP
commands). Depending on whether the FTP server is in active or passive mode,
different port numbers can be involved.
SSH (Secure Shell) is a protocol used for remote device management by allowing a secure (encrypted) connection over an unsecure medium. Telnet can also be used for device management; however, this is not recommended because Telnet is not secure—data is sent in plaintext.
SMTP (Simple Mail Transfer Protocol) is used for email exchange. Typically, the client
would use this protocol for sending emails, but would use POP3 or IMAP to retrieve
emails from the mail server.
DNS (Domain Name System) uses UDP port 53 for domain name queries from hosts that
allow other hosts to find out about the IP address for a specific domain name, but it uses
TCP port 53 for communication between DNS servers for completing DNS zone
transfers.
HTTP (Hypertext Transfer Protocol) is an application layer protocol that is used for accessing content on the Web. HTTPS (HTTP Secure) is basically HTTP that uses TLS (Transport Layer Security) or its predecessor SSL (Secure Sockets Layer) for encryption. HTTPS is widely used on the Internet for secure communication because it allows encryption and server authentication.
BGP (Border Gateway Protocol) is an exterior gateway protocol used for exchanging
routing information between different autonomous systems. It’s the routing protocol of
the Internet.
TCP Error Detection and Recovery
TCP provides reliable delivery because the protocol is able to detect errors in
transmission (for example, lost, damaged, or duplicated segments) and recover from
such errors. This is done through the use of sequence numbers, acknowledgments, and
checksum fields in the TCP header.
Each segment transmitted is marked with a sequence number, allowing the receiver of
the segments to order them and provide acknowledgment on which segments have been
received. If the sender doesn’t get acknowledgment, it will send the data again.
Figure 1-84 shows an example of sequence numbers and acknowledgments in a typical
scenario.
Figure 1-84 Example of TCP Acknowledgment and Sequence Numbers
In this example, the client is sending three segments, each with 100 bytes of data. If the
server has received all three segments in order, it would send a packet with the
acknowledgment set to 400, which literally means “I’ve received all the segments with
sequence numbers up to 399, and I am now expecting a segment with the sequence
number 400.”
The fact that the segments have sequence numbers will allow the server to properly
align the data upon receipt—for example, if for any reason it receives the segments in a
different order or if it receives any duplicates.
Figure 1-85 shows how TCP detects and recovers from an error.
Figure 1-85 TCP Error Detection and Recovery
Imagine now that the client sends three packets with sequence numbers 100, 200, and
300. Due to some error in the transmission, the packet with the sequence number 200
gets lost or damaged. If the segment gets damaged during transmission, the TCP protocol
would be able to detect this through the checksum number available within the TCP
header. Because the segment with the sequence number 200 has not been received properly, the server will only acknowledge up to 200. This indicates to the client that it needs to resend that segment. When the server receives the missing segment, it will resume the normal acknowledgment at 400, because it has already received the segment with sequence number 300. This indicates to the client that it can continue with the segments with sequence numbers 400, 500, and so on. It is worth mentioning that if the receiver doesn't receive
the packet with the sequence number 200, it will continue to send packets with
acknowledgment number 200, asking for the missing packet.
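The cumulative-acknowledgment behavior just described can be modeled in a few lines of Python; this is an illustrative sketch (the function name is ours), using the segment numbers from Figure 1-85:

```python
def cumulative_ack(received_seqs: set, isn: int = 100, seg_size: int = 100) -> int:
    """Return the ACK number: the sequence number of the next expected segment."""
    expected = isn
    while expected in received_seqs:
        expected += seg_size
    return expected

print(cumulative_ack({100, 300}))       # 200 -> segment 200 is missing
print(cumulative_ack({100, 200, 300}))  # 400 -> all three received in order
```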
TCP Flow Control
The TCP protocol ensures flow control through the use of “sliding windows,” by which
a receiving host “tells” the sender how many bytes of data it can handle at a given time
before waiting for an acknowledgment—this is called the window size. This mechanism
works for both the client and server. For example, the client can ask the server to slow
down, and the server can use this mechanism to ask the client to slow down or even to
increase the speed. This allows the TCP peers to increase or reduce the speed of
transmission depending on the conditions on the network and processing capability, and
to avoid the situation of having a receiving host overwhelmed with data. The size of the
receiving window is communicated through the “Window” field within the TCP header.
Figure 1-86 shows how the window size gets adjusted based on the capability of the
receiving host.
Figure 1-86 Example of TCP Flow Control
Initially, the server notifies the client that it can handle a window size of 300 bytes, so
the client is able to send three segments of 100 bytes each, before getting the
acknowledgment. However, if for some reason the server becomes overwhelmed with
data that needs to be processed, it will notify the client that it can now handle a smaller
window size.
The receiving host (for example, the server) has a certain buffer that it fills in with data
received during a TCP connection, which could determine the size of this window. In
ideal conditions, the receiving host may be able to process all the received data
instantaneously, and free up the buffer again, leaving the window at the same size.
However, if for some reason it is not able to process the data at that speed, it will
reduce the window, which will notify the client of the problem. In Figure 1-86, the
receiving party (the server) notifies the client that it needs to use a smaller window size
of 200 bytes instead of the initial 300-byte window. The client adjusts its data stream
accordingly. This process is dynamic, meaning that the server could also increase the
window size.
The Window field in the TCP header is 16 bits long, which means that the maximum
window size is 65,535 bytes. In order to use higher window sizes, a scaling factor
within the TCP Options field can be used. This TCP option will get negotiated within
the initial three-way handshake.
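A small, illustrative Python sketch (the function names are ours) of the sender-side arithmetic, including the window-scale option:

```python
# Toy model of TCP flow control: at most `window` unacknowledged bytes
# may be in flight, and the 16-bit Window field can be left-shifted by
# the negotiated scale factor to advertise larger windows.
def segments_allowed(window: int, seg_size: int, unacked: int) -> int:
    """How many full segments the sender may still transmit."""
    return max(0, (window - unacked) // seg_size)

def scaled_window(window_field: int, shift: int) -> int:
    """Effective window when the window-scale option (shift <= 14) is used."""
    return window_field << shift

print(segments_allowed(window=300, seg_size=100, unacked=0))    # 3
print(segments_allowed(window=200, seg_size=100, unacked=100))  # 1
print(scaled_window(65535, 14))                                 # 1073725440
```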
User Datagram Protocol (UDP)
Like TCP, the User Datagram Protocol (UDP) is one of the most used transport layer
protocols. Unlike TCP, however, UDP is designed to reduce protocol interactions and complexity. In fact, it does not establish any connection channel; in essence, it just wraps higher-layer information in a UDP segment and passes it to IP for transmission. UDP is usually referred to as a "connectionless" protocol.
Due to its simplicity, UDP does not implement any mechanism for error control and
retransmission; it leaves that task to the higher-layer protocols if required. Generally,
UDP is used in applications where low latency and low jitter are more important
than reliability. A well-known use case for UDP is Voice over IP. UDP is described in
RFC 768.
UDP Header
The UDP header structure is shorter and less complex than TCP’s. Figure 1-87 shows an
example of a UDP header.
Figure 1-87 UDP Header
The UDP header includes the following fields:
Source and Destination Port: Similar to the TCP header, these fields are used to
determine the socket address and to correctly send the information to the higher-
level application.
Length: Includes the length of the UDP segment.
Checksum: This is computed over a pseudo header, which includes information from the IP header (the source and destination addresses) and information from the UDP
header. Refer to the RFC for more information on how the checksum is calculated.
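For illustration, here is a sketch of the 16-bit one's-complement checksum algorithm that UDP and TCP use, run over a plain byte string; the real UDP computation would also cover the pseudo header described above. The sample input is the worked example from RFC 1071:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, with carries folded in."""
    if len(data) % 2:
        data += b"\x00"                 # pad to a 16-bit boundary
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                  # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

print(hex(internet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))  # 0x220d
```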
UDP Socket and Known UDP Application
As described earlier, UDP uses the same principle of multiplexing and sockets that’s
used by TCP. The protocol information on the socket determines whether it is a TCP or
UDP type of socket. As with TCP, UDP has well-known applications that use standard
port numbers while listening for arriving packets. Table 1-25 provides an overview of
known applications and their standard ports.
Table 1-25 Commonly Used UDP Applications and Associated Port Numbers
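A minimal, illustrative UDP exchange on the loopback interface (not from the book) shows the connectionless pattern: no handshake, just datagrams addressed to a socket:

```python
import socket

# Sketch of a UDP exchange: the "server" binds a datagram socket, and the
# client simply addresses a datagram to that socket (IP address, port, UDP)
# with no connection setup beforehand.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                    # ephemeral port

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"query", server.getsockname())    # no handshake first

data, client_addr = server.recvfrom(512)
server.sendto(b"reply: " + data, client_addr)
response, _ = client.recvfrom(512)
print(response)                                  # b'reply: query'
client.close()
server.close()
```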
This concludes the overview of networking fundamentals. The next chapter introduces
the concepts of network security devices and cloud services.
Exam Preparation Tasks
Review All Key Topics
Review the most important topics in the chapter, noted with the Key Topic icon in the
outer margin of the page. Table 1-26 lists these key topics and the page numbers on
which each is found.
Complete Tables and Lists from Memory
Print a copy of Appendix B, “Memory Tables,” (found on the book website), or at least
the section for this chapter, and complete the tables and lists from memory. Appendix C,
“Memory Tables Answer Key,” also on the website, includes completed tables and lists
to check your work.
Define Key Terms
Define the following key terms from this chapter, and check your answers in the
glossary:
TCP/IP model
OSI model
local area network
Ethernet
collision domain
half duplex
full duplex
MAC address
LAN hub
LAN bridge
LAN switch
MAC address table
dynamic MAC address learning
Ethernet broadcast domain
VLAN
trunk
multilayer switch
wireless LAN
access point
lightweight access point
autonomous access point
Internet Protocol
IP address
private IP addresses
routing table
router
Classless Interdomain Routing (CIDR)
variable-length subnet mask (VLSM)
routing protocol
Dynamic Host Configuration Protocol (DHCP)
address resolution
Domain Name System
stateless address auto-configuration (SLAAC)
transport protocol socket
connectionless communication
connection-oriented communication
Q&A
The answers to these questions appear in Appendix A, “Answers to the ‘Do I Know
This Already?’ Quizzes and Q&A Questions.” For more practice with exam format
questions, use the exam engine on the website.
1. At which OSI layer does a router typically operate?
a. Transport
b. Network
c. Data link
d. Application
2. What are the advantages of a full-duplex transmission mode compared to half-
duplex mode? (Select all that apply.)
a. Each station can transmit and receive at the same time.
b. It avoids collisions.
c. It makes use of backoff time.
d. It uses a collision avoidance algorithm to transmit.
3. How many broadcast domains are created if three hosts are connected to a Layer
2 switch in full-duplex mode?
a. 4
b. 3
c. None
d. 1
4. What is a trunk link used for?
a. To pass multiple virtual LANs
b. To connect more than two switches
c. To enable Spanning Tree Protocol
d. To encapsulate Layer 2 frames
5. What is the main difference between a Layer 2 switch and a multilayer switch?
a. A multilayer switch includes Layer 3 functionality.
b. A multilayer switch can be deployed on multiple racks.
c. A Layer 2 switch is faster.
d. A Layer 2 switch uses a MAC table whereas a multilayer switch uses an ARP
table.
6. What is CAPWAP used for?
a. To enable wireless client mobility through different access points
b. For communication between a client wireless station and an access point
c. For communication between a lightweight access point and a wireless LAN
controller
d. For communication between an access point and the distribution service
7. Which of the following services are provided by a lightweight access point?
(Select all that apply.)
a. Channel encryption
b. Transmission and reception of frames
c. Client authentication
d. Quality of Service
8. Which of the following classful networks would allow at least 256 usable IPv4
addresses? (Select all that apply).
a. Class A
b. Class B
c. Class C
d. All of the above
9. What would be the maximum length of the network mask for a network that has
four hosts?
a. /27
b. /30
c. /24
d. /29
10. Which routing protocol exchanges link state information?
a. RIPv2
b. RIP
c. OSPF
d. BGP
11. What is an advantage of using OSPF instead of RIPv2?
a. It does not have the problem of count to infinity.
b. OSPF has a higher hop-count value.
c. OSPF includes bandwidth information in the distance vector.
d. OSPF uses DUAL for optimal shortest path calculation.
12. What are two ways the IPv6 address
2345:0000:0000:0000:0000:0000:0100:1111 can be written?
a. 2345:0:0:0:0:0:0100:1111
b. 2345::1::1
c. 2345::0100:1111
d. 2345::1:1111
13. In IPv6, what is used to replace ARP?
a. ARPv6
b. DHCPv6
c. NDP
d. Route Advertisement Protocol
14. What would be the IPv6 address of a host using SLAAC with 2345::/64 as a
network prefix and MAC address of 0300.1111.2222?
a. 2345::100:11FF:FE11:2222
b. 2345:0:0:0:0300:11FF:FE11:2222
c. 2345:0:0:0:FFFE:0300:1111:2222
d. 2345::0300:11FF:FE11:2222
15. What is a DNS iterative query used for?
a. It is sent from a DNS server to other servers to resolve a domain.
b. It is sent from a DNS resolver to the backup DNS server.
c. It is sent from a DNS server to the DNS client.
d. It is sent from a client machine to a DNS resolver.
16. Which TCP header flag is used by TCP to establish a connection?
a. URG
b. SYN
c. PSH
d. RST
17. What information is included in a network socket? (Select all that apply.)
a. Protocol
b. IP address
c. Port
d. MAC address
References and Further Reading
“Requirements for Internet Hosts – Communication Layers,”
https://tools.ietf.org/html/rfc1122
ISO/IEC 7498-1 – Information technology – Open Systems Interconnection –
Basic Reference Model: The Basic Model
David Hucaby, CCNA Wireless 200-355 Official Cert Guide, Cisco Press
(2015)
DNS Best Practices, Network Protections, and Attack Identification
http://www.cisco.com/c/en/us/about/security-center/dns-best-practices.html
Wendell Odom, CCENT/CCNA ICND1 100-105 Official Cert Guide, Cisco
Press (2016)
Wendell Odom, CCNA Routing and Switching ICND2 200-105 Official Cert
Guide, Cisco Press (2016)
Cisco ICND1 Foundation Learning Guide: LANs and Ethernet
http://www.ciscopress.com/articles/article.asp?p=2092245&seqNum=2
IEEE Std 802.1D – IEEE Standard for Local and Metropolitan Area Networks –
Media Access Control (MAC) Bridges
IEEE Std 802.1Q – IEEE Standard for Local and Metropolitan Area Networks –
Bridges and Bridged Networks
IEEE Std 802 – IEEE Standard for Local and Metropolitan Area Networks:
Overview and Architecture
“Address Allocation for Private Internets,” https://tools.ietf.org/html/rfc1918
“Special-Purpose IP Address Registries,” https://tools.ietf.org/html/rfc6890
“Dynamic Host Configuration Protocol,” https://www.ietf.org/rfc/rfc2131.txt
“An Ethernet Address Resolution Protocol,” https://tools.ietf.org/html/rfc826
“INTERNET CONTROL MESSAGE PROTOCOL,”
https://tools.ietf.org/html/rfc792
“Domain Names - Implementation and Specification,”
https://www.ietf.org/rfc/rfc1035.txt
“Internet Protocol, Version 6 (IPv6) Specification,”
https://tools.ietf.org/html/rfc2460
“Unique Local IPv6 Unicast Addresses,” https://tools.ietf.org/html/rfc4193
“IP Version 6 Addressing Architecture,” https://tools.ietf.org/html/rfc4291
“IPv6 Secure Neighbor Discovery,” http://www.cisco.com/en/US/docs/ios-
xml/ios/sec_data_acl/configuration/15-2mt/ip6-send.html
“Privacy Extensions for Stateless Address Autoconfiguration in IPv6,”
https://tools.ietf.org/html/rfc4941
“SEcure Neighbor Discovery (SEND),” https://tools.ietf.org/html/rfc3971
“Cryptographically Generated Addresses (CGA),”
https://tools.ietf.org/html/rfc3972
“IPv6 Stateless Address Autoconfiguration,” https://tools.ietf.org/search/rfc4862
“Transmission Control Protocol,” https://tools.ietf.org/html/rfc793
“User Datagram Protocol,” https://tools.ietf.org/html/rfc768
Chapter 2. Network Security Devices and Cloud Services
This chapter covers the following topics:
The different network security systems used in today’s environments
What the benefits of cloud-based security solutions are and how they work
Details about Cisco NetFlow and the important role it plays in
cybersecurity
Data loss prevention systems and solutions
Welcome to the second chapter! In this chapter, you will learn about the different types
of network security devices and cloud services in the industry. This chapter compares
traditional and next-generation firewalls, as well as traditional and next-generation
intrusion prevention systems (IPSs). You will learn details about the Cisco Web Security
and Cisco Email Security solutions, Advanced Malware Protection (AMP), identity
management systems, Cisco NetFlow, and data loss prevention (DLP).
“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz helps you identify your strengths and deficiencies
in this chapter’s topics. The ten-question quiz, derived from the major sections in the
“Foundation Topics” portion of the chapter, helps you determine how to spend your
limited study time. You can find the answers in Appendix A, “Answers to the ‘Do I
Know This Already?’ Quizzes and Q&A Questions.”
Table 2-1 outlines the major topics discussed in this chapter and the “Do I Know This
Already?” quiz questions that correspond to those topics.
Table 2-1 “Do I Know This Already?” Foundation Topics Section-to-Question
Mapping
1. Which of the following are examples of network security devices that have been
invented throughout the years to enforce policy and maintain network visibility?
a. Routers
b. Firewalls
c. Traditional and next-generation intrusion prevention systems (IPSs)
d. Anomaly detection systems
e. Cisco Prime Infrastructure
2. Access control entries (ACE), which are part of an access control list (ACL), can
classify packets by inspecting Layer 2 through Layer 4 headers for a number of
parameters, including which of the following items?
a. Layer 2 protocol information such as EtherTypes
b. The number of bytes within a packet payload
c. Layer 3 protocol information such as ICMP, TCP, or UDP
d. The size of a packet traversing the network infrastructure device
e. Layer 3 header information such as source and destination IP addresses
f. Layer 4 header information such as source and destination TCP or UDP ports
3. Which of the following statements are true about application proxies?
a. Application proxies, or proxy servers, are devices that operate as
intermediary agents on behalf of clients that are on a private or protected
network.
b. Clients on the protected network send connection requests to the application
proxy to transfer data to the unprotected network or the Internet.
c. Application proxies can be classified as next-generation firewalls.
d. Application proxies always perform network address translation (NAT).
4. Which of the following statements are true when referring to network address
translation (NAT)?
a. NAT can only be used in firewalls.
b. Static NAT does not allow connections to be initiated bidirectionally.
c. Static NAT allows connections to be initiated bidirectionally.
d. NAT is often used by firewalls; however, other devices such as routers and
wireless access points provide support for NAT.
5. Which of the following are examples of next-generation firewalls?
a. Cisco WSA
b. Cisco ASA 5500-X
c. Cisco ESA
d. Cisco Firepower 4100 Series
6. Which of the following are examples of cloud-based security solutions?
a. Cisco Cloud Threat Security (CTS)
b. Cisco Cloud Email Security (CES)
c. Cisco AMP Threat Grid
d. Cisco Threat Awareness Service (CTAS)
e. OpenDNS
f. CloudLock
7. The Cisco CWS service uses web proxies in the Cisco cloud environment that
scan traffic for malware and policy enforcement. Cisco customers can connect to
the Cisco CWS service directly by using a proxy auto-configuration (PAC) file in
the user endpoint or through connectors integrated into which of the following
Cisco products?
a. Cisco ISR G2 routers
b. Cisco Prime LMS
c. Cisco ASA
d. Cisco WSA
e. Cisco AnyConnect Secure Mobility Client
8. Depending on the version of NetFlow, a network infrastructure device can gather
different types of information, including which of the following?
a. Common vulnerability enumerators (CVEs)
b. Differentiated services code point (DSCP)
c. The device’s input interface
d. TCP flags
e. Type of service (ToS) byte
9. There are several differences between NetFlow and full-packet capture. Which
of the following statements are true?
a. Full-packet capture provides the same information as NetFlow.
b. Full-packet capture is faster.
c. One of the major differences and disadvantages of full-packet capture is cost
and the amount of data to be analyzed.
d. In many scenarios, full-packet captures are easier to collect and require pretty
much the same analysis ecosystem as NetFlow.
10. Which of the following is an example of a data loss prevention solution?
a. Cisco Advanced DLP
b. Cisco CloudLock
c. Cisco Advanced Malware Protection (AMP)
d. Cisco Firepower 4100 appliances
Foundation Topics
Network Security Systems
Many network security devices have been invented throughout the years to enforce
policy and maintain visibility of everything that is happening in the network. These
network security devices include the following:
Traditional and next-generation firewalls
Personal firewalls
Intrusion detection systems (IDSs)
Traditional and next-generation intrusion prevention systems (IPSs)
Anomaly detection systems
Advanced malware protection (AMP)
Web security appliances
Email security appliances
Identity management systems
In the following sections, you will learn details about each of the aforementioned
network security systems.
Traditional Firewalls
Typically, firewalls are devices that are placed between a trusted and an untrusted
network, as illustrated in Figure 2-1.
Figure 2-1 Traditional Firewall Deployment
In Figure 2-1, a firewall is deployed between two networks: a trusted network and an
untrusted network. The trusted network is labeled as the “inside” network, and the
untrusted network is labeled as the “outside” network. The untrusted network in this
case is connected to the Internet. This is the typical nomenclature you’ll often see in
Cisco and non-Cisco documentation. When firewalls are connected to the Internet, they
are often referred to as Internet edge firewalls. A detailed understanding of how
firewalls and their related technologies work is extremely important for all network
security professionals. This knowledge not only helps you to configure and manage the
security of your networks accurately and effectively, but also allows you to gain an
understanding of how to enforce policies and achieve network segmentation suitable for
your environment.
Several firewall solutions offer user and application policy enforcement in order to
supply protection for different types of security threats. These solutions often provide
logging capabilities that enable the security administrators to identify, investigate,
validate, and mitigate such threats.
Additionally, several software applications can run on a system to protect only that host.
These types of applications are known as personal firewalls. This section includes an
overview of network firewalls and their related technologies. Later in this chapter, you
will learn the details about personal firewalls.
Network-based firewalls provide key features that are used for perimeter security, such
as network address translation (NAT), access control lists (ACLs), and application
inspection. The primary task of a network firewall is to deny or permit traffic that
attempts to enter or leave the network based on explicit preconfigured policies and
rules. Firewalls are often deployed in several other parts of the network to provide
network segmentation within the corporate infrastructure and also in data centers. The
processes used to allow or block traffic may include the following:
Simple packet-filtering techniques
Application proxies
Network address translation
Stateful inspection firewalls
Next-generation context-aware firewalls
Packet-Filtering Techniques
The purpose of packet filters is simply to control access to specific network segments
by defining which traffic can pass through them. They usually inspect incoming traffic at
the transport layer of the Open Systems Interconnection (OSI) model. For example,
packet filters can analyze Transmission Control Protocol (TCP) or User Datagram
Protocol (UDP) packets and compare them against a set of predetermined rules called
access control lists (ACLs). They inspect the following elements within a packet:
Source address
Destination address
Source port
Destination port
Protocol
ACLs are typically configured in firewalls, but they also can be configured in network
infrastructure devices such as routers, switches, wireless access controllers (WLCs),
and others.
Each entry of an ACL is referred to as an access control entry (ACE). These ACEs can
classify packets by inspecting Layer 2 through Layer 4 headers for a number of
parameters, including the following:
Layer 2 protocol information such as EtherTypes
Layer 3 protocol information such as ICMP, TCP, or UDP
Layer 3 header information such as source and destination IP addresses
Layer 4 header information such as source and destination TCP or UDP ports
After an ACL has been properly configured, you can apply it to an interface to filter
traffic. The firewall or networking device can filter packets in both the inbound and
outbound direction on an interface. When an inbound ACL is applied to an interface, the
security appliance analyzes packets against the ACEs after receiving them. If a packet is
permitted by the ACL, the firewall continues to process the packet and eventually
passes the packet out the egress interface.
The big difference between a router ACL and a Cisco ASA (a stateful firewall) ACL is
that only the first packet of a flow is checked against the ACL in the security appliance.
After that, the connection is built, and subsequent packets matching that connection are
not checked by the ACL. If a packet is denied by the ACL, the security appliance
discards the packet and generates a syslog message indicating that such an event has
occurred.
If an outbound ACL is applied on an interface, the firewall processes the packets by
sending them through the different processes (NAT, QoS, and VPN) and then applies the
configured ACEs before transmitting the packets out on the wire. The firewall transmits
the packets only if they are allowed to go out by the outbound ACL on that interface. If
the packets are denied by any one of the ACEs, the security appliance discards the
packets and generates a syslog message indicating that such an event has occurred.
Following are some of the important characteristics of an ACL configured on a Cisco
ASA or on a Cisco IOS zone-based firewall:
When a new ACE is added to an existing ACL, it is appended to the end of the
ACL.
When a packet enters the firewall, the ACEs are evaluated in sequential order.
Hence, the order of an ACE is critical. For example, if you have an ACE that
allows all IP traffic to pass through, and then you create another ACE to block all IP
traffic, the packets will never be evaluated against the second ACE because all
packets will match the first ACE entry.
There is an implicit deny at the end of all ACLs. If a packet is not matched against a
configured ACE, it is dropped and a syslog is generated.
Each interface is assigned a security level. The higher the security level, the more
trusted the interface. In traditional Cisco ASA firewalls, the security levels go from 0
(least secure) to 100 (most secure). By default, the outside interface is assigned a
security level of 0 and the inside interface is assigned a security level of 100. In the
Cisco ASA, by default, you do not need to define an ACE to permit traffic from a high-
security-level interface to a low-security-level interface. However, if you want to
restrict traffic flows from a high-security-level interface to a low-security-level
interface, you can define an ACL. If you apply an ACL to a high-security-level
interface, it disables the implicit permit from that interface, and all traffic is then
subject to the entries defined in that ACL.
Also in the Cisco ASA, an ACL must explicitly permit traffic traversing the security
appliance from a lower- to a higher-security-level interface of the firewall. The
ACL must be applied to the lower-security-level interface.
The ACLs (Extended or IPv6) must be applied to an interface to filter traffic that is
passing through the security appliance.
You can bind one extended and one EtherType ACL in each direction of an interface
at the same time.
You can apply the same ACL to multiple interfaces. However, this is not considered
to be a good security practice because overlapping and redundant security policies
can be applied.
You can use ACLs to control traffic through the security appliance, as well as to
control traffic to the security appliance. The ACLs controlling traffic to the
appliance are applied differently than ACLs filtering traffic through the firewall.
The ACLs are applied using access groups. ACLs controlling traffic to the
security appliance are called control plane ACLs.
When TCP or UDP traffic flows through the security appliance, the return traffic is
automatically allowed to pass through because the connections are considered
established and bidirectional.
Other protocols, such as ICMP, are considered unidirectional connections, and
therefore you need to configure ACL entries in both directions. There is an exception
for ICMP traffic when you enable the ICMP inspection engine.
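The first-match behavior and the implicit deny described above can be sketched in Python. The ACE fields and the packet dictionary below are illustrative only, not a Cisco data model:

```python
# Sketch of first-match ACL evaluation with an implicit deny at the end.
# ACE fields and the packet dict are hypothetical, not ASA internals.

def evaluate_acl(acl, packet):
    """Return 'permit' or 'deny' for the first matching ACE; deny if none match."""
    for ace in acl:  # ACEs are evaluated in sequential order
        if all(ace.get(field) in (None, packet.get(field))
               for field in ("protocol", "src_ip", "dst_ip", "dst_port")):
            return ace["action"]  # first match wins; later ACEs are never checked
    return "deny"  # implicit deny at the end of every ACL

acl = [
    {"action": "permit", "protocol": "tcp", "dst_ip": "10.10.20.111", "dst_port": 80},
    {"action": "permit", "protocol": "tcp", "dst_ip": "10.10.20.112", "dst_port": 25},
]

# HTTP to the permitted server matches the first ACE
print(evaluate_acl(acl, {"protocol": "tcp", "src_ip": "10.10.10.1",
                         "dst_ip": "10.10.20.111", "dst_port": 80}))  # permit
# UDP traffic matches no ACE and falls through to the implicit deny
print(evaluate_acl(acl, {"protocol": "udp", "src_ip": "10.10.10.1",
                         "dst_ip": "10.10.20.111", "dst_port": 53}))  # deny
```

Note that a `None` value in an ACE acts as a wildcard, mirroring how an ACE that omits a field matches any value for it.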
The Cisco ASA supports five different types of ACLs to provide a flexible and scalable
solution to filter unauthorized packets into the network:
Standard ACLs
Extended ACLs
IPv6 ACLs
EtherType ACLs
Webtype ACLs
Standard ACLs
Standard ACLs are used to identify packets based on their destination IP addresses.
These ACLs can be used in scenarios such as split tunneling for the remote-access VPN
tunnels and route redistribution within route maps for dynamic routing deployments
(OSPF, BGP, and so on). These ACLs, however, cannot be applied to an interface for
filtering traffic. A standard ACL can be used only if the security appliance is running in
routed mode. In routed mode, the Cisco ASA routes packets from one subnet to another
subnet by acting as an extra Layer 3 hop in the network.
Extended ACLs
Extended ACLs, the most commonly deployed ACLs, can classify packets based on the
following attributes:
Source and destination IP addresses
Layer 3 protocols
Source and/or destination TCP and UDP ports
Destination ICMP type for ICMP packets
An extended ACL can be used for interface packet filtering, QoS packet classification,
packet identification for NAT and VPN encryption, and a number of other features.
These ACLs can be set up on the Cisco ASA in the routed and the transparent mode.
EtherType ACLs
EtherType ACLs can be used to filter IP and non-IP-based traffic by checking the
Ethernet type code field in the Layer 2 header. IP-based traffic uses an Ethernet type
code value of 0x800, whereas Novell IPX uses 0x8137 or 0x8138, depending on the
Netware version.
An EtherType ACL can be configured only if the security appliance is running in
transparent mode. Just like any other ACL, the EtherType ACL has an implicit deny at
the end of it. However, this implicit deny does not affect the IP traffic passing through
the security appliance. As a result, you can apply both EtherType and extended ACLs to
each direction of an interface. If you configure an explicit deny at the end of an
EtherType ACL, it blocks IP traffic even if an extended ACL is defined to pass those
packets.
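The Ethernet type codes mentioned above can be checked with a short classifier. The helper function below is a hypothetical illustration, not ASA code:

```python
# Sketch of EtherType-based classification using the type codes from the text.

ETHERTYPE_IPV4 = 0x0800           # IP-based traffic
ETHERTYPE_IPX = (0x8137, 0x8138)  # Novell IPX, depending on the NetWare version

def classify_frame(ethertype):
    """Classify a Layer 2 frame by its Ethernet type code field."""
    if ethertype == ETHERTYPE_IPV4:
        return "ip"
    if ethertype in ETHERTYPE_IPX:
        return "ipx"
    return "other"

print(classify_frame(0x0800))  # ip
print(classify_frame(0x8137))  # ipx
```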
Webtype ACLs
A Webtype ACL allows security appliance administrators to restrict traffic coming
through the SSL VPN tunnels. In cases where a Webtype ACL is defined but there is no
match for a packet, the default behavior is to drop the packet because of the implicit
deny. On the other hand, if no ACL is defined, the security appliance allows traffic to
pass through it.
An ACL Example
Example 2-1 shows the command-line interface (CLI) configuration of an extended
ACL. The ACL is called outside_access_in, and it is composed of four ACEs. The first
two ACEs allow HTTP traffic destined for 10.10.20.111 from the two client machines,
whereas the last two ACEs allow SMTP access to 10.10.20.112 from both machines.
Adding remarks to an ACL is recommended because it helps others to recognize its
function. In Example 2-1 the system administrator has added the ACL remark: “ACL to
block inbound traffic except HTTP and SMTP.”
Example 2-1 Configuration Example of an Extended ACL
ASA# configure terminal
ASA(config)# access-list outside_access_in remark ACL to block inbound
traffic except HTTP and SMTP
ASA(config)# access-list outside_access_in extended permit tcp host
10.10.10.1 host 10.10.20.111 eq http
ASA(config)# access-list outside_access_in extended permit tcp host
10.10.10.2 host 10.10.20.111 eq http
ASA(config)# access-list outside_access_in extended permit tcp host
10.10.10.1 host 10.10.20.112 eq smtp
ASA(config)# access-list outside_access_in extended permit tcp host
10.10.10.2 host 10.10.20.112 eq smtp
Always remember that there is an implicit deny at the end of any ACL.
Packet filters do not commonly inspect additional Layer 3 and Layer 4 fields such as
sequence numbers, TCP control flags, and TCP acknowledgment (ACK) fields. The
firewalls that inspect such fields and flags are referred to as stateful firewalls. You will
learn how stateful firewalls operate later in this chapter in the “Stateful Inspection
Firewalls” section.
Various packet-filtering firewalls can also inspect packet header information to find out
whether the packet is from a new or an existing connection. Simple packet-filtering
firewalls have several limitations and weaknesses:
Their ACLs or rules can be relatively large and difficult to manage.
They can be deceived into permitting unauthorized access of spoofed packets.
Attackers can craft a packet with an IP address that is authorized by the ACL.
Numerous applications can build multiple connections on arbitrarily negotiated
ports. This makes it difficult to determine which ports are selected and used until
after the connection is completed. Examples of this type of application are
multimedia applications such as streaming audio and video applications. Packet
filters do not understand the underlying upper-layer protocols used by this type of
application, and providing support for this type of application is difficult because
the ACLs need to be manually configured in packet-filtering firewalls.
Application Proxies
Application proxies, or proxy servers, are devices that operate as intermediary agents
on behalf of clients that are on a private or protected network. Clients on the protected
network send connection requests to the application proxy to transfer data to the
unprotected network or the Internet. Consequently, the application proxy (sometimes
referred to as a web proxy) sends the request on behalf of the internal client. The
majority of proxy firewalls work at the application layer of the OSI model. Most proxy
firewalls can cache information to accelerate their transactions. This is a great tool for
networks that have numerous servers that experience high usage. Additionally, proxy
firewalls can protect against some web-server-specific attacks; however, in most cases,
they do not provide any protection against the web application itself.
Network Address Translation
Several Layer 3 devices can supply network address translation (NAT) services. The
Layer 3 device translates the internal host’s private (or real) IP addresses to a publicly
routable (or mapped) address.
Cisco uses the terminology of “real” and “mapped” IP addresses when describing NAT.
The real IP address is the address that is configured on the host, before it is translated.
The mapped IP address is the address to which the real address is translated.
TIP
Static NAT allows connections to be initiated bidirectionally, meaning both
to the host and from the host.
Figure 2-2 demonstrates how a host on the inside of a firewall with the private address
of 10.10.10.123 is translated to the public address 209.165.200.227.
Figure 2-2 NAT Example
NAT is often used by firewalls; however, other devices such as routers and wireless
access points provide support for NAT. By using NAT, the firewall hides the internal
private addresses from the unprotected network and exposes only its own address or
public range. This enables a network professional to use any IP address space as the
internal network. A best practice is to use the address spaces that are reserved for
private use (see RFC 1918, “Address Allocation for Private Internets”). Table 2-2 lists
the private address ranges specified in RFC 1918.
Table 2-2 RFC 1918 Private Address Ranges
10.0.0.0 to 10.255.255.255 (10.0.0.0/8)
172.16.0.0 to 172.31.255.255 (172.16.0.0/12)
192.168.0.0 to 192.168.255.255 (192.168.0.0/16)
It is important to think about the different private address spaces when you plan your
network (for example, the number of hosts and subnets that can be configured). Careful
planning and preparation lead to substantial time savings if changes are encountered
down the road.
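Python's standard ipaddress module can test whether an address falls inside the RFC 1918 private ranges; `is_rfc1918` is a hypothetical helper for illustration:

```python
import ipaddress

# The three private address ranges reserved by RFC 1918
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr):
    """Return True if addr falls in any RFC 1918 private range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("10.10.10.123"))     # True  (inside host from Figure 2-2)
print(is_rfc1918("209.165.200.227"))  # False (mapped public address)
```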
TIP
The whitepaper titled “A Security-Oriented Approach to IP Addressing”
provides numerous tips on planning and preparing your network IP address
scheme. You can find this whitepaper here:
http://www.cisco.com/web/about/security/intelligence/security-for-ip-
addr.html.
Port Address Translation
Typically, firewalls perform a technique called port address translation (PAT). This
feature, which is a subset of the NAT feature, allows many devices on the internal
protected network to share one IP address by inspecting the Layer 4 information on the
packet. This shared address is usually the firewall’s public address; however, it can be
configured to any other available public IP address. Figure 2-3 shows how PAT works.
Figure 2-3 PAT Example
As illustrated in Figure 2-3, several hosts on a trusted network labeled “inside” are
configured with an address from the network 10.10.10.0 with a 24-bit subnet mask. The
ASA is performing PAT for the internal hosts and translating the 10.10.10.x addresses
into its own address (209.165.200.228). In this example, Host A sends a TCP port 80
packet to the web server located in the “outside” unprotected network. The ASA
translates the request from the original 10.10.10.8 IP address of Host A to its own
address. It does this by randomly selecting a different Layer 4 source port when
forwarding the request to the web server. The TCP source port is modified from 1024 to
1188 in this example.
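As a rough sketch, the PAT behavior described above amounts to a translation table keyed by the real address and port. The class below is a simplified, hypothetical model; it allocates ports sequentially, whereas the ASA selects a different source port per flow:

```python
import itertools

# Simplified model of a PAT translation table: many inside hosts share one
# mapped address, distinguished by the Layer 4 source port the device assigns.

class PatTable:
    def __init__(self, mapped_ip, first_port=1024):
        self.mapped_ip = mapped_ip
        self.ports = itertools.count(first_port)  # next free translated port
        self.table = {}  # (real_ip, real_port) -> (mapped_ip, mapped_port)

    def translate(self, real_ip, real_port):
        """Return the (mapped_ip, mapped_port) pair for an outbound flow."""
        key = (real_ip, real_port)
        if key not in self.table:  # new flow: allocate the next free port
            self.table[key] = (self.mapped_ip, next(self.ports))
        return self.table[key]

pat = PatTable("209.165.200.228")
# Two inside hosts using the same source port still map to unique ports
print(pat.translate("10.10.10.8", 1024))  # ('209.165.200.228', 1024)
print(pat.translate("10.10.10.9", 1024))  # ('209.165.200.228', 1025)
```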
Static Translation
A different methodology is used when hosts in the unprotected network need to initiate a
new connection to specific hosts behind the NAT device. You configure the firewall to
allow such connections by creating a static one-to-one mapping of the public (mapped)
IP address to the address of the internal (real) protected device. For example, static
NAT can be configured when a web server resides on the internal network and has a
private IP address but needs to be contacted by hosts located in the unprotected network
or the Internet. Figure 2-2 demonstrated how static translation works. The host address
(10.10.10.123) is statically translated to an address in the outside network
(209.165.200.227, in this case). This allows the outside host to initiate a connection to
the web server by directing the traffic to 209.165.200.227. The device performing NAT
then translates and sends the request to the web server on the inside network.
Firewalls like the Cisco ASA, Firepower Threat Defense (FTD), Cisco IOS zone-based
firewalls and others can perform all these NAT operations. On the other hand, address
translation is not limited to firewalls. Nowadays, all sorts of lower-end network
devices such as simple small office, home office (SOHO) and wireless routers can
perform different NAT techniques.
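Because a static mapping is fixed and one-to-one, it can be looked up in either direction, which is why connections can be initiated bidirectionally. The following hypothetical sketch illustrates the idea with the addresses from Figure 2-2:

```python
# Sketch of static one-to-one NAT: the same fixed mapping serves both
# outbound (real -> mapped) and inbound (mapped -> real) translation.

STATIC_NAT = {"10.10.10.123": "209.165.200.227"}  # real -> mapped
REVERSE = {m: r for r, m in STATIC_NAT.items()}   # mapped -> real

def outbound(real_ip):
    """Translate the real source address; pass through if no mapping exists."""
    return STATIC_NAT.get(real_ip, real_ip)

def inbound(mapped_ip):
    """Translate the mapped destination address back to the real host."""
    return REVERSE.get(mapped_ip, mapped_ip)

print(outbound("10.10.10.123"))    # 209.165.200.227
print(inbound("209.165.200.227"))  # 10.10.10.123
```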
Stateful Inspection Firewalls
Stateful inspection firewalls provide enhanced benefits when compared to simple
packet-filtering firewalls. They track every packet passing through their interfaces by
ensuring that they are valid, established connections. They examine not only the packet
header contents but also the application layer information within the payload.
Subsequently, different rules can be created on the firewall to permit or deny traffic
based on specific payload patterns. A stateful firewall monitors the state of the
connection and maintains a database with this information, usually called the state table.
The state of the connection details whether such a connection has been established,
closed, reset, or is being negotiated. These mechanisms offer protection for different
types of network attacks.
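The state table behavior can be sketched as follows; the 5-tuple model below is a deliberate simplification of a real stateful firewall's connection table:

```python
# Sketch of a stateful firewall's state table: the first packet of an
# outbound flow creates an entry; return traffic matching an established
# entry is allowed without consulting the ACL again.

state_table = set()  # established connections stored as 5-tuples

def outbound_packet(proto, src, sport, dst, dport):
    """Record a new outbound flow in the state table."""
    state_table.add((proto, src, sport, dst, dport))

def inbound_allowed(proto, src, sport, dst, dport):
    """Return traffic is the reversed 5-tuple of an established flow."""
    return (proto, dst, dport, src, sport) in state_table

# Host 10.10.10.8 opens an HTTP connection to an outside web server
outbound_packet("tcp", "10.10.10.8", 1188, "209.165.201.1", 80)

# The server's reply matches the established connection and is allowed
print(inbound_allowed("tcp", "209.165.201.1", 80, "10.10.10.8", 1188))  # True
# A packet from an unrelated server matches nothing and is dropped
print(inbound_allowed("tcp", "209.165.201.9", 80, "10.10.10.8", 1188))  # False
```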
Demilitarized Zones
Firewalls can be configured to separate multiple network segments (or zones), usually
called demilitarized zones (DMZs). These zones provide security to the systems that
reside within them with different security levels and policies between them. DMZs can
have several purposes; for example, they can serve as segments on which a web server
farm resides or as extranet connections to a business partner. Figure 2-4 shows a Cisco
ASA with a DMZ.
Figure 2-4 DMZ Example
DMZs minimize the exposure of devices and clients on your internal network by
allowing only recognized and managed services on those hosts to be accessible from the
Internet. In Figure 2-4, the DMZ hosts web servers that are accessible by internal and
Internet hosts. In large organizations, you can find multiple firewalls in different
segments and DMZs.
Firewalls Provide Network Segmentation
Firewalls can provide network segmentation while enforcing policies between those
segments. In Figure 2-5, a firewall is segmenting and enforcing policies between three
networks in the overall corporate network. The first network is the finance department,
the second is the engineering department, and the third is the sales department.
Figure 2-5 Firewall Providing Network Segmentation
High Availability
Firewalls such as the Cisco ASA provide high availability features such as the
following:
Active-standby failover
Active-active failover
Clustering
Active-Standby Failover
In an active-standby failover configuration, the primary firewall is always active and
the secondary is in standby mode. When the primary firewall fails, the secondary
firewall takes over. Figure 2-6 shows a pair of Cisco ASA firewalls in an active-
standby failover configuration.
The configuration and stateful network information are synchronized from the primary
firewall to the secondary.
Figure 2-6 Firewalls in Active-Standby Failover Mode
Active-Active Failover
In an active-active failover configuration, both of the firewalls are active. If one fails,
the other will continue to pass traffic in the network. Figure 2-7 shows a pair of Cisco
ASA firewalls in an active-active failover configuration.
Figure 2-7 Firewalls in Active-Active Failover Mode
Clustering Firewalls
Firewalls such as the Cisco ASA can also be clustered to provide next-generation
firewall protection in large and highly scalable environments. For example, the Cisco
ASA firewalls can be part of a cluster of up to 16 firewalls. Figure 2-8 shows a cluster
of three Cisco ASAs. One of the main reasons to cluster firewalls is to increase packet
throughput and to scale in a more efficient way.
In Figure 2-8, the Cisco ASAs have 10 Gigabit Ethernet interfaces in an EtherChannel
configuration to switches in both the inside and outside networks. An EtherChannel
bundles two or more interfaces together in order to scale and achieve greater
bandwidth.
Figure 2-8 Cisco ASAs in a Cluster
Firewalls in the Data Center
Firewalls can also be deployed in the data center. The placement of firewalls in the data
center will depend on many factors, such as how much latency the firewalls will
introduce, what type of traffic you want to block and allow, and in what direction the
traffic will flow (either north to south or east to west).
In the data center, traffic going from one network segment or application of the data
center to another network segment or application within the data center is often referred
to as east-to-west (or west-to-east) traffic. This is also known as lateral traffic. Figure
2-9 demonstrates east-west traffic.
Figure 2-9 Data Center East-West Traffic
Figure 2-10 Data Center North-South Traffic
Another example of advanced segmentation and micro-segmentation in the data center is
the security capabilities of the Cisco Application Centric Infrastructure (ACI). Cisco
ACI is a software-defined networking (SDN) solution that has a very robust policy
model across data center networks, servers, storage, security, and services. This policy-
based automation helps network administrators to achieve micro-segmentation through
the integration of physical and virtual environments under one policy model for
networks, servers, storage, services, and security. Even if servers and applications are
“network adjacent” (that is, on the same network segment), they will not communicate
with each other until a policy is configured and provisioned. This is why Cisco ACI is
very attractive to many security-minded network administrators. Another major benefit
of Cisco ACI is automation. With such automation, you can reduce application
deployment times from weeks to minutes. Cisco ACI policies are enforced and
deployed by the Cisco Application Policy Infrastructure Controller (APIC).
Virtual Firewalls
Firewalls can also be deployed as virtual machines (VMs). An example of a virtual
firewall is the Cisco ASAv. These virtual firewalls are often deployed in the data center
to provide segmentation and network protection to virtual environments. They are
typically used because traffic between VMs often does not leave the physical server and
cannot be inspected or enforced with physical firewalls.
TIP
The Cisco ASA also has a feature called virtual contexts. This is not the
same as the virtual firewalls described previously. In the Cisco ASA
security context feature, one physical appliance can be “virtualized” into
separate contexts (or virtual firewalls). Virtual firewalls such as the Cisco
ASAv run on top of VMware or KVM on a physical server such as the
Cisco UCS.
Figure 2-11 shows two virtual firewalls providing network segmentation between
several VMs deployed in a physical server.
Figure 2-11 Virtual Firewalls Example
Deep Packet Inspection
Several applications require special handling of data packets when they pass through
firewalls. These include applications and protocols that embed IP addressing
information in the data payload of the packet or open secondary channels on
dynamically assigned ports. Sophisticated firewalls and security appliances such as the
Cisco ASA and Cisco IOS Firewall offer application inspection mechanisms to handle
the embedded addressing information to allow the previously mentioned applications
and protocols to work. Using application inspection, these security appliances can
identify the dynamic port assignments and allow data exchange on these ports during a
specific connection.
With deep packet inspection, firewalls can look at specific Layer 7 payloads to protect
against security threats. For example, you can configure a Cisco ASA running version
7.0 or later to not allow peer-to-peer (P2P) applications to be transferred over the
HTTP protocol. You can also configure these devices to deny specific FTP commands,
HTTP content types, and other application protocols.
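The idea of a Layer 7 inspection decision can be sketched in a few lines. The following is an illustrative model only, not Cisco ASA code; the policy set and function names are invented for this example:

```python
# Illustrative sketch (not Cisco ASA configuration): a minimal Layer 7
# inspection check that denies disallowed HTTP content types, similar in
# spirit to the application inspection policies described above.

DENIED_CONTENT_TYPES = {"application/x-bittorrent"}  # hypothetical policy

def inspect_http_response(headers: dict) -> str:
    """Return 'drop' if the response carries a denied content type."""
    content_type = headers.get("Content-Type", "").split(";")[0].strip().lower()
    if content_type in DENIED_CONTENT_TYPES:
        return "drop"
    return "permit"

print(inspect_http_response({"Content-Type": "text/html; charset=utf-8"}))  # permit
print(inspect_http_response({"Content-Type": "application/x-bittorrent"}))  # drop
```

A real firewall makes this decision after reassembling the HTTP stream, but the policy lookup itself is this simple at its core.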
TIP
The Cisco ASA provides a Modular Policy Framework (MPF) that offers a
consistent and flexible way to configure application inspection and other
features to specific traffic flows in a manner similar to the Cisco IOS
Software modular quality-of-service (QoS) command-line interface (CLI).
Next-Generation Firewalls
The proliferation of mobile devices and the need to connect from any place are
radically changing the enterprise security landscape. Social networking sites such as
Facebook and Twitter long ago moved beyond mere novelty sites for teens and geeks
and have become vital channels for communicating with groups and promoting brands.
Security concerns and fear of data loss are leading reasons why some businesses don’t
embrace social media, but many others are adopting social media as a vital resource
within the organization. Some of the risks associated with social media can be mitigated
through the application of technology and user controls. However, there’s no doubt that
criminals have used social media networks to lure victims into downloading malware
and handing over login passwords.
Before today’s firewalls grant network access, they need to be aware of not only the
applications and users accessing the infrastructure but also the device in use, the
location of the user, and the time of day. Such context-aware security requires a
rethinking of the firewall architecture. Context-aware firewalls extend beyond the next-
generation firewalls on the market today. They provide granular control of applications,
comprehensive user identification, and location-based control. The Cisco ASA 5500-X
Series next-generation firewalls are examples of context-based firewall solutions.
The Cisco ASA family provides a very comprehensive set of features and next-
generation security capabilities. For example, it provides capabilities such as simple
packet filtering (normally configured with access control lists, or ACLs) and stateful
inspection. The Cisco ASA also provides support for application inspection/awareness.
It can listen in on conversations between devices on one side and devices on the other
side of the firewall. The benefit of listening in is that the firewall can pay attention to
application layer information.
The Cisco ASA also supports network address translation (NAT), the capability to act
as a Dynamic Host Configuration Protocol (DHCP) server or client, or both. The Cisco
ASA supports most of the interior gateway routing protocols, including Routing
Information Protocol (RIP), Enhanced Interior Gateway Routing Protocol (EIGRP), and
Open Shortest Path First (OSPF). It also supports static routing. The Cisco ASA also
can be implemented as a traditional Layer 3 firewall, which has IP addresses assigned
to each of its routable interfaces. The other option is to implement a firewall as a
transparent (Layer 2) firewall, in which the physical interfaces are not assigned
individual IP addresses; instead, a pair of interfaces operates like a bridge. Traffic that is going across
this two-port bridge is still subject to the rules and inspection that can be implemented
by the ASA. Additionally, the Cisco ASA is often used as a head-end or remote-end
device for VPN tunnels for both remote-access VPN users and site-to-site VPN tunnels.
It supports IPsec and SSL-based remote access VPNs. The SSL VPN capabilities
include support for clientless SSL VPN and the full AnyConnect SSL VPN tunnels.
Cisco Firepower Threat Defense
The Cisco Firepower Threat Defense (FTD) is unified software that includes Cisco
ASA features, legacy FirePOWER Services, and new features. FTD can be deployed on
Cisco Firepower 4100 and 9300 appliances to provide next-generation firewall
(NGFW) services. In addition to being able to run on the Cisco Firepower 4100 Series
and the Firepower 9300 appliances, FTD can also run natively on the ASA 5506-X,
ASA 5506H-X, ASA 5506W-X, ASA 5508-X, ASA 5512-X, ASA 5515-X, ASA 5516-
X, ASA 5525-X, ASA 5545-X, and ASA 5555-X. It is not supported in the ASA 5505
or the 5585-X. FTD can also run as a virtual machine (Cisco Firepower Threat Defense
Virtual, or FTDv).
NOTE
Cisco spells the word FirePOWER (uppercase “POWER”) when referring
to the Cisco ASA FirePOWER Services module. The word Firepower
(lowercase “power”) is used when referring to all other software, such as
FTD, Firepower Management Center (FMC), and Firepower appliances.
Cisco Firepower 4100 Series
The Cisco Firepower 4100 Series appliances are next-generation firewalls that run the
Cisco FTD software and features. There are four models:
Cisco Firepower 4110, which supports up to 20 Gbps of firewall throughput
Cisco Firepower 4120, which supports up to 40 Gbps of firewall throughput
Cisco Firepower 4140, which supports up to 60 Gbps of firewall throughput
Cisco Firepower 4150, which supports over 60 Gbps of firewall throughput
All of the Cisco Firepower 4100 Series models are one-rack-unit (1 RU) appliances
and are managed by the Cisco Firepower Management Center.
Cisco Firepower 9300 Series
The Cisco Firepower 9300 appliances are designed for very large enterprises or
service providers. They can scale beyond 1 Tbps and are designed in a modular way,
supporting Cisco ASA software, Cisco FTD software, and Radware DefensePro DDoS
mitigation software. Radware DefensePro DDoS mitigation software is provided by
Radware, a Cisco partner.
NOTE
The Radware DefensePro DDoS mitigation software is available and
supported directly from Cisco on Cisco Firepower 4150 and Cisco
Firepower 9300 appliances.
Radware’s DefensePro DDoS mitigation software provides real-time analysis to protect
the enterprise or service provider infrastructure against network and application
downtime due to distributed denial of service (DDoS) attacks.
Cisco FTD for Cisco Integrated Services Routers (ISRs)
The Cisco FTD can run on Cisco Unified Computing System (UCS) E-Series blades
installed on Cisco ISR routers. Both the FMC and FTD are deployed as virtual
machines. There are two internal interfaces that connect a router to a UCS E-Series
blade. On ISR G2, Slot0 is a Peripheral Component Interconnect Express (PCIe)
internal interface, and UCS E-Series Slot1 is a switched interface connected to the
backplane Multi Gigabit Fabric (MGF). In Cisco ISR 4000 Series routers, both internal
interfaces are connected to the MGF.
A hypervisor is installed on the UCS E-Series blade, and the Cisco FTD software runs
as a virtual machine on it. FTD for ISRs is supported on the following platforms:
Cisco ISR G2 Series: 2911, 2921, 2951, 3925, 3945, 3925E, and 3945E
Cisco ISR 4000 Series: 4331, 4351, 4451, 4321, and 4431
Personal Firewalls
Personal firewalls are popular software applications that you can install on end-user
machines or servers to protect them from external security threats and intrusions. The
term personal firewall typically applies to basic software that controls Layer 3 and
Layer 4 access to client machines. Today, sophisticated software is available that not
only supplies basic personal firewall features but also protects the system based on the
behavior of the applications installed on such systems.
Intrusion Detection Systems and Intrusion Prevention Systems
Intrusion detection systems (IDSs) are devices that detect (in promiscuous mode)
attempts from an attacker to gain unauthorized access to a network or a host, to create
performance degradation, or to steal information. They also detect distributed denial-of-
service (DDoS) attacks, worms, and virus outbreaks. Figure 2-12 shows how an IDS
device is configured to promiscuously detect security threats.
Figure 2-12 IDS Example
In Figure 2-12, a compromised host sends a malicious packet to a series of hosts in the
10.10.20.0/24 network. The IDS device analyzes the packet and sends an alert to a
monitoring system. The malicious packet still successfully arrives at the 10.10.20.0/24
network.
Intrusion prevention system (IPS) devices, on the other hand, are capable of not only
detecting all these security threats, but also dropping malicious packets inline. IPS
devices may be initially configured in promiscuous mode (monitoring mode) when you
are first deploying them in the network. This is done to analyze the impact to the
network infrastructure. Then they are deployed in inline mode to be able to block any
malicious traffic in your network.
Figure 2-13 shows how an IPS device is placed inline and drops the noncompliant
packet while sending an alert to the monitoring system.
Figure 2-13 IPS Example
A few different types of IPSs exist:
Traditional network-based IPSs (NIPSs)
Next-generation IPS systems (NGIPSs)
Host-based IPSs (HIPSs)
Examples of traditional NIPSs are the Cisco IPS 4200 sensors and the Catalyst 6500
IPS module. These devices have been in the end-of-life (EoL) stage for quite some time.
Examples of NGIPSs are the Cisco Firepower IPS systems.
The Cisco ASA 5500 Series FirePOWER Services provide intrusion prevention,
firewall, and VPN services in a single, easy-to-deploy platform. Intrusion prevention
services enhance firewall protection by looking deeper into the flows to provide
protection against threats and vulnerabilities. The Cisco Firepower Threat Defense
(FTD) provides these capabilities in a combined software package.
Network-based IDSs and IPSs use several detection methodologies, such as the
following:
Pattern matching and stateful pattern-matching recognition
Protocol analysis
Heuristic-based analysis
Anomaly-based analysis
Global threat correlation capabilities
Pattern Matching and Stateful Pattern-Matching Recognition
Pattern matching is a methodology in which the intrusion detection device searches for a
fixed sequence of bytes within the packets traversing the network. Generally, the pattern
is aligned with a packet that is related to a specific service or, in particular, associated
with a source and destination port. This approach reduces the amount of inspection
made on every packet. However, it is limited to services and protocols that are
associated with well-defined ports. Protocols that do not use any Layer 4 port
information are not categorized. Examples of these protocols are Encapsulated Security
Payload (ESP), Authentication Header (AH), and Generic Routing Encapsulation
(GRE).
This tactic uses the concept of signatures. A signature is a set of conditions that point out
some type of intrusion occurrence. For example, if a specific TCP packet has a
destination port of 1234 and its payload contains the string ff11ff22, a signature can be
configured to detect that string and generate an alert.
Alternatively, the signature could include an explicit starting point and endpoint for
inspection within the specific packet.
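The signature just described can be modeled in a short sketch. This is a toy illustration, not IPS code; it treats the ff11ff22 string as hex bytes and supports the optional start/end inspection window mentioned above:

```python
# Hypothetical model of the signature described above: alert when a TCP
# packet destined for port 1234 carries the byte string ff11ff22 in its
# payload, optionally only within an explicit inspection window.

SIGNATURE = {"dst_port": 1234, "pattern": bytes.fromhex("ff11ff22")}

def matches_signature(dst_port, payload, start=0, end=None):
    """Return True if the packet matches the fixed byte sequence."""
    if dst_port != SIGNATURE["dst_port"]:
        return False
    return SIGNATURE["pattern"] in payload[start:end]

pkt = b"\x00\x01" + bytes.fromhex("ff11ff22") + b"\x00"
print(matches_signature(1234, pkt))                  # True
print(matches_signature(80, pkt))                    # False: wrong port
print(matches_signature(1234, pkt, start=0, end=2))  # False: outside the window
```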
Here are some of the benefits of the plain pattern-matching technique:
Direct correlation of an exploit
Trigger alerts on the pattern specified
Can be applied across different services and protocols
One of the main disadvantages is that pattern matching can lead to a considerably high
rate of false positives, which are alerts that do not represent a genuine malicious
activity. In contrast, any alteration to the attack can cause real attacks to be
overlooked; these missed events are normally referred to as false negatives.
To address some of these limitations, a more refined method was created. This
methodology is called stateful pattern-matching recognition. This process dictates that
systems performing this type of signature analysis must consider the chronological order
of packets in a TCP stream. In particular, they should judge and maintain a stateful
inspection of such packets and flows.
Here are some of the advantages of stateful pattern-matching recognition:
The capability to directly correlate a specific exploit within a given pattern
Supports all non-encrypted IP protocols
Systems that perform stateful pattern matching keep track of the arrival order of non-
encrypted packets and handle matching patterns across packet boundaries.
However, stateful pattern-matching recognition shares some of the same restrictions as
the simple pattern-matching methodology, which was discussed previously, including an
uncertain rate of false positives and the possibility of some false negatives.
Additionally, stateful pattern matching consumes more resources in the IPS device
because it requires more memory and CPU processing.
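The difference between plain and stateful matching can be shown with a small sketch. This is a simplified model, not production IPS logic; a real sensor bounds its reassembly buffers and tracks full connection state:

```python
# A minimal sketch of stateful pattern-matching recognition: segments of a
# TCP stream are reassembled in sequence order so that a pattern split
# across packet boundaries is still detected.

def stream_contains(segments, pattern):
    """segments: iterable of (sequence_number, payload_bytes) tuples."""
    reassembled = b"".join(payload for _, payload in sorted(segments))
    return pattern in reassembled

pattern = bytes.fromhex("ff11ff22")
# The pattern is split across two out-of-order segments:
segs = [(101, bytes.fromhex("ff22") + b"data"),
        (100, b"xx" + bytes.fromhex("ff11"))]

print(any(pattern in p for _, p in segs))  # False: per-packet matching misses it
print(stream_contains(segs, pattern))      # True: stateful reassembly finds it
```

This also illustrates the extra cost noted above: the sensor must buffer and order payloads rather than inspect each packet in isolation.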
Protocol Analysis
Protocol analysis (or protocol decode-base signatures) is often referred to as an
extension to stateful pattern recognition. A network-based intrusion detection system
(NIDS) accomplishes protocol analysis by decoding all protocol or client-server
conversations. The NIDS identifies the elements of the protocol and analyzes them
while looking for an infringement. Some intrusion detection systems look at explicit
protocol fields within the inspected packets. Others require more sophisticated
techniques, such as examination of the length of a field within the protocol or the number
of arguments. For example, in SMTP, the device may examine specific commands and
fields such as HELO, MAIL, RCPT, DATA, RSET, NOOP, and QUIT. This technique
diminishes the possibility of encountering false positives if the protocol being analyzed
is properly defined and enforced. On the other hand, the system can generate numerous
false positives if the protocol definition is ambiguous or tolerates flexibility in its
implementation.
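A protocol-decode check of the kind described for SMTP can be sketched as follows. This is an illustrative fragment, not a real decoder; the command set is the one listed above:

```python
# Illustrative protocol-analysis check for SMTP: decode the command verb
# on each client line and flag anything outside the expected command set.

SMTP_COMMANDS = {"HELO", "MAIL", "RCPT", "DATA", "RSET", "NOOP", "QUIT"}

def analyze_smtp_line(line: str) -> str:
    verb = line.strip().split(" ", 1)[0].upper()
    if verb not in SMTP_COMMANDS:
        return "alert: unexpected SMTP command"
    return "ok"

print(analyze_smtp_line("MAIL FROM:<user@example.com>"))  # ok
print(analyze_smtp_line("XEXPLOIT AAAAAAAA"))             # alert: unexpected SMTP command
```

A fuller implementation would also validate field lengths and argument counts, which is where the false positives mentioned above can creep in when a protocol tolerates loose implementations.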
Heuristic-Based Analysis
A different approach to network intrusion detection is to perform heuristic-based
analysis. Heuristic scanning uses algorithmic logic from statistical analysis of the traffic
passing through the network. These tasks are CPU and resource intensive, so this is an
important consideration when planning your deployment. Heuristic-based algorithms
may require fine tuning to adapt to network traffic and minimize the possibility of false
positives. For example, a system signature can generate an alarm if a range of ports is
scanned on a particular host or network. The signature can also be orchestrated to
restrict itself from specific types of packets (for example, TCP SYN packets).
Heuristic-based signatures call for more tuning and modification to better respond to
their distinctive network environment.
Anomaly-Based Analysis
A different practice keeps track of network traffic that diverges from “normal”
behavioral patterns. This practice is called anomaly-based analysis. The limitation is
that what is considered to be normal must be defined. Systems and applications whose
behavior can be easily considered as normal could be classified as heuristic-based
systems.
However, sometimes it is challenging to classify a specific behavior as normal or
abnormal based on different factors, which include the following:
Negotiated protocols and ports
Specific application changes
Changes in the architecture of the network
A variation of this type of analysis is profile-based detection. This allows systems to
raise alarms when there are changes in the way that other systems or end users
interact on the network.
Another kind of anomaly-based detection is protocol-based detection. This scheme is
related to, but not to be confused with, the protocol-decode method. The protocol-based
detection technique depends on well-defined protocols, as opposed to the protocol-
decode method, which classifies as an anomaly any unpredicted value or configuration
within a field in the respective protocol. For example, a buffer overflow can be
detected when specific strings are identified within the payload of the inspected IP
packets.
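A toy version of profile-based anomaly detection makes the "define normal first" limitation concrete. The baseline values and threshold here are invented for illustration:

```python
# A toy profile-based anomaly detector: "normal" is learned from a
# baseline of observed per-interval traffic volumes, and an observation
# is flagged when it deviates by more than k standard deviations.

import statistics

def is_anomalous(baseline, observation, k=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against a zero stdev
    return abs(observation - mean) > k * stdev

normal_rates = [100, 110, 95, 105, 102, 98]  # packets/sec baseline (hypothetical)
print(is_anomalous(normal_rates, 104))  # False: within the learned profile
print(is_anomalous(normal_rates, 900))  # True: sudden traffic spike
```

The factors listed above (renegotiated ports, application changes, architecture changes) all shift the baseline, which is why such systems require retraining or tuning to avoid false positives.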
TIP
A buffer overflow occurs when a program attempts to store more data in a
temporary storage area within memory (a buffer) than the buffer was designed to
hold. The excess data can overflow into an adjacent area of memory, and an
attacker can craft specific data to overwrite that adjacent buffer. Subsequently,
when the corrupted data is read, the target computer can execute new
instructions and malicious commands.
Traditional IDS and IPS provide excellent application layer attack-detection
capabilities. However, they do have a weakness. For example, they cannot detect DDoS
attacks where the attacker uses valid packets. IDS and IPS devices are optimized for
signature-based application layer attack detection. Another weakness is that these
systems utilize specific signatures to identify malicious patterns. Yet, if a new threat
appears on the network before a signature is created to identify the traffic, it could lead
to false negatives. An attack for which there is no signature is called a zero-day attack.
Although some IPS devices do offer anomaly-based capabilities, which are required to
detect such attacks, they need extensive manual tuning and have a major risk of
generating false positives.
You can use more elaborate anomaly-based detection systems to mitigate DDoS attacks
and zero-day outbreaks. Typically, an anomaly detection system monitors network traffic
and alerts or reacts to any sudden increase in traffic and any other anomalies. Cisco
delivers a complete DDoS-protection solution based on the principles of detection,
diversion, verification, and forwarding to help ensure total protection. Examples of
sophisticated anomaly detection systems are the Cisco CRS Carrier-Grade Services
Engine Module DDoS mitigation solution and the Cisco Firepower 9300 appliances
with Radware’s software.
You can also use NetFlow as an anomaly detection tool. NetFlow is a Cisco proprietary
protocol that provides detailed reporting and monitoring of IP traffic flows through a
network device, such as a router, switch, or the Cisco ASA.
Global Threat Correlation Capabilities
Cisco NGIPS devices include global correlation capabilities that utilize real-world
data from Cisco Talos. Cisco Talos is a team of security researchers who leverage big-
data analytics for cyber security and provide threat intelligence for many Cisco security
products and services. Global correlation allows an IPS sensor to filter network traffic
using the “reputation” of a packet’s source IP address. The reputation of an IP address is
computed by Cisco threat intelligence using the past actions of that IP address. IP
reputation has been an effective means of predicting the trustworthiness of current and
future behaviors from an IP address.
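Reputation-based filtering reduces, at its core, to a score lookup before deeper inspection. The scores and threshold below are invented for illustration and are not Talos data:

```python
# Hypothetical sketch of reputation-based filtering: each source IP has a
# score derived from its past behavior, and traffic from addresses below
# a drop threshold is discarded before any deeper inspection is spent on it.

REPUTATION = {              # illustrative scores only
    "203.0.113.7": -8.5,    # history of malicious activity
    "198.51.100.2": 4.0,    # good history
}
DROP_THRESHOLD = -6.0

def filter_by_reputation(src_ip: str) -> str:
    score = REPUTATION.get(src_ip, 0.0)  # unknown IPs get a neutral score
    return "drop" if score < DROP_THRESHOLD else "inspect"

print(filter_by_reputation("203.0.113.7"))   # drop
print(filter_by_reputation("198.51.100.2"))  # inspect
```

The design benefit is efficiency: packets from known-bad sources are rejected cheaply, saving signature and anomaly engines for traffic that warrants them.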
NOTE
You can obtain more information about Cisco Talos at
https://talosintel.com.
Next-Generation Intrusion Prevention Systems
As a result of the Sourcefire acquisition, Cisco expanded its NGIPS portfolio with the
following products:
Cisco Firepower 8000 Series appliances: These high-performance appliances
running Cisco FirePOWER Next-Generation IPS Services support throughput
speeds from 2 Gbps up to 60 Gbps.
Cisco Firepower 7000 Series appliances: These appliances comprise the base
platform for the Cisco FirePOWER NGIPS software. The base platform supports
throughput speeds from 50 Mbps up to 1.25 Gbps.
Virtual next-generation IPS (NGIPSv) appliances for VMware: These
appliances can be deployed in virtualized environments. By deploying these virtual
appliances, security administrators can maintain network visibility that is often lost
in virtual environments.
Firepower Management Center
Cisco Firepower Management Center (FMC) provides a centralized management and
analysis platform for the Cisco NGIPS appliances, the Cisco ASA with FirePOWER
Services, and Cisco FTD. It provides support for role-based policy management and
includes a fully customizable dashboard with advanced reports and analytics. The
following are the models of the Cisco FMC appliances:
FS750: Supports a maximum of ten managed devices (NGIPS or Cisco ASA
appliances) and a total of 20 million IPS events.
FS2000: Supports a maximum of 70 managed devices and up to 60 million IPS
events.
FS4000: Supports a maximum of 300 managed devices and a total of 300 million
IPS events.
FMC virtual appliance: Allows you to conveniently provision on your existing
virtual infrastructure. It supports a maximum of 25 managed devices and up to 10
million IPS events.
Advanced Malware Protection
Cisco provides advanced malware protection (AMP) capabilities for endpoint and
network security devices. In the following sections, you will learn the details about
AMP for Endpoints and the integration of AMP in several Cisco security products.
AMP for Endpoints
Numerous antivirus and antimalware solutions on the market are designed to detect,
analyze, and protect against both known and emerging endpoint threats. Before diving
into these technologies, you should understand viruses and malicious software
(malware). The following are the most common types of malicious software:
Computer virus: Malicious software that infects a host file or system area to
produce an undesirable outcome such as erasing data, stealing information, or
corrupting the integrity of the system. In numerous cases, these viruses multiply
again to form new generations of themselves.
Worm: A virus that replicates itself over the network, infecting numerous
vulnerable systems. In most cases, a worm executes malicious instructions on a
remote system without user interaction.
Mailer or mass-mailer worm: A type of worm that sends itself in an email
message. Examples of mass-mailer worms are Loveletter.A@mm and
W32/SKA.A@m (a.k.a. the Happy99 worm), which sends a copy of itself every
time the user sends a new message.
Logic bomb: A type of malicious code that is injected into a legitimate application.
An attacker can program a logic bomb to delete itself from the disk after it performs
the malicious tasks on the system. Examples of these malicious tasks include
deleting or corrupting files or databases and executing a specific instruction after
certain system conditions are met.
Trojan horse: A type of malware that executes instructions to delete files, steal
data, or otherwise compromise the integrity of the underlying operating system.
Trojan horses typically use a form of social engineering to fool victims into
installing such software on their computers or mobile devices. Trojans can also act
as back doors.
Back door: A piece of malware or a configuration change that allows an attacker to
control the victim’s system remotely. For example, a back door can open a network
port on the affected system so that the attacker can connect to and control the
system.
Exploit: A malicious program designed to exploit, or take advantage of, a single
vulnerability or set of vulnerabilities.
Downloader: A piece of malware that downloads and installs other malicious
content from the Internet to perform additional exploitation on an affected system.
Spammer: Malware that sends spam, or unsolicited messages sent via email,
instant messaging, newsgroups, or any other kind of computer or mobile device
communications. Spammers send these unsolicited messages with the primary goal
of fooling users into clicking malicious links, replying to emails or other messages
with sensitive information, or performing different types of scams. The attacker’s
main objective is to make money.
Key logger: A piece of malware that captures the user’s keystrokes on a
compromised computer or mobile device. A key logger collects sensitive
information such as passwords, personal ID numbers (PINs), personally
identifiable information (PII), credit card numbers, and more.
Rootkit: A set of tools used by an attacker to elevate his or her privilege to obtain
root-level access in order to completely take control of the affected system.
Ransomware: A type of malware that compromises a system and then demands that
the victim pay a ransom to the attacker in order for the malicious activity to cease
or for the malware to be removed from the affected system. Two examples of
ransomware are CryptoLocker and CryptoWall; they both encrypt the victim’s data
and demand that the user pay a ransom in order for the data to be decrypted and
accessible again.
The following are just a few examples of the commercial and free antivirus software
options available today:
Avast
AVG Internet Security
Bitdefender Antivirus Free
ZoneAlarm PRO Antivirus+, ZoneAlarm PRO Firewall, and ZoneAlarm Extreme
Security
F-Secure Anti-Virus
Kaspersky Anti-Virus
McAfee AntiVirus
Panda Antivirus
Sophos Antivirus
Norton AntiVirus
ClamAV
Immunet AntiVirus
There are numerous other antivirus software companies and products.
NOTE
ClamAV is an open source antivirus engine sponsored and maintained by
Cisco and non-Cisco engineers. You can download ClamAV from
www.clamav.net. Immunet is a free community-based antivirus software
maintained by Cisco Sourcefire. You can download Immunet from
www.immunet.com.
Personal firewalls and host intrusion prevention systems (HIPSs) are software
applications that you can install on end-user machines or servers to protect them from
external security threats and intrusions. The term personal firewall typically applies to
basic software that can control Layer 3 and Layer 4 access to client machines. HIPS
provides several features that offer more robust security than a traditional personal
firewall, such as host intrusion prevention and protection against spyware, viruses,
worms, Trojans, and other types of malware.
Today, more sophisticated software makes basic personal firewalls and HIPS obsolete.
For example, Cisco Advanced Malware Protection (AMP) for Endpoints provides
granular visibility and control to stop advanced threats missed by other security layers.
Cisco AMP for Endpoints takes advantage of telemetry from big data, continuous
analysis, and advanced analytics provided by Cisco threat intelligence to be able to
detect, analyze, and stop advanced malware across endpoints.
Cisco AMP for Endpoints provides advanced malware protection for many operating
systems, including Windows, Mac OS X, Android, and Linux.
Attacks are getting very sophisticated and can evade detection of traditional systems and
endpoint protection. Today, attackers have the resources, knowledge, and persistence to
beat point-in-time detection. Cisco AMP for Endpoints provides mitigation capabilities
that go beyond point-in-time detection. It uses threat intelligence from Cisco to perform
retrospective analysis and protection. Cisco AMP for Endpoints also provides device
and file trajectory capabilities to allow a security administrator to analyze the full
spectrum of an attack. Device trajectory and file trajectory support the following file
types in the Windows and Mac OS X operating systems:
MSEXE
PDF
MSCAB
MSOLE2
ZIP
ELF
MACHO
MACHO_UNIBIN
SWF
JAVA
AMP for Networks
Cisco AMP for Networks provides next-generation security services that go beyond
point-in-time detection. It provides continuous analysis and tracking of files and also
retrospective security alerts so that a security administrator can take action during and
after an attack. The file trajectory feature of Cisco AMP for Networks tracks file
transmissions across the network, and the file capture feature enables a security
administrator to store and retrieve files for further analysis.
The network provides unprecedented visibility into activity at a macro-analytical level.
However, to remediate malware, in most cases you need to be on the host. This is why
AMP has the following connectors: AMP for Networks, AMP for Endpoints, and AMP
for Content Security Appliances.
You can install AMP for Networks on any Cisco Firepower security appliance right
alongside the firewall and IPS; however, there are dedicated AMP appliances as well.
When it comes down to it, though, AMP appliances and Firepower appliances are
actually the same. They can all run all the same services. Are you thoroughly confused?
Stated a different way, Cisco AMP for Networks is the AMP service that runs on the
appliance examining traffic flowing through a network. It can be installed in a
standalone form or as a service on a Firepower IPS or even a Cisco ASA with
FirePOWER Services.
AMP for Networks and all the AMP connectors are designed to find malicious files,
provide retrospective analysis, illustrate trajectory, and point out how far malicious
files may have spread.
The AMP for Networks connector examines, records, tracks, and sends files to the
cloud. It creates an SHA-256 hash of the file and compares it to the local file cache. If
the hash is not in the local cache, it queries the Firepower Management Center (FMC).
The FMC has its own cache of all the hashes it has seen before, and if it hasn’t
previously seen this hash, the FMC queries the cloud. Unlike with AMP for Endpoints,
when a file is new, it can be analyzed locally and doesn’t have to be sent to the cloud
for all analysis. Also, the file is examined and stopped in flight, as it is traversing the
appliance.
Figure 2-14 illustrates the many AMP for Networks connectors sending the file hash to
the FMC, which in turn sends it to the cloud if the hash is new. The connectors could be
running on dedicated AMP appliances, as a service on a Cisco next-generation IPS
(NGIPS), on an ASA with FirePOWER Services, or on the next-generation firewall
(NGFW) known as Firepower Threat Defense (FTD).
Figure 2-14 AMP Connectors Communicating to the FMC and the Cloud
It’s very important to note that only the SHA-256 hash is sent unless you configure the
policy to send files for further analysis in Threat Grid.
AMP can also provide retrospective analysis. The AMP for Networks appliance keeps
data from what occurred in the past. When a file's disposition changes, AMP
provides a historical analysis of what happened, tracing the incident or infection. With the
help of AMP for Endpoints, retrospection can reach out to that host and remediate the
bad file, even though that file was permitted in the past.
Web Security Appliance
For an organization to be able to protect its environment against web-based security
threats, security administrators need to deploy tools and mitigation technologies that go
far beyond traditional blocking of known bad websites. Today, you can download
malware through compromised legitimate websites, including social media sites,
advertisements in news and corporate sites, and gaming sites. Cisco has developed
several tools and mechanisms to help customers combat these threats, including the
Cisco Web Security Appliance (WSA), the Cisco Security Management Appliance (SMA),
and Cisco Cloud Web Security (CWS). These solutions enable malware detection and
blocking, continuous monitoring, and retrospective alerting.
A Cisco WSA uses cloud-based intelligence from Cisco to help protect an organization
before, during, and after an attack. This “lifecycle” is referred to as the attack
continuum. The cloud-based intelligence includes web (URL) reputation and zero-day
threat intelligence from the Talos Cisco security intelligence and research group. This
threat intelligence helps security professionals stop threats before they enter the
corporate network and also enables file reputation and file sandboxing to identify
threats during an attack. Retrospective attack analysis allows security administrators to
investigate and provide protection after an attack, when advanced malware might have
evaded other layers of defense.
A Cisco WSA can be deployed in explicit proxy mode or as a transparent proxy, using
the Web Cache Communication Protocol (WCCP). In explicit proxies, clients are aware
of the requests that go through a proxy. On the other hand, in transparent proxies, clients
are not aware of a proxy in the network; the source IP address in a request is that of the
client. In transparent proxies, no configuration is needed on the client. WCCP was
originally developed by Cisco, but several other vendors have integrated this protocol
into their products to allow clustering and transparent proxy deployments on networks
using Cisco infrastructure devices (routers, switches, firewalls, and so on).
Figure 2-15 illustrates a Cisco WSA deployed as an explicit proxy.
Figure 2-15 WSA Explicit Proxy Configuration
The following are the steps illustrated in Figure 2-15:
1. An internal user makes an HTTP request to an external website. The client
browser is configured to send the request to the Cisco WSA.
2. The Cisco WSA connects to the website on behalf of the internal user.
3. The firewall (Cisco ASA) is configured to only allow outbound web traffic from
the Cisco WSA, and it forwards the traffic to the web server.
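In an explicit proxy deployment like this one, client browsers are often pointed at the WSA through a proxy auto-configuration (PAC) file. The following is a minimal sketch; the internal domain, proxy hostname, and port are placeholder values, not details from this example.

```javascript
// Minimal PAC file: internal hosts go direct, everything else via the WSA.
function FindProxyForURL(url, host) {
  if (host.endsWith(".example.internal")) {
    return "DIRECT";                     // internal traffic bypasses the proxy
  }
  return "PROXY wsa.example.com:3128";   // placeholder WSA hostname and port
}
```

Real PAC runtimes typically use helper functions such as dnsDomainIs(); plain string matching is used here so the sketch runs anywhere.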
Figure 2-16 shows a Cisco WSA deployed as a transparent proxy.
Figure 2-16 WSA Transparent Proxy Configuration
The following are the steps illustrated in Figure 2-16:
1. An internal user makes an HTTP request to an external website.
2. The internal router (R1) redirects the web request to the Cisco WSA, using
WCCP.
3. The Cisco WSA connects to the website on behalf of the internal user.
4. The firewall (Cisco ASA) is configured to only allow outbound web traffic from
the WSA. The web traffic is sent to the Internet web server.
Figure 2-17 demonstrates how the WCCP registration works. The Cisco WSA is the
WCCP client, and the Cisco router is the WCCP server.
Figure 2-17 WCCP Registration
During the WCCP registration process, the WCCP client sends a registration
announcement (“Here I am”) every 10 seconds. The WCCP server (the Cisco router, in
this example) accepts the registration request and acknowledges it with an “I see you”
WCCP message. The WCCP server waits 30 seconds before it declares the client as
“inactive” (engine failed). WCCP can be used in large-scale environments. Figure 2-18
shows a cluster of Cisco WSAs, where internal Layer 3 switches redirect web traffic to
the cluster.
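On the router side, a transparent redirect like the one in Figures 2-16 through 2-18 is enabled with a few WCCP commands. The following Cisco IOS sketch is illustrative only; the interface names are assumptions for this example.

```
! Enable the standard web-cache service (service 0: TCP port 80)
ip wccp web-cache

! Redirect inbound client web traffic on the LAN-facing interface to the WSA
interface GigabitEthernet0/0
 ip wccp web-cache redirect in

! Do not re-redirect traffic arriving from the WSA itself (avoids a loop)
interface GigabitEthernet0/1
 ip wccp redirect exclude in
```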
Figure 2-18 Cisco WSA Cluster
The Cisco WSA runs the Cisco AsyncOS operating system. Cisco AsyncOS supports
numerous features, including the following, that help mitigate web-based threats:
Real-time antimalware adaptive scanning: The Cisco WSA can be configured to
dynamically select an antimalware scanning engine based on URL reputation,
content type, and scanner effectiveness. Adaptive scanning is a feature designed to
increase the “catch rate” of malware embedded in images, JavaScript, text, and
Adobe Flash files. Adaptive scanning is an additional layer of security on top of
the Cisco WSA web reputation filters; the supported scanning engines include Sophos,
Webroot, and McAfee.
Layer 4 traffic monitor: The Cisco WSA uses this feature to detect and block spyware. It
dynamically adds the IP addresses of known malware domains to its database of sites to
block.
Third-party DLP integration: The Cisco WSA redirects all outbound traffic to a
third-party DLP appliance, allowing deep content inspection for regulatory
compliance and data exfiltration protection. It enables an administrator to inspect
web content by title, metadata, and size, and to even prevent users from storing files
to cloud services such as Dropbox and Google Drive.
File reputation: Using threat information from Cisco Talos, this file reputation
threat intelligence is updated every 3 to 5 minutes.
File sandboxing: If malware is detected, the Cisco AMP capabilities can put files
in a sandbox to inspect the malware’s behavior and combine the inspection with
machine-learning analysis to determine the threat level. Cisco Cognitive Threat
Analytics (CTA) uses machine-learning algorithms to adapt over time.
File retrospection: After a malicious attempt or malware is detected, the Cisco
WSA continues to cross-examine files over an extended period of time.
Application visibility and control: The Cisco WSA can inspect and even block
applications that are not allowed by the corporate security policy. For example, an
administrator can allow users to use social media sites such as Facebook but block
micro-applications such as Facebook games.
Email Security Appliance
Users are no longer accessing email only from the corporate network or from a single
device. Cisco provides cloud-based, hybrid, and on-premises solutions based on the
Email Security Appliance (ESA) that can help protect any dynamic environment. This
section introduces these solutions and technologies and explains how users can use
threat intelligence to detect, analyze, and protect against both known and emerging
threats.
The following are the most common email-based threats:
Spam: Unsolicited email messages that advertise a service, attempt a scam, or carry
malicious intent. Email spam continues to be a major threat because it
can be used to spread malware.
Malware attachments: Email messages containing malicious software (malware).
Phishing: An attacker’s attempt to fool a user into thinking that the email
communication comes from a legitimate entity or site, such as a bank, social media
website, online payment processor, or even the corporate IT department. The goal
of a phishing email is to steal a user’s sensitive information, such as user
credentials, bank account information, and so on.
Spear phishing: This involves phishing attempts that are more targeted. Spear-
phishing emails are directed to specific individuals or organizations. For instance,
an attacker might perform a passive reconnaissance on an individual or organization
by gathering information from social media sites (for example, Twitter, LinkedIn,
and Facebook) and other online resources. Then the attacker might tailor a more
directed and relevant message to the victim to increase the probability that the user
will be fooled into following a malicious link, clicking an attachment containing
malware, or simply replying to the email and providing sensitive information.
Another phishing-based attack, called whaling, specifically targets executives and
high-profile users.
The Cisco ESA runs the Cisco AsyncOS operating system. Cisco AsyncOS supports
numerous features that help mitigate email-based threats. The following are examples of
the features supported by the Cisco ESA:
Access control: Controlling access for inbound senders, according to a sender’s IP
address, IP address range, or domain name.
Anti-spam: Multilayer filters based on Cisco SenderBase reputation and Cisco
antispam integration. The antispam reputation and zero-day threat intelligence are
fueled by the Cisco security intelligence and research group named Talos.
Network antivirus: Network antivirus capabilities at the gateway. Cisco partnered
with Sophos and McAfee, supporting their antivirus scanning engines.
Advanced Malware Protection (AMP): Allows security administrators to detect
and block malware and perform continuous analysis and retrospective alerting.
Data loss prevention (DLP): The ability to detect any sensitive emails and
documents leaving the corporation. The Cisco ESA integrates RSA email DLP for
outbound traffic.
Email encryption: The ability to encrypt outgoing mail to address regulatory
requirements. The administrator can configure an encryption policy on the Cisco
ESA and use a local key server or hosted key service to encrypt the message.
Email authentication: A few email authentication mechanisms include Sender
Policy Framework (SPF), Sender ID Framework (SIDF), and DomainKeys
Identified Mail (DKIM) verification of incoming mail, as well as DomainKeys and
DKIM signing of outgoing mail.
Outbreak filters: Preventive protection against new security outbreaks and email-
based scams using Cisco’s Security Intelligence Operations (SIO) threat
intelligence information.
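Mechanisms such as SPF and DKIM are published as DNS TXT records in the sending domain. As a rough illustration (example.com, the selector name, and the key are placeholders):

```
; SPF: only example.com's MX hosts may send mail for this domain
example.com.                 IN TXT "v=spf1 mx -all"

; DKIM: public key for signatures made with selector "mail"
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
```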
NOTE
Cisco SenderBase (see www.senderbase.org) is the world’s largest email
and web traffic monitoring network. It provides real-time threat
intelligence powered by Cisco SIO.
The Cisco ESA acts as the email gateway for an organization, handling all email
connections, accepting messages, and relaying messages to the appropriate systems. The
Cisco ESA can service email connections from the Internet to users inside a network
and from systems inside the network to the Internet. Email connections use Simple Mail
Transfer Protocol (SMTP). The ESA services all SMTP connections, by default acting
as the SMTP gateway.
TIP
Mail gateways are also known as mail exchangers (MX).
The Cisco ESA uses listeners to handle incoming SMTP connection requests. A listener
defines an email processing service that is configured on an interface in the Cisco ESA.
Listeners apply to email entering the appliance from either the Internet or internal
systems.
The following listeners can be configured:
Public listeners for email coming in from the Internet.
Private listeners for email coming from hosts in the corporate (inside) network.
(These emails are typically from internal groupware, Exchange, POP, or IMAP
email servers.)
Cisco ESA listeners are often referred to as SMTP daemons, and they run on specific
Cisco ESA interfaces. When a listener is configured, the following information must be
provided:
Listener properties such as a specific interface in the Cisco ESA and the TCP port
that will be used. The listener properties must also indicate whether the listener is
public or private.
The hosts that are allowed to connect to the listener, using a combination of access
control rules. An administrator can specify which remote hosts can connect to the
listener.
The local domains for which public listeners accept messages.
Cisco Security Management Appliance
Cisco Security Management Appliance (SMA) is a Cisco product that centralizes the
management and reporting for one or more Cisco ESAs and Cisco WSAs. Cisco SMA
enables you to consistently enforce policy and enhance threat protection. Figure 2-19
shows a Cisco SMA that is controlling Cisco ESAs and Cisco WSAs in different
geographic locations (New York, Raleigh, Paris, and London).
Figure 2-19 Cisco SMA
The Cisco SMA can be deployed with physical appliances or as virtual appliances.
Cisco Identity Services Engine
The Cisco Identity Services Engine (ISE) is a comprehensive security identity
management solution designed to function as a policy decision point for network access.
It allows security administrators to collect real-time contextual information from a
network, its users, and devices. Cisco ISE is the central policy management platform in
the Cisco TrustSec solution. It supports a comprehensive set of AAA (authentication,
authorization, and accounting), posture, and network profiler features in a single device.
Cisco ISE provides the AAA functionality of legacy Cisco products such as the Cisco
Access Control Server (ACS).
Cisco ISE allows security administrators to provide network guest access management
and wide-ranging client provisioning policies, including 802.1X environments. The
support of TrustSec features such as security group tags (SGTs) and security group
access control lists (SGACLs) make the Cisco ISE a complete identity services
solution. Cisco ISE supports policy sets, which let a security administrator group sets of
authentication and authorization policies.
Cisco ISE provides Network Admission Control (NAC) features, including posture
policies, to enforce configuration of end-user devices with the most up-to-date security
settings or applications before they enter the network. The Cisco ISE supports the
following agent types for posture assessment and compliance:
Cisco NAC Web Agent: A temporary agent that is installed in end-user machines
at the time of login. The Cisco NAC Web Agent is not visible on the end-user
machine after the user terminates the session.
Cisco NAC Agent: An agent that is installed permanently on a Windows or Mac
OS X client system.
Cisco AnyConnect Secure Mobility Client: An agent that is installed permanently
on a Windows or Mac OS X client system.
Cisco ISE provides a comprehensive set of features to allow corporate users to connect
their personal devices—such as mobile phones, tablets, laptops, and other network
devices—to the network. Such a bring-your-own-device (BYOD) system introduces
many challenges in terms of protecting network services and enterprise data. Cisco ISE
provides support for multiple mobile device management (MDM) solutions to enforce
policy on endpoints. ISE can be configured to redirect users to MDM onboarding
portals and prompt them to update their devices before they can access the network.
Cisco ISE can also be configured to provide Internet-only access to users who are not
compliant with MDM policies.
Cisco ISE supports the Cisco Platform Exchange Grid (pxGrid), a multivendor, cross-
platform network system that combines different parts of an IT infrastructure, such as the
following:
Security monitoring
Detection systems
Network policy platforms
Asset and configuration management
Identity and access management platforms
Cisco pxGrid has a unified framework with an open application programming interface
(API) designed in a hub-and-spoke architecture. pxGrid is used to enable the sharing of
contextual-based information from a Cisco ISE session directory to other policy
network systems, such as Cisco IOS devices and the Cisco ASA.
The Cisco ISE can be configured as a certificate authority (CA) to generate and manage
digital certificates for endpoints. Cisco ISE CA supports standalone and subordinate
deployments.
Cisco ISE software can be installed on a range of physical appliances or on a VMware
server (Cisco ISE VM). The Cisco ISE software image does not support the installation
of any other packages or applications on this dedicated platform.
Security Cloud-based Solutions
Several cloud-based security solutions are also available in the market. For example,
Cisco provides the following cloud-based security services:
Cisco Cloud Web Security (CWS)
Cisco Cloud Email Security (CES)
Cisco AMP Threat Grid
Cisco Threat Awareness Service
OpenDNS
CloudLock
The following sections describe these cloud-based security services.
Cisco Cloud Web Security
Cisco Cloud Web Security (CWS) is a cloud-based security service that provides
worldwide threat intelligence, advanced threat defense capabilities, and roaming user
protection. The Cisco CWS service uses web proxies in the Cisco cloud environment
that scan traffic for malware and policy enforcement. Cisco customers can connect to
the Cisco CWS service directly by using a proxy auto-configuration (PAC) file in the
user endpoint or through connectors integrated into the following Cisco products:
Cisco ISR G2 routers
Cisco ASA
Cisco WSA
Cisco AnyConnect Secure Mobility Client
NOTE
Cisco is always adding more functionality to its products. The number of
connectors may increase over time; those in the preceding list are the ones
available at the time of writing.
Organizations using the transparent proxy functionality through a connector can get the
most out of their existing infrastructure. In addition, the scanning is offloaded from the
hardware appliances to the cloud, thus reducing the impact to hardware utilization and
reducing network latency. Figure 2-20 illustrates how the transparent proxy functionality
through a connector works.
Figure 2-20 Cisco CWS Example
In Figure 2-20, the Cisco ASA is enabled with the Cisco CWS connector at a branch
office, and it protects the corporate users at the branch office with these steps:
1. An internal user makes an HTTP request to an external website (example.org).
2. The Cisco ASA forwards the request to the Cisco CWS global cloud
infrastructure.
3. Cisco CWS notices that example.org has some web content (ads) that is
redirecting the user to a known malicious site.
4. Cisco CWS blocks the request to the malicious site.
Cisco Cloud Email Security
Cisco Cloud Email Security (CES) provides a cloud-based solution that allows
companies to outsource the management of their email security. The service provides
email security instances in multiple Cisco data centers to enable high availability.
The Cisco Hybrid Email Security solution combines both cloud-based and on-premises
ESAs. This hybrid solution helps Cisco customers reduce their onsite email security
footprint and outsource a portion of their email security to Cisco, while still allowing
them to maintain control of confidential information within their physical boundaries.
Many organizations must comply with regulations that require them to keep sensitive
data physically on their premises. The Cisco Hybrid Email Security solution allows
network security administrators to remain compliant and to maintain advanced control
with encryption, DLP, and onsite identity-based integration.
Cisco AMP Threat Grid
Cisco acquired a security company called Threat Grid that provides cloud-based and
on-premises malware analysis solutions. Cisco integrated Cisco AMP and Threat Grid
to provide a solution for advanced malware analysis with deep threat analytics. The
Cisco AMP Threat Grid integrated solution analyzes millions of files and correlates
them with hundreds of millions of malware samples. This provides a look into attack
campaigns and how malware is distributed. This solution provides a security
administrator with detailed reports of indicators of compromise and threat scores that
help prioritize mitigations and recover from attacks. Cisco AMP Threat Grid
crowdsources malware from a closed community and analyzes all samples using highly
secure proprietary techniques that include static and dynamic analysis. These are
different from traditional sandboxing technologies. The Cisco AMP Threat Grid
analysis exists outside the virtual environment, identifying malicious code designed to
evade analysis. There is a feature in Cisco AMP Threat Grid called Glovebox that helps
you interact with the malware in real time, recording all activity for future playback and
reporting. Advanced malware uses numerous evasion techniques to determine whether it
is being analyzed in a sandbox. Some of these samples require user interaction.
Glovebox dissects these samples without infecting your network while the samples are
being analyzed. Glovebox is a powerful tool against advanced malware that allows
analysts to open applications and replicate a workflow process, see how the malware
behaves, and even reboot the virtual machine.
NOTE
The Mac OS X connector does not support SWF files. The Windows
connector does not scan ELF, JAVA, MACHO, and MACHO_UNIBIN files
at the time of this writing. The Android AMP connector scans APK files.
Cisco Threat Awareness Service
The Cisco Threat Awareness Service (CTAS) is a threat intelligence service that
provides Cisco customers with network visibility by making security information
available 24 hours a day, 7 days a week. CTAS is a cloud-based service that is
accessed via a web browser. It allows Cisco customers to maintain visibility into
inbound and outbound network activity from the outside and displays potential threats
requiring additional attention by the network security staff. CTAS requires no
configuration changes, network infrastructure, or new software, as it tracks the domain
names and IP addresses of Cisco customer premises to alert on suspicious activity or
requests. CTAS also provides remediation recommendations through its web portal.
Cisco provides a base offer of the CTAS service with Cisco Smart Net Total Care
Service at no additional cost. A premium offer is available as a yearly subscription for
customers looking to track an unlimited number of domain names and IP addresses.
NOTE
You can obtain more information about CTAS at
http://www.cisco.com/c/en/us/products/security/sas-threat-
management.html.
OpenDNS
Cisco acquired a company called OpenDNS that provides DNS services, threat
intelligence, and threat enforcement at the DNS layer. OpenDNS has a global network
that delivers advanced security solutions (as a cloud-based service) regardless of
where Cisco customer offices or employees are located. This service is extremely easy
to deploy and easy to manage. Cisco has also incorporated into its other security and
networking products the threat research and threat-centric security advancements that
OpenDNS developed to block advanced cybersecurity threats. Millions
of people use OpenDNS, including thousands of companies, from Fortune 500
enterprises to small businesses.
OpenDNS provides a free DNS service for individuals, students, and small businesses.
You can just simply configure your endpoint (laptop, desktop, mobile device, server, or
your DHCP server) to point to OpenDNS servers: 208.67.222.222 and/or
208.67.220.220.
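On a Linux or UNIX endpoint, for instance, this can be as simple as listing the OpenDNS resolvers in /etc/resolv.conf (on managed systems this file is often auto-generated, so treat this as a sketch of the end result rather than a recommended workflow):

```
# /etc/resolv.conf -- use the OpenDNS resolvers
nameserver 208.67.222.222
nameserver 208.67.220.220
```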
It also provides the following premium services:
OpenDNS Umbrella: An enterprise advanced network security service to protect
any device, anywhere. This service blocks known malicious sites from being
“resolved” in DNS. It provides an up-to-the-minute view and analysis of at least
2% of the world’s Internet activity to stay ahead of attacks. This service provides
threat intelligence by seeing where attacks are being staged on the Internet.
OpenDNS Investigate: This is a premium service that provides you information on
where attacks are forming, allowing you to investigate incidents faster and
prioritize them better. With the Investigate service, you can see up-to-the-minute
threat data and historical context about all domains on the Internet and respond
quickly to critical incidents. It provides a dynamic search engine and a RESTful
API that you can use to automatically bring critical data into the security
management and threat intelligence systems deployed in your organization. It also
provides predictive threat intelligence using statistical models for real-time and
historical data to predict domains that are likely malicious and could be part of
future attacks.
CloudLock
Cisco acquired a company called CloudLock that creates solutions to protect its
customers against data breaches in any cloud environment and application (app) through
a highly configurable cloud-based data loss prevention (DLP) architecture. CloudLock
has numerous out-of-the-box policies and a wide range of automated, policy-driven
response actions, including the following:
File-level encryption
Quarantine
End-user notifications
These policies are designed to provide common data protection and help with
compliance. CloudLock can also monitor data at rest within platforms via an API and
provide visibility into user activity through retroactive monitoring capabilities. This
solution helps organizations defend against account compromises with cross-platform
User and Entity Behavior Analytics (UEBA) for Software as a Service (SaaS),
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Identity as a
Service (IDaaS) environments. CloudLock uses advanced machine learning to be able
to detect anomalies and to identify activities in different countries that can be
whitelisted or blacklisted in the platform. CloudLock Apps Firewall is a feature that
discovers and controls malicious cloud apps that may be interacting with the corporate
network.
Cisco NetFlow
NetFlow is a Cisco technology that provides comprehensive visibility into all network
traffic that traverses a Cisco-supported device. Cisco invented NetFlow and is the
leader in IP traffic flow technology. NetFlow was initially created for billing and
accounting of network traffic and to measure other IP traffic characteristics such as
bandwidth utilization and application performance. NetFlow has also been used as a
network capacity planning tool and to monitor network availability. Nowadays,
NetFlow is used as a network security tool because its reporting capabilities provide
nonrepudiation, anomaly detection, and investigative capabilities. As network traffic
traverses a NetFlow-enabled device, the device collects traffic flow data and provides
a network administrator or security professional with detailed information about such
flows.
NetFlow provides detailed network telemetry that can be used to see what is actually
happening across the entire network. You can use NetFlow to identify DoS attacks,
quickly identify compromised endpoints and network infrastructure devices, and
monitor network usage of employees, contractors, or partners. NetFlow is also often
used to obtain network telemetry during security incident response and forensics. You
can also take advantage of NetFlow to detect firewall misconfigurations and
inappropriate access to corporate resources.
NetFlow supports both IP Version 4 (IPv4) and IP Version 6 (IPv6).
There’s also the Internet Protocol Flow Information Export (IPFIX), which is a network
flow standard led by the Internet Engineering Task Force (IETF). IPFIX was designed to
create a common, universal standard of export for flow information from routers,
switches, firewalls, and other infrastructure devices. IPFIX defines how flow
information should be formatted and transferred from an exporter to a collector. IPFIX
is documented in RFC 7011 through RFC 7015 and RFC 5103. Cisco NetFlow Version
9 is the basis and main point of reference for IPFIX. IPFIX changes some NetFlow
terminology, but it follows essentially the same principles as NetFlow Version 9.
Traditional Cisco NetFlow records are usually exported via UDP messages. The IP
address of the NetFlow collector and the destination UDP port must be configured on
the sending device. The NetFlow standard (RFC 3954) does not specify a specific
NetFlow listening port. The most common UDP port used by NetFlow is
2055, but other ports, such as 9555, 9995, 9025, and 9026, can also be used.
UDP port 4739 is the default port used by IPFIX.
What Is the Flow in NetFlow?
A flow is a unidirectional series of packets between a given source and destination.
Figure 2-21 shows an example of a flow between a client and a server.
Figure 2-21 Flow Example
All packets in a flow share the same source and destination IP addresses, source and
destination ports, and IP protocol. These five values are often referred to as the five-tuple.
In Figure 2-21, the client (source) establishes a connection to the server (destination).
When the traffic traverses the router (configured for NetFlow), it generates a flow
record. At the very minimum, the five-tuple is used to identify the flow in the NetFlow
database of flows kept on the device. This database is often called the NetFlow cache.
Here is the five-tuple for the basic flow represented in Figure 2-21:
Source address: 192.168.1.1
Destination IP address: 10.10.10.10
Source port: 15728
Destination port: 80
Protocol: TCP (since HTTP is carried over TCP)
Many people often confuse a flow with a session. All traffic in a flow is going in the
same direction; however, when the client establishes the HTTP connection (session) to
the server and accesses a web page, it represents two separate flows. The first flow is
the traffic from the client to the server, and the other flow is from the server to the client.
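The flow-versus-session distinction can be sketched with the five-tuple as a cache key. This is an illustrative model only, not the on-device record format:

```python
from collections import Counter

def flow_key(src_ip, src_port, dst_ip, dst_port, protocol):
    """Five-tuple that identifies a unidirectional flow."""
    return (src_ip, src_port, dst_ip, dst_port, protocol)

# One HTTP session between the client and server from Figure 2-21...
client_to_server = flow_key("192.168.1.1", 15728, "10.10.10.10", 80, "TCP")
server_to_client = flow_key("10.10.10.10", 80, "192.168.1.1", 15728, "TCP")

# ...produces two distinct entries in the flow cache, one per direction.
flow_cache = Counter([client_to_server, server_to_client])
```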
There are different versions of NetFlow. Depending on the version of NetFlow, the
router can also gather additional information, such as type of service (ToS) byte,
differentiated services code point (DSCP), the device’s input interface, TCP flags, byte
counters, and start and end times.
Flexible NetFlow, Cisco’s next-generation NetFlow, can track a wide range of Layer 2,
IPv4, and IPv6 flow information, such as the following:
Source and destination MAC addresses
Source and destination IPv4 or IPv6 addresses
Source and destination ports
ToS
DSCP
Packet and byte counts
Flow timestamps
Input and output interface numbers
TCP flags and encapsulated protocol (TCP/UDP) and individual TCP flags
Sections of a packet for deep packet inspection
All fields in an IPv4 header, including IP-ID and TTL
All fields in an IPv6 header, including Flow Label and Option Header
Routing information, such as next-hop address, source autonomous system number
(ASN), destination ASN, source prefix mask, destination prefix mask, Border
Gateway Protocol (BGP) next hop, and BGP policy accounting traffic index
NetFlow vs. Full Packet Capture
A substantial difference exists between a full packet capture and the information
collected in NetFlow. Think of NetFlow as a technology that collects metadata
on all the transactions/flows traversing the network.
Collecting packet captures in your network involves “tapping” or capturing a mirror
image of network packets as they move through the network. Cisco switches allow for
the setup of mirror ports that do not impact network performance. Typically, a deep
packet inspection (DPI) application is connected to a mirror port, and certain
information is extracted from the packets so that you can find out what is happening on
your network. DPI solutions range from open source packet capture software such as
Wireshark to commercial applications that can provide more detailed analysis.
You may be asking, “How does NetFlow compare to traditional packet capture
technologies that leverage SPAN ports or Ethernet taps?” With full packet captures,
both the cost and the amount of data that must be analyzed are much higher. In most
scenarios, you do not need heavyweight packet capture technology everywhere in your
network if you have an appropriate NetFlow collection and analysis ecosystem; in fact,
you probably could not afford it even if you did need it, because of the storage and
compute power required to analyze full packet captures. That said, there are definite
benefits to collecting full packet capture data.
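The cost difference is easy to see with back-of-the-envelope arithmetic. The numbers below (a ~50-byte NetFlow record, a 1 Gbps link at 50% utilization, 5,000 flows per second) are illustrative assumptions, not measurements:

```python
def daily_storage_gb(rate_bps: float, hours: float = 24.0) -> float:
    """Full-capture storage: every byte on the wire is retained."""
    return rate_bps / 8 * hours * 3600 / 1e9

def daily_netflow_gb(flows_per_sec: float, record_bytes: int = 50,
                     hours: float = 24.0) -> float:
    """NetFlow storage: one fixed-size metadata record per flow (size assumed)."""
    return flows_per_sec * record_bytes * hours * 3600 / 1e9

# A 1 Gbps link at 50% utilization vs. ~5,000 flows/sec (illustrative numbers):
full = daily_storage_gb(0.5e9)    # 5400 GB/day of raw packets
flows = daily_netflow_gb(5000)    # 21.6 GB/day of flow metadata
```

With these assumed inputs, full capture requires roughly 250 times the storage of NetFlow metadata per day, which is why full capture is usually deployed only at targeted points.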
If you really must have full packet capture capabilities, Cisco, through its Lancope
acquisition, offers a device called the FlowSensor that plugs into a SPAN, tap, or
mirror port to generate NetFlow records suitable for consumption by any NetFlow
v9–capable collector.
The NetFlow Cache
The three types of NetFlow cache are as follows:
Normal cache
Immediate cache
Permanent cache
The “normal cache” is the default cache type in many infrastructure devices enabled
with NetFlow and Flexible NetFlow. The entries in the flow cache are removed (aged
out) based on the configured timeout active seconds and timeout inactive seconds
settings.
In the immediate cache, each flow accounts for a single packet. This type of NetFlow
cache is desirable for real-time traffic monitoring and distributed DoS (DDoS)
detection. The immediate cache is used when only very small flows are expected (for
example, with sampling).
TIP
Keep in mind that the immediate cache may result in a large amount of
export data.
The permanent cache is used to track a set of flows without expiring the flows from the
cache. The entire cache is periodically exported (update timer). Note that the cache
size is a configurable value; after the cache is full, new flows are not monitored. The
permanent cache uses update counters rather than delta counters.
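The aging behavior of the normal cache, which exports a flow when either timer fires, can be sketched as follows. This is illustrative Python, not IOS code; the default values mirror common IOS defaults (30 minutes active, 15 seconds inactive) but should be treated as assumptions:

```python
def expired_flows(cache, now, active_timeout=1800.0, inactive_timeout=15.0):
    """Return keys of flows a 'normal' NetFlow cache would age out and export.

    inactive_timeout: no packets seen recently (the flow has gone idle).
    active_timeout: the flow has been alive too long, even if still sending.
    Defaults mirror common IOS defaults but are assumptions in this sketch.
    """
    out = []
    for key, rec in cache.items():
        if now - rec["last_seen"] >= inactive_timeout:
            out.append(key)    # idle flow: export the record and remove it
        elif now - rec["first_seen"] >= active_timeout:
            out.append(key)    # long-lived flow: export a slice of it
    return out
```

A collector therefore sees idle flows shortly after they end, while long-lived flows are reported in periodic slices rather than only at termination.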
Data Loss Prevention
Data loss prevention (DLP) is the ability to detect any sensitive emails, documents, or
information leaving your organization. Several products in the industry inspect traffic
to prevent data loss within an organization, and several Cisco security products integrate
with third-party products to provide this type of solution. For example, the Cisco ESA
integrates RSA email DLP for outbound email traffic. Also, the Cisco Cloud Email
Service and the Cisco Hybrid Email Security solution allow network security
administrators to remain compliant and to maintain advanced control with encryption,
DLP, and onsite identity-based integration. Another product family that integrates with
other DLP solutions is the Cisco WSA, which redirects all outbound traffic to a third-
party DLP appliance, allowing deep content inspection for regulatory compliance and
data exfiltration protection. It enables an administrator to inspect web content by title,
metadata, and size and even to prevent users from storing files to cloud services such as
Dropbox and Google Drive.
Cisco CloudLock is another DLP solution. CloudLock is designed to protect
organizations of any type against data breaches in any type of cloud environment or
application (app) through a highly configurable cloud-based DLP architecture.
CloudLock is an API-driven solution that provides a deep level of integration with
monitored SaaS, IaaS, PaaS, and IDaaS solutions. It provides advanced cloud DLP
functionality that includes out-of-the-box policies designed to help administrators
maintain compliance. Additionally, CloudLock can monitor data at rest within platforms
via APIs and provide a comprehensive picture of user activity through retroactive
monitoring capabilities. Security administrators can mitigate risk efficiently using
CloudLock’s configurable, automated response actions, including encryption,
quarantine, and end-user notification.
Data loss doesn’t always take place because of a complex attack carried out by an
external attacker; many data loss incidents have been carried out by internal (insider)
attacks. Data loss can also happen because of human negligence or ignorance—for
example, an internal employee sending sensitive corporate email to their personal email
account, or uploading sensitive information to an unapproved cloud provider. This is
why maintaining visibility into what is entering as well as what is leaving the
organization is so important.
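At its core, outbound DLP inspection is pattern matching against content classifiers. The toy Python sketch below illustrates the idea only; real DLP engines (including the products described above) use far richer fingerprinting, exact-data matching, and contextual analysis:

```python
import re

# Toy patterns for sensitive data (illustrative only; not how commercial
# DLP products classify content internally).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def inspect_outbound(text: str) -> list:
    """Return the names of sensitive-data patterns found in outbound content."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

A gateway applying a policy like this would then block, quarantine, or encrypt the message when `inspect_outbound` returns any matches.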
Table 2-2 Key Topics
Complete Tables and Lists from Memory
Print a copy of Appendix B, “Memory Tables,” (found on the book website), or at least
the section for this chapter, and complete the tables and lists from memory. Appendix C,
“Memory Tables Answer Key,” also on the website, includes completed tables and lists
to check your work.
Define Key Terms
Define the following key terms from this chapter, and check your answers in the
glossary:
network firewalls
ACLs
network address translation
DLP
AMP
IPS
NetFlow
Q&A
The answers to these questions appear in Appendix A, “Answers to the ‘Do I Know
This Already?’ Quizzes and Q&A Questions.” For more practice with exam format
questions, use the exam engine on the website.
1. Which of the following explains features of a traditional stateful firewall?
a. Access control is done by application awareness and visibility.
b. Access control is done by the five-tuple (source and destination IP addresses,
source and destination ports, and protocol).
c. Application inspection is not supported.
d. Traditional stateful firewalls support advanced malware protection.
2. Which of the following describes a traditional IPS?
a. A network security appliance or software technology that resides in stateful
firewalls
b. A network security appliance or software technology that supports advanced
malware protection
c. A network security appliance or software technology that inspects network
traffic to detect and prevent security threats and exploits
d. A virtual appliance that can be deployed with the Cisco Adaptive Security
Manager (ASM)
3. Which of the following is true about NetFlow?
a. NetFlow can be deployed to replace IPS devices.
b. NetFlow provides information about network session data.
c. NetFlow provides user authentication information.
d. NetFlow provides application information.
4. What is DLP?
a. An email inspection technology used to prevent phishing attacks
b. A software or solution for making sure that corporate users do not send
sensitive or critical information outside the corporate network
c. A web inspection technology used to prevent phishing attacks
d. A cloud solution used to provide dynamic layer protection
5. Stateful and traditional firewalls can analyze packets and judge them against a set
of predetermined rules called access control lists (ACLs). They inspect which of
the following elements within a packet?
a. Session headers
b. NetFlow flow information
c. Source and destination ports and source and destination IP addresses
d. Protocol information
6. Which of the following are Cisco cloud security solutions?
a. CloudDLP
b. OpenDNS
c. CloudLock
d. CloudSLS
7. Cisco pxGrid has a unified framework with an open API designed in a hub-and-
spoke architecture. pxGrid is used to enable the sharing of contextual-based
information from which devices?
a. From a Cisco ASA to the Cisco OpenDNS service
b. From a Cisco ASA to the Cisco WSA
c. From a Cisco ASA to the Cisco FMC
d. From a Cisco ISE session directory to other policy network systems, such as
Cisco IOS devices and the Cisco ASA
8. Which of the following is true about heuristic-based algorithms?
a. Heuristic-based algorithms may require fine tuning to adapt to network traffic
and minimize the possibility of false positives.
b. Heuristic-based algorithms do not require fine tuning.
c. Heuristic-based algorithms support advanced malware protection.
d. Heuristic-based algorithms provide capabilities for the automation of IPS
signature creation and tuning.
9. Which of the following describes the use of DMZs?
a. DMZs can be configured in Cisco IPS devices to provide additional
inspection capabilities.
b. DMZs can automatically segment the network traffic.
c. DMZs can serve as segments on which a web server farm resides or as
extranet connections to business partners.
d. DMZs are only supported in next-generation firewalls.
10. Which of the following has the most storage requirements?
a. NetFlow
b. Syslog
c. Full packet captures
d. IPS signatures
Chapter 3. Security Principles
This chapter covers the following topics:
Describe the principles of the defense-in-depth strategy.
What are threats, vulnerabilities, and exploits?
Describe Confidentiality, Integrity, and Availability.
Describe risk and risk analysis.
Define what personally identifiable information (PII) and protected health
information (PHI) are.
What are the principles of least privilege and separation of duties?
What are security operation centers (SOCs)?
Describe cyber forensics.
This chapter covers the principles of the defense-in-depth strategy and compares and
contrasts the concepts of risk, threats, vulnerabilities, and exploits. It also defines
threat actors, run book automation (RBA), chain of custody (evidentiary), reverse
engineering, sliding window anomaly detection, personally identifiable information
(PII), and protected health information (PHI), as well as the principle of least
privilege and how to perform separation of duties. In addition, it covers the concepts
of risk scoring, risk weighting, and risk reduction, and how to perform overall risk
assessments.
“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz helps you identify your strengths and deficiencies
in this chapter’s topics. The 11-question quiz, derived from the major sections in the
“Foundation Topics” portion of the chapter, helps you determine how to spend your
limited study time. You can find the answers in Appendix A, “Answers to the ‘Do I Know
This Already?’ Quizzes and Q&A Questions.”
Table 3-1 outlines the major topics discussed in this chapter and the “Do I Know This
Already?” quiz questions that correspond to those topics.
Table 3-1 “Do I Know This Already?” Foundation Topics Section-to-Question
Mapping
1. What is one of the primary benefits of a defense-in-depth strategy?
a. You can deploy advanced malware protection to detect and block advanced
persistent threats.
b. You can configure firewall failover in a scalable way.
c. Even if a single control (such as a firewall or IPS) fails, other controls can
still protect your environment and assets.
d. You can configure intrusion prevention systems (IPSs) with custom signatures
and auto-tuning to be more effective in the network.
2. Which of the following planes is important to understand for defense in depth?
a. Management plane
b. Failover plane
c. Control plane
d. Clustering
e. User/data plane
f. Services plane
3. Which of the following are examples of vulnerabilities?
a. Advanced threats
b. CVSS
c. SQL injection
d. Command injection
e. Cross-site scripting (XSS)
f. Cross-site request forgery (CSRF)
4. What is the Common Vulnerabilities and Exposures (CVE)?
a. An identifier of threats
b. A standard to score vulnerabilities
c. A standard maintained by OASIS
d. A standard for identifying vulnerabilities to make it easier to share data across
tools, vulnerability repositories, and security services
5. Which of the following is true when describing threat intelligence?
a. Threat intelligence’s primary purpose is to make money by exploiting threats.
b. Threat intelligence’s primary purpose is to inform business decisions
regarding the risks and implications associated with threats.
c. With threat intelligence, threat actors can become more efficient to carry out
attacks.
d. Threat intelligence is too difficult to obtain.
6. Which of the following is an open source feed for threat data?
a. Cyber Squad ThreatConnect
b. BAE Detica CyberReveal
c. MITRE CRITs
d. Cisco AMP Threat Grid
7. What is the Common Vulnerability Scoring System (CVSS)?
a. A scoring system for exploits.
b. A tool to automatically mitigate vulnerabilities.
c. A scoring method that conveys vulnerability severity and helps determine the
urgency and priority of response.
d. A vulnerability-mitigation risk analysis tool.
8. Which of the following are examples of personally identifiable information (PII)?
a. Social security number
b. Biological or personal characteristics, such as an image of distinguishing
features, fingerprints, x-rays, voice signature, retina scan, and geometry of the
face
c. CVE
d. Date of birth
9. Which of the following statements are true about the principle of least privilege?
a. Principle of least privilege and separation of duties can be considered to be
the same thing.
b. The principle of least privilege states that all users—whether they are
individual contributors, managers, directors, or executives—should be granted
only the level of privilege they need to do their job, and no more.
c. Programs or processes running on a system should have the capabilities they
need to “get their job done,” but no root access to the system.
d. The principle of least privilege only applies to people.
10. What is a runbook?
a. A runbook is a collection of processes running on a system.
b. A runbook is a configuration guide for network security devices.
c. A runbook is a collection of best practices for configuring access control lists
on a firewall and other network infrastructure devices.
d. A runbook is a collection of procedures and operations performed by system
administrators, security professionals, or network operators.
11. Chain of custody is the way you document and preserve evidence from the time
you started the cyber forensics investigation to the time the evidence is presented
at court. Which of the following is important when handling evidence?
a. Documentation about how and when the evidence was collected
b. Documentation about how evidence was transported
c. Documentation about who had access to the evidence and how it was accessed
d. Documentation about the CVSS score of a given CVE
Foundation Topics
In this chapter, you will learn the different cyber security principles, including what
threats, vulnerabilities, and exploits are. You will also learn details about what defense
in depth is and how to perform risk analysis. This chapter also provides an overview of
what runbooks are and how to perform runbook automation (RBA).
When you are performing incident response and forensics tasks, you always have to be
aware of how to collect evidence and what the appropriate evidentiary chain of custody
is. This chapter provides an overview of chain of custody when it pertains to cyber
security investigations. You will learn the details about reverse engineering, forensics,
and sliding window anomaly detection. You will also learn what personally identifiable
information (PII) and protected health information (PHI) are, especially pertaining to
different regulatory standards such as the Payment Card Industry Data Security Standard
(PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA).
In this chapter, you will also learn the concepts of principle of least privilege. It is
important to know how to perform risk scoring and risk weighting in the realm of risk
assessment and risk reduction. This chapter provides an overview of these risk
assessment and risk reduction methodologies.
The Principles of the Defense-in-Depth Strategy
If you are a cyber security expert, or even an amateur, you probably already know that
when you deploy a firewall or an intrusion prevention system (IPS) or install antivirus
or advanced malware protection on your machine, you cannot assume you are now safe
and secure. A layered and cross-boundary “defense-in-depth” strategy is what is needed
to protect your network and corporate assets. One of the primary benefits of a defense-
in-depth strategy is that even if a single control (such as a firewall or IPS) fails, other
controls can still protect your environment and assets. Figure 3-1 illustrates this
concept.
Figure 3-1 Defense in Depth
The following are the layers illustrated in Figure 3-1 (starting from the top):
Nontechnical activities such as appropriate security policies and procedures, and
end-user and staff training.
Physical security, including cameras, physical access control (such as badge
readers, retina scanners, and fingerprint scanners), and locks.
Network security best practices, such as routing protocol authentication, control
plane policing (CoPP), network device hardening, and so on.
Host security solutions such as advanced malware protection (AMP) for endpoints,
antiviruses, and so on.
Application security best practices such as application robustness testing, fuzzing,
defenses against cross-site scripting (XSS), cross-site request forgery (CSRF)
attacks, SQL injection attacks, and so on.
The actual data traversing the network. You can employ encryption at rest and in
transit to protect data.
TIP
Each layer of security introduces complexity and latency, while requiring
that someone manage it. The more people are involved, even in
administration, the more attack vectors you create, and the more you
distract your people from possibly more important tasks. Employ multiple
layers, but avoid duplication—and use common sense.
The first step in the process of preparing your network and staff to successfully identify
security threats is achieving complete network visibility. You cannot protect against or
mitigate what you cannot view/detect. You can achieve this level of network visibility
through existing features on network devices you already have and on devices whose
potential you do not even realize. In addition, you should create strategic network
diagrams to clearly illustrate your packet flows and where, within the network, you
could enable security mechanisms to identify, classify, and mitigate the threats.
Remember that network security is a constant war. When defending against the enemy,
you must know your own territory and implement defense mechanisms.
In some cases, onion-like diagrams are used to help illustrate and analyze what
“defense-in-depth” protections and enforcements should be deployed in a network.
Figure 3-2 shows an example of one of these onion diagrams, where network resources
are protected through several layers of security.
Figure 3-2 Layered Onion Diagram Example
You can create this type of diagram, not only to understand the architecture of your
organization, but also to strategically identify places within the infrastructure where you
can implement telemetry mechanisms such as NetFlow and identify choke points where
you can mitigate an incident. Notice that the access, distribution, and core
layers/boundaries are clearly defined.
These types of diagrams also help you visualize operational risks within your
organization. The diagrams can be based on device roles and can be developed for
critical systems you want to protect. For example, identify a critical system within your
organization and create a layered diagram similar to the one in Figure 3-2. In this
example, an “important database in the data center” is the most critical application/data
source for this company. The diagram includes the database in the center.
You can also use this type of diagram to audit device roles and the types of services they
should be running. For example, you can decide in what devices you can run services
such as Cisco NetFlow or where to enforce security policies. In addition, you can see
the life of a packet within your infrastructure, depending on the source and destination.
An example is illustrated in Figure 3-3.
Figure 3-3 Layered Onion Diagram Example
In Figure 3-3, you can see a packet flow that occurs when a user from the call center
accesses an Internet site. You know exactly where the packet is going based on your
architecture as well as your security and routing policies. This is a simple example;
however, you can use this concept to visualize risks and to prepare your isolation
policies.
When applying defense-in-depth strategies, you can also look at a roles-based network
security approach for security assessment in a simple manner. Each device on the
network serves a purpose and has a role; subsequently, you should configure each
device accordingly. You can think about the different planes as follows:
Management plane: This is the distributed and modular network management
environment.
Control plane: This plane includes routing control. It is often a target because the
control plane depends on direct CPU cycles.
User/data plane: This plane receives, processes, and transmits network data
among all network elements.
Services plane: This is the Layer 7 application flow built on the foundation of the
other layers.
Policies: This plane includes the business requirements. Cisco calls policies the
“business glue” for the network. Policies and procedures are part of this section,
and they apply to all the planes in this list.
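When auditing device roles against these planes, it can help to tag the protocols a device handles by the plane they belong to. The mapping below is a simple Python illustration using common conventions; it is not an authoritative or exhaustive taxonomy:

```python
# Illustrative protocol-to-plane mapping (common conventions, not an
# authoritative taxonomy; adjust to your own architecture).
PLANE_OF = {
    "ssh": "management", "snmp": "management", "syslog": "management",
    "ospf": "control", "bgp": "control", "eigrp": "control",
    "http": "user/data", "smtp": "user/data",
}

def plane_of(protocol: str) -> str:
    """Classify a protocol by plane; unknown traffic is treated as transit."""
    return PLANE_OF.get(protocol.lower(), "user/data")
```

Tagging traffic this way makes it easier to verify, device by device, that each plane is protected (for example, CoPP for the control plane, hardened access for the management plane).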
You should also view security in two different perspectives, as illustrated in Figure 3-4:
Operational (reactive) security
Proactive security
Figure 3-4 Reactive vs. Proactive Security
You should have a balance between proactive and reactive security approaches.
Prepare your network, staff, and organization as a whole to better identify, classify,
trace back, and react to security incidents. In addition, proactively protect your
organization while learning about new attack vectors, and mitigate those vectors with
the appropriate hardware, software, and architecture solutions.
What Are Threats, Vulnerabilities, and Exploits?
In this section, you will learn the difference between vulnerabilities, threats, and
exploits.
Vulnerabilities
A vulnerability is an exploitable weakness in a system or its design. Vulnerabilities can
be found in protocols, operating systems, applications, hardware, and system designs.
Vulnerabilities abound, with more discovered every day. You will learn many examples
of vulnerability classifications in Chapter 13, “Types of Attacks and Vulnerabilities.”
However, the following are a few examples:
SQL injection vulnerabilities
Command injections
Cross-site scripting (XSS)
Cross-site request forgery (CSRF)
API abuse vulnerabilities
Authentication vulnerabilities
Privilege escalation vulnerabilities
Cryptographic vulnerabilities
Error-handling vulnerabilities
Input validation vulnerabilities
Path traversal vulnerabilities
Buffer overflows
Deserialization of untrusted data
Directory restriction error
Double free
Password management: hardcoded password
Password plaintext storage
Vendors, security researchers, and vulnerability coordination centers typically assign
vulnerabilities an identifier that’s disclosed to the public. This identifier is known as the
Common Vulnerabilities and Exposures (CVE). CVE is an industry-wide standard. CVE
is sponsored by US-CERT, the office of Cybersecurity and Communications at the U.S.
Department of Homeland Security. Operating as DHS’s Federally Funded Research and
Development Center (FFRDC), MITRE has copyrighted the CVE List for the benefit of
the community in order to ensure it remains a free and open standard, as well as to
legally protect the ongoing use of it and any resulting content by government, vendors,
and/or users. MITRE maintains the CVE list and its public website, manages the CVE
Compatibility Program, oversees the CVE Numbering Authorities (CNAs), and provides
impartial technical guidance to the CVE Editorial Board throughout the process to
ensure CVE serves the public interest.
The goal of CVE is to make it easier to share data across tools, vulnerability
repositories, and security services.
More information about CVE is available at http://cve.mitre.org.
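CVE identifiers follow the format CVE-YYYY-NNNN, where the sequence portion has four or more digits (the fixed four-digit limit was removed in 2014). A small Python check of that syntax:

```python
import re

# CVE-YYYY-NNNN..., where the sequence part is 4 or more digits.
CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(cve_id: str) -> bool:
    """Check that a string matches the CVE identifier syntax."""
    return bool(CVE_RE.match(cve_id))
```

A check like this is useful when parsing vulnerability feeds, since malformed or truncated identifiers are a common data-quality problem.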
Threats
A threat is any potential danger to an asset. If a vulnerability exists but has not yet been
exploited—or, more importantly, it is not yet publicly known—the threat is latent and
not yet realized. If someone is actively launching an attack against your system and
successfully accesses something or compromises your security against an asset, the
threat is realized. The entity that takes advantage of the vulnerability is known as the
malicious actor, and the path used by this actor to perform the attack is known as the
threat agent or threat vector.
A countermeasure is a safeguard that somehow mitigates a potential risk. It does so by
reducing or eliminating the vulnerability, or by at least reducing the likelihood that a
threat agent can actually exploit the risk. For example, you might have an unpatched
machine on your network, making it highly vulnerable. If that machine is unplugged from
the network and ceases to have any interaction through exchanging data with any other
device, you have successfully mitigated all those vulnerabilities. You have likely
rendered that machine no longer an asset, but it is safer.
Threat Actors
Threat actors are the individuals (or groups of individuals) who perform an attack or are
responsible for a security incident that impacts or has the potential of impacting an
organization or individual. There are several types of threat actors:
Script kiddies: People who use existing “scripts” or tools to hack into computers
and networks. They lack the expertise to write their own scripts.
Organized crime groups: Their main purpose is to steal information, scam people,
and make money.
State sponsors and governments: These agents are interested in stealing data,
including intellectual property and research-and-development data from major
manufacturers, government agencies, and defense contractors.
Hacktivists: People who carry out cyber security attacks aimed at promoting a
social or political cause.
Terrorist groups: These groups are motivated by political or religious beliefs.
Threat Intelligence
Threat intelligence refers to knowledge about an existing or emerging threat to assets,
including networks and systems. It encompasses information about the observables,
indicators of compromise (IoCs), intent, and capabilities of internal and external
threat actors and their attacks, along with context, mechanisms, implications, and
actionable advice. Threat intelligence includes specifics on the tactics, techniques,
and procedures of these adversaries. Its primary purpose is to inform business
decisions regarding the risks and implications associated with threats.
Put simply, threat intelligence is evidence-based knowledge of the capabilities of
internal and external threat actors. This type of data can be beneficial for the
security operations center (SOC) of any organization. Threat intelligence extends
cyber security awareness beyond the internal network by consuming intelligence from
other sources Internet-wide related to possible threats to you or your organization.
For instance, you can learn about threats that have impacted external organizations
and then proactively prepare, rather than react once the threat is seen against your
own network. Providing an enrichment data feed is one service that threat intelligence
platforms typically offer.
Forrester defines a five-step threat intelligence process (see Figure 3-5) for evaluating
threat intelligence sources:
Step 1. Planning and direction
Step 2. Collection
Step 3. Processing
Step 4. Analysis and production
Step 5. Dissemination
Figure 3-5 Threat Intelligence
Many different threat intelligence platforms and services are available in the market
nowadays. Cyber threat intelligence focuses on providing actionable information on
adversaries, including indicators of compromise (IoCs). Threat intelligence feeds help
you prioritize signals from internal systems against unknown threats. Cyber threat
intelligence allows you to bring more focus to cyber security investigation because
instead of blindly looking for “new” and “abnormal” events, you can search for specific
IoCs, IP addresses, URLs, or exploit patterns. The following are a few examples:
Cyber Squad ThreatConnect: An on-premises, private, or public cloud solution
offering threat data collection, analysis, collaboration, and expertise in a single
platform. You can obtain more details at http://www.threatconnect.com.
BAE Detica CyberReveal: A multithreat monitoring, analytics, investigation, and
response product. CyberReveal brings together BAE Systems Detica’s heritage in
network intelligence, big-data analytics, and cyber threat research. CyberReveal
consists of three core components: platform, analytics, and investigator. Learn more
at http://www.baesystems.com.
Lockheed Martin Palisade: Supports comprehensive threat collection, analysis,
collaboration, and expertise in a single platform. Learn more at
http://www.lockheedmartin.com.
MITRE CRITs: Collaborative Research Into Threats (CRITs) is an open source
feed for threat data. Learn more at https://crits.github.io.
Cisco AMP Threat Grid: Combines static and dynamic malware analysis with
threat intelligence into one unified solution.
A number of standards are being developed for disseminating threat intelligence
information. The following are a few examples:
Structured Threat Information eXpression (STIX): An expression language
designed for sharing cyber attack information. STIX details can contain data such
as the IP address of command-and-control servers (CnC), malware hashes, and so
on. STIX was originally developed by MITRE and is now maintained by OASIS.
You can obtain more information at http://stixproject.github.io.
Trusted Automated eXchange of Indicator Information (TAXII): An open
transport mechanism that standardizes the automated exchange of cyber threat
information. TAXII was originally developed by MITRE and is now maintained by
OASIS. You can obtain more information at http://taxiiproject.github.io.
Cyber Observable eXpression (CybOX): A free standardized schema for
specification, capture, characterization, and communication of events of stateful
properties that are observable in the operational domain. CybOX was originally
developed by MITRE and is now maintained by OASIS. You can obtain more
information at https://cyboxproject.github.io.
Open Indicators of Compromise (OpenIOC): An open framework for sharing
threat intelligence in a machine-digestible format. Learn more at
http://www.openioc.org.
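As a sketch of what STIX content looks like, here is a minimal Python function that builds a STIX 2.1-style indicator for a command-and-control IP address. The field names follow the STIX 2.1 specification, but this is a hand-rolled sketch; in practice you would use a library such as python-stix2 rather than building dictionaries by hand:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(ioc_ip: str) -> dict:
    """Build a minimal STIX 2.1-style indicator dict for a C2 IP address."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",     # STIX IDs are type--UUID
        "created": now,
        "modified": now,
        "pattern": f"[ipv4-addr:value = '{ioc_ip}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

# Serialized for exchange (e.g., over a TAXII channel):
bundle = json.dumps(make_indicator("198.51.100.1"), indent=2)
```

The pattern language shown (`[ipv4-addr:value = '…']`) is what lets consuming tools match the indicator against their own telemetry automatically.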
It should be noted that many open source and non-security-focused sources can be
leveraged for threat intelligence as well. Some examples of these sources are social
media, forums, blogs, and vendor websites.
Exploits
An exploit is software or a sequence of commands that takes advantage of a
vulnerability in order to cause harm to a system or network. There are several methods
of classifying exploits; however, the most common two categories are remote and local
exploits. A remote exploit can be launched over a network and carries out the attack
without any prior access to the vulnerable device or software. A local exploit requires
the attacker or threat actor to have prior access to the vulnerable system.
NOTE
Exploits are commonly categorized and named by the type of vulnerability
they exploit.
There is also the concept of exploit kits. An exploit kit is a compilation of exploits that
are often designed to be served from web servers. Their main purpose is identifying
software vulnerabilities in client machines and then exploiting such vulnerabilities to
upload and execute malicious code on the client. The following are a few examples of
known exploit kits:
Angler
MPack
Fiesta
Phoenix
Blackhole
Crimepack
RIG
NOTE
Cisco Talos has covered and explained numerous exploit kits in detail,
including Angler. You can obtain more information about these types of
threats at Talos’s blog, http://blog.talosintel.com, and specifically for
Angler at http://blog.talosintel.com/search/label/angler.
Confidentiality, Integrity, and Availability: The CIA Triad
Confidentiality, integrity, and availability are often referred to as the CIA triad. This is a
model that was created to define security policies. In some cases, you may also see this
model referred to as the AIC triad (availability, integrity, and confidentiality) to avoid
confusion with the United States Central Intelligence Agency.
The idea is that confidentiality, integrity, and availability should be guaranteed in any
system that is considered secure.
Confidentiality
The ISO 27000 standard has a very good definition: “confidentiality is the property that
information is not made available or disclosed to unauthorized individuals, entities, or
processes.” One of the most common ways to protect the confidentiality of a system or
its data is to use encryption. The Common Vulnerability Scoring System (CVSS) uses
the CIA triad principles within the metrics used to calculate the CVSS base score.
NOTE
You will learn more about CVSS throughout the following chapters, and
you can obtain more information about CVSS at:
https://www.first.org/cvss/specification-document
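To make the connection between the CIA metrics and the base score concrete, here is a minimal sketch of the CVSS v3 base-score arithmetic for Scope:Unchanged vectors, using the metric weights published in the CVSS v3 specification. It is deliberately simplified: the full specification also covers Scope:Changed and the temporal and environmental metric groups.

```python
import math

# Metric weights for Scope:Unchanged, from the CVSS v3 specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},   # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                        # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},             # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                        # User Interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},             # C/I/A impact
}

def roundup(value: float) -> float:
    """Round up to one decimal place, per the CVSS specification."""
    return math.ceil(value * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Impact Sub-Score combines the three CIA impact metrics.
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * (WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                             * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

Note how the C, I, and A impact metrics feed directly into the impact sub-score: a vulnerability that fully compromises all three legs of the triad produces the familiar 9.8 critical rating.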
Integrity
Integrity is the ability to make sure that a system and its data have not been altered or
compromised. It ensures that the data is an accurate and unchanged representation of the
original secure data. Integrity applies not only to data, but also to systems. For instance,
if a threat actor changes the configuration of a server, firewall, router, switch, or any
other infrastructure device, he or she has impacted the integrity of that system.
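A common way to detect such integrity violations is to record a cryptographic hash of the data as a baseline and compare it against the hash of the current data. A minimal sketch follows; the configuration text is hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of the data, used as an integrity baseline."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical firewall configuration; record a baseline at deployment time.
config = b"hostname fw-edge-01\npermit tcp any any eq 443\n"
baseline = fingerprint(config)

# Later, a threat actor alters the configuration.
tampered = config.replace(b"443", b"23")
print(fingerprint(tampered) == baseline)  # → False: the change is detected
```

Because any change to the input produces a different digest, even a one-character modification to a device configuration is detectable against the baseline.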
Availability
Availability means that a system or application must be “available” to authorized users
at all times. According to the CVSS version 3 specification, the availability metric
“measures the impact to the availability of the impacted component resulting from a
successfully exploited vulnerability. While the Confidentiality and Integrity impact
metrics apply to the loss of confidentiality or integrity of data (e.g., information, files)
used by the impacted component, this metric refers to the loss of availability of the
impacted component itself, such as a networked service (e.g., web, database, email).
Since availability refers to the accessibility of information resources, attacks that
consume network bandwidth, processor cycles, or disk space all impact the availability
of an impacted component.”
A common example of an attack that impacts availability is a denial of service (DoS)
attack.
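From a monitoring standpoint, availability can be verified by periodically probing whether a service still accepts connections. The following is a minimal sketch using a TCP connection test; the host and port in the usage comment are placeholders:

```python
import socket

def check_available(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the TCP service accepts a connection within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (result depends on the network and the target service):
# check_available("www.example.com", 443)
```

A successful DoS attack would cause probes like this to start failing, which is why availability monitoring is a standard part of security operations telemetry.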
Risk and Risk Analysis
According to the Merriam-Webster dictionary, risk is “the possibility that something bad
or unpleasant will happen.” In the world of cyber security, risk can be defined as the
possibility of a security incident (something bad) happening. There are many standards
and methodologies for classifying and analyzing cyber security risks. The Federal
Financial Institutions Examination Council (FFIEC) developed the Cybersecurity
Assessment Tool (Assessment) to help financial institutions identify their risks and
determine their cyber security preparedness. This guidance/tool can be useful for any
organization. The FFIEC tool provides a repeatable and measurable process for
organizations to measure their cyber security readiness.
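Before looking at the specific frameworks, the idea common to most of them can be sketched simply: qualitative risk analysis rates each scenario's likelihood and impact and combines them into a score. The 1-to-5 scales and bucket thresholds below are arbitrary illustrations, not values taken from FFIEC or ISO:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact ratings (each 1-5) into a single score."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_level(score: int) -> str:
    """Bucket a raw score into a qualitative level (illustrative thresholds)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a likely (4) incident with severe impact (5) is rated high risk.
print(risk_level(risk_score(4, 5)))  # → high
```

Real methodologies add much more structure, such as asset inventories, threat catalogs, and control effectiveness, but the likelihood-times-impact core is the same.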
According to the FFIEC, the assessment consists of two parts:
Inherent Risk Profile and Cybersecurity Maturity: The Inherent Risk Profile
identifies the institution’s inherent risk before implementing controls. The
Cybersecurity Maturity includes domains, assessment factors, components, and
individual declarative statements across five maturity levels to identify specific
controls and practices that are in place. Although management can determine the
institution’s maturity level in each domain, the Assessment is not designed to
identify an overall cyber security maturity level.
The International Organization for Standardization (ISO) 27001: This is the
international standard for implementing an information security management system
(ISMS). ISO 27001 is heavily focused on risk-based planning to ensure that the
identified information risks (including cyber risks) are appropriately managed
according to the threats and the nature of those threats. ISO 31000 is the general risk
management standard that includes principles and guidelines for managing risk. It
can be used by any organization, regardless of its size, activity, or sector. Using ISO
31000 can help organizations increase the likelihood of achieving objectives,
improve the identification of opportunities and threats, and effectively allocate and
use resources for risk treatment.
The ISO/IEC 27005 standard is more focused on cyber security risk assessment. It
is titled “Information technology—Security techniques—Information security risk
management.”
The following is according to ISO’s website:
“The standard doesn’t specify, recommend or even name any specific risk
management method. It does however imply a continual process consisting of a
structured sequence of activities, some of which are iterative:
Establish the risk management context (e.g. the scope, compliance obligations,
approaches/methods to be used and relevant policies and criteria such as the
organization’s risk tolerance or appetite);
Quantitatively or qualitatively assess (i.e. identify, analyze and evaluate)
relevant information risks, taking into account the information assets, threats,
existing controls and vulnerabilities to determine the likelihood of incidents or
incident scenarios, and the predicted business consequences if they were to
occur, to det