Computer Communications and Networks

Joseph Migga Kizza

Guide to Computer Network Security

Third Edition
Computer Communications and Networks
The Computer Communications and Networks series is a range of textbooks, monographs and handbooks. It sets out to provide students, researchers, and non-specialists alike with a sure grounding in current knowledge, together with comprehensible access to the latest developments in computer communications and networking.

Emphasis is placed on clear and explanatory styles that support a tutorial approach, so that even the most complex of topics is presented in a lucid and intelligible manner.
More information about this series at http://www.springer.com/series/4198
Joseph Migga Kizza
Guide to Computer Network Security
Third Edition
Joseph Migga Kizza
Department of Computer Science
University of Tennessee
Chattanooga, TN, USA
Series Editor
A.J. Sammes
Centre for Forensic Computing
Cranfield University, Shrivenham campus
Swindon, UK
ISSN 1617-7975
Computer Communications and Networks
ISBN 978-1-4471-6653-5    ISBN 978-1-4471-6654-2 (eBook)
DOI 10.1007/978-1-4471-6654-2
Library of Congress Control Number: 2014959827
Springer London Heidelberg New York Dordrecht
© Springer-Verlag London 2009, 2013, 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, express or implied, with respect to the material contained herein or for any errors
or omissions that may have been made.
Printed on acid-free paper
Springer-Verlag London Ltd. is part of Springer Science+Business Media (www.springer.com)
Preface to Third Edition
The second edition of this book came out barely two years ago, and we are already in need of a new and improved third edition. Such a rapid turnaround of editions of a successful book is indicative of the rapidly changing technology landscape. To keep the promise we made to our readers in the first edition, that of keeping the book's material as up to date as possible, we have now embarked on this third edition.
First, recall that in the second edition we introduced the concept of a changing traditional computer network as we knew it when the first edition of this book came out. That network, with a nicely "demarcated" and heavily defended perimeter wall and well-guarded access points, has been undergoing a transformation as a result of new technologies. Changes have occurred, as we pointed out in the second edition, within and outside the network we call the "traditional computer network": at the server and, most importantly, at the boundaries. A virtualized and elastic network, with rapid extensions at will, is taking its place to meet the growing needs of users. The new technologies driving this change, for now, are system resource virtualization, the evolving cloud computing models, and a growing and unpredictable mobile computing technology, all creating platforms that demand new extensions, on the fly and at will, to the traditional computer network. Secondly, the rapidly merging computing and telecommunication technologies, which we started discussing in the first edition and continued through the second, are steadily dissolving the traditional computer network as mobile and home devices slowly become part of the enterprise while at the same time remaining in their traditional public commons, thus creating unpredictable and hard-to-defend enterprise and home networks. When you consider that a small mobile device can now connect to a private enterprise network under BYOD policies, serve as a home network device, and at the same time remain connected to networks in public commons, you begin to get an image of the "anywhere and everywhere" computing network: a global sprawl of networks within networks and, indeed, networks on demand.
The ubiquitous nature of these new computing networks is creating new and uncharted territories with security nightmares. What is more worrying is that, along with the sprawl, all types of characters are joining en masse this new but rapidly changing technological "ecosystem", for lack of a better word.
For these reasons, we need to remain vigilant, with better, if not advanced, computer and information security protocols and best practices, because the frequency of attacks on computing and mobile systems, and the vulnerability of these systems, will likely not decline; rather, both are likely to increase. More effort in developing adaptive and scalable security protocols and best practices, together with massive awareness campaigns, is therefore needed to meet this growing challenge and bring the public to a level where they can be active and safe participants in the brave new worlds of computing.
This guide is a comprehensive volume that not only touches on every major topic in computing and information security and assurance, but also goes beyond the security of computer networks as we used to know them, to embrace the new and more agile mobile systems and the online social networks that are weaving themselves into our everyday fabric, if they have not done so already. We bring into our ongoing discussion of computer network security a broader view of the new wireless and mobile systems and online social networks. As with previous editions, the guide is intended to bring massive security awareness and education to the security realities of our time, a time when billions of people, from the remotest places on earth to the most cosmopolitan world cities, are using the smartest, smallest, and most powerful mobile devices, loaded with the most fascinating and worrisome functionalities ever known, to interconnect via a mesh of elastic computing networks. We highlight security issues and concerns in these public commons and private bedrooms the globe over.
The volume ventures into and exposes all sorts of known security problems, vulnerabilities, and dangers likely to be encountered by the users of these devices. In its own way, it is a pathfinder, as it initiates a conversation toward developing better algorithms, protocols, and best practices that will enhance the security of systems in the public commons, in private and enterprise offices, and in the living rooms and bedrooms where these devices are used. It does this comprehensively in six parts and 25 chapters. Part I gives the reader an understanding of the workings and the security situation of the traditional computer network. Part II builds on this knowledge and exposes the reader to the prevailing security situation, based on a constant security threat; it surveys several security threats. Part III, the largest, forms the core of the guide and presents most of the best practices and solutions currently in use. Part IV goes beyond the traditional computer network as we used to know it, to cover the new systems and technologies that have seamlessly and stealthily extended its boundaries; systems and technologies such as virtualization, cloud computing, and mobile systems are introduced and discussed. A new Part V ventures into the last mile, looking at the new security quagmire of the home computing environment and the growing home hotspots. Part VI, the last part, consists of projects.
As usual, in summary, the guide attempts to achieve the following objectives:
• Educate the public about computer security in the traditional computer network.
• Educate the public about the evolving computing ecosystem created by the eroding boundaries between the enterprise network, the home network, and the rapidly growing public-commons-based social networks, all extending the functionalities of the traditional computer network.
• Alert the public to the magnitude of the vulnerabilities, weaknesses, and loopholes inherent in the traditional computer network and now resident in the new computing ecosystem.
• Bring to the public's attention effective security solutions and best practices, expert opinions on those solutions, and the possibility of ad hoc solutions.
• Look at the roles legislation, regulation, and enforcement play in securing the new computing ecosystem.
• Finally, initiate a debate on developing effective and comprehensive security algorithms, protocols, and best practices for the new computing ecosystem.
Since the guide covers a wide variety of security topics, algorithms, solutions, and best practices, it is intended to be both a teaching and a reference tool for those interested in learning about the security of the evolving computing ecosystem and about the techniques available to prevent attacks on these systems. The depth and thoroughness of the discussion and analysis of most of the security issues of the traditional computer network and of the extending technologies and systems, together with the security algorithms and solutions given, make the guide a unique reference source of ideas for computer network and data security personnel, network security policy makers, and those reading for leisure. In addition, the guide provokes the reader by raising valid legislative, legal, social, technical, and ethical security issues, including the increasingly diminishing line between individual privacy and the need for collective and individual security in the new computing ecosystem.
The guide targets college students in computer science, information science, technology studies, library sciences, and engineering, and, to a lesser extent, students in the arts and sciences who are interested in information technology. In addition, students in information management sciences will find the guide particularly helpful. Practitioners, especially those working in data- and information-intensive areas, will likewise find the guide a good reference source. It will also be valuable to those interested in any aspect of information security and assurance and to those simply wanting to become cyberspace literate.
Book Resources
There are two types of exercises at the end of each chapter: easy and quickly workable exercises, whose responses can be readily spotted in the preceding text; and more thought-provoking advanced exercises, whose responses may require research outside the content of this book. In addition, Chap. 25 is devoted to lab exercises. There are three types of lab exercises: weekly or biweekly assignments that can be done easily with either reading or readily available software and hardware tools; slightly harder semester-long projects that may require extensive time, collaboration, and some research to finish successfully; and hard open-research projects that require a lot of thinking, take a lot of time, and require extensive research. Links are provided below for cryptography and mobile security hands-on projects from two successful National Science Foundation (NSF)-funded workshops at the author's university.
• Teaching Cryptography Using Hands-on Labs and Case Studies – http://web2.utc.edu/~djy471/cryptography/crypto.htm
• Capacity Building Through Curriculum and Faculty Development on Mobile Security – http://www.utc.edu/faculty/li-yang/mobilesecurity.php
We have tried as much as possible, throughout the guide, to use open source software tools. This has two benefits: first, it makes the guide affordable, keeping in mind the escalating prices of proprietary software; and second, it makes the content and related software tools last longer, because the content and the corresponding exercises and labs are not based on one particular proprietary software tool that can disappear at any time.
Instructor Support Materials
As you consider using this book, you may want to know that we have developed materials to help you with your course. The help materials for both instructors and students cover the following areas:
• Syllabus. There is a suggested syllabus for the instructor.
• Instructor PowerPoint slides. These are detailed enough to help the instructor,
especially those teaching the course for the first time.
• Answers to selected exercises at the end of each chapter.
• Laboratory. Since network security is a hands-on course, students need to spend a considerable amount of time on scheduled laboratory exercises. The last chapter of the book contains several laboratory exercises and projects, and the book resource center contains several more, along with updates. As stated above, links are also included at the author's website for cryptography hands-on projects from two successful National Science Foundation (NSF)-funded workshops at the author's university.
These materials can be found at the publisher's website at http://www.springer.com/978-1-4471-6653-5 and at the author's website at http://www.utc.edu/Faculty/Joseph-Kizza/
Chattanooga, TN, USA
June 2014
Joseph Migga Kizza
Contents
Part I
1
Introduction to Computer Network Security
Computer Network Fundamentals ........................................................
1.1
Introduction ..................................................................................
1.2
Computer Network Models ..........................................................
1.3
Computer Network Types ............................................................
1.3.1
Local Area Networks (LANs) ......................................
1.3.2
Wide Area Networks (WANs) ......................................
1.3.3
Metropolitan Area Networks (MANs) .........................
1.4
Data Communication Media Technology ....................................
1.4.1
Transmission Technology .............................................
1.4.2
Transmission Media .....................................................
1.5
Network Topology........................................................................
1.5.1
Mesh .............................................................................
1.5.2
Tree ...............................................................................
1.5.3
Bus ................................................................................
1.5.4
Star ................................................................................
1.5.5
Ring ..............................................................................
1.6
Network Connectivity and Protocols ...........................................
1.6.1
Open System Interconnection (OSI)
Protocol Suite ...............................................................
1.6.2
Transport Control Protocol/Internet Protocol
(TCP/IP) Model ............................................................
1.7
Network Services .........................................................................
1.7.1
Connection Services .....................................................
1.7.2
Network Switching Services ........................................
1.8
Network Connecting Devices.......................................................
1.8.1
LAN Connecting Devices ............................................
1.8.2
Internetworking Devices ..............................................
3
3
4
5
6
6
6
7
7
10
13
13
14
14
15
15
17
18
19
22
23
24
26
26
30
ix
x
2
Contents
1.9
Network Technologies..................................................................
1.9.1
LAN Technologies .......................................................
1.9.2
WAN Technologies.......................................................
1.9.3
Wireless LANs .............................................................
1.10 Conclusion....................................................................................
References .................................................................................................
34
34
36
38
39
40
Computer Network Security Fundamentals ........................................
2.1
Introduction ..................................................................................
2.1.1
Computer Security........................................................
2.1.2
Network Security..........................................................
2.1.3
Information Security ....................................................
2.2
Securing the Computer Network ..................................................
2.2.1
Hardware ......................................................................
2.2.2
Software .......................................................................
2.3
Forms of Protection ......................................................................
2.3.1
Access Control .............................................................
2.3.2
Authentication ..............................................................
2.3.3
Confidentiality ..............................................................
2.3.4
Integrity ........................................................................
2.3.5
Nonrepudiation .............................................................
2.4
Security Standards........................................................................
2.4.1
Security Standards Based on Type
of Service/Industry .......................................................
2.4.2
Security Standards Based on Size/Implementation......
2.4.3
Security Standards Based on Interests .........................
2.4.4
Security Best Practices .................................................
References .................................................................................................
41
41
43
43
43
44
44
44
44
45
46
46
47
47
48
Part II
3
49
52
52
53
57
Security Issues and Challenges in the Traditional
Computer Network
Security Motives and Threats to Computer Networks ........................
3.1
Introduction ..................................................................................
3.2
Sources of Security Threats .........................................................
3.2.1
Design Philosophy........................................................
3.2.2
Weaknesses in Network Infrastructure
and Communication Protocols .....................................
3.2.3
Rapid Growth of Cyberspace .......................................
3.2.4
The Growth of the Hacker Community ........................
3.2.5
Vulnerability in Operating System Protocol ................
3.2.6
The Invisible Security Threat: The Insider Effect ........
3.2.7
Social Engineering .......................................................
3.2.8
Physical Theft ...............................................................
61
61
62
62
63
66
67
77
77
78
78
Contents
xi
3.3
Security Threat Motives ...............................................................
3.3.1
Terrorism ......................................................................
3.3.2
Military Espionage .......................................................
3.3.3
Economic Espionage ....................................................
3.3.4
Targeting the National Information Infrastructure .......
3.3.5
Vendetta/Revenge .........................................................
3.3.6
Hate (National Origin, Gender, and Race) ...................
3.3.7
Notoriety.......................................................................
3.3.8
Greed ...........................................................................
3.3.9
Ignorance ......................................................................
3.4
Security Threat Management .......................................................
3.4.1
Risk Assessment ...........................................................
3.4.2
Forensic Analysis .........................................................
3.5
Security Threat Correlation ..........................................................
3.5.1
Threat Information Quality ..........................................
3.6
Security Threat Awareness ...........................................................
References .................................................................................................
78
79
79
79
80
80
81
81
81
81
81
82
82
82
83
83
85
Introduction to Computer Network Vulnerabilities ............................
4.1
Definition......................................................................................
4.2
Sources of Vulnerabilities ............................................................
4.2.1
Design Flaws ................................................................
4.2.2
Poor Security Management ..........................................
4.2.3
Incorrect Implementation .............................................
4.2.4
Internet Technology Vulnerability ................................
4.2.5
Changing Nature of Hacker Technologies
and Activities ................................................................
4.2.6
Difficulty of Fixing Vulnerable Systems ......................
4.2.7
Limits of Effectiveness of Reactive Solutions..............
4.2.8
Social Engineering .......................................................
4.3
Vulnerability Assessment .............................................................
4.3.1
Vulnerability Assessment Services...............................
4.3.2
Advantages of Vulnerability Assessment Services .......
References .................................................................................................
87
87
87
88
91
92
93
96
97
98
99
100
101
102
103
Cyber Crimes and Hackers ....................................................................
5.1
Introduction ..................................................................................
5.2
Cyber Crimes ...............................................................................
5.2.1
Ways of Executing Cyber Crimes ................................
5.2.2
Cyber Criminals ...........................................................
5.3
Hackers .........................................................................................
5.3.1
History of Hacking .......................................................
5.3.2
Types of Hackers ..........................................................
5.3.3
Hacker Motives ............................................................
5.3.4
Hacking Topologies ......................................................
105
105
106
107
109
110
110
113
116
119
4
5
xii
Contents
5.3.5
Hackers’ Tools of System Exploitation ........................
5.3.6
Types of Attacks ...........................................................
5.4
Dealing with the Rising Tide of Cyber Crimes ............................
5.4.1
Prevention .....................................................................
5.4.2
Detection ......................................................................
5.4.3
Recovery.......................................................................
5.5
Conclusion....................................................................................
References .................................................................................................
6
7
Scripting and Security in Computer Networks
and Web Browsers...................................................................................
6.1
Introduction ..................................................................................
6.2
Scripting .......................................................................................
6.3
Scripting Languages .....................................................................
6.3.1
Server-Side Scripting Languages .................................
6.3.2
Client-Side Scripting Languages..................................
6.4
Scripting in Computer Network ...................................................
6.4.1
Introduction to the Common Gateway
Interface (CGI) .............................................................
6.4.2
Server-Side Scripting: The CGI Interface ....................
6.5
Computer Network Scripts and Security......................................
6.5.1
CGI Script Security ......................................................
6.5.2
JavaScript and VBScript Security ................................
6.5.3
Web Scripts Security ....................................................
6.6
Dealing with the Script Security Problems ..................................
References .................................................................................................
Security Assessment, Analysis, and Assurance .....................................
7.1
Introduction ..................................................................................
7.2
System Security Policy ................................................................
7.3
Building a Security Policy ...........................................................
7.3.1
Security Policy Access Rights Matrix ..........................
7.3.2
Policy and Procedures ..................................................
7.4
Security Requirements Specification ...........................................
7.5
Threat Identification .....................................................................
7.5.1
Human Factors .............................................................
7.5.2
Natural Disasters ..........................................................
7.5.3
Infrastructure Failures ..................................................
7.6
Threat Analysis.............................................................................
7.6.1
Approaches to Security Threat Analysis ......................
7.7
Vulnerability Identification and Assessment ................................
7.7.1
Hardware ......................................................................
7.7.2
Software .......................................................................
7.7.3
Humanware ..................................................................
7.7.4
Policies, Procedures, and Practices ..............................
123
126
127
127
128
128
128
129
131
131
131
132
132
133
134
135
137
139
139
141
142
142
143
145
145
147
149
149
151
155
156
156
157
157
159
160
161
161
162
163
163
Contents
xiii
7.8
164
165
165
166
166
167
167
168
168
169
Security Certification ...................................................................
7.8.1
Phases of a Certification Process..................................
7.8.2
Benefits of Security Certification .................................
7.9
Security Monitoring and Auditing ...............................................
7.9.1
Monitoring Tools ..........................................................
7.9.2
Type of Data Gathered .................................................
7.9.3
Analyzed Information ..................................................
7.9.4
Auditing........................................................................
7.10 Products and Services ..................................................................
References .................................................................................................
Part III
Dealing with Computer Network Security Challenges
8
Disaster Management .............................................................................
8.1
Introduction ..................................................................................
8.1.1
Categories of Disasters .................................................
8.2
Disaster Prevention ......................................................................
8.3
Disaster Response ........................................................................
8.4
Disaster Recovery ........................................................................
8.4.1
Planning for a Disaster Recovery .................................
8.4.2
Procedures of Recovery................................................
8.5
Make Your Business Disaster Ready ............................................
8.5.1
Always Be Ready for a Disaster...................................
8.5.2
Always Backup Media .................................................
8.5.3
Risk Assessment ...........................................................
8.6
Resources for Disaster Planning and Recovery ...........................
8.6.1
Local Disaster Resources .............................................
References .................................................................................................
173
173
174
175
177
177
178
179
181
181
182
182
182
182
184
9
Access Control and Authorization .........................................................
9.1
Definitions ....................................................................................
9.2
Access Rights ...............................................................................
9.2.1
Access Control Techniques and Technologies .............
9.3
Access Control Systems ...............................................................
9.3.1
Physical Access Control ...............................................
9.3.2
Access Cards ................................................................
9.3.3
Electronic Surveillance ................................................
9.3.4
Biometrics ....................................................................
9.3.5
Event Monitoring .........................................................
9.4
Authorization................................................................................
9.4.1
Authorization Mechanisms ..........................................
9.5
Types of Authorization Systems...................................................
9.5.1
Centralized ...................................................................
9.5.2
Decentralized................................................................
9.5.3
Implicit .........................................................................
9.5.4
Explicit .........................................................................
185
185
186
187
192
192
192
193
194
197
197
198
199
199
199
200
200
xiv
Contents
9.6
Authorization Principles...............................................................
9.6.1
Least Privileges ............................................................
9.6.2
Separation of Duties .....................................................
9.7
Authorization Granularity ............................................................
9.7.1
Fine Grain Authorization..............................................
9.7.2
Coarse Grain Authorization..........................................
9.8
Web Access and Authorization.....................................................
References .................................................................................................
200
201
201
201
202
202
202
204
10
Authentication .........................................................................................
10.1 Definition......................................................................................
10.2 Multiple Factors and Effectiveness of Authentication .................
10.3 Authentication Elements ..............................................................
10.3.1 Person or Group Seeking Authentication .....................
10.3.2 Distinguishing Characteristics for Authentication .......
10.3.3 The Authenticator .........................................................
10.3.4 The Authentication Mechanism ...................................
10.3.5 Access Control Mechanism..........................................
10.4 Types of Authentication ...............................................................
10.4.1 Nonrepudiable Authentication......................................
10.4.2 Repudiable Authentication ...........................................
10.5 Authentication Methods ...............................................................
10.5.1 Password Authentication ..............................................
10.5.2 Public-Key Authentication ...........................................
10.5.3 Remote Authentication .................................................
10.5.4 Anonymous Authentication ..........................................
10.5.5 Digital Signature-Based Authentication .......................
10.5.6 Wireless Authentication ...............................................
10.6 Developing an Authentication Policy ...........................................
References .................................................................................................
205
205
206
208
208
209
209
209
210
210
210
211
211
212
214
218
219
220
220
221
223
11
Cryptography ..........................................................................................
11.1 Definition......................................................................................
11.1.1 Block Ciphers ...............................................................
11.2 Symmetric Encryption .................................................................
11.2.1 Symmetric Encryption Algorithms...............................
11.2.2 Problems with Symmetric Encryption .........................
11.3 Public-Key Encryption .................................................................
11.3.1 Public-Key Encryption Algorithms ..............................
11.3.2 Problems with Public-Key Encryption .........................
11.3.3 Public-Key Encryption Services ..................................
11.4 Enhancing Security: Combining Symmetric
and Public-Key Encryptions.........................................................
225
225
227
228
229
231
232
234
234
235
235
Contents
Key Management: Generation, Transportation,
and Distribution ............................................................................
11.5.1 The Key Exchange Problem .........................................
11.5.2 Key Distribution Centers (KDCs) ................................
11.5.3 Public-Key Management ..............................................
11.5.4 Key Escrow ..................................................................
11.6 Public-Key Infrastructure (PKI) ...................................................
11.6.1 Certificates....................................................................
11.6.2 Certificate Authority .....................................................
11.6.3 Registration Authority (RA) .........................................
11.6.4 Lightweight Directory Access Protocols (LDAP) ........
11.6.5 Role of Cryptography in Communication ....................
11.7 Hash Function ..............................................................................
11.8 Digital Signatures .........................................................................
References .................................................................................................
xv
11.5
12
13
235
236
237
238
242
242
243
243
243
244
244
244
245
247
Firewalls ...................................................................................................
12.1 Definition......................................................................................
12.2 Types of Firewalls ........................................................................
12.2.1 Packet Inspection Firewalls ..........................................
12.2.2 Application Proxy Server: Filtering
Based on Known Services ............................................
12.2.3 Virtual Private Network (VPN) Firewalls ....................
12.2.4 Small Office or Home (SOHO) Firewalls ....................
12.3 Configuration and Implementation of a Firewall .........................
12.4 The Demilitarized Zone (DMZ) ...................................................
12.4.1 Scalability and Increasing Security in a DMZ .............
12.5 Improving Security Through the Firewall ....................................
12.6 Firewall Forensics ........................................................................
12.7 Firewall Services and Limitations ................................................
12.7.1 Firewall Services ..........................................................
12.7.2 Limitations of Firewalls ...............................................
References .................................................................................................
249
249
252
253
System Intrusion Detection and Prevention .........................................
13.1 Definition......................................................................................
13.2 Intrusion Detection .......................................................................
13.2.1 The System Intrusion Process ......................................
13.2.2 The Dangers of System Intrusions ...............................
13.3 Intrusion Detection Systems (IDSs) .............................................
13.3.1 Anomaly Detection ......................................................
13.3.2 Misuse Detection ..........................................................
13.4 Types of Intrusion Detection Systems..........................................
13.4.1 Network-Based Intrusion Detection
Systems (NIDSs) ..........................................................
273
273
273
274
275
276
277
279
280
258
261
263
263
265
267
267
269
269
269
270
271
280
xvi
Contents
13.4.2 Host-Based Intrusion Detection Systems (HIDS) ........
13.4.3 The Hybrid Intrusion Detection System.......................
13.5 The Changing Nature of IDS Tools..............................................
13.6 Other Types of Intrusion Detection Systems................................
13.6.1 System Integrity Verifiers (SIVs) .................................
13.6.2 Log File Monitors (LFM).............................................
13.6.3 Honeypots.....................................................................
13.7 Response to System Intrusion ......................................................
13.7.1 Incident Response Team ...............................................
13.7.2 IDS Logs as Evidence ..................................................
13.8 Challenges to Intrusion Detection Systems..................................
13.8.1 Deploying IDS in Switched Environments ..................
13.9 Implementing an Intrusion Detection System ..............................
13.10 Intrusion Prevention Systems (IPSs) ............................................
13.10.1 Network-Based Intrusion Prevention
Systems (NIPSs)...........................................................
13.10.2 Host-Based Intrusion Prevention Systems (HIPSs) .....
13.11 Intrusion Detection Tools .............................................................
References .................................................................................................
285
287
287
288
288
288
288
290
290
291
291
292
292
293
14
Computer and Network Forensics.........................................................
14.1 Definition......................................................................................
14.2 Computer Forensics .....................................................................
14.2.1 History of Computer Forensics ....................................
14.2.2 Elements of Computer Forensics .................................
14.2.3 Investigative Procedures ...............................................
14.2.4 Analysis of Evidence....................................................
14.3 Network Forensics........................................................................
14.3.1 Intrusion Analysis.........................................................
14.3.2 Damage Assessment .....................................................
14.4 Forensics Tools .............................................................................
14.4.1 Computer Forensics Tools ............................................
14.4.2 Network Forensics Tools ..............................................
References .................................................................................................
299
299
300
301
301
302
309
315
316
320
321
321
323
324
15
Virus and Content Filtering ...................................................................
15.1 Definitions ....................................................................................
15.2 Scanning, Filtering, and Blocking................................................
15.2.1 Content Scanning .........................................................
15.2.2 Inclusion Filtering ........................................................
15.2.3 Exclusion Filtering .......................................................
15.2.4 Other Types of Content Filtering..................................
15.2.5 Location of Content Filters ..........................................
15.3 Virus Filtering ..............................................................................
15.3.1 Viruses ..........................................................................
325
325
325
326
326
327
327
329
330
330
293
295
295
298
Contents
15.4
Content Filtering ..........................................................................
15.4.1 Application-Level Filtering ..........................................
15.4.2 Packet-Level Filtering and Blocking ............................
15.4.3 Filtered Material ...........................................................
15.5 Spam.............................................................................................
References .................................................................................................
16
17
Standardization and Security Criteria: Security
Evaluation of Computer Products .........................................................
16.1 Introduction ..................................................................................
16.2 Product Standardization ...............................................................
16.2.1 Need for Standardization of (Security) Products .........
16.2.2 Common Computer Product Standards ........................
16.3 Security Evaluations.....................................................................
16.3.1 Purpose of Security Evaluation ....................................
16.3.2 Security Evaluation Criteria .........................................
16.3.3 Basic Elements of an Evaluation ..................................
16.3.4 Outcome/Benefits .........................................................
16.4 Major Security Evaluation Criteria ..............................................
16.4.1 Common Criteria (CC) .................................................
16.4.2
FIPS ..............................................................................
16.4.3 The Orange Book/TCSEC ............................................
16.4.4 Information Technology Security Evaluation
Criteria (ITSEC) ...........................................................
16.4.5 The Trusted Network Interpretation (TNI):
The Red Book ...............................................................
16.5 Does Evaluation Mean Security? .................................................
References .................................................................................................
Computer Network Security Protocols .................................................
17.1 Introduction ..................................................................................
17.2 Application Level Security...........................................................
17.2.1 Pretty Good Privacy (PGP) ..........................................
17.2.2 Secure/Multipurpose Internet Mail Extension
(S/MIME) .....................................................................
17.2.3 Secure HTTP (S-HTTP) ...............................................
17.2.4 Hypertext Transfer Protocol over Secure
Socket Layer (HTTPS) .................................................
17.2.5 Secure Electronic Transactions (SET)..........................
17.2.6 Kerberos .......................................................................
17.3 Security in the Transport Layer ....................................................
17.3.1 Secure Socket Layer (SSL) ..........................................
17.3.2 Transport Layer Security (TLS) ...................................
17.4 Security in the Network Layer .....................................................
17.4.1 Internet Protocol Security (IPSec)................................
17.4.2 Virtual Private Networks (VPN) ..................................
xvii
337
337
339
340
341
343
345
345
346
346
347
348
348
348
349
349
351
351
352
352
355
355
356
357
359
359
360
360
362
363
366
367
369
371
372
375
376
376
380
xviii
Contents
17.5
Security in the Link Layer and over LANS .................................
17.5.1 Point-to-Point Protocol (PPP) ......................................
17.5.2 Remote Authentication Dial-In User Service
(RADIUS) ....................................................................
17.5.3 Terminal Access Controller Access Control
System (TACACS+) .....................................................
References .................................................................................................
384
385
387
388
18
Security in Wireless Networks and Devices ..........................................
18.1 Introduction ..................................................................................
18.2 Types of Wireless Broadband Networks ......................................
18.2.1 Wireless Personal Area Network (WPAN) ...................
18.2.2 Wireless Local Area Networks (WLAN) (Wi-Fi) ........
18.2.3 WiMAX LAN...............................................................
18.2.4 Mobile Cellular Network .............................................
18.3 Development of Cellular Technology ..........................................
18.3.1 First Generation ............................................................
18.3.2 Second Generation .......................................................
18.3.3 Third Generation ..........................................................
18.3.4 Fourth Generation: 4G/LTE .........................................
18.4 Other Features of Mobile Cellular Technology............................
18.4.1 Universality ..................................................................
18.4.2 Flexibility .....................................................................
18.4.3 Quality of Service (QoS) ..............................................
18.4.4 Service Richness ..........................................................
18.4.5 Mobile Cellular Security Protocol Stack......................
18.5 Security Vulnerabilities in Cellular Wireless Networks ...............
18.5.1 WLANs Security Concerns ..........................................
18.5.2 Best Practices for Wi-Fi Security .................................
References .................................................................................................
391
391
392
392
394
395
401
405
405
405
406
407
407
407
408
408
408
408
411
411
416
419
19
Security in Sensor Networks ..................................................................
19.1 Introduction ..................................................................................
19.2 The Growth of Sensor Networks ..................................................
19.3 Design Factors in Sensor Networks .............................................
19.3.1 Routing .........................................................................
19.3.2 Power Consumption .....................................................
19.3.3 Fault Tolerance .............................................................
19.3.4 Scalability .....................................................................
19.3.5 Production Costs ..........................................................
19.3.6 Nature of Hardware Deployed .....................................
19.3.7 Topology of Sensor Networks ......................................
19.3.8 Transmission Media .....................................................
19.4 Security in Sensor Networks ........................................................
19.4.1 Security Challenges ......................................................
19.4.2 Sensor Network Vulnerabilities and Attacks ................
19.4.3 Securing Sensor Networks ...........................................
421
421
422
423
424
426
426
426
426
426
427
427
427
427
428
430
386
Contents
xix
19.5
20
Security Mechanisms and Best Practices for
Sensor Networks ..........................................................................
19.6 Trends in Sensor Network Security Research ..............................
19.6.1 Cryptography................................................................
19.6.2 Key Management .........................................................
19.6.3 Confidentiality, Authentication, and Freshness ............
19.6.4 Resilience to Capture ...................................................
References .................................................................................................
431
432
432
433
434
434
435
Other Efforts to Secure Data in Computer Networks .........................
20.1 Introduction ..................................................................................
20.2 Legislation ....................................................................................
20.3 Regulation ....................................................................................
20.4 Self-Regulation ............................................................................
20.4.1 Hardware-Based Self-Regulation .................................
20.4.2 Software-Based Self-Regulation ..................................
20.5 Education......................................................................................
20.5.1 Focused Education .......................................................
20.5.2 Mass Education ............................................................
20.6 Reporting Centers.........................................................................
20.7 Market Forces...............................................................................
20.8 Activism .......................................................................................
20.8.1 Advocacy ......................................................................
20.8.2 Hotlines ........................................................................
References .................................................................................................
437
437
437
438
438
439
439
440
441
441
442
442
443
443
443
445
Part IV
21
Elastic Extension Beyond the Traditional Computer
Network: Virtualization, Cloud Computing
and Mobile Systems
Cloud Computing and Related Security Issues ....................................
21.1 Introduction ..................................................................................
21.2 Cloud Computing Infrastructure Characteristics .........................
21.3 Cloud Computing Service Models ...............................................
21.3.1 Three Features of SaaS Applications ...........................
21.4 Cloud Computing Deployment Models .......................................
21.5 Virtualization and Cloud Computing ...........................................
21.6 Benefits of Cloud Computing.......................................................
21.7 Cloud Computing, Power Consumption,
and Environmental Issues.............................................................
21.8 Cloud Computing Security, Reliability, Availability,
and Compliance Issues .................................................................
21.8.1 Cloud Computing Actors, Their Roles,
and Responsibilities......................................................
21.8.2 Security of Data and Applications in the Cloud ...........
449
449
450
452
453
453
454
455
457
458
459
461
xx
Contents
21.8.3
Security of Data in Transition: Cloud Security Best
Practices .......................................................................
21.8.4 Service-Level Agreements (SLAs) ...............................
21.8.5 Data Encryption............................................................
21.8.6 Web Access Points Security .........................................
21.8.7 Compliance...................................................................
References .................................................................................................
22
Virtualization Security ...........................................................................
22.1 Introduction ..................................................................................
22.2 History of Virtualization...............................................................
22.3 Virtualization Terminologies ........................................................
22.3.1 Host CPU/Guest CPU ..................................................
22.3.2 Host OS/Guest OS........................................................
22.3.3 Hypervisor ....................................................................
22.3.4 Emulation .....................................................................
22.4 Types of Computing System Virtualization .................................
22.4.1 Platform Virtualization .................................................
22.4.2 Network Virtualization .................................................
22.4.3 Storage Virtualization ...................................................
22.4.4 Application Virtualization ............................................
22.5 The Benefits of Virtualization ......................................................
22.5.1 Reduction of Server Sprawl .........................................
22.5.2 Conservation of Energy ................................................
22.5.3 Reduced IT Management Costs ...................................
22.5.4 Better Disaster Recovery Management ........................
22.5.5 Software Development Testing and Verification ..........
22.5.6 Isolation of Legacy Applications ..................................
22.5.7 Cross-Platform Support ................................................
22.5.8 Minimizing Hardware Costs ........................................
22.5.9 Faster Server Provisioning ...........................................
22.5.10 Better Load Balancing..................................................
22.5.11 Reduce the Data Center Footprint ................................
22.5.12 Increase Uptime............................................................
22.5.13 Isolate Applications ......................................................
22.5.14 Extend the Life of Older Applications .........................
22.6 Virtualization Infrastructure Security ...........................................
22.6.1 Hypervisor Security......................................................
22.6.2 Securing Communications Between Desktop
and Virtual Infrastructure .............................................
22.6.3 Security of Communication Between
Virtual Machines ..........................................................
22.6.4 Threats and Vulnerabilities Originating
from a VM ....................................................................
References .................................................................................................
467
468
468
468
468
471
473
473
474
475
475
475
475
476
476
476
479
484
484
484
484
485
485
485
485
485
486
486
486
486
486
487
487
487
487
488
488
489
489
490
Contents
23
Mobile Systems and Corresponding Intractable Security Issues .......
23.1 Introduction ..................................................................................
23.2 Current Major Mobile Operating Systems ...................................
23.2.1 Android.........................................................................
23.2.2
iOS................................................................................
23.2.3 Windows Phone 7.5 ......................................................
23.2.4 Bada (Samsung) ...........................................................
23.2.5 BlackBerry OS/RIM.....................................................
23.2.6 Symbian........................................................................
23.3 The Security in the Mobile Ecosystems .......................................
23.3.1 Application-Based Threats ...........................................
23.3.2 Web-Based Threats.......................................................
23.3.3 Network Threats ...........................................................
23.3.4 Physical Threats ...........................................................
23.3.5 Operating System–Based Threats ................................
23.4 General Mobile Devices Attack Types .........................................
23.4.1 Denial of Service (DDoS) ............................................
23.4.2 Phone Hacking .............................................................
23.4.3 Mobile Malware/Virus .................................................
23.4.4 Spyware ........................................................................
23.4.5 Exploit ..........................................................................
23.4.6 Everything Blue............................................................
23.4.7 Phishing ........................................................................
23.4.8 Smishing .......................................................................
23.4.9 Vishing .........................................................................
23.5 Mitigation of Mobile Devices Attacks .........................................
23.5.1 Mobile Device Encryption ...........................................
23.5.2 Mobile Remote Wiping ................................................
23.5.3 Mobile Passcode Policy................................................
23.6 Users' Role in Securing Mobile Devices ......................................
References .................................................................................................
Part V Securing the Last Frontiers – The Home Front

24 Conquering the Last Frontier in the Digital Invasion: The Home Front ..............
24.1 Introduction ..................................................................................
24.2 The Changing Home Network and Hot Spots..............................
24.2.1 Cable LAN ...................................................................
24.2.2 Wireless Home Networks .............................................
24.2.3 Types of Broadband Internet Connections ...................
24.2.4 Smart Home Devices ....................................................
24.3 Data and Activities in the Home LAN .........................................
24.3.1 Work Data.....................................................................
24.3.2 Social Media Data ........................................................
24.3.3 Banking and Investment Data ......................................
24.3.4 Health Devices .............................................................
24.3.5 Home Monitoring and Security Devices ......................
24.4 Threats to the Home and Home LAN ..........................................
24.4.1 Most Common Threats to Homes and Home LANs ....
24.4.2 Actions to Safeguard the Family LAN .........................
24.4.3 Using Encryption to Protect the Family LAN ..............
24.4.4 Protecting the Family LAN with Known Protocols ............................
References .................................................................................................
Part VI Hands-on Projects

25 Projects .....................................................................................................
25.1 Introduction ..................................................................................
25.2 Part I: Weekly/Biweekly Laboratory Assignments ......................
25.2.1 Laboratory # 1 ..............................................................
25.2.2 Laboratory # 2 ..............................................................
25.2.3 Laboratory # 3 ..............................................................
25.2.4 Laboratory # 4 ..............................................................
25.2.5 Laboratory # 5 ..............................................................
25.2.6 Laboratory # 6 ..............................................................
25.2.7 Laboratory # 7 ..............................................................
25.2.8 Laboratory # 8 ..............................................................
25.2.9 Laboratory # 9 ..............................................................
25.2.10 Laboratory # 10 ............................................................
25.3 Part II: Semester Projects .............................................................
25.3.1 Intrusion Detection Systems.........................................
25.3.2 Scanning Tools for System Vulnerabilities...................
25.4 The Following Tools Are Used to Enhance Security in Web Applications .......
25.4.1 Public Key Infrastructure .............................................
25.5 Part III: Research Projects ............................................................
25.5.1 Consensus Defense.......................................................
25.5.2 Specialized Security .....................................................
25.5.3 Protecting an Extended Network..................................
25.5.4 Automated Vulnerability Reporting .............................
25.5.5 Turn-Key Product for Network Security Testing .........
25.5.6 The Role of Local Networks in the Defense of the National Critical Infrastructure ..........
25.5.7 Enterprise VPN Security ..............................................
25.5.8 Perimeter Security ........................................................
25.5.9 Enterprise Security .......................................................
25.5.10 Password Security: Investigating the Weaknesses........
25.6 Case Studies .................................................................................
Index .................................................................................................................
Part I Introduction to Computer Network Security

1 Computer Network Fundamentals

1.1 Introduction
Effective communication of any type requires three ingredients. First, there must
be two entities, dubbed a sender and a receiver, which have something they need to
share. Second, there must be a medium through which the sharable item is channeled:
the transmission medium. Finally, there must be an agreed-on set of communication
rules, or protocols. These three requirements apply to every category or structure
of communication.
In this chapter, we will focus on these three components in a computer network.
But what is a computer network? The reader should be aware that our use of the
phrase computer network, from now on, will refer to the traditional computer
network. A computer network is a distributed system consisting of loosely coupled
computers and other devices. Any two of these devices, which we will from now on
refer to as network elements or transmitting elements without loss of generality,
can communicate with each other through a communication medium. In order for these
connected devices to be considered a communicating network, there must be a set of
communication rules, or protocols, that each device in the network must follow to
communicate with another device in the network. The resulting combination of
hardware and software is a computer communication network, or computer network for
short. Figure 1.1 shows a computer network.
Fig. 1.1 A computer network
The hardware component is made of network elements consisting of a collection of
nodes that include the end systems, commonly called hosts, and intermediate
switching elements that include hubs, bridges, routers, and gateways; without loss
of generality, we will call all of these network elements.
Network elements may own resources individually, that is, locally, or globally.
Network software consists of all application programs and network protocols that
are used to synchronize, coordinate, and bring about the sharing and exchange of
data among the network elements. Network software also makes the sharing of
expensive resources in the network possible. Network elements, network software,
and users all work together so that individual users can exchange messages and
share resources on other systems that are not readily available locally. The
network elements, together with their resources, may be of diverse hardware
technologies, and the software may be as different as possible, but the whole
combination must work together in unison.
Internetworking technology enables multiple, diverse underlying hardware
technologies and different software regimes to interconnect heterogeneous networks
and enables them to communicate smoothly. The smooth working of any computer
communication network is achieved through the low-level mechanisms provided by the
network elements and the high-level communication facilities provided by the
software running on the communicating elements. Before we discuss the working of
these networks, let us first look at the different types of networks.
1.2 Computer Network Models
There are several configuration models that form a computer network. The most
common of these are the centralized and distributed models. In a centralized model,
several computers and devices are interconnected and can talk to each other.
However, there is only one central computer, called the master, through which all
correspondence must take place. Dependent computers, called surrogates, may have
reduced local resources, such as memory, and sharable global resources are
controlled by the master at the center. Unlike the centralized model, however, the
distributed network consists of loosely coupled computers interconnected by a
communication network consisting of connecting elements and communication
channels. The computers themselves may own their resources locally or may request
resources from a remote computer. These computers are known by a string of
names, including host, client, or node. If a host has resources that other hosts need,
then that host is known as a server. Communication and sharing of resources are not
controlled by the central computer but are arranged between any two communicating
elements in the network. Figures 1.2 and 1.3 show a centralized network model and
a distributed network model, respectively.
Fig. 1.2 A centralized network model
Fig. 1.3 A distributed network model
1.3 Computer Network Types
Computer networks come in different sizes. Each network is a cluster of network
elements and their resources. The size of the cluster determines the network type.
There are, in general, two main network types: the local area network (LAN) and
wide area network (WAN).
Fig. 1.4 A LAN network
1.3.1 Local Area Networks (LANs)
A computer network with two or more computers or clusters of network elements and
their resources connected by a communication medium sharing communication
protocols and confined in a small geographical area, such as a building floor, a
building, or a few adjacent buildings, is called a local area network (LAN). The
advantage of a LAN is that all network elements are close together, so the
communication links maintain a higher speed of data movement. Also, because of the
proximity of the communicating elements, high-cost, high-quality communicating
elements can be used to deliver better service and higher reliability. Figure 1.4
shows a LAN network.
1.3.2 Wide Area Networks (WANs)
A wide area network (WAN), on the other hand, is a network made up of one or more
clusters of network elements and their resources, but instead of being confined to a
small area, the elements of the clusters or the clusters themselves are scattered over
a wide geographical area as in a region of a country or across the whole country,
several countries, or the entire globe like the Internet. Some advantages of a WAN
include distributing services to a wider community and availability of a wide array of
both hardware and software resources that may not be available in a LAN. However,
because of the large geographical areas covered by WANs, communication media are
slow and often unreliable. Figure 1.5 shows a WAN network.
1.3.3 Metropolitan Area Networks (MANs)
Between the LAN and WAN, there is also a middle network called the metropolitan
area network (MAN) because it covers a slightly wider area than the LAN but not
so wide as to be considered a WAN. Civic networks that cover a city or part of a city
are a good example of a MAN. MANs are rarely talked about because they are quite
often overshadowed by cousin LAN to the left and cousin WAN to the right.
Fig. 1.5 A WAN network
1.4 Data Communication Media Technology
The performance of a network type depends greatly on the transmission technology
and media used in the network. Let us look at these two.
1.4.1 Transmission Technology
The media through which information is transmitted determine the signal to be
used. Some media permit only analog signals; some allow both analog and digital.
Therefore, depending on the media type involved and other considerations, the
input data can be represented as either a digital or an analog signal. In an
analog format, data is sent as continuous electromagnetic waves on an interval,
representing things such as voice and video, and is propagated over a variety of
media that may include copper wires, twisted pair, coaxial cable, fiber optics, or
wireless. We will discuss these media soon. In a digital format, on the other
hand, data is sent as a digital signal, a sequence of voltage pulses that can be
represented as a stream of binary bits. Both analog and digital data can be
propagated, and each is many times represented as either an analog or a digital
signal.
Transmission itself is the propagation and processing of data signals between
network elements. The concept of representation of data for transmission, either as
analog or digital signal, is called an encoding scheme. Encoded data is then transmitted over a suitable transmission medium that connects all network elements.
There are two encoding schemes, analog and digital. Analog encoding propagates
analog signals representing analog data such as sound waves and voice data. Digital
encoding, on the other hand, propagates digital signals, representing either
analog data or digital data as binary streams encoded by two voltage levels.
Since our interest in this book is in digital networks, we will focus on the encoding
of digital data.
1.4.1.1 Analog Encoding of Digital Data
Recall that digital information is in the form of 1s or 0s. To send this information
over some analog medium such as the telephone line, for example, which has limited bandwidth, digital data needs to be encoded using modulation and demodulation to produce analog signals. The encoding uses a continuous oscillating wave,
usually a sine wave, with a constant frequency signal called a carrier signal. The
carrier has three modulation characteristics: amplitude, frequency, and phase shift.
The scheme then uses a modem, a modulation–demodulation pair, to modulate and
demodulate the data signal based on any one of the three carrier characteristics or a
combination. The resulting wave is between a range of frequencies on both sides of
the carrier as shown below [1]:
• Amplitude modulation represents each binary value by a different amplitude of
the carrier. The absence of a carrier or a low-amplitude carrier may represent a
0, and any other amplitude then represents a 1. But this is a rather inefficient
modulation technique and is therefore used only at low data rates of up to
1,200 bps on voice-grade lines.
• Frequency modulation also represents the two binary values by two different
frequencies close to the frequency of the underlying carrier. Higher frequencies
represent a 1 and low frequencies represent a 0. The scheme is less susceptible to
errors.
• Phase shift modulation changes the timing of the carrier wave, shifting the carrier
phase to encode the data. A 1 is encoded as a change in phase by 180°, and a 0
may be encoded as a 0 change in phase of a carrier signal. This is the most efficient scheme of the three, and it can reach a transmission rate of up to 9,600 bps.
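To make the three schemes concrete, here is a minimal sketch in Python that
encodes a bit string onto a sine carrier using each of the three carrier
characteristics. It is an illustration only, assuming NumPy and arbitrarily chosen
carrier, sampling, and baud values, not a working modem.

import numpy as np

def modulate(bits, scheme="psk", fc=1200.0, sample_rate=9600, baud=300):
    # One bit period worth of sample times.
    t = np.arange(sample_rate // baud) / sample_rate
    phase, chunks = 0.0, []
    for b in bits:
        if scheme == "ask":      # amplitude: low amplitude for a 0, full for a 1
            amp = 1.0 if b == "1" else 0.2
            chunks.append(amp * np.sin(2 * np.pi * fc * t))
        elif scheme == "fsk":    # frequency: two tones close to the carrier
            f = fc + 300.0 if b == "1" else fc - 300.0
            chunks.append(np.sin(2 * np.pi * f * t))
        else:                    # psk: a 1 shifts the carrier phase by 180 degrees
            if b == "1":
                phase += np.pi
            chunks.append(np.sin(2 * np.pi * fc * t + phase))
    return np.concatenate(chunks)

signal = modulate("10110", scheme="psk")   # 5 bits -> 160 samples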
1.4.1.2 Digital Encoding of Digital Data
In this encoding scheme, which offers the most common and easiest way to transmit
digital signals, two different voltages are used to represent the two binary
digits. Within a computer, these voltages are commonly 0 and 5 V. Another
procedure uses two representation codes: nonreturn to zero level (NRZ-L), in which
a negative voltage represents binary one and a positive voltage represents binary
zero, and nonreturn to zero, invert on ones (NRZ-I). See Figs. 1.6 and 1.7 for an
example of these two codes. In NRZ-I, whenever a 1 occurs, a transition from one
voltage level to another is used to signal the information. One problem with NRZ
signaling techniques is the requirement of perfect synchronization between the
receiver and transmitter clocks. This is, however, reduced by sending a separate
clock signal. There are yet other representations, such as the Manchester and
differential Manchester codes, which encode clock information along with the data.
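A minimal sketch of the two representation codes, following the conventions of
Figs. 1.6 and 1.7 (negative voltage encodes a 1 in NRZ-L; a transition signals a 1
in NRZ-I); the ±5 V levels are illustrative assumptions.

def nrz_l(bits):
    # NRZ-L: the level itself encodes the bit; here -5 V is a 1, +5 V a 0.
    return [-5 if b == "1" else 5 for b in bits]

def nrz_i(bits, start_level=5):
    # NRZ-I: a 1 inverts the current level; a 0 leaves it unchanged.
    level, out = start_level, []
    for b in bits:
        if b == "1":
            level = -level
        out.append(level)
    return out

print(nrz_l("10110"))   # [-5, 5, -5, -5, 5]
print(nrz_i("10110"))   # [-5, -5, 5, -5, -5]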
Fig. 1.6 NRZ-L, nonreturn to zero level representation code
Fig. 1.7 NRZ-I, nonreturn to zero invert on ones representation code
One may wonder why go through the hassle of digital encoding and transmission.
There are several advantages over its cousin, analog encoding. These include the
following:
• Plummeting costs of digital circuitry
• More efficient integration of voice, video, text, and image
• Reduction of noise and other signal impairment because of the use of repeaters
• Best utilization of channel capacity with digital techniques
• Better encryption and hence better security than in analog transmission
1.4.1.3 Multiplexing of Transmission Signals
Quite often during the transmission of data over a network medium, the volume of
transmitted data may far exceed the capacity of the medium. Whenever this happens,
it may be possible to make multiple signal carriers share a transmission medium.
This is referred to as multiplexing. There are two ways in which multiplexing can
be achieved: time-division multiplexing (TDM) and frequency-division multiplexing
(FDM).
In FDM, all data channels are first converted to analog form. Since a number of
signals can be carried on a carrier, each analog signal is then modulated onto a
separate and different carrier frequency, which makes it possible to recover the
signal during the demultiplexing process. The modulated signals are then bundled
on the shared medium in such a way that their bandwidths do not overlap. At the
receiving end, the demultiplexer can select the desired carrier signal and use it
to extract the data signal for that channel. FDM has the advantage of supporting
full-duplex communication.
TDM, on the other hand, works by dividing the channel into time slots that are
allocated to the data streams before they are transmitted. At both ends of the transmission, if the sender and receiver agree on the time-slot assignments, then the receiver
can easily recover and reconstruct the original data streams. So multiple digital signals
can be carried on one carrier by interleaving portions of each signal in time.
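The following toy sketch illustrates the TDM idea with character streams standing
in for data streams; it is the fixed round-robin slot assignment, agreed on by
both ends, that lets the receiver demultiplex.

def tdm_mux(streams):
    # Interleave one time slot from each equal-length stream onto the channel.
    return [unit for slot in zip(*streams) for unit in slot]

def tdm_demux(channel, n):
    # Recover the n original streams by reading every nth slot.
    return ["".join(channel[i::n]) for i in range(n)]

channel = tdm_mux(["AAAA", "BBBB", "CCCC"])
print("".join(channel))        # ABCABCABCABC
print(tdm_demux(channel, 3))   # ['AAAA', 'BBBB', 'CCCC']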
1.4.2 Transmission Media
As we have observed above, in any form of communication, there must be a medium
through which the communication can take place. Network elements in a network
therefore need a medium in order to communicate. No network can function without a
transmission medium because there would be no connection between the transmitting
elements. The transmission medium plays a vital role in the performance of the
network. Overall, the characteristic quality, dependability, and performance of a
network depend heavily on its transmission medium. The transmission medium also
determines a network's capacity to carry the expected network traffic, the
reliability of the network's availability, the size of the network in terms of the
distance covered, and the transmission rate. Network transmission media can be
either wired or wireless.
1.4.2.1 Wired Transmission Media
Wired transmission media are used in fixed networks physically connecting every
network element. There are different types of physical media, the most common of
which are copper wires, twisted pair, coaxial cables, and optical fibers.
Copper wires have traditionally been used in communication because of their low
resistance to electrical currents, which allows signals to travel farther. But
copper wires suffer interference from electromagnetic energy in the environment,
and because of this, they must always be insulated.
Twisted pair is a pair of insulated copper wires wrapped around each other,
forming frequent and numerous twists. Together, the twisted, insulated copper
wires act as a full-duplex communication link. The twisting of the wires reduces
the sensitivity of the cable to electromagnetic interference and also reduces the
radiation of radio-frequency noise that may interfere with nearby cables and
electronic components. To increase the capacity of the transmitting medium, more
than one pair of the twisted wires may be bundled together in a protective
coating. Because twisted pairs were far less expensive, were easy to install, and
had a high quality of voice data, they were widely used in telephone networks.
However, because they scale poorly in transmission rate, distance, and bandwidth
in LANs, basic twisted pair technology has largely given way to other
technologies. Figure 1.8 shows a twisted pair.
Fig. 1.8 Twisted pair
Coaxial cables are dual-conductor cables with an inner conductor in the core of
the cable protected by an insulation layer and an outer conductor surrounding the
insulation. These cables are called coaxial because the two conductors share a
common axis. The inner core conductor is usually made of solid copper wire but at
times can also be made of stranded wire. The outer conductor, commonly made of
braided wire but sometimes of metallic foil or both, forms a protective tube
around the inner conductor. This outer conductor is itself protected by another
outer coating called the sheath. Figure 1.9 shows a coaxial cable. Coaxial cables
are commonly used in television transmission. Unlike twisted pairs, coaxial cables
can be used over long distances. There are two types of coaxial cables: thinnet, a
light and flexible cabling medium that is inexpensive and easy to install, and
thicknet, which is thicker and harder to break and can carry more signals over a
longer distance than thinnet.
Fig. 1.9 Coaxial cable
Fig. 1.10 Optical fiber
Optical fiber is a small medium made up of glass or plastic that conducts an
optical ray. It is the ideal cable for data transmission because it can
accommodate extremely high bandwidths, has few of the problems with
electromagnetic interference that coaxial cables suffer from, and can support
cabling runs of several kilometers. The two disadvantages of fiber-optic cables,
however, are cost and installation difficulty. As shown in Fig. 1.10, a simple
optical fiber has a central core made up of thin fibers of glass or plastic. The
fibers are protected by a glass or plastic coating called a cladding. The
cladding, though made up of the same materials as the core, has different
properties that give it the capacity to reflect back into the core the rays that
tangentially hit it. The cladding itself is encased in a plastic jacket. The
jacket protects the inner fiber from external abuse such as bending and abrasion.
Optical fiber cables transmit data signals by first converting them into light
signals. The transmitted light is emitted at the source from either a
light-emitting diode (LED) or an injection laser diode (ILD). At the receiving
end, the emitted rays are received by a photodetector that converts them back to
the original form.
1.4.2.2 Wireless Communication
Wireless communication and wireless networks have evolved as a result of rapid
development in communication technologies, computing, and people's need for
mobility. Wireless networks fall into one of the following three categories,
depending on distance:
• Restricted Proximity Network: This network involves local area networks (LANs)
with a mixture of fixed and wireless devices.
• Intermediate/Extended Network: This wireless network is actually made up of two
fixed LAN components joined together by a wireless component. The wireless bridge
may connect LANs in two nearby buildings or even farther apart.
• Mobile Network: This is a fully wireless network connecting two network
elements. One of these elements is usually a mobile unit that connects to the home
network (fixed) using cellular or satellite technology.
These three types of wireless networks are connected using basic media such as
infrared, laser beam, narrowband and spread-spectrum radio, microwave, and satellite communication [2].
Infrared: During an infrared transmission, one network element remotely emits and
transmits pulses of infrared light that carry coded instructions to the receiving
network element. As long as there is no object to stop the transmitted light, the
receiver gets the instruction. Infrared is used most effectively in a small
confined area, within about 100 feet: for example, a television remote
communicating with the television set. In a confined area such as this, infrared
is relatively fast and can support bandwidths of up to 10 Mbps.
High-Frequency Radio: During radio communication, high-frequency electromagnetic
waves, commonly referred to as RF transmissions, are generated by the transmitter
and picked up by the receiver. Because the range of the radio-frequency band is
greater than that of infrared, mobile computing elements can communicate over a
limited area without the transmitter and receiver being placed along a direct line
of sight; the signal can bounce off walls, buildings, and atmospheric objects. RF
transmissions are very good for long distances when combined with satellites that
relay the radio waves.
Microwave: Microwaves are a higher-frequency version of radio waves whose
transmissions, unlike those of radio, can be focused in a single direction.
Microwave transmissions use a pair of parabolic antennas that produce and receive
narrow but highly directional signals. To be sensitive to signals, both the transmitting and receiving antennas must focus within a narrow area. Because of this, both
the transmitting and receiving antennas must be carefully adjusted to align the
transmitted signal to the receiver. Microwave communication has two forms:
terrestrial, when it is near the ground, and satellite microwave. The frequencies
and technologies employed by these two forms are similar but with notable
differences.
Laser: Laser light can be used to carry data for several thousand yards through
air and optical fibers. But this is possible only if there are no obstacles in the
line of sight. Lasers can be used in many of the same situations as microwaves,
and, like microwaves, laser beams must be relayed when used over long distances.
1.5 Network Topology
Computer networks, whether LANs, MANs, or WANs, are constructed based on a
topology. There are several topologies including the following popular ones.
1.5.1 Mesh
A mesh topology allows multiple access links between network elements, unlike
other types of topologies. The multiplicity of access links between the network
elements offers an advantage in network reliability: whenever one network element
fails, the network does not cease operations; it simply finds a bypass around the
failed element, and the network continues to function. Mesh topology is most often
applied in MANs. Figure 1.11 shows a mesh network.
Fig. 1.11 Mesh network
Fig. 1.12 Tree topology
1.5.2 Tree
A more common type of network topology is the tree topology. In the tree topology,
network elements are arranged in a hierarchical structure in which the most
predominant element is called the root of the tree, and all other elements in the
network share a child–parent relationship. As in ordinary, though inverted, trees,
there are no closed loops. Dealing with failures of network elements therefore
presents complications, depending on the position of the failed element in the
structure. For example, in a deeply rooted tree, if the root element fails, the
network automatically ruptures and splits into two parts that cannot communicate
with each other. The functioning of the network as a unit is, therefore, fatally
curtailed. Figure 1.12 shows a network using a tree topology.
1.5.3 Bus
A more popular topology, especially for LANs, is the bus topology. Elements in a
network using a bus topology always share a bus and, therefore, have equal access
to all LAN resources. Every network element has full-duplex connections to the
transmitting medium which allows every element on the bus to send and receive
data. Because each computing element is directly attached to the transmitting
medium, a transmission from any one element propagates through the entire length
of the medium in either direction and therefore can be received by all elements in
the network. Because of this, precautions need to be taken to make sure that transmissions intended for one element can be received by that element and no other
element. The network must also use a mechanism that handles disputes in case two
or more elements try to transmit at the same time. The mechanism deals with the
likely collision of signals and brings a quick recovery from such a collision. It is
also necessary to create fairness in the network so that all other elements can transmit when they need to do so. See Fig. 1.13.
Fig. 1.13 Bus topology
A collision control mechanism must also improve efficiency in the network using
a bus topology by allowing only one element in the network to have control of the
bus at any one time. This network element is then called the bus master and other
elements are considered to be its slaves. This requirement prevents collision from
occurring in the network as elements in the network try to seize the bus at the same
time. A bus topology is commonly used by LANs.
1.5.4 Star
Another very popular topology, especially in LAN technologies, is the star
topology. A star topology is characterized by a central prominent node that
connects to every other element in the network, so all the elements in the network
are connected to this central element. Every network element in a star topology is
connected pairwise, in a point-to-point manner, through the central element, and
communication between any pair of elements must go through it. The central element
or node can either operate in a broadcast fashion, in which case information from
one element is broadcast to all connected elements, or operate as a switching
device, in which case the incoming data is transmitted only to one element, the
nearest element en route to the destination. The biggest disadvantage of the star
topology is that the failure of the central element results in the failure of the
entire network. Figure 1.14 shows a star topology.
Fig. 1.14 Star topology
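The reliability contrast between topologies can be made concrete by modeling a
topology as an adjacency map and removing an element. In this illustrative sketch
(the node names are hypothetical), losing the central element of a star isolates
every element, while a mesh simply routes around a failed element.

def reachable(topology, start, failed):
    # Depth-first search over the elements that are still up.
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in failed:
            continue
        seen.add(node)
        stack.extend(topology[node])
    return seen

star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
mesh = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}

print(reachable(star, "a", failed={"hub"}))   # {'a'}: the whole star is down
print(reachable(mesh, "a", failed={"b"}))     # {'a', 'c'}: traffic bypasses b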
1.5.5 Ring
Finally, another popular network topology is the ring topology. In this topology,
each computing element is directly connected to the transmitting medium via a
unidirectional connection so that information put on the transmission medium can
reach all computing elements in the network through a mechanism of taking turns in
sending information around the ring. Figure 1.15 shows a ring topology network.
Fig. 1.15 Ring topology network
The taking of turns in passing information is managed through a token system. A
token is a system-wide piece of information that guarantees that its current
holder is the ring master. As long as an element holds the token, no other network
element is allowed to transmit on the ring. When an element currently sending
information and holding the token has finished, it passes the token downstream to
its nearest neighbor. The token system is a good way of managing collisions and
fairness.
Fig. 1.16 Token ring hub
There are variants of a ring topology collectively called hub hybrids combining
either a star with a bus or a stretched star as shown in Fig. 1.16.
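The token mechanism can be illustrated with a toy simulation (stations A, B, and C
are hypothetical): only the current token holder may transmit, and the token then
moves downstream, which is what makes the ring collision free and fair.

def token_ring(stations, queued_frames, rounds=2):
    # Pass the token around the unidirectional ring a fixed number of times.
    for _ in range(rounds):
        for station in stations:          # the token arrives at each station in turn
            frames = queued_frames.get(station)
            if frames:
                # Holding the token, the station transmits one frame, then
                # releases the token to its nearest downstream neighbor.
                print(station, "sends", frames.pop(0))

token_ring(["A", "B", "C"], {"A": ["frame1"], "C": ["frame2", "frame3"]})
# A sends frame1
# C sends frame2
# C sends frame3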
Although network topologies are important in LANs, the choice of a topology
depends on a number of other factors, including the type of transmission medium,
the reliability of the network, the size of the network, and its anticipated
future growth. Recently, the most popular LAN topologies have been the bus, star,
and ring topologies. The most popular bus- and star-based LAN technology is
Ethernet, and the most popular ring-based LAN technology is the token ring.
1.6 Network Connectivity and Protocols
In the early days of computing, computers were used as stand-alone machines, and
all work that needed cross-computing was done manually. Files were moved on disks
from computer to computer. There was, therefore, a need for cross-computing, where
one computer could talk to other computers and vice versa.
A new movement was, therefore, born. It was called the open system movement, and
it called for computer hardware and software manufacturers to come up with a way
for this to happen. To make this possible, standardization of equipment and
software was needed. To help in this effort and streamline computer communication,
the International Organization for Standardization (ISO) developed the Open
Systems Interconnection (OSI) model. The OSI is an open architecture model that
functions as the network communication protocol standard, although it is not the
most widely used one. The Transmission Control Protocol/Internet Protocol (TCP/IP)
model, a rival of the OSI model, is the most widely used. Both the OSI and TCP/IP
models use two protocol stacks, one at the source element and the other at the
destination element.
1.6.1 Open Systems Interconnection (OSI) Protocol Suite
The development of the OSI model was based on the premise that a communication
task over a network can be broken into seven layers, where each layer represents a
different portion of the task. Different layers of the protocol provide different
services and ensure that each layer communicates only with its neighboring layers.
That is, the protocols in each layer are based on the protocols of the layers
below.
Starting from the top of the protocol stack, tasks and information move down
from the top layers until they reach the bottom layer where they are sent out over the
network media from the source system to the destination. At the destination, the task
or information rises back up through the layers until it reaches the top. Each layer is
designed to accept work from the layer above it and to pass work down to the layer
below it and vice versa. To ease interlayer communication, the interfaces between
the layers are standardized. However, each layer remains independent and can be
designed independently, and each layer’s functionality should not affect the functionalities of other layers above and below it.
Table 1.1 shows an OSI model consisting of seven layers and the descriptions of
the services provided in each layer.
In peer-to-peer communication, the two communicating computers can initiate and
receive tasks and data. The tasks and data initiated from each computer start at
the top, in the application layer of the protocol stack on each computer. The
tasks and data then move down from the top layers until they reach the bottom
layer, where they are sent out over the network media from the source system to
the destination. At the destination, the tasks and data rise back up through the
layers until they reach the top. Each layer is designed to accept work from the
layer above it and pass work down to the layer below it.
Table 1.1 ISO protocol layers and corresponding services

Layer number   Protocol
7              Application
6              Presentation
5              Session
4              Transport
3              Network
2              Data link
1              Physical
Fig. 1.17 ISO logical peer communication model
Table 1.2 OSI datagrams seen in each layer with header added

Layer          Header      Payload
Application    No header   Data
Presentation   H1          Data
Session        H2          Data
Transport      H3          Data
Network        H4          Data
Data link      H5          Data
Physical       No header   Data
As data passes from layer to layer on the sender machine, layer headers are
appended to the data, causing the datagram to grow larger. Each layer header
contains information for that layer's peer on the remote system. That information
may indicate how to route the packet through the network or what should be done to
the packet as it is handed back up the layers on the recipient computer.
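The header-by-header growth of the datagram in Table 1.2 can be sketched in a few
lines of Python; the strings H1–H5 stand in for real layer headers, and the
separator character is purely illustrative.

HEADERS = ["H1", "H2", "H3", "H4", "H5"]    # presentation down to data link

def send_down(data):
    # Each layer below the application prepends its header on the way down.
    for h in HEADERS:
        data = h + "|" + data
    return data

def hand_up(datagram):
    # Each layer on the recipient strips its peer's header on the way up.
    while "|" in datagram:
        _, datagram = datagram.split("|", 1)
    return datagram

wire = send_down("Data")
print(wire)            # H5|H4|H3|H2|H1|Data
print(hand_up(wire))   # Data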
Figure 1.17 shows a logical communication model between two peer computers using
the ISO model. Table 1.2 shows the datagram with added header information as it
moves through the layers. Although the development of the OSI model was intended
to offer a standard for all other proprietary models, and it was as encompassing
of all existing models as possible, it never really replaced many of the rival
models it was intended to replace. In fact, it is this "all-in-one" concept that
led to market failure, because the model became too complex. Its late arrival on
the market also prevented its much-anticipated interoperability across networks.
1.6.2 Transmission Control Protocol/Internet Protocol (TCP/IP) Model

Among the OSI rivals was TCP/IP, which was far less complex and more historically
established by the time the OSI model came on the market. The TCP/IP model does
not exactly match the OSI model; for example, it has two to three fewer layers
than the seven layers of the OSI model. It was developed for the US Department of
Defense Advanced Research Projects Agency (DARPA), but over the years it has seen
phenomenal growth in popularity, and it is now the de facto standard for the
Internet and many intranets. It consists of two major protocols: the Transmission
Control Protocol (TCP) and the Internet Protocol (IP), hence the TCP/IP
designation. Table 1.3 shows the layers and the protocols in each layer.

Table 1.3 TCP/IP protocol layers

Layer (delivery unit) – Protocols
Application (message) – Handles all higher-level protocols, including the File
Transfer Protocol (FTP), Name Server Protocol (NSP), Simple Mail Transfer Protocol
(SMTP), Simple Network Management Protocol (SNMP), HTTP, remote file access
(telnet), remote file server (NFS), name resolution (DNS), TFTP, DHCP, and BOOTP.
Combines the application, session, and presentation layers of the OSI model
Transport (segment) – Handles transport protocols, including the Transmission
Control Protocol (TCP) and the User Datagram Protocol (UDP)
Network (datagram) – Contains the Internet Protocol (IP), the Internet Control
Message Protocol (ICMP), and the Internet Group Management Protocol (IGMP).
Supports transmitting source packets from any network on the internetwork and
makes sure they arrive at the destination independent of the path and networks
they took to get there. Best path determination and packet switching occur at this
layer
Data link (frame) – Contains the protocols that carry an IP packet across a
physical link from one device to another directly connected device
Physical (bit stream) – Includes the underlying networks (WANs and LANs) and all
network card drivers

Since TCP/IP is the most widely used protocol suite, powering the Internet and
many intranets, let us focus on its layers here.

Fig. 1.18 Application layer data frame
1.6.2.1 Application Layer
This layer, very similar to the application layer in the OSI model, provides the
user interface with resources rich in application functions. It supports all
network applications and includes many protocols operating on a data structure
consisting of bit streams, as shown in Fig. 1.18.
Fig. 1.19 A TCP data structure
Fig. 1.20 A UDP data structure
1.6.2.2 Transport Layer
This layer, similar to the OSI model transport layer, is slightly removed from the
user and is hidden from the user. Its main purpose is to transport application
layer messages, which include application layer protocols in their headers,
between the host and the server. For the Internet, the transport layer has two
standard protocols: the Transmission Control Protocol (TCP) and the User Datagram
Protocol (UDP). TCP provides a connection-oriented service, and it guarantees the
delivery of all application layer packets to their destination. This guarantee is
supported by two mechanisms: congestion control, which throttles the transmission
rate of the source element when there is traffic congestion in the network, and
flow control, which tries to match sender and receiver speeds to synchronize the
flow rate and reduce the packet drop rate. While TCP offers guarantees of delivery
of the application layer packets, UDP, on the other hand, offers no such
guarantees. It provides a no-frills connectionless service with just delivery and
no acknowledgments. But it is much more efficient and is the protocol of choice
for real-time data such as streaming video and music. The transport layer delivers
transport layer packets and protocols to the network layer. Figure 1.19 shows the
TCP data structure, and Fig. 1.20 shows the UDP data structure.
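UDP's simplicity is visible in its header, which has just four 16-bit fields:
source port, destination port, length, and checksum. A minimal sketch following
that standard field layout, with the checksum left at zero:

import struct

def udp_datagram(src_port, dst_port, payload):
    # Pack the four 16-bit UDP header fields in network byte order, then
    # append the payload; the 8-byte header is counted in the length field.
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0) + payload

dgram = udp_datagram(5000, 53, b"hello")
print(len(dgram))   # 13: an 8-byte header plus a 5-byte payload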
1.6.2.3 Network Layer
This layer moves packets, now called datagrams, from router to router along the
path from the source host to the destination host. It supports a number of
protocols, including the Internet Protocol (IP), the Internet Control Message
Protocol (ICMP), and the Internet Group Management Protocol (IGMP). IP is the most
widely used network layer protocol. IP uses header information, including the
datagram's source and destination IP addresses and other fields supplied by the
transport layer protocols, to move datagrams from router to router through the
network. Best routes are found in the network by using routing algorithms.
Figure 1.21 shows the IP datagram structure.
Fig. 1.21 An IP datagram structure
The standard IP address has been the so-called IPv4, a 32-bit addressing scheme.
But with the rapid growth of the Internet, there was fear of running out of
addresses, so IPv6, a new 128-bit addressing scheme, was created. The network
layer conveys the network layer protocols to the data link layer.
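The two address sizes are easy to verify with Python's standard ipaddress module;
a short illustrative sketch:

import ipaddress

v4 = ipaddress.ip_address("192.168.1.7")
print(int(v4))            # 3232235783: the address as a 32-bit integer
print(v4.max_prefixlen)   # 32 bits, about 4.3 billion possible addresses
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.max_prefixlen)   # 128 bits in IPv6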
1.6.2.4 Data Link Layer
This layer provides the network with services that move packets from one packet
switch like a router to the next over connecting links. This layer also offers reliable
delivery of network layer packets over links. It is at the lowest level of communication, and it includes the network interface card (NIC) and operating system (OS)
protocols. The protocols in this layer include Ethernet, asynchronous transfer mode
(ATM), and others such as frame relay. The data link layer protocol unit, the frame,
may be moved over links from source to destination by different link layer protocols
at different links along the way.
1.6.2.5 Physical Layer
This layer is responsible for literally moving data link frames bit by bit over
the links and between the network elements. The protocols here depend on and use
the characteristics of the link medium and the signals on the medium.
1.7 Network Services
For a communication network to work effectively, data in the network must be able
to move from one network element to another. This can only happen if the network
services to move such data work. For data networks, these services fall into two
categories:
• Connection services to facilitate the exchange of data between the two network
communicating end systems with as little data loss as possible and in as little
time as possible
• Switching services to facilitate the movement of data from host to host across the
length and width of the network mesh of hosts, hubs, bridges, routers, and
gateways
1.7.1 Connection Services
How do we get the network transmitting elements to exchange data over the network?
Two types of connection services are used: connection-oriented and connectionless
services.
1.7.1.1 Connection-Oriented Services
With a connection-oriented service, before a client can send packets with real
data to the server, there must be a three-way handshake. We will define this
three-way handshake in later chapters, but its purpose is to establish a session
before the actual communication can begin. Establishing a session before data is
moved creates a path of virtual links between the end systems through the network
and, therefore, guarantees the reservation and establishment of fixed
communication channels and other resources needed for the exchange of data before
any data is exchanged and for as long as the channels are needed. For example,
this happens whenever we place telephone calls; before we exchange words, the
channels are reserved and established for the duration of the call. Because this
technique guarantees that data will arrive in the same order it was sent in, it is
considered to be reliable. In short, the service offers the following:
• Acknowledgments of all data exchanges between the end systems
• Flow control in the network during the exchange
• Congestion control in the network during the exchange
Depending on the type of physical connections in place and the services required
by the communicating systems, connection-oriented methods may be implemented in
the data link layer or in the transport layer of the protocol stack, although the
trend now is to implement them more at the transport layer. For example, TCP is a
connection-oriented transport protocol in the transport layer. Other network
technologies that are connection oriented include frame relay and ATM.
1.7.1.2 Connectionless Service
In a connectionless service, there is no handshaking to establish a session
between the communicating end systems, no flow control, and no congestion control
in the network. This means that a client can start communicating with a server
without warning or inquiry for readiness; it simply sends streams of packets,
called datagrams, from its sending port to the server's connection port in single
point-to-point transmissions, with no relationship established between the packets
or between the end systems. There are advantages and, of course, disadvantages to
this type of connection service. In brief, the connection is faster because there
is no handshaking, which can sometimes be time consuming; it supports periodic
burst transfers of large quantities of data; and it has a simple protocol.
However, this service offers minimal functions and gives the sender no safeguards
or guarantees, since there is no prior control information and no acknowledgment.
In addition, the service does not have the reliability of the connection-oriented
method, offering no error handling and no packet ordering; each packet is
self-identifying, which leads to long headers; and there is no predefined order in
the arrival of packets.
Like the connection-oriented method, this service can operate at both the data
link and transport layers. For example, UDP, a connectionless service, operates at
the transport layer.
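The two services map directly onto the standard socket API, as the following
minimal sketch shows; the host name and ports are hypothetical placeholders.

import socket

# Connection-oriented: connect() triggers TCP's three-way handshake, so a
# session exists before any application data moves.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("server.example.com", 80))
tcp.sendall(b"GET / HTTP/1.0\r\n\r\n")
tcp.close()

# Connectionless: UDP simply fires a datagram at the destination port with
# no handshake, no flow control, and no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("server.example.com", 9999))
udp.close()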
1.7.2 Network Switching Services
Before we discuss communication protocols, let us take a detour and briefly discuss
data transfer by a switching element. This is a technique by which data is moved
from host to host across the length and width of the network mesh of hosts, hubs,
bridges, routers, and gateways. This technique is referred to as data switching. The
type of data switching technique used by a network determines how messages are
transmitted between the two communicating elements and across that network. There
are two types of data switching techniques: circuit switching and packet switching.
1.7.2.1 Circuit Switching
In circuit switching networks, all the resources needed for communication must be
reserved before a physical communication channel is set up. The physical
connection, once established, is then used exclusively by the two end systems,
usually subscribers, for the duration of the communication. The main feature of
such a connection is that it provides a fixed data rate channel, and both
subscribers must operate at this rate. For example, in a telephone communication
network, a connected line is reserved between the two points before the users can
start using the service.
One issue of debate on circuit switching is the perceived waste of resources during
the so-called silent periods when the connection is fully in force but not being used
by the parties. This situation occurs when, for example, during a telephone network
session, a telephone receiver is not hung up after use, leaving the connection still
established. During this period, while no one is utilizing the session, the session line
is still open.
1.7.2.2 Packet Switching
Packet switching networks, on the other hand, do not require any resources to be
reserved before a communication session begins. These networks, however, require
the sending host to assemble all data streams to be transmitted into packets. If a
message is large, it is broken into several packets. Packet headers contain the
source and the destination network addresses of the two communicating end systems.
Each of the packets is then sent on the communication links and across packet
switches (routers). On receipt of each packet, the router inspects the destination
address contained in the packet. Using its own routing table, each router then
forwards the packet on the appropriate link at the maximum available bit rate. As
each packet is received at each intermediate router, it is forwarded on the
appropriate link, interspersed with other packets being forwarded on that link.
Each receiving element checks the destination address; if it is the owner of the
packet, it then reassembles the packets into the final message. Figure 1.22 shows
the role of routers in packet switching networks.
Fig. 1.22 Packet switching networks
Packet switches are considered to be store-and-forward transmitters, meaning
that they must receive the entire packet before the packet is retransmitted or switched
on to the next switch.
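A toy model of a store-and-forward switch follows; the fixed buffer size is an
illustrative assumption. It also demonstrates the packet dropping described below,
which occurs when a packet arrives to find the buffer full.

from collections import deque

class PacketSwitch:
    # Store-and-forward: a packet must be fully received (buffered)
    # before it can be retransmitted on the next link.
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size
        self.dropped = 0

    def receive(self, packet):
        if len(self.buffer) >= self.buffer_size:
            self.dropped += 1            # buffer full: the packet is lost
        else:
            self.buffer.append(packet)

    def forward(self):
        return self.buffer.popleft() if self.buffer else None

switch = PacketSwitch(buffer_size=2)
for p in ["p1", "p2", "p3"]:
    switch.receive(p)                     # p3 arrives to a full buffer
print(switch.forward(), switch.dropped)   # p1 1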
Because there is no predefined route for these packets, there can be unpredictably
long delays before the full message can be reassembled. In addition, the network
may not dependably deliver all the packets to the intended destination. To ensure
that the network has a reliably fast transit time, a fixed maximum length of time is
allowed for each packet. Packet switching networks suffer from a few problems,
including the following:
• The rate of transmission of a packet between two switching elements depends on
the maximum rate of transmission of the link joining them and on the switches
themselves.
• Momentary delays are always introduced whenever the switch is waiting for a
full packet. The longer the packet, the longer the delay.
• Each switching element has a finite buffer for the packets. It is thus possible for
a packet to arrive only to find the buffer full with other packets. Whenever this
happens, the newly arrived packet is not stored but gets lost, a process called
packet dropping. In peak times, servers may drop a large number of packets.
Congestion control techniques use the rate of packet drop as one measure of traffic congestion in a network.
Packet switching networks are commonly referred to as packet networks for
obvious reasons. They are also called asynchronous networks, and in such networks,
packets are ideal because there is a sharing of the bandwidth, and of course, this
avoids the hassle of making reservations for any anticipated transmission. There are
two types of packet switching networks:
• Virtual circuit network in which a packet route is planned, and it becomes a logical connection before a packet is released.
• Datagram network, which is the focus of this book.
1.8 Network Connecting Devices
Before we discuss network connecting devices, let us revisit the network
infrastructure. We have defined a network as a mesh of network elements, commonly
referred to as network nodes, connected together by conducting media. These
network nodes can be either at the ends of the mesh, in which case they are
commonly known as clients, or in the middle of the network, as transmitting
elements. In a small network such as a LAN, the nodes are connected together via
special connecting and conducting devices that take network traffic from one node
and pass it on to the next node. If the network is a large internetwork (a network
of networks, made up of LANs and WANs), the constituent networks are connected by
special intermediate networking devices so that the internetwork functions as a
single large network.
Now let us look at network connecting devices, focusing on two types: those used
in small networks such as LANs and those used in internetworks.
1.8.1 LAN Connecting Devices
Because LANs are small networks, connecting devices in LANs are less powerful
with limited capabilities. There are hubs, repeaters, bridges, and switches.
1.8.1.1 A Hub
This is the simplest in the family of network connecting devices, since it
connects LAN components that use identical protocols. It takes in inputs and
retransmits them verbatim. It can be used to switch both digital and analog data.
Each node must be preset to prepare for the format of the incoming data: for
example, if the incoming data is in digital format, the hub must pass it on as
packets; however, if the incoming data is analog, the hub passes it on as a
signal. There are two types of hubs: simple and multiple-port hubs, as shown in
Figs. 1.23 and 1.24. A multiple-port hub may support as many computers as it has
ports and may be used to plan for network expansion as more computers are added at
a later time.
Network hubs are designed to work with network adapters and cables and can
typically run at either 10 or 100 Mbps; some hubs can run at both speeds. To
connect computers with differing speeds, it is better to use dual-speed
10/100 Mbps hubs.
Fig. 1.23 A simple hub
Fig. 1.24 Multi-ported hubs
1.8.1.2 A Repeater
A network repeater is a low-level local communication device at the physical layer
of the network that receives network signals, amplifies them to restore them to full
strength, and then retransmits them to another node in the network. Repeaters are
used in a network for several purposes including countering the attenuation that
occurs when signals travel long distances and extending the length of the LAN
above the specified maximum. Since they work at the lowest network stack layer,
they are less intelligent than their counterparts such as bridges, switches, routers,
and gateways in the upper layers of the network stack. See Fig. 1.25.
1.8.1.3 A Bridge
A bridge is like a repeater but differs in that a repeater, deployed at the physical layer, amplifies electrical signals, whereas a bridge, deployed at the data link layer, works digitally, copying frames. It permits frames from one part of a LAN, or from a different LAN with a different technology, to move to another part or to another LAN.
Fig. 1.25 A repeater in an OSI model
Fig. 1.26 Simple bridge
However, when filtering and isolating frames between networks, or between parts of the same network, the bridge will not move a damaged frame from one end of the network to the other. While filtering, the bridge makes no modifications to the format or content of the incoming data; it simply examines each frame to determine whether it should be forwarded or dropped. All "noise" frames (resulting from collisions, faulty wiring, power surges, etc.) are discarded.
The bridge filters and forwards frames on the network using a dynamic bridge
table. The bridge table, which is initially empty, maintains the LAN addresses for
each computer in the LAN and the addresses of each bridge interface that connects
the LAN to other LANs. Bridges, like hubs, can be either simple or multi-ported.
Figure 1.26 shows a simple bridge, Fig. 1.27 shows a multi-ported bridge, and
Fig. 1.28 shows the position of the bridge in an OSI protocol stack.
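To make the bridge-table mechanics concrete, here is a minimal sketch in Python of how a learning bridge might fill its table and decide whether to forward, flood, or filter a frame. The class name, frame fields, and port numbering are illustrative assumptions, not part of any standard:

# A minimal sketch of a learning bridge (illustrative, not a standard API).
# The bridge table starts empty; the bridge learns each sender's port and
# then forwards, floods, or filters frames accordingly.

class LearningBridge:
    def __init__(self, num_ports):
        self.table = {}              # LAN (MAC) address -> port last seen on
        self.num_ports = num_ports

    def handle_frame(self, src, dst, in_port, damaged=False):
        if damaged:
            return []                # damaged frames are never moved
        self.table[src] = in_port    # learn the sender's location
        if dst in self.table:
            out = self.table[dst]
            # filter: destination is on the arrival segment, nothing to do
            return [] if out == in_port else [out]
        # unknown destination: flood to all ports except the arrival port
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(num_ports=4)
print(bridge.handle_frame("16-73-AX-E4-01", "07-1A-EB-17-F6", in_port=0))  # flood
print(bridge.handle_frame("07-1A-EB-17-F6", "16-73-AX-E4-01", in_port=2))  # [0]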
Fig. 1.27 Multi-ported bridge
Fig. 1.28 Position of a bridge in an OSI protocol stack
1.8.1.4 A Switch
A switch is a network device that connects segments of a network or two small networks such as Ethernet or token ring LANs. Like the bridge, it filters and forwards frames on the network with the help of a dynamic table. This point-to-point approach allows the switch to connect multiple pairs of segments at a time, so that more than one computer can transmit data simultaneously, giving switches higher performance than their cousins, the bridges.
Fig. 1.29 Router in the OSI protocol stack
1.8.2 Internetworking Devices
Internetworking devices connect smaller networks, such as several LANs, into much larger networks such as the Internet. Let us look at two of these connectors: the router and the gateway.
1.8.2.1 Routers
Routers are general-purpose devices that interconnect two or more heterogeneous
networks represented by IP subnets or unnumbered point-to-point lines. They are
usually dedicated special-purpose computers with separate input and output interfaces for each connected network. They are implemented at the network layer in the
protocol stack. Figure 1.29 shows the position of the router in the OSI protocol
stack.
According to RFC 1812, a router performs the following functions [3]:
• Conforms to specific Internet Protocols specified in the 1812 document, including the Internet Protocol (IP), Internet Control Message Protocol (ICMP), and
others as necessary.
• Connects to two or more packet networks. For each connected network, the
router must implement the functions required by that network because it is a
member of that network. These functions typically include the following:
– Encapsulating and decapsulating the IP datagrams with the connected network framing. For example, if the connected network is an Ethernet LAN, an
Ethernet header and checksum must be attached.
– Sending and receiving IP datagrams up to the maximum size supported by
that network; this size is the network’s maximum transmission unit or MTU.
– Translating the IP destination address into an appropriate network-level
address for the connected network. These are the Ethernet hardware addresses
on the NIC, for Ethernet cards, if needed. Each network addresses the router
as a member computer of its own network. This means that each router is a
member of each network it connects to. It, therefore, has a network host
address for that network and an interface address for each network it is connected to. Because of this rather strange characteristic, each router interface
has its own address resolution protocol (ARP) module, its LAN address (network card address), and its own Internet Protocol (IP) address.
– Responding to network flow control and error indications, if any.
• Receives and forwards Internet datagrams. Important issues in this process are
buffer management, congestion control, and fairness. To do this, the router must:
– Recognize error conditions and generate ICMP error and information messages as required.
– Drop datagrams whose time-to-live fields have reached zero.
– Fragment datagrams when necessary to fit into the maximum transmission
unit (MTU) of the next network.
• Chooses a next-hop destination for each IP datagram based on the information in
its routing database.
• Usually supports an interior gateway protocol (IGP) to carry out distributed routing and reachability algorithms with the other routers in the same autonomous
system. In addition, some routers will need to support an exterior gateway protocol (EGP) to exchange topological information with other autonomous systems.
• Provides network management and system support facilities, including loading,
debugging, status reporting, exception reporting, and control.
Forwarding an IP datagram from one network across a router requires the router
to choose the address and relevant interface of the next-hop router or for the final
hop if it is the destination host. The next-hop router is always in the next network of
which the router is also a member. The choice of the next-hop router, called forwarding, depends on the entries in the routing table within the router.
Routers are smarter than bridges in that a router, with the use of a routing table, has some knowledge of the possible routes a packet could take from its source to its
destination. Once it finds the destination, it determines the best, fastest, and most
efficient way of routing the packet. The routing table, like those in the bridge and switch,
grows dynamically as activities in the network develop. On receipt of a packet, the
router removes the packet headers and trailers and analyzes the IP header by determining the source and destination addresses and data type and noting the arrival
time. It also updates the router table with new addresses if not already in the table.
The IP header and arrival time information is entered in the routing table. If a router
encounters an address it cannot understand, it drops the packet. Let us explain the
working of a router by an example using Fig. 1.30.
In Fig. 1.30, suppose host A in LAN1 tries to send a packet to host B in LAN2.
Both host A and host B have two addresses: the LAN (host) address and the IP
address. The translation between host LAN addresses and IP addresses is done by
the ARP, and data is retrieved or built into the ARP table, similar to Table 1.4. Notice
also that the router has two network interfaces: interface 1 for LAN1 and interface
2 for LAN2 for the connection to a larger network such as the Internet. Each interface has a LAN (host) address for the network the interface connects on and a corresponding IP address. As we will see later in the chapter, host A sends a packet to
router 1 at time 10:01 that includes, among other things, both its addresses, message
type, and destination IP address of host B. The packet is received at interface 1 of
Fig. 1.30 Working of a router
Table 1.4 ARP table for LAN1

  IP address    LAN address      Time
  127.0.0.5     16-73-AX-E4-01   10:00
  127.76.1.12   07-1A-EB-17-F6   10:03

Table 1.5 Routing table for interface 1

  Address       Interface   Time
  127.0.0.1     1           10:01
  192.76.1.12   2           10:03
the router; the router reads the packet and builds row 1 of the routing table as shown
in Table 1.5.
The router notices that the packet has to go to network 193.55.1.***, where ***
are digits 0–9, and it has knowledge that this network is connected on interface 2. It
forwards the packet to interface 2. Now, interface 2 with its own ARP may know
host B. If it does, then it forwards the packet and updates the routing table with the
inclusion of row 2. What happens when the ARP at the router interface 1 cannot
determine the next network? That is, if it has no knowledge of the presence of network 193.55.1.***, it will then ask for help from a gateway. Let us now discuss how
IP chooses a gateway to use when delivering a datagram to a remote network.
1.8.2.2 Gateways
Gateways are more versatile devices than routers. They perform protocol conversion between different types of networks, architectures, or applications and serve as
translators and interpreters for network computers that communicate in different
protocols and operate in dissimilar networks, for example, OSI and TCP/IP. Because
the networks are different with different technologies, each network has its own
routing algorithms, protocols, domain name servers, and network administration
Fig. 1.31 Position of a gateway
Table 1.6 A gateway routing table

  Network       Gateway       Interface
  0.0.0.0       192.133.1.1   1
  127.123.0.1   198.24.0.1    2
procedures and policies. Gateways perform all of the functions of a router and more.
The gateway functionality that does the translation between different network technologies and algorithms is called a protocol converter. Figure 1.31 shows the position of a gateway in a network.
Gateway services include packet format and/or size conversion, protocol conversion, data translation, terminal emulation, and multiplexing. Because gateways perform the more complicated task of protocol conversion, they operate more slowly and
handle fewer devices.
Let us now see how a packet can be routed through a gateway or several gateways before it reaches its destination. We have seen that if a router gets a datagram,
it checks the destination address and finds that it is not on the local network. It,
therefore, sends it to the default gateway. The default gateway now searches its table
for the destination address. In case the default gateway recognizes that the destination address is not on any of the networks it is connected to directly, it has to find yet
another gateway to forward it through.
The routing information the server uses for this is in a gateway routing table linking networks to gateways that reach them. The table starts with the network entry
0.0.0.0, a catch-all entry, for default routes. All packets to an unknown network are
sent through the default route. Table 1.6 shows the gateway routing table.
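A minimal sketch of this lookup logic, modeled on Table 1.6, might look as follows in Python. Exact network matches plus the 0.0.0.0 catch-all stand in for the longest-prefix matching that real routers perform; the function and table names are invented for illustration:

# Sketch of next-hop selection with a default route (cf. Table 1.6).
# Entries are (network, gateway, interface); 0.0.0.0 is the catch-all.

gateway_table = [
    ("127.123.0.1", "198.24.0.1", 2),
    ("0.0.0.0", "192.133.1.1", 1),    # default route, tried last
]

def choose_gateway(destination_network):
    for network, gateway, interface in gateway_table:
        if network in (destination_network, "0.0.0.0"):
            return gateway, interface
    raise ValueError("no route")      # unreachable while a default exists

print(choose_gateway("127.123.0.1"))  # ('198.24.0.1', 2)
print(choose_gateway("193.55.1.8"))   # falls through to the default route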
The choice between a router, a bridge, and a gateway is a balance between
functionality and speed. Gateways, as we have indicated, perform a variety of
functions; however, because of this variety of functions, gateways may become
bottlenecks within a network because they are slow.
Routing tables may be built either manually for small LANs or by using software
called routing daemons for larger networks.
1.9 Network Technologies
Earlier in this chapter, we indicated that computer networks are basically classified according to their sizes, with local area networks (LANs) covering smaller areas and wide area networks (WANs) covering wider areas. In this last section of the chapter,
let us look at a few network technologies in each one of these categories.
1.9.1 LAN Technologies
Recall our definition of a LAN at the beginning of this chapter. We defined a LAN
to be a small data communication network that consists of a variety of machines that
are all part of the network and cover a geographically small area such as one building
or one floor. Also, a LAN is usually owned by an individual or a single entity such
as an organization. According to the IEEE 802.3 Committee on LAN Standardization, a LAN must be a moderately sized, geographically shared peer-to-peer communication network broadcasting information over a common physical medium for all on the network to hear, with no intermediate switching element required. Many common network technologies today fall into this category, including the popular Ethernet, the widely used token ring/IEEE 802.5, and the fiber distributed data interface (FDDI).
1.9.1.1 Star-Based Ethernet (IEEE 802.3) LAN
Ethernet technology is the most widely used of all LAN technologies, and it has been
standardized by the IEEE 802.3 Committee on Standards. The IEEE 802.3 standards
define the medium access control (MAC) layer and the physical layer. The Ethernet
MAC is a carrier sense multiple access with collision detection (CSMA/CD) system.
With CSMA, any network node that wants to transmit must listen first to the medium
to make sure that there is no other node already transmitting. This is called the carrier
sensing of the medium. If there is already a node using the medium, then the element
that was intending to transmit waits; otherwise, it transmits. In case two or more elements are trying to transmit at the same time, a collision will occur and the integrity
of the data for all is compromised. However, the element may not know this. So it
waits for an acknowledgment from the receiving node. The waiting period varies,
taking into account maximum round trip propagation delay and other unexpected
delays. If no acknowledgment is received during that time, the element then assumes
that a collision has occurred and the transmission was unsuccessful and therefore it
Fig. 1.32 An Ethernet frame structure
must retransmit. If more collisions were to happen, then the element must now
double the delay time and so on. After a collision, when the two elements are in delay
period, the medium may be idle and this may lead to inefficiency. To correct this situation, the elements, instead of just going into the delay mode, must continue to listen
to the medium as they transmit. In this case, they will not only be doing carrier
sensing but also detecting a collision that leads to CSMA/CD. According to Stallings,
the CSMA/CD scheme follows the following algorithm [1]:
• If the medium is idle, transmit.
• If the medium is busy, continue to listen until idle; then transmit immediately.
• If a collision is detected, transmit a jamming signal as a "collision warning" to all
other network elements.
• After transmitting the jamming signal, wait a random amount of time and attempt to transmit again.
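The retry-and-back-off discipline in this algorithm is easy to sketch in Python. In the toy model below, the medium and collision detection are stand-in callables, and the binary exponential backoff doubles the delay window after each collision; all names are illustrative assumptions:

import random

# Toy CSMA/CD sender: listen until idle, transmit, and on collision back off
# a random number of slot times drawn from a window that doubles each time.

def transmit(medium_busy, collision_detected, max_attempts=16):
    for attempt in range(1, max_attempts + 1):
        while medium_busy():                 # carrier sense: wait until idle
            pass
        if not collision_detected():         # transmitted without collision
            return True
        # jam, then binary exponential backoff (window capped at 2**10 slots)
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        print(f"collision {attempt}: backing off {slots} slot times")
    return False                             # report failure after max_attempts

# Example run: medium always idle, each attempt collides 30% of the time.
transmit(lambda: False, lambda: random.random() < 0.3)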
A number of Ethernet LANs are based on the IEEE 802.3 standards, including:
• 10 BASE-X (where X = 2, 5, T and F; T, twisted pair, and F, fiber optics)
• 100 BASE-T (where the T options include T4, TX, and FX)
• 1,000 BASE-T (where T options include LX, SX, T, and CX)
The basic Ethernet transmission structure is a frame, and it is shown in Fig. 1.32.
The source and destination fields contain 6-byte LAN addresses of the form
xx-xx-xx-xx-xx-xx, where each x is a hexadecimal digit. The error detection field is 4 bytes used for error detection, usually with the cyclic redundancy check (CRC) algorithm, whose value the source computes over the frame and the destination verifies.
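As a small illustration, the sketch below formats a 6-byte LAN address in the xx-xx-xx-xx-xx-xx notation and checks a frame body against a CRC-32 value, in the spirit of the 4-byte Ethernet frame check sequence (the exact IEEE bit-ordering details are glossed over, and the sample address is invented):

import zlib

# Format a 6-byte LAN address and verify a CRC-32 error-detection field.

def format_mac(addr: bytes) -> str:
    return "-".join(f"{b:02x}" for b in addr)

def frame_ok(payload: bytes, fcs: int) -> bool:
    return zlib.crc32(payload) == fcs        # recompute and compare

dst = bytes([0x16, 0x73, 0xA0, 0xE4, 0x01, 0x2B])
print(format_mac(dst))                       # 16-73-a0-e4-01-2b
payload = b"hello LAN"
print(frame_ok(payload, zlib.crc32(payload)))          # True: intact
print(frame_ok(payload + b"!", zlib.crc32(payload)))   # False: altered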
1.9.1.2 Token Ring/IEEE 802.5
Token ring LANs, based on IEEE 802.5, are also widely used in commercial and small industrial networks, although they are not as popular as Ethernet. The standard uses a special frame called a token that is passed around the ring so that all network elements have equal access to the medium.
Whenever a network element wants to transmit, it waits for the token on the ring
to make its way to the element’s connection point on the ring. When the token
arrives at this point, the element grabs it and changes one bit of the token that
becomes the start bit in the data frame the element will be transmitting. The element
then inserts data, addressing information and other fields, and then releases the
Fig. 1.33 A token data frame
payload onto the ring and waits for the frame to travel around the ring. The receiving host recognizes the destination MAC address within the frame as its own, copies the frame contents, sets the last field of the frame to indicate that the address was recognized, and puts the frame back into circulation. When the frame reaches the network element that still owns the token, that element withdraws the frame, and a new token is put on the ring for another network element that may need to transmit.
Because of its round-robin nature, the token ring technique gives each network
element a fair chance of transmitting if it wants to. However, if the token ever gets
lost, all network traffic halts until a new token is issued. Figure 1.33 shows the structure of a token data
frame, and Fig. 1.16 shows the token ring structure.
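The round-robin fairness of token passing can be seen in a toy simulation such as the following Python sketch, where the station names and queued frames are invented for illustration; the token visits each station in ring order, and only the holder may transmit:

from collections import deque

# Toy token ring: the token circulates, and the station holding it may
# transmit one queued frame before passing the token on.

def run_ring(stations, rounds=2):
    ring = deque(stations)
    for _ in range(rounds):
        for _ in range(len(ring)):
            holder = ring[0]                  # station currently holding it
            if holder["queue"]:
                print(f"{holder['name']} transmits {holder['queue'].pop(0)}")
            ring.rotate(-1)                   # pass the token along the ring

run_ring([{"name": "A", "queue": ["f1"]},
          {"name": "B", "queue": []},
          {"name": "C", "queue": ["f2", "f3"]}])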
Like Ethernet, the token ring has a variety of technologies based on the transmission rates.
1.9.1.3 Other LAN Technologies
In addition to those we have discussed earlier, several other LAN technologies are
in use, including the following:
• Asynchronous transfer mode (ATM), with the goal of transporting real-time voice, video, text, e-mail, and graphic data. ATM offers a full array of network services that make it a rival of Internet-based networks.
• Fiber distributed data interface (FDDI) is a dual-ring network that uses a token
ring scheme with many similarities to the original token ring technology.
• AppleTalk, the popular Mac users’ LAN.
1.9.2 WAN Technologies
As we defined them earlier, WANs are data networks like LANs, but they cover a wider geographical area. Because of their size, WANs traditionally provide fewer services to customers than LANs. Several networks fall into this category, including
the integrated services digital network (ISDN), X.25, frame relay, and the popular
Internet.
1.9.2.1 Integrated Services Digital Network (ISDN)
ISDN is a system of digital phone connections that allows data to be transmitted
simultaneously across the world using end-to-end digital connectivity. It is a network that supports the transmission of video, voice, and data. Because the
transmission of these varieties of data, including graphics, usually puts widely
differing demands on the communication network, service integration for these
networks is an important advantage to make them more appealing. The ISDN standards specify that subscribers must be provided with:
• Basic rate interface (BRI) services of two full-duplex 64-kbps B channels, the
bearer channels, and one full-duplex 16-kbps D channel, the data channel. One
B channel is used for digital voice and the other for applications such as data
transmission. The D channel is used for telemetry and for exchanging network
control information. This rate is for individual users.
• Primary rate interface (PRI) services consisting of 23 64-kbps B channels and
one 64-kbps D channel. This rate is intended for large users.
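As a quick check on these numbers, using the channel rates just listed: BRI carries 2 × 64 kbps + 16 kbps = 144 kbps of combined user and signaling traffic, while PRI carries 23 × 64 kbps + 64 kbps = 1,536 kbps, which matches the payload rate of a T1 line.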
BRI can be accessed only if the customer subscribes to an ISDN phone line and
is within 18,000 ft (about 3.4 miles or 5.5 km) of the telephone company central
office. Otherwise, expensive repeater devices are required; the access equipment may also include ISDN terminal adapters and ISDN routers.
1.9.2.2 X.25
X.25 is an International Telecommunication Union (ITU) protocol, first approved in 1976, developed to bring interoperability to a variety of data communication wide area networks (WANs), known as public networks, owned by private companies, organizations, and government agencies. In doing so, X.25 describes how data passes into
and out of public data communications networks.
X.25 is a connection-oriented and packet-switched data network protocol with
three levels corresponding to the bottom three layers of the OSI model as follows:
the physical level corresponds to the OSI physical layer, the link level corresponds
to the OSI data link layer, and the packet level corresponds to the OSI network layer.
In full operation, X.25 networks allow remote devices known as data terminal equipment (DTE) to communicate with each other across high-speed digital links terminated by data circuit-terminating equipment (DCE), without the expense of individual leased lines. The communication is initiated by a user at a DTE setting
up calls using standardized addresses. The calls are established over virtual circuits,
which are logical connections between the originating and destination addresses.
On receipt, the called users can accept, clear, or redirect the call to a third party.
The virtual connections we mentioned above are of the following two types [4]:
• Switched virtual circuits (SVCs) – SVCs are very much like telephone calls; a
connection is established, data is transferred, and then the connection is released.
Each DTE on the network is given a unique DTE address that can be used much
like a telephone number.
• Permanent virtual circuits (PVCs) – A PVC is similar to a leased line in that the
connection is always present. The logical connection is established permanently
by the packet-switched network administration. Therefore, data may always be
sent without any call setup.
Both types of circuit are used extensively, but since user equipment and network systems support both X.25 PVCs and X.25 SVCs, most users prefer SVCs because they enable user devices to set up and tear down connections as required.
Because X.25 provides reliable data communication over transmission facilities of widely varying quality, it has advantages over other WAN technologies, for example:
• Unlike frame relay and ATM technologies that depend on the use of high-quality
digital transmission facilities, X.25 can operate over either analog or digital
facilities.
• In comparison with TCP/IP, one finds that TCP/IP has only end-to-end error
checking and flow control, while X.25 is error checked from network element to
network element.
X.25 networks are in use throughout the world by large organizations with
widely dispersed and communication-intensive operations in sectors such as finance,
insurance, transportation, utilities, and retail.
1.9.2.3 Other WAN Technologies
The following are other WAN technologies that, because of space limitations, we can mention only briefly:
• Frame relay is a packet-switched network with the ability to multiplex many
logical data conversions over a single connection. It provides flexible efficient
channel bandwidth using digital and fiber-optic transmission. It has many characteristics similar to those of the X.25 network, except in format and functionality.
• Point-to-point protocol (PPP) is the Internet standard for transmission of IP
packets over serial lines. The point-to-point link provides a single, preestablished
communications path from the ending element through a carrier network, such as
a telephone company, to a remote network. These links can carry datagram or
data stream transmissions.
• x digital subscriber line (xDSL) is a family of technologies that provides an inexpensive yet very fast connection to the Internet.
• Switched multi-megabit data service (SMDS) is a connectionless service operating in the range of 1.5–100 Mbps; any SMDS station can send a frame to any
other station on the same network.
• Asynchronous transfer mode (ATM) was already discussed as a LAN technology.
1.9.3 Wireless LANs
Rapid advances, miniaturization, and the popularity of wireless technology have opened a new component of LAN technology. Worker mobility and relocation have forced companies to move into new wireless technologies, with emphasis on
wireless networks extending the local LAN into a wireless LAN. Wireless LANs fall into the following categories [1]:
• LAN extension is a quick wireless extension to an existing LAN to accommodate
new changes in space and mobile units.
• Cross-building interconnection establishes links across buildings between both
wireless and wired LANs.
• Nomadic access establishes a link between a LAN and a mobile wireless communication device such as a laptop computer.
• Ad hoc networking is a peer-to-peer network temporarily set up to meet some
immediate need. It usually consists of laptops, handheld devices, PCs, and other
communication devices.
• Personal area networks (PANs) that include the popular Bluetooth networks.
There are several wireless IEEE 802.11-based LAN types, including:
• Infrared
• Spread spectrum
• Narrowband microwave
Wireless technology is discussed in further detail in Chap. 17.
1.10 Conclusion
We have developed the theory of computer networks and discussed the topologies,
standards, and technologies of these networks. Because we were limited by space,
we could not discuss a number of interesting and widely used technologies both
in LAN and WAN areas. However, our limited discussion of these technologies
should give the reader an understanding of the scope of the changes taking place in network technologies. We hope that the trend will keep the convergence
of the LAN, WAN, and wireless technologies on track so that the alarming number of different technologies is reduced and basic international standards are
established.
Exercises
1. What is a communication protocol?
2. Why do we need communication protocols?
3. List the major protocols discussed in this chapter.
4. In addition to ISO and TCP/IP, what are the other models?
5. Discuss two LAN technologies that are not Ethernet or token ring.
6. Why is Ethernet technology more appealing to users than the rest of the LAN
technologies?
7. What do you think are the weak points of TCP/IP?
8. Discuss the pros and cons of the four LAN technologies.
9. List four WAN technologies.
10. What technologies are found in MANs? Which of the technologies listed in 8
and 9 can be used in MANs?
Advanced Exercises
1. X.25 and TCP/IP are very similar but there are differences. Discuss these
differences.
2. Discuss the reasons why ISDN failed to catch on as WAN technology.
3. Why is it difficult to establish permanent standards for a technology like WAN
or LAN?
4. Many people see Bluetooth as a personal wireless network (PAN). Why is this
so? What standard does Bluetooth use?
5. Some people think that Bluetooth is a magic technology that is going to change
the world. Read about Bluetooth and discuss this assertion.
6. Discuss the future of wireless LANs.
7. What is a wireless WAN? What kind of technology can be used in it? Is this the
wave of the future?
8. With the future in mind, compare and contrast ATM and ISDN technologies.
9. Do you foresee a fusion between LAN, MAN, and WAN technologies in the
future? Support your response.
10. Network technology is in transition. Discuss the direction of network
technology.
References
1. Stallings W (2000) Local and metropolitan area network. Prentice Hall, Upper Saddle River
2. Comer DE (2000) Internetworking with TCP/IP: principles, protocols, and architecture, 4th edn. Prentice Hall, Upper Saddle River
3. RFC 1812 (1995) Requirements for IP version 4 routers. http://www.cis.ohio-state.edu/cgi-bin/rfc/rfc1812.html#sec-2.2.3
4. Sangoma Technologies. http://www.sangoma.com/x25.htm
2 Computer Network Security Fundamentals

2.1 Introduction
Before we talk about network security, we need to understand in general terms what
security is. Security is a continuous process of protecting an object from unauthorized access. It is a state of being or feeling protected from harm. That object in that
state may be a person, an organization such as a business, or property such as a
computer system or a file. Security comes from secure, which means, according to Webster's Dictionary, a state of being free from care, anxiety, or fear [1].
An object can be in a physical state of security or a theoretical state of security.
In a physical state, a facility is secure if it is protected by a barrier like a fence, has
secure areas both inside and outside, and can resist penetration by intruders. This
state of security can be guaranteed if the following four protection mechanisms are
in place: deterrence, prevention, detection, and response [1, 2].
• Deterrence is usually the first line of defense against intruders who may try to
gain access. It works by creating an atmosphere intended to frighten intruders.
Sometimes this may involve warnings of severe consequences if security is
breached.
• Prevention is the process of trying to stop intruders from gaining access to the
resources of the system. Barriers include firewalls, demilitarized zones (DMZs),
and the use of access items like keys, access cards, biometrics, and others to
allow only authorized users to use and access a facility.
• Detection occurs when the intruder has succeeded or is in the process of gaining
access to the system. Signals from the detection process include alerts to the
existence of an intruder. Sometimes these alerts can be real time or stored for
further analysis by the security personnel.
• Response is an aftereffect mechanism that tries to respond to the failure of the
first three mechanisms. It works by trying to stop and/or prevent future damage
or access to a facility.
The areas outside the protected system can be secured by wire and wall fencing,
mounted noise or vibration sensors, security lighting, closed-circuit television
(CCTV), buried seismic sensors, or different photoelectric and microwave systems
[1]. Inside the system, security can be enhanced by using electronic barriers such as
firewalls and passwords.
Digital barriers – commonly known as firewalls, discussed in detail in Chap. 12 –
can be used. Firewalls are hardware or software tools used to isolate the sensitive
portions of an information system facility from the outside world and limit the
potential damage by a malicious intruder.
A theoretical state of security, commonly known as pseudosecurity or security
through obscurity (STO), is a false hope of security. Many believe that an object can
be secure as long as nobody outside the core implementation group has knowledge
about its existence. This security is often referred to as "bunker mentality" security. It is virtual security in the sense that it is not physically implemented, like building walls, issuing passwords, or putting up a firewall; it is based solely on a philosophy. The philosophy relies on a need-to-know basis, implying that a person is not dangerous as long as that person has no knowledge that could affect the security of the system, such as a network. In real systems where this security philosophy is used, security is assured through the presumption that only those with responsibility and who are trustworthy can use the system
and nobody else needs to know. So, in effect, the philosophy is based on the trust of
those involved assuming that they will never leave. If they do, then that means the
end of security for that system.
There are several examples where STO has been successfully used. These include
Coca-Cola, KFC, and other companies that have, for generations, kept their secret
recipes secure based on a few trusted employees. But overall, STO is a fallacy that has been embraced by many software producers when they hide their code. Many
times, STO hides system vulnerabilities and weaknesses. This was demonstrated
vividly in Matt Blaze’s 1994 discovery of a flaw in the Escrowed Encryption
Standard (Clipper) that could be used to circumvent law enforcement monitoring.
Blaze’s discovery allowed easier access to secure communication through the
Clipper technology than was previously possible, without access to keys [3]. The
belief that secrecy can make the system more secure is just that, a belief – a myth in
fact. Unfortunately, the software industry still believes this myth.
Although its usefulness has declined as the computing environment has changed
to large open systems, new networking programming and network protocols, and as
the computing power available to the average person has increased, the philosophy
is in fact still favored by many agencies, including the military, many government
agencies, and private businesses.
In either security state, many objects can be thought of as being secure if such a
state, a condition, or a process is afforded to them. Because there are many of these
objects, we are going to focus on the security of a few of these object models. These
will be a computer, a computer network, and information.
2.1.1 Computer Security
Computer security is a branch of computer science focusing on creating a secure environment for the use of computers. Its focus is on the required "behavior of users," if you will, and on the protocols needed to create a secure environment for anyone using computers. The field, therefore, involves four areas of interest: the study of computer ethics, the development of software protocols, the development of hardware protocols, and the development of best practices. It is a complex field of study involving detailed mathematical designs of cryptographic protocols. We are not focusing on this in this book.
2.1.2 Network Security
As we saw in Chap. 1, computer networks are distributed networks of computers
that are either strongly connected meaning that they share a lot of resources
from one central computer or loosely connected, meaning that they share only
those resources that can make the network work. When we talk about computer
network security, our focus object model has now changed. It is no longer one
computer but a network. So computer network security is a broader study of
computer security. It is still a branch of computer science, but a lot broader than
that of computer security. It involves creating an environment in which a
computer network, including all its resources, which are many; all the data in it
both in storage and in transit; and all its users, is secure. Because it is wider than
computer security, this is a more complex field of study than computer security
involving more detailed mathematical designs of cryptographic, communication, transport, and exchange protocols and best practices. This book focuses on
this field of study.
2.1.3 Information Security
Information security is an even bigger field of study, encompassing computer and computer network security. This study is found in a variety of disciplines, including computer science, business management, information studies, and engineering. It involves the creation of a state in which information and data are secure. In this model, information or data is either in motion through the communication channels or in storage in databases on servers. This, therefore,
involves the study of not only more detailed mathematical designs of cryptographic, communication, transport, and exchange protocols and best practices
but also the state of both data and information in motion. We are not discussing
these in this book.
2.2 Securing the Computer Network
Creating security in the computer network model we are embarking on in this book
means creating secure environments for a variety of resources. In this model, a
resource is secure, based on the above definition, if that resource is protected from
both internal and external unauthorized access. These resources, physical or not,
are objects. Ensuring the security of an object means protecting the object from
unauthorized access both from within the object and externally. In short, we protect
objects. System objects are either tangible or intangible. In a computer network
model, the tangible objects are the hardware resources in the system, and the
intangible object is the information and data in the system, both in transition and
static in storage.
2.2.1 Hardware
Protecting hardware resources includes protecting:
• End-user objects that include the user interface hardware components such as all
client system input components, including a keyboard, mouse, touch screen,
light pens, and others
• Network objects like firewalls, hubs, switches, routers, and gateways which are
vulnerable to hackers
• Network communication channels to prevent eavesdroppers from intercepting
network communications
2.2.2 Software
Protecting software resources includes protecting hardware-based software, operating systems, server protocols, browsers, application software, and intellectual
property stored on network storage disks and databases. It also involves protecting
client software such as investment portfolios, financial data, real estate records,
images or pictures, and other personal files commonly stored on home and business
computers.
2.3 Forms of Protection
Now we know which model objects need to be protected. Let us briefly survey the ways and forms of protecting these objects, keeping the details for later. Prevention
of unauthorized access to system resources is achieved through a number of
services that include access control, authentication, confidentiality, integrity, and
nonrepudiation.
2.3.1 Access Control
This is a service the system uses, together with user pre-provided identification information such as a password, to determine who may use which of its services. Let us
look at some forms of access control based on hardware and software.
2.3.1.1 Hardware Access Control Systems
Rapid advances in technology have resulted in efficient access control tools that are
open and flexible while at the same time ensuring reasonable precautions against
risks. Access control tools falling in this category include the following:
• Access terminal. Terminal access points have become very sophisticated, and now
they not only carry out user identification but also verify access rights, control
access points, and communicate with host computers. These activities can be done
in a variety of ways including fingerprint verification and real-time anti-break-in
sensors. Network technology has made it possible for these units to be connected
to a monitoring network or remain in a stand-alone off-line mode.
• Visual event monitoring. This is a combination of many technologies into one
very useful and rapidly growing form of access control using a variety of
real-time technologies including video and audio signals, aerial photographs,
and global positioning system (GPS) technology to identify locations.
• Identification cards. Sometimes called proximity cards, these cards have become
very common these days as a means of access control in buildings, financial
institutions, and other restricted areas. The cards come in a variety of forms,
including magnetic, bar coded, contact chip, and a combination of these.
• Biometric identification. This is perhaps the fastest growing form of control access
tool today. Some of the most popular forms include fingerprint, iris, and voice
recognition. However, fingerprint recognition offers a higher level of security.
• Video surveillance. This is a replacement for the CCTV of yesteryear, and it is
gaining popularity as an access control tool. With fast networking technologies
and digital cameras, images can now be taken and analyzed very quickly and
action taken in minutes.
2.3.1.2 Software Access Control Systems
Software access control falls into two types: point of access monitoring and remote
monitoring. In point of access (POA), personal activities can be monitored by a
PC-based application. The application can even be connected to a network or to a
designated machine or machines. The application collects and stores access events
and other events connected to the system operation, and downloads access rights to access terminals.
In remote mode, the terminals can be linked in a variety of ways, including
the use of modems, telephone lines, and all forms of wireless connections. Such
terminals may, if needed, call in automatically at preset times or have an attendant report regularly.
2.3.2 Authentication
Authentication is a service used to identify a user. Establishing user identity, especially for remote users, is difficult because users intending to cause harm may masquerade as legitimate users. This service provides a system with the capability to verify that a user is the very one he or she claims to be, based on what the user is, knows, and has (a minimal sketch of password-based verification follows the list below).
Physically, we can authenticate users or user surrogates based on checking one
or more of the following user items [2]:
• User name (sometimes screen name)
• Password
• Retinal images: The user looks into an electronic device that maps his or her eye
retina image; the system then compares this map with a similar map stored on the
system.
• Fingerprints: The user presses on or sometimes inserts a particular finger into a
device that makes a copy of the user fingerprint and then compares it with a similar image on the system user file.
• Physical location: The physical location of the system initiating an entry request is
checked to ensure that a request is actually originating from a known and authorized
location. In networks, to check the authenticity of a client’s location a network or
Internet protocol (IP) address of the client machine is compared with the one on the
system user file. This method is used mostly in addition to other security measures
because it alone cannot guarantee security. If used alone, it provides access to the
requested system to anybody who has access to the client machine.
• Identity cards: Increasingly, cards are being used as authenticating documents.
Whoever is the carrier of the card gains access to the requested system. As is the
case with physical location authentication, card authentication is usually used as
a second-level authentication tool because whoever has access to the card automatically can gain access to the requested system.
2.3.3 Confidentiality
The confidentiality service protects system data and information from unauthorized
disclosure. When data leaves one end of a system, such as a client's computer in a network, it ventures out into a nontrusting environment. The recipient of that data, therefore, may not fully trust that no third party, such as a cryptanalyst or a man-in-the-middle, has eavesdropped on it. This service uses encryption algorithms to ensure that nothing of the sort happened while the data was in the wild.
Encryption protects the communications channel from sniffers. Sniffers are
programs written for and installed on the communication channels to eavesdrop on
network traffic, examining all traffic on selected network segments. Sniffers are easy
to write and install and difficult to detect. The encryption process uses an encryption
algorithm and a key to transform data at the source (called plaintext) into an encrypted, usually unintelligible, form (called ciphertext) and finally to recover it at the sink. The encryption algorithm can be either symmetric or asymmetric. Symmetric
encryption or secret key encryption, as it is usually called, uses a common key and the
same cryptographic algorithm to scramble and unscramble the message. Asymmetric
encryption commonly known as public-key encryption uses two different keys: a public
key known by all and a private key known by only the sender and the receiver. Both
the sender and the receiver each has a pair of these keys, one public and one private.
To encrypt a message, a sender uses the receiver’s public key which was published.
Upon receipt, the recipient of the message decrypts it with his or her private key.
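A minimal sketch of the symmetric case, assuming the third-party Python package cryptography (pip install cryptography) is available, is shown below; the message text is invented, and a production system would also need secure key distribution:

# Symmetric (secret key) encryption: one shared key both scrambles and
# unscrambles the message, so it must be exchanged secretly in advance.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the shared secret key
cipher = Fernet(key)

plaintext = b"transfer $100 to account 42"
ciphertext = cipher.encrypt(plaintext)          # unintelligible in transit
print(cipher.decrypt(ciphertext) == plaintext)  # True: recovered at the sink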
2.3.4 Integrity
The integrity service protects data against active threats such as those that may alter
it. Just as with data confidentiality, data in transit between the sending and receiving
parties is susceptible to many threats from hackers, eavesdroppers, and cryptanalysts
whose goal is to intercept the data and alter it based on their motives. This service,
through encryption and hashing algorithms, ensures that the integrity of the
transient data is intact. A hash function takes an input message M and creates a code
from it. The code is commonly referred to as a hash or a message digest. A one-way
hash function is used to create a signature of the message – just like a human
fingerprint. The hash function is, therefore, used to provide the message’s integrity
and authenticity. The signature is then attached to the message before it is sent by
the sender to the recipient.
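A short Python sketch using the standard hashlib module shows the fingerprint idea; any change to the message changes the digest (note that detecting a malicious sender who recomputes the digest additionally requires a keyed construction such as an HMAC or a digital signature):

import hashlib

# A one-way hash as a message "fingerprint": the sender attaches the digest,
# and the receiver recomputes it to detect alteration in transit.

message = b"pay the vendor 500"
digest = hashlib.sha256(message).hexdigest()    # attached to the message

tampered = b"pay the vendor 900"
print(hashlib.sha256(message).hexdigest() == digest)    # True: intact
print(hashlib.sha256(tampered).hexdigest() == digest)   # False: altered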
2.3.5 Nonrepudiation
This is a security service that provides proof of origin and delivery of service and/
or information. In real life, it is possible that the sender may deny the ownership of
the exchanged digital data that originated from him or her. This service, through
digital signature and encryption algorithms, ensures that digital data may not be
repudiated by providing proof of origin that is difficult to deny. A digital signature
is a cryptographic mechanism that is the electronic equivalent of a written signature
to authenticate a piece of data as to the identity of the sender.
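A minimal sketch of signing and verification, again assuming the third-party Python package cryptography, is shown below; the key size, padding choices, and message are illustrative assumptions, not a statement of what any particular system uses:

# Digital signature: only the private-key holder can sign, while anyone
# with the public key can verify, providing evidence of origin.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I, Alice, authorize this order"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was altered.
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature verified")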
We have to be careful here because the term “nonrepudiation” has two meanings,
one in the legal world and the other in the cryptotechnical world. Adrian McCullagh
and William Caelli define “nonrepudiation” in a cryptotechnical way as follows [4]:
• In authentication, a service that provides proof of the integrity and origin of data,
both in a forgery-proof relationship, which can be verified by any third party at
any time
• In authentication, an authentication that with high assurance can be asserted to
be genuine and that cannot subsequently be refuted
However, in the legal world, there is always a basis for repudiation. This basis,
again according to Adrian McCullagh, can be as follows:
• The signature is a forgery.
• The signature is not a forgery but was obtained via:
– Unconscionable conduct by a party to a transaction
– Fraud instigated by a third party
– Undue influence exerted by a third party
We will use the cryptotechnical definition throughout the book. To achieve
nonrepudiation, users and application environments require a nonrepudiation
service to collect, maintain, and make available the irrefutable evidence. The best
services for nonrepudiation are digital signatures and encryption. These services
offer trust by generating unforgeable evidence of transactions that can be used for
dispute resolution after the fact.
2.4 Security Standards
The computer network model also suffers from the standardization problem. Security
protocols, solutions, and best practices that can secure the computer network model
come in many different types and use different technologies, resulting in incompatible interfaces (more in Chap. 16) and reduced interoperability and uniformity among the many system resources with differing technologies within the system and between
systems. System managers, security chiefs, and experts, therefore, choose or prefer
standards, if no de facto standard exists, that are based on service, industry, size, or
mission. The type of service offered by an organization determines the types of security standards used. Like service, the nature of the industry an organization is in also
determines the types of services offered by the system, which in turn determines the
type of standards to adopt. The size of an organization also determines what type of
standards to adopt. In relatively small establishments, the ease of implementation
and running of the system influence the standards to be adopted. Finally, the mission
of the establishment also determines the types of standards used. For example,
government agencies have a mission that differs from that of a university. These two
organizations, therefore, may choose different standards. We are, therefore, going to
discuss security standards along these divisions. Before we do that, however, let us
look at the bodies and organizations behind the formulation, development, and
maintenance of these standards. These bodies fall into the following categories:
• International organizations such as the Internet Engineering Task Force (IETF), the Institute of Electrical and Electronics Engineers (IEEE), the International Standards Organization (ISO), and the International Telecommunication Union (ITU)
• Multinational organizations like the European Committee for Standardization
(CEN), the Commission of European Union (CEU), and the European
Telecommunications Standards Institute (ETSI)
Table 2.1 Organizations and their standards

  Organization   Standards
  IETF           IPSec, XML Signature XPath Filter 2, X.509, Kerberos, S/MIME
  ISO            ISO 7498-2:1989 Information processing systems - Open Systems
                 Interconnection, ISO/IEC 979x, ISO/IEC 997, ISO/IEC 1011x,
                 ISO/IEC 11xx, ISO/IEC DTR 13xxx, ISO/IEC DTR 14xxx
  ITU            X.2xx, X.5xx, X.7xx, X.80x
  ECBS           TR-40x
  ECMA           ECMA-13x, ECMA-20x
  NIST           X3 Information Processing, X9.xx Financial, X12.xx Electronic
                 Data Exchange
  IEEE           P1363 Standard Specifications for Public-Key Cryptography,
                 IEEE 802.xx, IEEE P802.11g Wireless LAN Medium Access Control
                 (MAC) and Physical Layer (PHY) Specifications
  RSA            PKCS #x - Public-Key Cryptographic Standards
  W3C            XML Encryption, XML Signature, eXtensible Key Management
                 Specification (XKMS)
• National governmental organizations like the National Institute of Standards and
Technology (NIST), the American National Standards Institute (ANSI), and the
Canadian Standards Council (CSC)
• Sector-specific organizations such as the European Committee for Banking Standards (ECBS), the European Computer Manufacturers Association (ECMA), and the Institute of Electrical and Electronics Engineers (IEEE)
• Industry standards bodies such as RSA, the Open Group (OSF + X/Open), the Object Management Group (OMG), the World Wide Web Consortium (W3C), and the Organization for the Advancement of Structured Information Standards (OASIS)
• Other sources of standards in security and cryptography
Each one of these organizations has a set of standards. Table 2.1 shows some of
these standards. In the table, x is any digit between 0 and 9.
2.4.1 Security Standards Based on Type of Service/Industry
System and security managers and users may choose a security standard to use
based on the type of industry they are in and what type of services that industry
provides. Table 2.2 shows some of these services and the corresponding security
standards that can be used for these services.
Let us now give some details of some of these standards.
2.4.1.1 Public-Key Cryptography Standards (PKCS)
In order to provide a basis and a catalyst for interoperable security based on
public-key cryptographic techniques, the Public-Key Cryptography Standards
(PKCS) were established. These are relatively recent security standards, first published in 1991 following discussions among a small group of early adopters of public-key technology.
Table 2.2 Security standards based on services

  Area of application   Service                               Security standard
  Internet security     Network authentication                Kerberos
                        Secure TCP/IP communications          IPSec
                          over the Internet
                        Privacy-enhanced electronic mail      S/MIME, PGP
                        Public-key cryptography standards     3-DES, DSA, RSA, MD-5,
                                                                SHA-1, PKCS
                        Secure hypertext transfer protocol    S-HTTP
                        Authentication of directory users     X.509/ISO/IEC 9594-8:2000
                        Security protocol for privacy on      SSL, TLS, SET
                          Internet/transport security
  Digital signature     Advanced encryption standard/PKI/     X.509, RSA BSAFE
    and encryption        digital certificates, XML             SecurXML-C, DES, AES,
                          digital signatures                    DSS/DSA, EESSI, ISO 9xxx,
                                                                SHA/SHS, XML Digital
                                                                Signatures (XMLDSIG),
                                                                XML Encryption (XMLENC),
                                                                XML Key Management
                                                                Specification (XKMS)
  Login and             Authentication of a user's right      SAML, Liberty Alliance,
    authentication        to use system or network              FIPS 112
                          resources
  Firewall and          Security of local, wide, and          Secure Data Exchange (SDE)
    system security       metropolitan area networks            protocol for IEEE 802,
                                                                ISO/IEC 10164
Since their establishment, they have become the basis for many formal
standards and are implemented widely.
In general, PKCS are security specifications produced by RSA Laboratories in
cooperation with secure systems developers worldwide for the purpose of accelerating the deployment of public-key cryptography. In fact, worldwide contributions
from the PKCS series have become part of many formal and de facto standards,
including ANSI X9 documents, PKIX, SET, S/MIME, and SSL.
2.4.1.2 The Standards for Interoperable Secure MIME (S/MIME)
S/MIME (Secure Multipurpose Internet Mail Extensions) is a specification for
secure electronic messaging. It came to address a growing problem of e-mail interception and forgery at the time of increasing digital communication. So, in 1995,
several software vendors got together and created the S/MIME specification with
the goal of making it easy to secure messages from prying eyes.
It works by building a security layer on top of the industry standard MIME protocol based on PKCS. The use of PKCS avails the user of S/MIME with immediate
privacy, data integrity, and authentication of an e-mail package. This has given the
standard a wide appeal, leading S/MIME to move beyond just e-mail. Software vendors, including Microsoft, Lotus, and Banyan, as well as online electronic commerce services, are already using S/MIME.
2.4.1.3 Federal Information Processing Standards (FIPS)
Federal Information Processing Standards (FIPS) are National Institute of Standards
and Technology (NIST)-approved standards for advanced encryption. These are US
federal government standards and guidelines in a variety of areas in data processing.
They are recommended by NIST to be used by US government organizations and
others in the private sector to protect sensitive information. They range from FIPS
31 issued in 1974 to current FIPS 198.
2.4.1.4 Secure Sockets Layer (SSL)
SSL is an encryption standard for most Web transactions. In fact, it is becoming the
most popular type of e-commerce encryption. Most conventional intranet and
extranet applications would typically require a combination of security mechanisms
that include:
• Encryption
• Authentication
• Access control
SSL provides the encryption component implemented within the TCP/IP protocol.
Developed by Netscape Communications, SSL provides secure Web client and
server communications, including encryption, authentication, and integrity checking
for a TCP/IP connection.
2.4.1.5 Web Services Security Standards
In order for Web transactions such as e-commerce to really take off, customers will
need to see an open architectural model backed up by a standard-based security
framework. Security players, including standards organizations, must provide
that open model and a framework that is interoperable, that is, as vendor-neutral as
possible, and able to resolve critical, often sensitive, issues related to security. The
security framework must also include Web interoperability standards for access
control, provisioning, biometrics, and digital rights.
To meet the challenges of Web security, two rival industry standards bodies are developing new standards for XML digital signatures: XML Encryption, XML Signature, and the eXtensible Key Management Specification (XKMS) from the World Wide Web Consortium (W3C), and the BSAFE SecurXML-C software development kit (SDK), for implementing XML digital signatures, from rival RSA Security. In addition, RSA offers the SAML (Security Assertion Markup Language) specification, an XML framework for exchanging authentication
and authorization information. It is designed to enable secure single sign-on across
portals within and across organizations.
2.4.2 Security Standards Based on Size/Implementation
If the network is small or it is a small organization such as a university, for example,
security standards can be spelled out as either the organization’s security policy or
its best practices on the security of the system, including the physical security of
equipment, system software, and application software.
• Physical security. This emphasizes the need for security of computers running
the Web servers and how these machines should be kept physically secured in a
locked area. Standards are also needed for backup storage media like tapes and
removable disks.
• Operating systems. The emphasis here is on privileges and number of accounts,
and security standards are set based on these. For example, the number of users
with most privileged access like root in UNIX or Administrator in NT should be
kept to a minimum. Set standards for privileged users. Keep to a minimum the
number of user accounts on the system. State the number of services offered to
client computers by the server, keeping them to a minimum. Set a standard for
authentication such as user passwords and for applying security patches.
• System logs. Logs always contain sensitive information such as dates and times
of user access. Logs containing sensitive information should be accessible only
to authorized staff and should not be publicly accessible. Set a standard specifying who may view and analyze logs, and when.
• Data security. Set a standard for dealing with files that contain sensitive data. For example, files containing sensitive data should be encrypted wherever possible using strong encryption or should be transferred, as soon as possible and practical, to a secured system that does not provide public services (a minimal sketch of file encryption follows this list).
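One way to meet such a standard is to encrypt sensitive files with an authenticated symmetric cipher. The following is a minimal sketch, assuming the third-party Python cryptography package is installed; the file name is illustrative:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # store the key separately from the data
    f = Fernet(key)

    # Encrypt the sensitive file (the file name is illustrative)
    with open("payroll.csv", "rb") as fin:
        ciphertext = f.encrypt(fin.read())
    with open("payroll.csv.enc", "wb") as fout:
        fout.write(ciphertext)

    # Decryption requires the key and fails loudly if the data was tampered with
    plaintext = f.decrypt(ciphertext)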
As an example, Table 2.3 shows how such standards may be set.
2.4.3 Security Standards Based on Interests
In many cases, institutions and government agencies choose to pick a security standard
based solely on the interest of the institution or the country. Table 2.4 below shows
some security standards based on interest, and the subsections following the table also
show security best practices and security standards based more on national interests.
Table 2.3 Best security practices for a small organization
Application area                       Security standards
Operating systems                      Unix, Linux, Windows, etc.
Virus protection                       Norton
Email                                  PGP, S/MIME
Firewalls                              –
Telnet and FTP terminal applications   SSH (secure shell)
Table 2.4 Interest-based security standards
Area of application   Service                              Security standard
Banking               Security within banking IT systems   ISO 8730, ISO 8732, ISO/TR 17944
Financial             Security of financial services       ANSI X9.x, ANSI X9.xx
2.4.3.1 British Standard 799 (BS 7799)
The BS 7799 standard outlines a code of practice for information security management that further helps to determine how to secure network systems. It puts forward
a common framework that enables companies to develop, implement, and measure
effective security management practice and provide confidence in intercompany
trading. BS 7799 was first written in 1993 but was not officially published until 1995; it was subsequently issued as the international standard BS ISO/IEC 17799:2000 in December 2000.
2.4.3.2 Orange Book
This is the US Department of Defense Trusted Computer System Evaluation Criteria
(DOD-5200.28-STD) standard known as the Orange Book. For a long time, it has
been the de facto standard for computer security used in government and industry,
but as we will see in Chap. 15, other standards have now been developed to either
supplement it or replace it. First published in 1983, it belongs to a collection of security evaluation documents known as the "Rainbow Series."
2.4.3.3 Homeland National Security Awareness
After the September 11, 2001, attack on the United States, the government created
a new cabinet department of Homeland Security to be in charge of all national
security issues. The Homeland Security department created a security advisory
system made up of five levels, ranging from green (low risk) to red (severe risk) for heightened security. Figure 2.1 shows these levels.
2.4.4 Security Best Practices
As you have noticed from our discussion, there is a rich repertoire of security standards and tools on the system and information security landscape, because as technology evolves, the security situation becomes more complex, and it grows more so every
day. With these changes, however, some trends and approaches to security remain
the same. One of these constants is having a sound strategy of dealing with the
changing security landscape. Developing such a security strategy involves keeping
an eye on the reality of the changing technology scene and rapidly increasing
security threats. To keep abreast of all these changes, security experts and security
managers must know how and what to protect and what controls to put in place and
at what time. It takes security management, planning, policy development, and the
design of security procedures. It is important to remember, and definitely understand, that no procedure, policy, or technology, however much you like and trust it, will ever be 100 % effective. It is therefore important for any company to have a designated security person, preferably a security program officer or a chief security officer (CSO) under the chief information officer (CIO), to be responsible for security best practices. Here are some examples of best practices:

Fig. 2.1 Department of Homeland Security Awareness Levels [7]
Commonly Accepted Security Practices and Regulations (CASPR): Developed
by the CASPR Project, this effort aims to provide a set of best practices that can be
universally applied to any organization regardless of industry, size, or mission. Such
best practices would, for example, come from the world’s experts in information
security. CASPR distills the knowledge into a series of papers and publishes them
so they are freely available on the Internet to everyone. The project covers a wide area,
including operating system and system security, network and telecommunication
security, access control and authentication, infosecurity management, infosecurity
auditing and assessment, infosecurity logging and monitoring, application security,
application and system development, and investigations and forensics. In order to
distribute their papers freely, the founders of CASPR use the open source movement
as a guide, and they release the papers under the GNU Free Document License to
make sure they and any derivatives remain freely available.
Control Objectives for Information and (Related) Technology (COBIT): Developed
by IT auditors and made available through the Information Systems Audit and
Control Association, COBIT provides a framework for assessing a security program.
COBIT is an open standard for control of information technology. The IT Governance
Institute has, together with worldwide industry experts, analysts, and academics,
developed new definitions for COBIT that consist of maturity models, critical
success factors (CSFs), key goal indicators (KGIs), and key performance indicators
(KPIs). COBIT was designed to help three distinct audiences [5]:
• Management who need to balance risk and control investment in an often
unpredictable IT environment
• Users who need to obtain assurance on the security and controls of the IT services
upon which they depend to deliver their products and services to internal and
external customers
• Auditors who can use it to substantiate their opinions and/or provide advice to
management on internal controls
Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)
by Carnegie Mellon’s CERT Coordination Center: OCTAVE is an approach for
self-directed information security risk evaluations that [6]:
• Puts organizations in charge
• Balances critical information assets, business needs, threats, and vulnerabilities
• Measures the organization against known or accepted good security practices
• Establishes an organization-wide protection strategy and information security risk mitigation plans
In short, it provides measures based on accepted best practices for evaluating
security programs. It does this in three phases:
• First, it determines the information assets that must be protected.
• Second, it evaluates the technology infrastructure to determine whether it can protect those assets and how vulnerable it is, and it defines the risks to critical assets.
• Third, it uses good security practices to establish an organization-wide protection strategy and mitigation plans for specific risks to critical assets.
General Best Practices – Matthew Putvinski, in his article “IT Security Series Part 1: Information Security Best Practices” [7], discusses best practices under the following general categories:
• Chief information security officer or designate – establish the need for a designated security officer to oversee security-related issues in the enterprise, because the lack of a person responsible for security in any organization means that the organization does not give information security priority.
• End user – the security guidelines here must be contained in the organization's security policy, spelling out what the organization's end users must and must not do when dealing with the organization's information in general and computing services in particular. As we move to miniature mobile devices, and if the policy is to allow bring your own device (BYOD), specific data handling policies must be in place.
• Software updates and patches – the organization's security policy must take a clear stance on how the organization will apply software security patches and upgrades and on the frequency of updates.
• Vendor management – if the organization is using software provided by third-party individuals or organizations as vendors, care must be taken to ensure that any of the organization's confidential information provided to vendors, for example to help identify a suitable software tool, is well documented, indicating what was shared and with whom.
• Physical security – this is squarely a security policy issue, specifically spelling out the physical safeguards required to protect the organization's information and data. These include access to offices and digital equipment and when and where information is stored and destroyed. We will discuss more of this in the coming chapters.
• The following areas are also security policy issues:
– Data classification and retention
– Password requirements and guidelines
– Wireless networking
– Mobile device usage and access
– Employee awareness training
– Incident response
Exercises
1. What is security and what is information security? What is the difference?
2. It has been stated that security is a continuous process; what are the states in
this process?
3. What are the differences between symmetric and asymmetric key systems?
4. What is PKI? Why is it so important in information security?
5. What is the difference between authentication and nonrepudiation?
6. Why is there a dispute between digital nonrepudiation and legal nonrepudiation?
7. Virtual security seems to work in some systems. Why is this so? Can you apply
it in a network environment? Support your response.
8. Security best practices are security guidelines and policies aimed at enhancing
system security. Can they work without known and proven security mechanisms?
9. Does information confidentiality imply information integrity? Explain your response.
10. What are the best security mechanisms to ensure information confidentiality?
Advanced Exercises
1. In the chapter, we have classified security standards based on industry, size, and
mission. What other classifications can you make and why?
2. Most of the encryption standards that are being used such as RSA and DES
have not been formally proven to be safe. Why then do we take them to be
secure – what evidence do we have?
3. IPSec provides security at the network layer. What other security mechanism is
applicable at the network layer? Do network layer security solutions offer
better security?
4. Discuss two security mechanisms applied at the application layer. Are they
safer than those applied at the lower network layer? Support your response.
5. Are there security mechanisms applicable at the transport layer? Are they safer?
6. Discuss the difficulties encountered in enforcing security best practices.
7. Some security experts do not believe in security policies. Do you? Why or
why not?
8. Security standards are changing daily. Is it wise to pick a security standard
then? Why or why not?
9. If you are an enterprise security chief, how would you go about choosing a
security best practice? Is it good security policy to always use a best security
practice? What are the benefits of using a best practice?
10. Why is it important to have a security plan despite the various views of security experts concerning its importance?
References
1. Kizza JM (2003) Social and ethical issues in the information age, 2nd edn. Springer, New York
2. Scherphier A. CS596 client–server programming: security. http://www.sdsu.edu/cs596/security.html
3. Mercuri R, Neumann P (2003) Security by obscurity. Commun ACM 46(11):160
4. McCullagh A, Caelli W. Non-repudiation in the digital environment. http://www.firstmonday.dk/issues/issue5_8/mccullagh/index.html#author
5. COBIT: a practical toolkit for IT governance. http://www.ncc.co.uk/ncc/myitadviser/archive/issue8/business_processes.cfm
6. OCTAVE: information security risk evaluation. http://www.cert.org/octave/
7. Putvinski M. IT security series part 1: information security best practices. http://www.corporatecomplianceinsights.com/information-security-best-practices
Part II
Security Issues and Challenges in the Traditional Computer Network

3 Security Motives and Threats to Computer Networks

3.1 Introduction
In February 2002, the Internet security watch group CERT Coordination Center first disclosed to the global audience that global networks, including the Internet, phone systems, and the electrical power grid, are vulnerable to attack because of a weakness in the programming of a small but key network component. The component, Abstract Syntax Notation One (ASN.1), is a data description notation used widely in the Simple Network Management Protocol (SNMP).
There was a widespread fear among government, networking manufacturers,
security researchers, and IT executives because the component is vital in many
communication grids, including national critical infrastructures such as parts of the
Internet, phone systems, and the electrical power grid. These networks were vulnerable to disruptive buffer overflow and malformed packet attacks.
This example illustrates but one of the many potential incidents that can cause
widespread fear and panic among government, networking manufacturers, security
researchers, and IT executives when they think of the consequences of what might
happen to the global networks.
The number of threats is rising daily, yet the time window to deal with them is
rapidly shrinking. Hacker tools are becoming more sophisticated and powerful.
Currently, the average time between the announcement of a vulnerability and the appearance of an exploit for it in the wild is getting shorter and shorter.
Traditionally, security has been defined as a process to prevent unauthorized
access, use, alteration, theft, or physical damage to an object through maintaining
high confidentiality and integrity of information about the object and making information about the object available whenever needed. However, there is a common
fallacy, taken for granted by many, that a perfect state of security can be achieved;
they are wrong. There is no such thing as a secure state of any object, tangible or not,
because no such object can ever be in a perfectly secure state and still be useful. An
object is secure if the process can maintain its highest intrinsic value. Since the
intrinsic value of an object depends on a number of factors, both internal and
external to the object during a given time frame, an object is secure if the object
assumes its maximum intrinsic value under all possible conditions. The process of
security, therefore, strives to maintain the maximum intrinsic value of the object at
all times.
Information is an object. Although it is an intangible object, its intrinsic value
can be maintained in a high state, thus ensuring that it is secure. Since our focus in
this book is on global computer network security, we will view the security of this
global network as composed of two types of objects: the tangible objects such as the
servers, clients, and communication channels and the intangible object such as
information that is stored on servers and clients and that moves through the
communication channels.
Ensuring the security of the global computer networks requires maintaining the
highest intrinsic value of both the tangible objects and information – the intangible
one. Because of both internal and external forces, it is not easy to maintain the highest level of the intrinsic value of an object. These forces constitute a security threat
to the object. For the global computer network, the security threat is directed to the
tangible and the intangible objects that make up the global infrastructure such as
servers, clients, communication channels, files, and information.
The threat itself comes in many forms, including viruses, worms, distributed
denial of service, and electronic bombs, and derives from many motives, including revenge, personal gain, hate, and joy rides, to name but a few.
3.2 Sources of Security Threats
The security threat to computer systems springs from a number of factors, including:
• Weaknesses in the network infrastructure and communication protocols that create an appetite and a challenge to the hacker mind
• The rapid growth of cyberspace into a vital global communication and business network on which international commerce and business transactions are increasingly performed and to which many national critical infrastructures are connected
• The growth of the hacker community, whose members are usually experts at gaining unauthorized access into systems that run not only companies and governments but also critical national infrastructures
• Vulnerabilities in operating system protocols whose services run the computers on the communication network
• The insider effect, resulting from workers who steal and sell company databases and mailing lists or even confidential business documents
• Social engineering
• Physical theft, from within organizations, of items such as laptop and handheld computers with powerful communication technology and potentially sensitive information
• Security as a moving target
3.2.1 Design Philosophy
Although the design philosophy on which both the computer network infrastructure
and communication protocols were built has tremendously boosted its cyberspace
development, the same design philosophy has been a constant source of the many
ills plaguing cyberspace. The growth of the Internet and cyberspace in general was
based on an open architecture work in progress philosophy. This philosophy
attracted the brightest minds to get their hands dirty and contribute to the infrastructure and protocols. With many contributing their best ideas for free, the Internet
grew in leaps and bounds. This philosophy also helped the spirit of individualism
and adventurism, both of which have driven the growth of the computer industry
and underscored the rapid and sometimes motivated growth of cyberspace.
Because the philosophy was not based on clear blueprints, new developments
and additions came about as reactions to the shortfalls and changing needs of a
developing infrastructure. The lack of a comprehensive blueprint and the demand-driven design and development of protocols are causing the ever-present weak
points and loopholes in the underlying computer network infrastructure and
protocols.
In addition to the philosophy, the developers of the network infrastructure and
protocols also followed a policy to create an interface that is as user-friendly,
efficient, and transparent as possible, so that users at all education levels can use it without being aware of the inner workings of the networks and therefore without concern for the details.
The designers of the communication network infrastructure thought it was better
this way if the system is to serve as many people as possible. Making the interface
this easy and far removed from the details, though, has its own downside in that the
user never cares about and pays very little attention to the security of the system.
Like a magnet, the policy has attracted all sorts of people who exploit the network’s vulnerable and weak points in search of a challenge, adventurism, fun, and
all forms of personal gratification.
3.2.2 Weaknesses in Network Infrastructure and Communication Protocols
Compounding problems created by the design philosophy and policy are the weaknesses in the communication protocols. The Internet is a packet network that works
by breaking the data to be transmitted into small individually addressed packets that
are downloaded on the network’s mesh of switching elements. Each individual
packet finds its way through the network with no predetermined route, and the packets are reassembled to form the original message by the receiving element. To work
successfully, packet networks need a strong trust relationship that must exist among
the transmitting elements.
As packets are disassembled, transmitted, and reassembled, the security of each
individual packet and the intermediary transmitting elements must be guaranteed.
This is not always the case in the current protocols of cyberspace. There are areas
where, through port scans, determined users have managed to intrude, penetrate,
fool, and intercept the packets.
The two main communication protocols on each server in the network, UDP and
TCP, use port numbers to identify higher-layer services. Each higher-layer service
Fig. 3.1 A three-way handshake
on a client uses a unique port number to request a service from the server, and each
server uses a port number to identify the service needed by a client. The cardinal
rule of a secure communication protocol in a server is never to leave any port open
in the absence of a useful service. If no such service is offered, its port should never
be open. Even if the service is offered by the server, its port should never be left
open unless it is legitimately in use.
In the initial communication between a client and a server, the client addresses
the server via a port number in a process called a three-way handshake. The three-way handshake, when successful, establishes a TCP virtual connection between the
server and the client. This virtual connection is required before any communication
between the two can begin. The process begins by a client/host sending a TCP segment with the synchronize (SYN) flag set; the server/host responds with a segment
that has the acknowledge valid (ACK) and SYN flags set, and the first host responds
with a segment that has only the ACK flag set. This exchange is shown in Fig. 3.1.
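For illustration, the first two steps of the handshake can be observed with a packet-crafting tool. The following is a minimal sketch using the third-party Python library scapy; the target address is a placeholder, and such probes should only be sent to hosts you are authorized to test:

    from scapy.all import IP, TCP, sr1

    target = "192.0.2.10"  # placeholder address; probe only hosts you own

    # Step 1: the client sends a segment with the SYN flag set
    syn = IP(dst=target) / TCP(dport=80, flags="S", seq=1000)

    # Step 2: a listening server should answer with both SYN and ACK set ("SA")
    reply = sr1(syn, timeout=2, verbose=False)

    if reply is not None and reply.haslayer(TCP) and reply[TCP].flags == "SA":
        # Step 3 would be an ACK from the client, completing the handshake
        print("Received SYN/ACK: the handshake would complete.")
    else:
        print("No SYN/ACK: port closed, filtered, or host unreachable.")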
The three-way handshake suffers from a half-open socket problem when the server
trusts the client that originated the handshake and leaves its port door open for further communication from the client.
As long as the half-open port remains open, an intruder can enter the system
because while one port remains open, the server can still entertain other three-way
handshakes from other clients that want to communicate with it. Several half-open
ports can lead to network security exploits involving both TCP and UDP: Internet
Protocol spoofing (IP spoofing), in which IP addresses of the source element in the
data packets are altered and replaced with bogus addresses, and SYN flooding, where the server is overwhelmed by spoofed packets sent to it.
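A defender can get a rough signal of SYN flooding by counting connections stuck in the half-open state. The following is a minimal sketch, assuming the third-party Python library psutil; the threshold is an arbitrary illustration, and listing connections may require elevated privileges on some platforms:

    import psutil

    # Count TCP connections that are half-open: the server has sent SYN/ACK
    # but has never received the final ACK of the three-way handshake
    half_open = [c for c in psutil.net_connections(kind="tcp")
                 if c.status == psutil.CONN_SYN_RECV]

    print(len(half_open), "half-open TCP connections")
    if len(half_open) > 100:  # arbitrary illustrative threshold
        print("Unusually many half-open sockets: possible SYN flood.")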
In addition to the three-way handshake, ports are used widely in network communication. There are well-known ports used by processes that offer services. For
example, ports 0 through 1023 are used widely by system processes and other
highly privileged programs. This means that if access to these ports is compromised, the intruder can get access to the whole system. Intruders find open ports via
port scans. The two examples below from G-Lock Software illustrate how a port
scan can be made [1]:
• TCP connect() scanning is the most basic form of TCP scanning. An attacker's host is directed to issue a connect() system call to a list of selected ports on the target machine. If any of these ports is listening, the connect() system call will succeed; otherwise, the port is unreachable and the service is unavailable (a minimal sketch of such a scan follows this list).
• UDP Internet Control Message Protocol (ICMP) port unreachable scanning is
one of the few UDP scans. Recall from Chap. 1 that UDP is a connectionless
protocol; so, it is harder to scan than TCP because UDP ports are not required to
respond to probes. Most implementations generate an ICMP port-unreachable
error when an intruder sends a packet to a closed UDP port. When this response
does not come, the intruder has found an active port.
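The connect() scan described in the first example takes only a few lines of Python using the standard socket module. This is an illustrative sketch with a placeholder host; scanning machines without authorization may be illegal:

    import socket

    target = "192.0.2.10"  # placeholder; scan only hosts you are authorized to test
    for port in range(1, 1024):  # the well-known, privileged port range
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        # connect_ex() returns 0 when the underlying connect() call succeeds,
        # i.e., when something is listening on the port
        if s.connect_ex((target, port)) == 0:
            print("Port", port, "is open")
        s.close()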
In addition to port number weaknesses usually identifiable via port scans, both
TCP and UDP suffer from other weaknesses.
Packet transmissions between network elements can be intercepted, and their
contents altered, such as in the initial sequence number attack. Sequence numbers are
integer numbers assigned to each transmitted packet, indicating their order of arrival
at the receiving element. Upon receipt of the packets, the receiving element acknowledges them in a two-way communication session during which both transmitting elements talk to each other simultaneously in full duplex.
In the initial sequence number attack, the attacker intercepts the communication
session between two or more communicating elements and then guesses the next
sequence number in a communication session. The intruder then slips the spoofed
IP addresses into the packets transmitted to the server. The server sends an acknowledgment to the spoofed clients. Infrastructure vulnerability attacks also include
session attacks, packet sniffing, buffer overflow, and session hijacking. These
attacks are discussed in later chapters.
The infrastructure attacks we have discussed so far are of the penetration type
where the intruder physically enters the system infrastructure, either at the transmitting element or in the transmitting channel levels, and alters the content of packets.
In the next set of infrastructure attacks, a different approach of vulnerability
exploitation is used. This is the distributed denial of services (DDoS).
The DDoS attacks are attacks that are generally classified as nuisance attacks in
the sense that they simply interrupt the services of the system. System interruption
can be as serious as destroying a computer’s hard disk or as simple as using up all
the available memory of the system. DDoS attacks come in many forms, but the
most common are the following: smurfing, ICMP flood, and ping of death attacks.
The “smurf” attack utilizes the broken down trust relationship created by IP
spoofing. An offending element sends a large amount of spoofed ping packets
containing the victim’s IP address as the source address. Ping traffic, also called
Protocol Overview Internet Control Message Protocol (ICMP) in the Internet community, is used to report out-of-band messages related to network operation or
mis-operation such as a host or entire portion of the network being unreachable,
owing to some type of failure. The pings are then directed to a large number of
network subnets, a subnet being a small independent network such as a LAN. If all
the subnets reply to the victim address, the victim element receives a high rate of
requests from the spoofed addresses as a result and the element begins buffering
these packets. When the requests come at a rate exceeding the capacity of the queue,
the element generates ICMP Source Quench messages meant to slow down the
sending rate. These messages are then sent, supposedly, to the legitimate sender of
the requests. If the sender is legitimate, it will heed the requests and slow down the
rate of packet transmission. However, in cases of spoofed addresses, no action is
taken because all sender addresses are bogus. The situation in the network can easily
deteriorate further if each routing device itself takes part in smurfing.
We have outlined a small part of a list of several hundred types of known infrastructure vulnerabilities that are often used by hackers to either penetrate systems
and destroy, alter, or introduce foreign data into the system or disable the system
through port scanning and DDoS. Although for these known vulnerabilities, equipment manufacturers and software producers have done a considerable job of issuing
patches as soon as a loophole or a vulnerability is known, quite often, as was
demonstrated in the Code Red fiasco, not all network administrators adhere to the
advisories issued to them.
Furthermore, new vulnerabilities are being discovered almost every day, either by
hackers in an attempt to show their skills by exposing these vulnerabilities or by
users of new hardware or software such as what happened with the Microsoft
Windows IIS in the case of the Code Red worm. Also, the fact that most of these exploits use known vulnerabilities is indicative of how poorly we patch known vulnerabilities, even when the solutions are provided.
3.2.3 Rapid Growth of Cyberspace
There is always a security problem in numbers. Since its beginning as ARPANET in the late 1960s, the Internet has experienced phenomenal growth, especially in the
last 10 years. There was an explosion in the number of users, which in turn ignited
an explosion in the number of connected computers.
Just less than 20 years ago in 1985, the Internet had fewer than 2,000 computers
connected, and the corresponding number of users was in the mere tens of thousands.
However, by 2001, the figure had jumped to about 109 million hosts, according to
Tony Rutkowski at the Center for Next Generation Internet, an Internet Software
Consortium. This number represents a significant new benchmark for the number of
Internet hosts. At a reported current annual growth rate of 51 % over the past 2 years,
this shows continued strong exponential growth, with an estimated growth of up to
one billion hosts if the same growth rate is sustained [2].
This is a tremendous growth by all accounts. As it grew, it brought in more and
more users with varying ethical standards, added more services, and created more
responsibilities. By the turn of the century, many countries found their national critical infrastructures firmly intertwined in the global network. An interdependence
between humans and computers and between nations on the global network has
been created that has led to a critical need to protect the massive amount of information stored on these network computers. The ease of use of and access to the Internet
and large quantities of personal, business, and military data stored on the Internet
were slowly turning into a massive security threat not only to individuals and business interests but also to national defenses.
As more and more people enjoyed the potential of the Internet, more and more
people with dubious motives were also drawn to the Internet because of its enormous wealth of everything they were looking for. Such individuals have posed a
potential risk to the information content of the Internet, and such a security threat
has to be dealt with.
Statistics from the security company Symantec show that Internet attack activity
is currently growing by about 64 % per year. The same statistics show that during
the first 6 months of 2002, companies connected to the Internet were attacked, on
average, 32 times per week compared to only 25 times per week in the last 6 months
of 2001. Symantec reports between 400 and 500 new viruses every month and about
250 vulnerabilities in computer programs [3].
In fact, the rate at which the Internet is growing is becoming the greatest security
threat ever. Security experts are locked in a deadly race with these malicious hackers, a race that at the moment the security community appears to be losing.
3.2.4 The Growth of the Hacker Community
Although other factors contributed significantly to the security threat, in the general
public view, the number one contributor to the security threat of computer and telecommunication networks more than anything else is the growth of the hacker community. Hackers have managed to bring this threat into news headlines and people’s
living rooms through the ever-increasing and sometimes devastating attacks on
computer and telecommunication systems using viruses, worms, DDoS, and other
security attacks.
Until recently, most hacker communities worked underground, forming global groups like some of those shown in Table 3.1. Today, hackers are no longer considered as harmful to computer networks as they used to be; they are now being used by governments and organizations to do the opposite of what they were once known for, defending national critical networks and hardening company networks. Increasingly, hacker groups and individuals are being used in clandestine campaigns of attacking other nations. So hacker groups and individuals are no longer as much under the cloud of suspicion of causing mayhem to computer networks, and many are now in the open.
Table 3.1 Global hacker groups
A: Anontune, Anonymous
C: Chaos Computer Club, Cyberwarfare in the People's Republic of China
D: Decocidio, Digital DawgPound
G: Gay Nigger Association of America, Genocide2600, Global kOS, GlobalHell, Goatse Security
H: HacDC, Hack Canada, Hacker Dojo, Hacktivismo, Hackweiser, Harford Hackerspace, Helith, Honker Union, The Humble Guys
I: Infonomicon, IPhone Dev Team
L: L0pht, Level Seven, London Hackspace, LulzRaft, LulzSec
M: Malicious Security, Milw0rm, Moonlight Maze
N: Network Crack Program Hacker (NCPH) Group
P: P.H.I.R.M., Phone Losers of America, Port7Alliance, Pumping Station: One
R: Red Hacker Alliance
S: Securax
T: Team Elite, TeaMp0isoN, TESO, The 414s, The Shmoo Group, The Unknowns (hacking group), Titan Rain, TOG (hackerspace)
U: UGNazi, UXu
W: W00w00, World of Hell
Reference source: http://en.wikipedia.org/wiki/Category:Hacker_groups
In fact, hacker Web sites like www.hacker.org, with messages like “The
hacker explores the intersection of art and science in an insatiable quest to understand and shape the world around him. We guide you on this journey,” are legitimately popping up everywhere.
However, for a long time, the general public, computer users, policy makers, parents,
and law makers have watched in bewilderment and awe as the threat to their individual and national security has grown to alarming levels as the sizes of the global
networks have grown and national critical infrastructures have become more and
more integrated into this global network. In some cases, the fear from these attacks
reached hysterical proportions, as demonstrated in the following major attacks that
we have rightly called the big “bungs.”
3.2.4.1 The Big “Bungs”
The Internet Worm
On November 2, 1988, Robert T. Morris, Jr., a computer science graduate student at
Cornell University, using a computer at MIT, released what he thought was a benign
experimental, self-replicating, and self-propagating program on the MIT computer
network. Unfortunately, he did not debug the program well before running it. He
soon realized his mistake when the program he thought was benign went out of
control. The program started replicating itself and at the same time infecting more
computers on the network at a faster rate than he had anticipated. There was a bug
in his program. The program attacked many machines at MIT and very quickly went
beyond the campus to infect other computers around the country. Unable to stop his
own program from spreading, he sought a friend’s help. He and his friend tried
unsuccessfully to send an anonymous message from Harvard over the network,
instructing programmers how to kill the program – now a worm – and prevent its
reinfection of other computers. The worm spread like wildfire to infect some 6,000
networked computers, a whopping number in proportion to the 1988 size of the
Internet, clogging government and university systems. In about 12 h, programmers
in affected locations around the country succeeded in stopping the worm from
spreading further. It was reported that Morris took advantage of a hole in the debug
mode of the Unix sendmail program. Unix then was a popular operating system that
was running thousands of computers on university campuses around the country.
Sendmail runs on Unix to handle e-mail delivery.
Morris was apprehended a few days later, taken to court, sentenced to 3 years
probation with a $10,000 fine and 400 h of community service, and dismissed from
Cornell. Morris’s worm came to be known as the Internet worm. The estimated cost
of the Internet worm varies from $53,000 to as high as $96 million, although the
exact figure will never be known [4].
Michelangelo Virus
The world first heard of the Michelangelo virus in 1991. The virus primarily affected PCs running MS-DOS 2.xx and higher versions. Although it overwhelmingly affected PCs running DOS operating systems, it could also affect PCs running other operating systems such as UNIX, OS/2, and Novell. It affected computers by infecting floppy
disk boot sectors and hard disk master boot records. Once in the boot sectors of the
bootable disk, the virus then installed itself in memory from where it would infect
the partition table of any other disk on the computer, whether a floppy or a hard disk.
For several years, a rumor was rife, believed by many to be a scare tactic by antivirus software manufacturers, that the virus would be triggered on March 6 of every year to commemorate the birth date of the famous Italian painter. But in real terms, the actual impact of the virus was minimal. However, because of the widespread publicity it received, the Michelangelo virus became known as one of the most disastrous viruses ever, with damages estimated in the millions of dollars.
Pathogen, Queeg, and Smeg Viruses
Between 1993 and April 1994, Christopher Pile, a 26-year-old resident of Devon in
Britain, commonly known as the “Black Baron” in the hacker community, wrote
three computer viruses: Pathogen, Queeg, and Smeg, all named after expressions
used in the British Sci-Fi comedy “Red Dwarf.” He used Smeg to camouflage both
Pathogen and Queeg. The camouflage of the two programs prevented most known
antivirus software from detecting the viruses. Pile wrote the Smeg in such a way that
others could also write their own viruses and use Smeg to camouflage them. This
meant that the Smeg could be used as a locomotive engine to spread all sorts of
viruses. Because of this, Pile’s viruses were extremely deadly at that time. Pile used
a variety of ways to distribute his deadly software, usually through bulletin boards
and freely downloadable Internet software used by thousands in cyberspace.
Pile was arrested on May 26, 1995. He was charged with 11 counts that included the creation and release of these viruses, which caused modification and destruction of computer data, and inciting others to create computer viruses. He pleaded guilty to 10 of
the 11 counts and was sentenced to 18 months in prison.
Pile’s case was in fact not the first one as far as creating and distributing computer viruses was concerned. In October 1992, three Cornell University students
were each sentenced to several hundred hours of community service for creating
and disseminating a computer virus. However, Pile’s case was significant in that it
was the first widely covered and published computer crime case that ended in a jail
sentence [5].
Melissa Virus
On March 26, 1999, the global network of computers was greeted with a new virus
named Melissa. Melissa was created by David Smith, a 29-year-old New Jersey
computer programmer. It was later learned that he named the virus after a Florida
stripper.
The Melissa virus was released from an “alt.sex” newsgroup using the America Online (AOL) account of Scott Steinmetz, whose username was “skyroket.”
However, Steinmetz, the owner of the AOL account who lived in the western US
state of Washington, denied any knowledge of the virus, let alone knowing anybody
else using his account. It appeared that Smith had hacked the account to disguise his tracks.
The virus, which spreads via a combination of Microsoft’s Outlook and Word
programs, takes advantage of Word documents to act as surrogates and the users’
e-mail address book entries to propagate itself. The virus then mailed itself to each
entry in the address book in either the original Word document named “list.doc” or
in a future Word document carrying it after the infection. It was estimated that
Melissa affected more than 100,000 e-mail users and caused $80 million in damages during its rampage.
The Y2K Bug
From 1997 to December 31, 1999, the world was gripped by apprehension over one
of the greatest myths and misnomers in history. This was never a bug, a software
bug as we know it, but a myth shrouded in the following story. Decades ago, because
of memory storage restrictions and the expense of storage at the time, computer designers and programmers together made a business decision. They decided to represent the date
field by two digits such as “89” and “93” instead of the usual four digits such as
“1956.” The purpose was noble, but the price was humongous.
The bug, therefore, is as follows: On New Year’s Eve of 1999, when world clocks
were supposed to change over from 31/12/99 to 01/01/00 at 12:00 midnight, many
computers, especially the older ones, were supposed not to know which year it was
since it would be represented by “00.” Many, of course, believed that computers
would then assume anything from year “0000” to “1900,” and this would be
catastrophic.
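The ambiguity is easy to demonstrate: a two-digit year simply does not determine a century, so every system storing one must adopt some pivot convention. Here is a minimal sketch in Python, whose date parser happens to apply the POSIX pivot rule (two-digit years 00–68 map to 2000–2068, and 69–99 map to 1969–1999):

    from datetime import datetime

    # "00" carries no century information; the parser has to guess
    d1 = datetime.strptime("01/01/00", "%d/%m/%y")
    d2 = datetime.strptime("31/12/99", "%d/%m/%y")

    print(d1.year)  # 2000: "00" is taken to mean 2000, not 1900
    print(d2.year)  # 1999: "99" is taken to mean 1999, not 2099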
Because even the people who knew the most were unconvinced about the bug, it was
known by numerous names to suit the believer. Among the names were the following: millennium bug, Y2K computer bug, Y2K, Y2K problem, Y2K crisis, Y2K bug,
and many others.
The good news is that the year 2000 came and went with very few incidents of
one of the most feared computer bugs of our time.
The Goodtimes E-mail Virus
Yet another virus hoax, the Goodtimes virus, was humorous, but it ended up being a
chain e-mail annoying everyone in its path because of the huge amount of “e-mail
virus alerts” it generated. Its humor is embedded in the following prose: Goodtimes
will rewrite your hard drive. Not only that, but it will also scramble any disks that
are even close to your computer. It will recalibrate your refrigerator’s coolness
setting so all your ice cream melts. It will demagnetize the strips on all your credit
cards, make a mess of the tracking on your television, and use subspace field
harmonics to scratch any CD you try to play.
It will give your ex-girlfriend your new phone number. It will mix Kool-Aid into
your fish tank. It will drink all your beer and leave its socks out on the coffee table
when company is coming over. It will put a dead kitten in the back pocket of your
good suit pants and hide your car keys when you are running late for work.
Goodtimes will make you fall in love with a penguin. It will give you nightmares
about circus midgets. It will pour sugar in your gas tank and shave off both your
eyebrows while dating your current girlfriend behind your back and billing the
dinner and hotel room to your Visa card.
It will seduce your grandmother. It does not matter if she is dead. Such is the
power of Goodtimes; it reaches out beyond the grave to sully those things we hold
most dear.
It moves your car randomly around parking lots so you can’t find it. It will kick
your dog. It will leave libidinous messages on your boss’s voice mail in your voice!
It is insidious and subtle. It is dangerous and terrifying to behold. It is also a rather
interesting shade of mauve.
Goodtimes will give you Dutch Elm disease. It will leave the toilet seat up. It will
make a batch of methamphetamine in your bathtub and then leave bacon cooking on
the stove while it goes out to chase gradeschoolers with your new snowblower.
Distributed Denial of Service (DDoS)
On February 7, 2000, a month after the Y2K bug scare and the Goodtimes hoax, the world
woke up to the real thing. This was not a hoax or a myth. On this day, a 16-year-old
Canadian hacker nicknamed “Mafiaboy” launched his distributed denial-of-service
(DDoS) attack. Using the Internet’s infrastructure weaknesses and tools, he
unleashed a barrage of remotely coordinated blitz of GB(s) IP packet requests from
Fig. 3.2 The working of a DDoS attack
selected, sometimes unsuspecting, intermediary servers which, in a coordinated fashion, bombarded, flooded, and eventually knocked out Yahoo servers for
a period of about 3 h. Within 2 days, while technicians at Yahoo and law enforcement agencies were struggling to identify the source of the attacker, on February 9,
2000, Mafiaboy struck again, this time bombarding servers at eBay, Amazon, Buy.com, ZDNet, CNN, E*Trade, and MSN.
The DDoS attack employs a network consisting of a master computer responsible for directing the attacks, the “innocent” computers commonly known as “daemons” used by the master as intermediaries in the attack, and the victim computer – a
selected computer to be attacked. Figure 3.2 shows how this works.
After the network has been selected, the hacker instructs the master node to further instruct each daemon in its network to send several authentication requests to
the selected network nodes, filling up their request buffers. All requests have false
return addresses; so, the victim nodes can’t find the user when they try to send back
the authentication approval. As the nodes wait for acknowledgments, sometimes
even before they close the connections, they are again and again bombarded with
more requests. When the rate of requests exceeds the speed at which the victim node
can take requests, the nodes are overwhelmed and brought down.
The primary objective of a DDoS attack is multifaceted, including flooding a
network to prevent legitimate network traffic from going through the network,
disrupting network connections to prevent access to services between network
nodes, preventing a particular individual network node from accessing either all
network services or specified network services, and disrupting network services to
either a specific part of the network or selected victim machines on the network.
The Canadian judge stated that although the act was done by an adolescent, the motivation of the attack was undeniable and criminal in intent. He, therefore, sentenced Mafiaboy, whose real name was withheld because he was underage,
to serve 8 months in a youth detention center and 1 year of probation after his
release from the detention center. He was also ordered to donate $250 to charity.
Love Bug Virus
On April 28, 2000, Onel de Guzman, a dropout from AMA Computer College in
Manila, Philippines, released a computer virus onto the global computer network.
The virus was first uploaded to the global networks via a popular Internet Relay
Chat program using Impact, an Internet ISP. It was then uploaded to Sky Internet’s
servers, another ISP in Manila, and it quickly spread to global networks, first in Asia
and then Europe. In Asia, it hit a number of companies hard, including the Dow
Jones Newswire and the Asian Wall Street Journal. In Europe, it left thousands of
victims that included big companies and parliaments. In Denmark, it hit TV2 channel and the Danish parliament, and in Britain, the House of Commons fell victim
too. Within 12 h of release, it was on the North American continent, where the US
Senate computer system was among the victims [6].
It spread via Microsoft Outlook e-mail systems as surrogates. It used a rather
sinister approach by tricking the user into opening an e-mail presumably from someone
the user knew (because the e-mail usually came from an address book of someone
the user knew). The e-mail, as seen in Fig. 3.3, requests the user to check the
attached “Love Letter.” The attachment file was in fact a Visual Basic script, which
contained the virus payload. The virus then became harmful when the user opened
the attachment. Once the file was opened, the virus copied itself to two critical system directories and then added triggers to the Windows registry to ensure that it ran
every time the computer was rebooted. The virus then replicated itself, destroying
system files, including Web development files such as “.js” and “.css,” multimedia files
such as JPEG and MP3, searched for log-in names and passwords in the user’s
address book, and then mailed itself again [6].
De Guzman was tracked down within hours of the release of the virus. Security
officials, using a Caller ID of the phone number and ISP used by de Guzman, were
led to an apartment in the poor part of Manila where de Guzman lived.
The virus devastated global computer networks, and it was estimated that it
caused losses ranging between $7 and $20 billion [7].
Palm Virus
In August 2000, an actual Palm virus was released under the name Liberty Trojan horse, the first known malicious program targeting the Palm OS. The Liberty Trojan horse duped some people into downloading a program that erased data.
Fig. 3.3 The Love Bug monitor display
Another Palm virus followed Liberty shortly. On September 21, 2000,
McAfee.com and F-Secure, two of the big antivirus companies, first discovered a
really destructive Palm virus they called Palm OS/Phage. When Palm OS/Phage is
executed, the screen is filled with a dark gray box, and the application is terminated.
The virus then replicates itself to other Palm OS applications.
Wireless device viruses have not been widespread, thanks to the fact that the
majority of Palm OS users do not download programs directly from the Web but via
their desktop and then sync them to their Palm. Because of this, they have virus protection
available to them at either their ISP’s Internet gateway, at the desktop, or at their
corporation.
The appearance of a Palm virus in cyberspace raises many concerns about the
security of cyberspace because PDAs are difficult to check for viruses as they are
not hooked up to a main corporate network. PDAs are moving as users move, making virus tracking and scanning difficult.
Anna Kournikova Virus
On February 12, 2001, global computer networks were hit again by a new virus,
Anna Kournikova, named after the Russian tennis star. The virus was released by
20-year-old Dutchman Jan de Wit, commonly known in the hacker underworld
community as “OnTheFly.” The virus, like the I LOVE YOU virus before it, was a
mass-mailing type. Written in Visual Basic scripting language, the virus spreads by
mailing itself, disguised as a JPEG file named Anna Kournikov, through Microsoft
Windows, Outlook, and other e-mail programs on the Internet.
Fig. 3.4 Anna Kournikov monitor display
The subject line of mail containing the virus bears the following: “Here ya
have;0),” “Here you are;-),” or “here you go;-).” Once opened, the Visual Basic script
copies itself to a Windows directory as “AnnaKournikova.jpg.vbs.” It then mails
itself to all entries in the user’s Microsoft Outlook e-mail address book. Figure 3.4
shows the Anna Kournikov monitor screen display.
Spreading at twice the speed of the notorious “I LOVE YOU” bug, Anna quickly circumnavigated the globe.
Security experts believe Anna was of the type commonly referred to as a “virus
creation kit,” a do-it-yourself program kit that potentially enables anyone to create malicious code.
Code Red: “For one moment last week, the Internet stood still.”1
The Code Red worm was first released on July 12, 2001, from Foshan University in
China, and it was detected the next day, July 13, by senior security engineer Ken Eichman. However, when detected, it was not taken seriously until 4 days later, when engineers at eEye Digital Security cracked the worm code and named it “Code Red” after
staying awake with “Code Red”-labeled Mountain Dew [8]. By this time, the worm
had started to spread, though slowly. Then on July 19, according to Rob Lemos, it is
1. Lemos, Rob. “Code Red: Virulent worm calls into doubt our ability to protect the Net,” CNET News.com, July 27, 2001.
believed that someone modified the worm, fixing a problem with its random-number
generator. The new worm started to spread like wildfire, leaping from
15,000 infections that morning to almost 350,000 infections by 5 p.m. PDT [8].
The worm was able to infect computers because it used a security hole, discovered a month before, in computers using Microsoft’s Internet Information Server
(IIS) in the Windows NT4 and Windows 2000 Index Services. The hole, known as
the Index Server ISAPI vulnerability, allowed an intruder to exploit a buffer overflow in these systems, resulting in one of several outcomes, including Web site defacement and installation of denial-of-service tools. The following
Web defacement – HELLO! Welcome to http://www.worm.com! Hacked By
Chinese! – usually resulted. The Web defacement was done by the worm connecting
to TCP port 80 on a randomly chosen host. If the connection was successful, the
attacking host sent a crafted HTTP GET request to the victim, attempting to exploit
a buffer overflow in the Indexing Service [9].
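For illustration, the worm's request was widely reported to look roughly like the following (heavily truncated here); the long run of repeated filler characters overruns a buffer in the Indexing Service ISAPI extension, and the %u-encoded values that follow carry the worm's code:

    GET /default.ida?NNNNNNNNNNNNNNNNNNNNNNNN[...many more Ns...]
    %u9090%u6858%ucbd3%u7801[...more encoded payload...] HTTP/1.0

The exact filler character and payload differed between variants, but the telltale /default.ida request made infection attempts easy to spot in Web server logs.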
Because Code Red was self-propagating, the victim computer would then send
the same exploit (HTTP GET request) to another set of randomly chosen hosts.
Although Microsoft issued a patch when the security hole was discovered, not
many servers were patched before Code Red hit. Because of the large number of IIS
servers on the Internet, Code Red found the going easy, and at its peak, it hit up to
300,000 servers. But Code Red did not do as much damage as feared; because of its
own design flaw, the worm was quickly brought under control.
SQL Worm
On Saturday, January 25, 2003, the global communication network was hit by the
SQL Worm. The worm, which some refer to as the “SQL Slammer,” spread to computers running Microsoft SQL Server 2000 by exploiting a buffer overflow in the server's Resolution Service, which listens on UDP port 1434. Once in the system, the memory-resident worm immediately began scanning for and infecting other vulnerable servers.
The vulnerability exploited by the Slammer worm preexisted in Microsoft SQL Server 2000 and in fact was discovered 6 months prior to the attack. When the
vulnerability was discovered, Microsoft offered a free patch to fix the problem;
however, the word never got around to all users of the server software.
The worm spread rapidly in networks across Asia, Europe, and the United States
and Canada, shutting down businesses and government systems. However, its
effects were not very serious because of its own weaknesses that included its inability to affect secure servers and its ease of detection.
Hackers View Eight Million Visa/MasterCard, Discover, and American Express Accounts
On Monday, February 17, 2003, the two major credit card companies, Visa and
MasterCard, reported a major infiltration into a third-party payment card processor
by a hacker who gained access to more than five million Visa and MasterCard
accounts throughout the United States. The exposed card information included card numbers and personal information such as social security numbers and credit limits.
The list of the hacker's victims grew by two on Tuesday, February 18, 2003,
when both Discover Financial Services and American Express reported that they
were also victims of the same hacker who breached the security system of a company that processes transactions on behalf of merchants.
While MasterCard and Visa had earlier reported that around 2.2 million and 3.4 million
of their own cards were, respectively, affected, Discover and American Express would
not disclose how many accounts were involved. It is estimated, however, that the number
of affected accounts in the security breach was as high as eight million.
3.2.5 Vulnerability in Operating System Protocol
One area that offers the greatest security threat to global computer systems is the
area of software errors, especially network operating system errors. An operating system plays a vital role not only in the smooth running of the computer system, controlling and providing vital services, but also in the security of the system, by mediating access to vital system resources. A vulnerable operating
system can allow an attacker to take over a computer system and do anything that
any authorized super user can do, such as changing files, installing and running
software, or reformatting the hard drive.
Every OS comes with some security vulnerabilities. In fact many security vulnerabilities are OS specific. Hacker look for OS-identifying information like file
extensions for exploits.
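To illustrate how easily such OS-identifying information leaks, consider the following banner-grabbing sketch (the host address is a hypothetical lab machine; probe only systems you are authorized to test). Many services volunteer their product name and version, which in turn points at the underlying operating system:

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Return whatever a TCP service announces about itself first."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        if port == 80:
            # Web servers answer requests; the Server: header often names
            # the product and version, e.g. "Server: Microsoft-IIS/5.0".
            sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        return sock.recv(1024).decode(errors="replace")

if __name__ == "__main__":
    try:
        print(grab_banner("192.0.2.10", 80))  # hypothetical lab host
    except OSError as err:
        print("no banner:", err)
```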
3.2.6 The Invisible Security Threat: The Insider Effect
Quite often, news media reports suggest that in cases of violent crimes such as murder,
one is more likely to be attacked by someone one does not know. However,
official police and court records show otherwise. This is also the case in network
security. Research data from many reputable agencies consistently show that the
greatest threat to security in any enterprise is the guy down the hall.
In 1997, the accounting firm Ernst & Young interviewed 4,226 IT managers and
professionals from around the world about the security of their networks. From the
responses, 75 % of the managers indicated that they believed authorized users and
employees represent a threat to the security of their systems. Forty-two percent of
the Ernst & Young respondents reported they had experienced external malicious
attacks in the past year, while 43 % reported malicious acts from employees [10].
The inside threat to organizational security comes from one of its own, an
untrustworthy member of the organization. This “insider threat” is a person, possibly
one with privileged access to classified, sensitive, or proprietary data, who uses this
unique opportunity to remove information from the organization and transfer it to
unauthorized outside users.
According to Jack Strauss, president and CEO of SafeCorp, a professional
information security consultancy in Dayton, Ohio, company insiders intentionally
or accidentally misusing information pose the greatest information security threat to
today’s Internet-centric businesses. Strauss believes that it is a mistake for company
security chiefs to neglect to lock the back door to the building, to encrypt sensitive
data on their laptops, or not to revoke access privileges when employees leave the
company [11].
3.2.7 Social Engineering
Besides the security threat from insiders who knowingly and willingly are part of
the security threat, the insider effect can also involve insiders unknowingly being
part of the security threat through the power of social engineering. Social engineering
consists of an array of methods an intruder such as a hacker, whether from within
or outside the organization, can use to gain system authorization by masquerading
as an authorized user of the network. Social engineering can be carried out using a
variety of methods, including physically impersonating an individual known to have
access to the system, or working online, over the telephone, or even in writing. The
infamous hacker Kevin Mitnick used social engineering extensively to break into
some of the nation’s most secure networks, combining solid computer hacking
skills with social engineering to coax information, such as passwords, out of people.
3.2.8 Physical Theft
As the demand for information by businesses to stay competitive and nations to
remain strong heats up, laptop computer and PDA theft is on the rise. There is a
whole list of incidents involving laptop computer theft, such as the reported
disappearance of a laptop used to log incidents of covert nuclear proliferation from a
sixth-floor room in the headquarters of the US State Department in January 2000. In
March of the same year, a British accountant working for MI5, the British national
spy agency, had his laptop computer snatched from between his legs while waiting
for a train at London’s Paddington Station. In December 1999, someone stole a
laptop from the car of Bono, lead singer of the megaband U2; it contained months
of crucial work on song lyrics. And according to the computer-insurance firm
Safeware, some 319,000 laptops were stolen in 1999, at a total cost of more than
$800 million for the hardware alone [12]. Thousands of company executive laptops
and PDAs disappear every year, carrying years of company secrets.
3.3 Security Threat Motives
Although we have seen that security threats can originate from natural disasters and
unintentional human activities, the bulk of cyberspace threats, and the attacks that
follow, originate in illegal or criminal human acts by insiders or outsiders,
recreational hackers, and criminals. The FBI’s foreign counterintelligence
mission has broadly categorized security threats as based on terrorism, military
espionage, economic espionage, the targeting of the National Information
Infrastructure, vendetta and revenge, and hate [13].
3.3.1 Terrorism
Our increasing dependence on computers and computer communication has opened
up a can of worms we now know as electronic terrorism. Electronic terrorism is
used to attack military installations, banking, and many other targets of interest
based on politics, religion, and probably hate. Those who practice this new brand of
terrorism are a new breed of hackers, who no longer view cracking systems as an
intellectual exercise but as a way of gaining from the action. The “new” hacker is a
cracker who knows and is aware of the value of the information that he or she is
trying to obtain or compromise. But cyberterrorism is not only about obtaining
information; it is also about instilling fear and doubt and compromising the integrity
of the data.
Some of these hackers have a mission, usually foreign power sponsored or
foreign power coordinated, that, according to the FBI, may result in violent acts,
dangerous to human life, that violate the criminal laws of the targeted nation or
organization and are intended to intimidate or coerce people so as to influence
policy.
3.3.2 Military Espionage
For generations, countries have been competing for supremacy of one form or
another. During the Cold War, countries competed for military spheres. After it
ended, the espionage turf changed from military aims to gaining access to highly
classified commercial information that would not only let countries know what
others are doing but also might give them either a military or commercial advantage
without spending a great deal of money on the effort. It is not surprising, therefore,
that the spread of the Internet has given a boost and a new lease on life to a dying
Cold War profession. Our high dependency on computers in the national military
and commercial establishments has given espionage new fertile ground. Electronic
espionage has many advantages over its old-fashioned, trench-coated, sunglassed,
and gloved Hitchcock-style cousin. For example, it is less expensive to implement,
it can gain access to places that would be inaccessible to human spies, it saves
embarrassment in case of failed or botched attempts, and it can be carried out at a
place and time of choice.
3.3.3 Economic Espionage
The end of the Cold War was supposed to bring to an end spirited and intensive military espionage. However, in the wake of the end of the Cold War, the United States,
as a leading military, economic, and information superpower, found itself a constant
target of another kind of espionage, economic espionage. In its pure form, economic
espionage targets economic trade secrets which, according to the 1996 US Economic
Espionage Act, are defined as all forms and types of financial, business, scientific,
technical, economic, or engineering information and all types of intellectual property including patterns, plans, compilations, program devices, formulas, designs,
prototypes, methods, techniques, processes, procedures, programs, and/or codes,
whether tangible or not, stored or not, and compiled or not [14]. To enforce this act
and prevent computer attacks targeting American commercial interests, US Federal
Law authorizes law enforcement agencies to use wiretaps and other surveillance
means to curb computer-supported information espionage.
3.3.4 Targeting the National Information Infrastructure
The threat may be foreign power sponsored or foreign power coordinated, directed
at a target country, corporation, establishments, or persons. It may target specific
facilities, personnel, information, or computer, cable, satellite, or telecommunication systems that are associated with the National Information Infrastructure.
Activities may include the following [15]:
• Denial or disruption of computer, cable, satellite, or telecommunication services
• Unauthorized monitoring of computer, cable, satellite, or telecommunication
systems
• Unauthorized disclosure of proprietary or classified information stored within or
communicated through computer, cable, satellite, or telecommunication systems
• Unauthorized modification or destruction of computer programming codes,
computer network databases, stored information, or computer capabilities
• Manipulation of computer, cable, satellite, or telecommunication services resulting in fraud, financial loss, or other federal criminal violations
3.3.5 Vendetta/Revenge
There are many causes that lead to vendettas. The demonstrations at the last World
Trade Organization (WTO) meeting in Seattle, Washington, and subsequent
demonstrations at the Washington, DC, meetings of both the World Bank and the
International Monetary Fund are indicative of the growing discontent of the masses
who are unhappy with big business, multinationals, big governments, and a million
other things. This discontent is driving a new breed of wild, rebellious young people
to hit back at systems that they see as neither solving world problems nor benefiting
all of mankind. Such mass computer attacks are increasingly used as paybacks for
what the attacker or attackers consider to be injustices that need to be avenged.
However, most vendetta attacks are for mundane reasons such as a promotion
denied, a boyfriend or girlfriend taken, an ex-spouse given child custody, and other
situations that may involve family and intimacy issues.
3.3.6 Hate (National Origin, Gender, and Race)
Hate as a motive for security threats originates from an individual or individuals
with a serious dislike of another person or group of persons based on a string of
human attributes that may include national origin, gender, or race, or on mundane
ones such as the manner of speech one uses. Incensed by one or all of these
attributes, the attackers contemplate, threaten, and sometimes carry out attacks of
vengeance often rooted in ignorance.
3.3.7 Notoriety
Many hackers, especially young ones, try to break into a system to prove their
competence and sometimes to show off to their friends that they are intelligent or
superhuman in order to gain respect among their peers.
3.3.8 Greed
Many intruders into company systems do so to gain financially from their acts.
3.3.9 Ignorance
This takes many forms, but quite often it happens when a novice in computer
security stumbles on an exploit or vulnerability and, without knowing or
understanding it, uses it to attack other systems.
3.4 Security Threat Management
Security threat management is a technique used to monitor an organization’s critical
security systems in real time and to review reports from monitoring sensors such as
intrusion detection systems, firewalls, and other scanning sensors. These reviews
help to reduce false positives from the sensors, develop quick response techniques
for threat containment and assessment, correlate and escalate false positives across
multiple sensors or platforms, and develop intuitive analytical, forensic, and
management reports.
As the workplace gets more electronic and critical company information finds its
way out of the manila envelopes and brown folders into online electronic databases,
security management has become a full-time job for system administrators. While
the number of dubious users is on the rise, the number of reported criminal incidents
is skyrocketing, and the reported response time between a threat and a real attack is
down to 20 min or less [15]. To secure company resources, security managers have
to do real-time management. Real-time management requires access to real-time
data from all network sensors.
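The shape of such real-time management can be suggested with a minimal polling sketch; the feed function, event format, and severity scale below are all invented placeholders for whatever an organization’s sensors actually provide:

```python
import time

def read_new_sensor_events():
    """Hypothetical stub. In a real deployment this would pull fresh
    events from IDS/firewall feeds (a message queue or syslog stream)."""
    return []  # the stub returns no events

SEVERITY_THRESHOLD = 7  # escalate anything at or above this (invented scale)

for _ in range(60):  # run for about a minute; a real monitor runs forever
    for event in read_new_sensor_events():
        if event.get("severity", 0) >= SEVERITY_THRESHOLD:
            print("ESCALATE to response team:", event)
    time.sleep(1)  # poll once per second; real systems are usually push-based
```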
Among the techniques used for security threat management are risk assessment
and forensic analysis.
3.4.1 Risk Assessment
Even if several security threats all target the same resource, each threat will pose a
different risk and each will need a different risk assessment. Some will carry low
risk, while others will carry high risk. It is important for the response team to study
the risks as sensor data come in and decide which threat to deal with first.
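One simple way to operationalize this triage, sketched below with invented figures, is to score each threat as likelihood times impact and to handle the highest-scoring threats first; this is a minimal illustration, not a complete risk assessment methodology:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: float  # probability the threat materializes, 0.0-1.0
    impact: int        # estimated loss if it does, e.g., in dollars

    @property
    def risk(self) -> float:
        # Classic expected-loss style score: likelihood x impact.
        return self.likelihood * self.impact

# Invented sensor-derived threats, all targeting the same resource.
threats = [
    Threat("port scan from external host", 0.9, 1_000),
    Threat("SQL worm probe on DB server", 0.3, 500_000),
    Threat("failed admin logins on web server", 0.6, 50_000),
]

# Deal with the highest-risk threat first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>10,.0f}  {t.name}")
```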
3.4.2 Forensic Analysis
Forensic analysis is done after a threat has been identified and contained. After
containment, the response team can launch forensic analysis tools to interact with
the dynamic report displays that came from the sensors during the threat, or during
the attack if the threat resulted in one. The data on which forensic analysis is to be
performed must be kept in a secure state to preserve the evidence. It must be stored
and transferred, if this is needed, with the greatest care, and the analysis must be
done with the utmost professionalism possible if the results of the forensic analysis
are to stand in court.
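A standard way of keeping evidence in such a secure state is to fingerprint it cryptographically at collection time and to re-verify the fingerprint before analysis and after every transfer. The sketch below (with a hypothetical file name) uses SHA-256 from Python’s standard library:

```python
import hashlib

def sha256_of(path: str, chunk: int = 65536) -> str:
    """Hash a file in chunks so large evidence images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# At collection time: record the digest alongside the evidence.
original = sha256_of("sensor_log_2003-01-25.bin")

# Before analysis (or after any transfer): verify nothing has changed.
if sha256_of("sensor_log_2003-01-25.bin") != original:
    raise RuntimeError("evidence integrity check failed; chain of custody broken")
```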
3.5 Security Threat Correlation
As we noted in the previous section, the interval between the first occurrence of a
threat and the start of the real attack has now been reduced to about 20 min. This
puts enormous pressure on organizations’ security teams to correspondingly reduce
the turnaround time, the time between the start of an incident and the receipt of the
first reports of the incident from the sensors. The shorter the turnaround time, the
quicker the response to an incident in progress. In fact, if the incident is caught at
an early stage, an organization can be saved a great deal of damage.
Threat correlation, therefore, is a technique designed to reduce the turnaround
time by monitoring all network sensor data and then using that data to quickly
analyze and discriminate between real threats and false positives. In fact, threat
correlation helps in:
• Reducing false positives, because if we get the sensor data early enough, analyze
it, and detect false positives, we can quickly retune the sensors so that future false
positives are reduced.
• Reducing false negatives; similarly, by getting early sensor reports, we can
analyze them, study where false negatives are coming from, and retune the
sensors to reveal more details.
• Verifying sensor performance and availability; by getting early reports, we can
quickly check on all sensors to make sure that they are performing as needed.
3.5.1 Threat Information Quality
The quality of data coming from the sensor logs depends on several factors
including:
• Collection – when data is collected, it must be analyzed. The collection
techniques specify where the data is to be analyzed. To reduce bandwidth and
data compression problems, some analysis is usually done at the sensor before
data is transported to a central location, and only reports are brought to the
central location. But this kind of distributed computation may not work well in
all cases.
• Consolidation – given that the goal of correlation is to pull data out of the
sensors, analyze it, correlate it, and deliver timely and accurate reports to the
response teams, and given the amount of data generated by the sensors and the
limitations of bandwidth, it is important to find good techniques to filter out
relevant data and consolidate sensor data, either through compression or
aggregation, so that analysis is done only on real and active threats.
• Correlation – again given the goals of correlation, if the chosen technique of data
collection is to use a central database, then a good data mining scheme must be
used for appropriate queries on the database that will produce outputs that
realize the goals of correlation. However, many data mining techniques have
problems. (A minimal sketch of the consolidation and correlation steps follows
this list.)
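To make the consolidation and correlation steps concrete, here is a minimal sketch with an invented event format and threshold: duplicate reports are first aggregated into counts, and an alert is escalated only when independent sensors corroborate it, which suppresses many single-sensor false positives:

```python
from collections import Counter

# Invented raw events: (sensor_id, source_ip, signature)
events = [
    ("ids-1", "203.0.113.7", "port-scan"),
    ("ids-2", "203.0.113.7", "port-scan"),
    ("fw-1",  "203.0.113.7", "port-scan"),
    ("ids-1", "198.51.100.9", "sql-probe"),   # seen by one sensor only
]

# Consolidation: aggregate duplicate reports into counts per (ip, signature).
counts = Counter((ip, sig) for _, ip, sig in events)

# Correlation: record which distinct sensors saw each (ip, signature).
sensors = {}
for sensor, ip, sig in events:
    sensors.setdefault((ip, sig), set()).add(sensor)

CORROBORATION = 2  # escalate only if at least 2 independent sensors agree
for (ip, sig), n in counts.items():
    seen_by = sensors[(ip, sig)]
    if len(seen_by) >= CORROBORATION:
        print(f"ESCALATE: {sig} from {ip} ({n} events, {len(seen_by)} sensors)")
    else:
        print(f"hold: {sig} from {ip} (single sensor; possible false positive)")
```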
3.6 Security Threat Awareness
Security threat awareness is meant to bring widespread and massive attention of the
population to security threats. Once people come to know of the threats, it is hoped
that they will become more careful, more alert, and more responsible in what they
do. They will also be more likely to follow security guidelines. A good example of
how massive awareness can be planned and brought about is the effort of the new
US Department of Homeland Security. The department was formed after the
September 11, 2001, attack on the United States to bring maximum national
awareness to the security problems facing not only the country but also every
individual. The idea is to make everyone proactive about security. Figure 3.5 shows
some of the efforts of the Department of Homeland Security toward massive
security awareness.
Fig. 3.5 Department of Homeland Security efforts for massive security awareness [16]
Exercises
1. Although we discussed several sources of security threats, we did not exhaust
them all. There are many such sources. Name and discuss five.
2. We pointed out that the design philosophy of the Internet infrastructure was
partly to blame for the weaknesses and hence a source of security threats. Do
you think a different philosophy would have been better? Comment on your
answer.
3. Give a detailed account of why the three-way handshake is a security threat.
4. In the chapter, we gave two examples of how a port scan can be a threat to security. Give three more examples of port scans that can lead to system security
compromise.
5. Comment on the rapid growth of the Internet as a contributing factor to the
security threat of cyberspace. What is the responsible factor in this growth? Is
it people or the number of computers?
6. There seems to have been an increase in the number of reported virus and worm
attacks on computer networks. Is this really a sign of an increase, more reporting, or more security awareness on the part of the individual? Comment on each
of these factors.
7. Social engineering has been frequently cited as a source of network security
threat. Discuss the different elements within social engineering that contribute
to this assertion.
8. In the chapter, we gave just a few of the many motives for security threats.
Discuss five more, giving details of why they are motives.
9. Outline and discuss the factors that influence threat information quality.
10. Discuss the role of data mining techniques in the quality of threat information.
Advanced Exercises
1. Research the effects of industrial espionage and write a detailed account of a
profile of a person who sells and buys industrial secrets. What type of industrial
secrets is likely to be traded?
2. The main reasons behind the development of the National Strategy to Secure
Cyberspace were the realization that we are increasingly dependent on computer
networks, that the major components of the national critical infrastructure are
dependent on computer networks, and that our enemies have the capability to
disrupt and affect any of the infrastructure components at will. Study the National
Information Infrastructure, the weaknesses inherent in the system, and suggest
ways to harden it.
3. Study and suggest the best ways to defend the national critical infrastructure
from potential attackers.
4. We indicated in the text that the best ways to manage security threats are to do an
extensive risk assessment and more forensic analysis. Discuss how reducing the
turnaround time can assist you in both risk assessment and forensic analysis.
What are the inputs into the forensic analysis model? What forensic tools are you
likely to use? How do you suggest dealing with the evidence?
5. Do research on intrusion detection and firewall sensor false positives and false
negatives. Write an executive report on the best ways to deal with both of these
unwanted reports.
References
1. G-Lock Software. TCP and UDP port scanning examples. http://www.glocksoft.com/tcpudpscan.htm
2. Rutkowski T. Internet survey reaches 109 million host level. Center for Next Generation Internet. http://www.ngi.org/trends/TrendsPR0102.txt
3. Battling the net security threat, Saturday, 9 Nov 2002, 08:15 GMT. http://news.bbc.co.uk/2/hi/technology/2386113.stm
4. Derived in part from a letter by Severo M. Ornstein, Commun ACM, June 1989, 32(6)
5. Virus writer Christopher Pile (Black Barron) sent to jail for 18 months, Wednesday 15 November 1995. http://www.gps.jussieu.fr/comp/VirusWriter.html
6. Hopper I. Destructive ‘I LOVE YOU’ computer virus strikes worldwide. CNN Interactive Technology. http://www.cnn.com/2000/TECH/computing/05/04/iloveyou/
7. Former student: bug may have been spread accidentally. CNN Interactive. http://www.cnn.com/2000/ASIANOW/southeast/05/11/iloveyou.02/
8. National security threat list. http://rf-web.tamu.edu/security/SECGUIDE/T1threat/Nstl.htm
9. CERT® Advisory CA-2001-19 ‘Code Red’ worm exploiting buffer overflow in IIS Indexing Service DLL. http://www.cert.org/advisories/CA-2001-19.html
10. “Is IT Safe?” InfoTrac. Tennessee Electronic Library. HP Professional, December 1997, 1(12), 14–20
11. Insider abuse of information is biggest security threat, SafeCorp says. InfoTrac. Tennessee Electronic Library. Business Wire, 10 Nov 2000, p 1
12. Hollows P. Security threat correlation: the next battlefield. eSecurityPlanet.com. http://www.esecurityplanet.com/views/article.php/10752_1501001
13. Awareness of National Security Issues and Response [ANSIR]. FBI’s Intelligence Resource Program. http://www.fas.org/irp/ops/ci/ansir.htm
14. Grosso A (2000) The Economic Espionage Act: touring the minefields. Commun ACM 43(8):15–18
15. ThreatManager™ – the real-time security threat management suite. http://www.open.com/responsenetworks/products/threatmanager/threatmanager.htm?ISR1
16. Department of Homeland Security. http://www.dohs.gov/
4 Introduction to Computer Network Vulnerabilities
4.1 Definition
System vulnerabilities are weaknesses in the software or hardware of a server or a
client that can be exploited by a determined intruder to gain access to or shut down
a network. Donald Pipkin defines system vulnerability as a condition, a weakness
of or an absence of a security procedure, or technical, physical, or other controls
that could be exploited by a threat [1].
Vulnerabilities exist not only in the hardware and software that constitute a
computer system but also in policies and procedures, especially security policies
and procedures, that are used in a computer network system, and in the users and
employees of computer network systems. Since vulnerabilities can be found in so
many areas of a network system, one can say that a security vulnerability is indeed
anything in a computer network that has the potential to cause, or to be exploited
for, an advantage. Now that we know what vulnerabilities are, let us look at their
possible sources.
4.2 Sources of Vulnerabilities
The frequency of attacks in the last several years and the speed and spread of these
attacks indicate serious security vulnerability problems in our network systems.
There is no definitive list of all possible sources of these system vulnerabilities.
Many scholars and indeed many security incident reporting agencies – such as
Bugtraq, the mailing list for vulnerabilities; CERT/CC, the US Computer Emergency
Response Team; NTBugtraq, the mailing list for Windows security; RUS-CERT,
the German Computer Emergency Response Team; and US DOE-CIAC, the US
Department of Energy Computer Incident Advisory Capability – have called
attention to not one but multiple factors that contribute to these security problems
and pose obstacles to security solutions. Among the most frequently mentioned
sources of security vulnerability problems in computer networks are design flaws, poor
security management, incorrect implementation, Internet technology vulnerability,
the nature of intruder activity, the difficulty of fixing vulnerable systems, the limits
of effectiveness of reactive solutions, and social engineering [2].
4.2.1 Design Flaws
The two major components of a computer system, hardware and software, quite
often have design flaws. Hardware systems are less susceptible to design flaws than
their software counterparts owing to their lower complexity, which makes them
easier to test; their limited number of possible inputs and expected outcomes, which
again makes them easy to test and verify; and the long history of hardware
engineering. But even with all these factors backing up hardware engineering,
because of the complexity of new computer systems, design flaws are still common.
The biggest problems in system security vulnerability, however, are due to
software design flaws. A number of factors cause software design flaws, including
overlooking security issues altogether. Three major factors, in particular, contribute
a great deal to software design flaws: human factors, software complexity, and
trustworthy software sources [3].
4.2.1.1 Human Factors
In the human factor category, poor software performance can be a result of the
following:
1. Memory lapses and attentional failures: For example, someone was supposed to
have removed or added a line of code, tested, or verified, but did not because of
simple forgetfulness.
2. Rush to finish: The result of pressure, most often from management, to get the
product on the market either to cut development costs or to meet a client deadline
can cause problems.
3. Overconfidence and use of nonstandard or untested algorithms: Before algorithms are fully tested by peers, they are put into the product line because they
seem to have worked on a few test runs.
4. Malice: Software developers, like any other professionals, have malicious
people in their ranks. Bugs, viruses, and worms have been known to be embedded
in downloaded software, as is the case with Trojan horse software, which activates
itself at a preset time or location. As we will see in Sect. 8.4, malice has traditionally
been used for vendetta, personal gain (especially monetary), and just irresponsible
amusement. Although it is possible to safeguard against other types of
human errors, it is very difficult to prevent malice.
5. Complacency: When either an individual or a software producer has significant
experience in software development, it is easy to overlook certain testing and
other error control measures in those parts of software that were tested previously in a similar or related product, forgetting that no one software product can
conform to all requirements in all environments.
4.2.1.2 Software Complexity
Both software professionals and nonprofessionals who use software know the
differences between software programming and hardware engineering. In these
differences lie many of the causes of software failure and poor performance.
Consider the following:
1. Complexity: Unlike hardwired programming in which it is easy to exhaust the
possible outcomes on a given set of input sequences, in software programming a
similar program may present billions of possible outcomes on the same input
sequence. Therefore, in software programming, one can never be sure of all the
possibilities on any given input sequence.
2. Difficult testing: There will never be a complete set of test programs to check
software exhaustively for all bugs for a given input sequence.
3. Ease of programming: The fact that software programming is easy to learn
encourages many people with little formal training and education in the field to
start developing programs, but many are not knowledgeable about good programming practices or able to check for errors.
4. Misunderstanding of basic design specifications: This affects the subsequent
design phases including coding, documenting, and testing. It also results in
improper and ambiguous specifications of major components of the software and
in ill-chosen and poorly defined internal program structures.
4.2.1.3 Trustworthy Software Sources
There are thousands of software sources for the millions of software products on
the market today. However, if we were required to name well-known software
producers, very few of us would succeed in naming more than a handful. Yet we
buy software products every day without ever minding their sources. Most
importantly, we do not care about the quality of that software, the honesty of the
anonymous programmer, or, of course, its reliability, as long as it does what we
want it to do.
Even if we wanted to trace the authorship of a software product, it is often
impossible because software companies close within months of their opening.
Chances are that when a software product is 2 years old, its producer is likely to be
out of business. In addition to the difficulties in tracing the producers of software,
who go out of business as fast as they come in, there is also the fear that such
software may not even have been tested at all.
The growth of the Internet and the escalating costs of software production have
led many small in-house software developers to use the marketplace as a giant testing
laboratory through the use of beta testing, shareware, and freeware. Shareware and
freeware have a high potential of bringing hostile code into trusted systems.
For some strange reason, the more popular the software product gets, the less it
is tested. As software products make market inroads, their producers start thinking
of producing new versions and releases with little to no testing of current versions.
This leads to the growth of what is called a common genesis software product,
where all its versions and releases are based on a common code. If such a code has
not been fully tested, which is normally the case, then errors are carried through
from version to version and from release to release.
In the last several years, we have witnessed the growth of the open-source
movement. It has been praised as a novel way to break the monopoly and price
gouging of big software producers and, most important, as a timely solution to poor
software testing. Those opposed to the movement have criticized it as a source of
untrusted and many times untested software. Despite the wails of the critics, major
open-source products such as the Linux operating system have turned out to have
few security flaws; still, there are fears that hackers can look at the code and perhaps
find a way to cause mischief or steal information.
There has been a rise recently in Trojan horses inserted into open-source code.
In fact, security experts are now recommending running readily available programs
such as MD5 hashes to ensure that code hasn’t been altered. Using MD5 hashes and
similar programs, such as MD4, SHA, and SHA-1, to continually compare hashes
generated from “healthy” software to hashes of programs in the field exposes the
Trojans. According to a recent CERT advisory, crackers have been inserting Trojans
into the source code for tcpdump, a utility that monitors network traffic, and libpcap,
a packet capture library tool [4].
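The hash-comparison idea is simple to state in code. In the sketch below, the file name and published digest are hypothetical: the publisher’s known-good digest is compared with the digest of the copy actually downloaded, and any Trojaned byte changes the digest. MD5 is shown because the text names it, although it is no longer considered collision resistant and the SHA-2 family is preferred today.

```python
import hashlib

def file_digest(path: str, algo: str = "md5") -> str:
    """Compute the digest of a file, reading it in chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# Digest published by the project for the "healthy" release (hypothetical).
PUBLISHED_MD5 = "79054025255fb1a26e4bc422aef54eb4"

actual = file_digest("tcpdump-3.7.1.tar.gz")  # the copy we downloaded
if actual != PUBLISHED_MD5:
    print("WARNING: digest mismatch; the source may carry a Trojan")
else:
    print("digest matches the published value")
```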
However, according to a recent study by the Aberdeen Group, open-source
software now accounts for more than half of all security advisories published in the
past year by the Computer Emergency Response Team (CERT). Also, according to
industry study reports, open-source software commonly used in Linux, Unix, and
network routing equipment accounted for 16 of the 29 security advisories during the
first 10 months of 2002, and there is an upswing in new virus and Trojan horse
warnings for Unix, Linux, Mac OS X, and open-source software [4].
4.2.1.4 Software Reuse, Reengineering, and Outlived Design
New developments in software engineering are spearheading new practices such as
software reuse and software reengineering. Software reuse is the integration and use
of software assets from a previously developed system. It is the process in which
old or updated software assets such as libraries, components, requirements and
design documents, and design patterns are used along with new software.
Both software reengineering and reuse are hailed for cutting down on escalating
development and testing costs. They have brought efficiency by reducing time
spent designing or coding, popularized standardization, and led to a common
“look and feel” between applications. They have made debugging easier through
the use of thoroughly tested designs and code.
However, both techniques have the potential to introduce security flaws into
systems. Among the security flaws they introduce is, first, mismatch: reused
requirements specifications and designs may not completely match the real
situation at hand, and nonfunctional characteristics of the reused code may not
match those of the intended recipient. Second, when using object programming, it
is important to remember that objects are defined with certain attributes, and any
new application using objects defined in terms of the old ones will inherit all their
attributes.
In Chap. 4, we discussed many security problems associated with script programming. Yet there is now momentum in script programming to bring more
dynamism into Web programming. Scripting suffers from a list of problems including
inadequate searching and/or browsing mechanisms before any interaction between
the script code and the server or client software, side effects from software assets that
are too large or too small for the projected interface, and undocumented interfaces.
4.2.2 Poor Security Management
Security management is both a technical and an administrative security process that
involves the security policies and controls that the organization decides to put in
place to provide the required level of protection. It also involves security monitoring
and evaluation of the effectiveness of those policies. The most effective way to
meet these goals is to implement security risk assessment through a security policy
and to secure access to network resources through the use of firewalls and strong
cryptography. These and other measures offer the security required for the different
information systems in the organization in terms of integrity, confidentiality, and
availability of that information. Security management by itself is a complex
process; if it is not well organized, it can result in a security nightmare for the
organization.
Poor security management is a result of little control over security implementation,
administration, and monitoring. It is a failure to have solid control of the
organization’s security situation, where the security administrator does not know
who is setting the organization’s security policy and administering security
compliance, or who manages system security configurations and is in charge of
security event and incident handling.
In addition to the disarray in the security administration, implementation, and
monitoring, a poor security administration team may even lack a plan for the wireless
component of the network. As we will see in Chap. 17, the rapid growth of wireless
communication has brought with it serious security problems. There are so many
things that can go wrong with security if security administration is poor. Unless the
organization has a solid security administration team with a sound security policy
and secure security implementation, the organization’s security may be compromised. An organization’s system security is as good as its security policy and its
access control policies and procedures and their implementation.
Good security management is made up of a number of implementable security
components that include risk management, information security policies and procedures, standards, guidelines, information classification, security monitoring, and
security education. These core components serve to protect the organization’s
resources.
• A risk analysis will identify these assets, discover the threats that put them at
risk, and estimate the possible damage and potential loss a company could endure
if any of these threats become real. The results of the risk analysis help management construct a budget with the necessary funds to protect the recognized assets
from their identified threats and develop applicable security policies that provide
direction for security activities. Security education takes this information to each
and every employee.
• Security policies and procedures to create, implement, and enforce security
issues that may include people and technology.
• Standards and guidelines to find ways, including automated solutions, for creating,
updating, and tracking compliance of security policies across the organization.
• Information classification to manage the search, identification, and reduction of
system vulnerabilities by establishing security configurations.
• Security monitoring to prevent and detect intrusions, consolidate event logs for
future log and trend analysis, manage security events in real time, manage
perimeter security including multiple firewall reporting systems, and analyze
security events enterprise-wide.
• Security education to bring security awareness to every employee of the
organization and teach them their individual security responsibilities.
4.2.3 Incorrect Implementation
Incorrect implementation is very often a result of incompatible interfaces. Two
product modules can be deployed and work together only if they are compatible.
That means that the module must be additive, that is, the environment of the
interface needs to remain intact. An incompatible interface, on the other hand,
means that the introduction of the module has changed the existing interface in
such a way that existing references to the interface can fail or behave incorrectly.
This definition means that the things we do on the many system interfaces can
result in incompatibility, which in turn results in bad or incomplete implementation.
For example, the ordinary addition of a software module, or even the addition or
removal of an argument to an existing software module, may cause an imbalanced
interface. This interface sensitivity tells us that, because of interposition, the
addition of something as simple as a symbol or an additional condition can result
in an incompatible interface, leading the new symbol or condition to conflict with
applications that previously ran without problems.
To put the interface concept into a wider system framework, consider a
system-wide integration of both hardware and software components with differing
technologies and no standards. No information system products, whether hardware
or software, are based on a standard that the industry has to follow. Because of this,
manufacturers and consumers must contend with constant problems of system
compatibility. Because of the vast number of variables in information systems,
especially network systems, involving both hardware and software, it is not possible
to test or verify all combinations of hardware and software. Consider, for example,
that there are no standards in the software industry: software systems involve
different models based on platform and manufacturer, and products are
heterogeneous both semantically and syntactically.
When two or more software modules are to interface with one another, in the sense
that one may feed into the other or one may use the outputs of the other,
incompatibility conditions may result from such an interaction. Unless there are
methodologies and algorithms for checking interface compatibility, errors are
transmitted from one module into another. For example, consider a typical interface
created by a method call between software modules. Such an interface always
makes assumptions about the environment, namely that the necessary availability
constraints hold. If such availability constraints are not checked before the modules
are allowed to pass parameters via method calls, errors may result.
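The availability-constraint point can be illustrated with a contrived defensive sketch: before one module passes parameters to another via a method call, the caller checks the assumptions the callee’s interface makes, so that an incompatibility surfaces as a clear error at the interface instead of a confusing failure deep inside the callee:

```python
def transfer_records(records, batch_size):
    """Callee: assumes a non-empty list and a positive batch size."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

def checked_transfer(records, batch_size):
    # Caller-side interface check: verify the callee's environment
    # assumptions (its "availability constraints") before the call.
    if not isinstance(records, list) or not records:
        raise ValueError("interface violation: records must be a non-empty list")
    if not isinstance(batch_size, int) or batch_size <= 0:
        raise ValueError("interface violation: batch_size must be a positive int")
    return transfer_records(records, batch_size)

print(checked_transfer([1, 2, 3, 4, 5], 2))   # [[1, 2], [3, 4], [5]]
# checked_transfer([1, 2, 3], 0) fails loudly at the interface instead of
# raising a confusing error from inside the callee.
```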
Incompatibility in system interfaces may be caused by a variety of conditions,
usually created by things such as:
• Too much detail
• Not enough understanding of the underlying parameters
• Poor communication during design
• Selecting the software or hardware modules before understanding the receiving
software
• Ignoring integration issues
• Error in manual entry
Many security problems result from the incorrect implementation of both hardware and software. In fact, system reliability in both software and hardware is based
on correct implementation, as is the security of the system.
4.2.4 Internet Technology Vulnerability
In Sect. 4.2.1, we discussed design flaws in technology systems as one of the leading
causes of system vulnerabilities. In fact, we pointed out that systems are composed
of software, hardware, and humanware. There are problems in each of these
components. Since the humanware component is influenced by the technology in
the software and hardware, we will not discuss it any further.
The fact that computer and telecommunication technologies have developed
at such an amazing and frightening speed and people have overwhelmingly
embraced both of them has caused security experts to worry about the side effects
of these booming technologies. There were reasons to worry. Internet technology
has been and continues to be vulnerable. There have been reports of all sorts of
loopholes, weaknesses, and gaping holes in both software and hardware
technologies.
As Table 4.1 shows, the number of reported system vulnerabilities rose from 3 in
1989 to 1,113 in 2011, and this is only what is reported to the National Vulnerability
Database (NVD), a division of the National Institute of Standards and Technology
(NIST) within the US Department of Commerce. There is agreement among
security experts that what is reported is the tip of the iceberg: many vulnerabilities
are discovered and, for various reasons, are not reported.
Table 4.1 Vulnerability statistical data

Year    # of vulns    % of total
1989           3        100.00
1990           8         72.73
1991          12         80.00
1992           9         69.23
1993           8         61.54
1994           2          8.00
1995           2          8.00
1996           8         10.67
1997          11          4.37
1998           5          2.03
1999          19          2.13
2000          47          4.61
2001         239         14.25
2002         298         13.82
2003         318         20.83
2004         497         20.28
2005         831         16.85
2006         948         14.35
2007       1,025         15.74
2008         953         16.92
2009         983         17.15
2010       1,348         29.06
2011       1,113         26.81
2012          88          4.24

Reference Source: National Vulnerability Database Version 2.2, http://web.nvd.nist.gov/view/vuln/statistics-results?cves=on&query=&cwe_id=&pub_date_start_month=11&pub_date_start_year=2011&pub_date_end_month=-1&pub_date_end_year=1&mod_date_start_month=-1&mod_date_start_year=-1&mod_date_end_month=-1&mod_date_end_year=-1&cvss_sev_base=&cvss_av=&cvss_ac=&cvss_au=&cvss_c=&cvss_i=&cvss_a=&uscert_ta=on&uscert_vn=on&oval_query=on
Because these technologies are used by many who are not security experts (in
fact, the majority of users are not security literate), one can say that many
vulnerabilities are observed but probably not reported, because those who observe
them do not have the knowledge to classify what has been observed as a
vulnerability. Even if they do, they may not know how and where to report it.
No one knows how many of these vulnerabilities there are in both software and
hardware. The assumption is that there are thousands. As history has shown us, a
few are discovered every day by hackers. Although the list spans both
hardware and software, the problem is more prevalent with software. In fact, software
vulnerabilities can be put into four categories:
• Operating system vulnerabilities: Operating systems are the main source of all
reported system vulnerabilities. Judging by the annual “CWE/SANS Top 25
Most Dangerous Software Errors” lists issued by the SANS (SysAdmin, Audit,
Network, Security) Institute, a cooperative research and education organization
serving security professionals, auditors, system administrators, and network
administrators, together with the Common Weakness Enumeration (CWE), a
community-developed dictionary of software weakness types, popular operating
systems cause many of the vulnerabilities. This is always so because hackers
tend to take the easiest route by exploiting the best-known flaws with the most
effective and widely known and available attack tools.
• Port-based vulnerabilities: Besides operating systems, network service ports
take second place in sourcing system vulnerabilities. For system administrators,
knowing the list of most vulnerable ports can go a long way toward enhancing
system security by blocking those known ports at the firewall. Such an operation,
though not comprehensive, adds an extra layer of security to the network. In fact,
it is advisable that, in addition to blocking and deny-everything filtering, security
administrators also monitor all ports, including the blocked ones, for intruders
who entered the system by some other means (a simple audit sketch follows this
list). For the most common vulnerable port numbers, the reader is referred to the
latest SANS list at http://www.sans.org/.
• Application software-based errors
• System protocol software such as client and server browser
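As an illustration of the monitoring advice in the port item above, the following sketch checks whether supposedly blocked ports on one of your own hosts are actually reachable. The port list and host address are illustrative only (consult the current SANS guidance for an authoritative list), this is a TCP connect check so UDP services would need a different probe, and you should probe only hosts you administer:

```python
import socket

# Illustrative ports that often appear on "commonly attacked" lists.
PORTS_TO_VERIFY = [21, 23, 135, 139, 445, 1433, 1434]

def is_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "192.0.2.20"  # hypothetical server behind our firewall
for port in PORTS_TO_VERIFY:
    state = "OPEN (check firewall rule!)" if is_reachable(host, port) else "blocked/closed"
    print(f"{host}:{port:<5} {state}")
```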
To help in the hunt for and fight against system vulnerabilities, SANS, in
cooperation with the Common Weakness Enumeration (CWE), a community-developed
dictionary of software weakness types, has been issuing the annual “CWE/SANS
Top 25 Most Dangerous Software Errors” list.
In addition to highlighting the need for system administrators to patch the most
common vulnerabilities, we hope this will also help the many organizations that
lack the resources to train security personnel to choose between focusing on the
most current or the most persistent vulnerabilities. One might wonder why a
vulnerability would remain among the most common year after year while there are
advisories on it and patches for it. The answer is not far-fetched, but simple: system
administrators do not correct many of these flaws because they simply do not know
which vulnerabilities are most dangerous, they are too busy to correct them all, or
they do not know how to correct them safely.
Although many of these vulnerabilities are cited year after year as the most
common, there are traditionally thousands of vulnerabilities that hackers often use
to attack systems. Because they are so numerous, and new ones are being discovered
every day, many system administrators may be overwhelmed, which may lead to a
loss of focus on the need to ensure that all systems are protected against the most
common attacks.
Let us take stock of what we have said so far. Lots and lots of system vulnerabilities
have been observed and documented by SANS and CWE in their series “CWE/
SANS Top 25 Most Dangerous Software Errors.” However, a number of
vulnerabilities stubbornly persist on the list year after year. This observation,
together with the nature of software, as we explored in Sect. 4.2.1, means it is
possible that what has been observed so far is a very small fraction of a potential
sea of vulnerabilities; many of them will probably never be discovered, because
software will forever be subjected to unexpected input sequences or operated in
unexpected environments.
Besides the inherently embedded vulnerabilities resulting from flawed designs,
there are also vulnerabilities introduced into the operating environment as a result
of incorrect implementation by operators. The products may not have weaknesses
initially, but such weaknesses may be introduced as a result of bad or careless
installation. For example, quite often products are shipped to customers with
security features disabled, forcing users to go through the difficult and error-prone
process of properly enabling the security features by themselves.
4.2.5 Changing Nature of Hacker Technologies and Activities
It is ironic that as “useful” technology develops, so does the “bad” technology.
What we call useful technology is the development in all computer and
telecommunication technologies that are driving the Internet, telecommunication,
and the Web. “Bad” technology is the technology that system intruders are using to
attack systems. Unfortunately, these technologies are all developing in tandem. In
fact, there are times when it looks as though hacker technologies are developing
faster than the rest. One thing is clear, though: hacker technology is flourishing.
Although it used to take intelligence, determination, enthusiasm, and
perseverance to become a hacker, it now requires only a good search engine, time,
a little knowledge of what to do, and a computer. There are thousands of hacker
Web sites with the latest in script technologies and hundreds of recipe books and
sources on how to put together a high-impact virus or worm and how to upload it.
The ease of availability of these hacker tools; the ability of hackers to disguise
their identities and locations; the automation of attack technology which further
distances the attacker from the attack; the fact that attackers can go unidentified,
limiting the fear of prosecution; and the ease of hacker knowledge acquisition have
put a new twist in the art of hacking, making it seem easy and hence attracting more
and younger disciples.
Besides the ease of becoming a hacker and acquiring hacker tools, because of the
Internet sprawl, hacker impact has become overwhelming, impressive, and more
destructive in shorter times than ever before. Take, for example, recent virus
incidents such as the “I Love You,” “Code Red,” “Slammer,” and the “Blaster”
worms’ spread. These worms and viruses probably spread around the world
much faster than the human cold virus and the dreaded severe acute respiratory
syndrome (SARS).
What these incidents have demonstrated is that the turnaround time, the time
between when a virus is first launched in the wild and when it is first cited as
affecting systems, is becoming incredibly short. Both the turnaround time and the
speed at which a virus or worm spreads reduce the response time, the time between
when a security incident is first cited in the system and when an effective response
to the incident should have been initiated. When the response time is very short,
security experts do not have enough time to respond to a security incident
effectively. In a broader framework, when the turnaround time is very short, the
security experts who develop patches do not have enough time to reverse engineer
and analyze the attack in order to produce counter-immunization codes. It has been,
and still is, the case in many security incidents that antivirus companies take hours
and sometimes days, as in the case of the Code Red virus, to come up with an
effective cure. Even after a patch is developed, it takes time before it filters down to
the system managers. Meanwhile, the damage has already been done and is
multiplying. Likewise, system administrators and users have little time to protect
their systems.
4.2.6 Difficulty of Fixing Vulnerable Systems
In his testimony to the Subcommittee on Government Efficiency, Financial
Management, and Intergovernmental Relations of the US House Committee on
Government Reform, Richard D. Pethia, Director, CERT Centers, pointed out the
difficulty of fixing known system vulnerabilities as one of the sources of system
vulnerabilities. His concern was based on a number of factors, including the
ever-rising number of system vulnerabilities and the limited ability of system
administrators to cope with the number of patches issued for these vulnerabilities.
As the number of vulnerabilities rises, system and network administrators face a
difficult situation. They are challenged with keeping up with all the systems they
have and all the patches released for those systems. Patches can be difficult to apply
and might even have unexpected side effects as a result of compatibility issues [2].
Besides the problem of keeping abreast of the number of vulnerabilities and the
corresponding patches, there are also logistic problems between the time at which
a vendor releases a security patch and the time at which a system administrator
fixes the vulnerable computer system. Several factors affect the quick application
of patches. Sometimes it is the logistics of patch distribution: many vendors
disseminate patches on their Web sites, while others send e-mail alerts. However,
busy system administrators sometimes do not get around to these e-mails and
security alerts until some time later. Sometimes it can be months or years before
the patches are implemented on a majority of the vulnerable computers.
Many system administrators face the same chronic problems: never-ending
system maintenance, limited resources, and highly demanding management. Under
these conditions of ever-increasing security system complexity and rising system
vulnerabilities, and given that many administrators do not fully understand the
security risks, system administrators neither give security a high enough priority
nor assign it adequate resources. Exacerbating the problem is the fact that the
demand for skilled system administrators far exceeds the supply [2].
4.2.7 Limits of Effectiveness of Reactive Solutions
Data from Table 4.1 shows a growing number of reported system vulnerabilities.
Given that only a small percentage of all vulnerabilities is reported, the table
indicates a serious and growing system security problem. As we pointed out earlier,
hacker technology is becoming more readily available, easier to get and assemble,
and more complex, and its effects are more far-reaching. All this indicates that
urgent action is needed to find an effective solution to this monstrous problem.
The security community, including scrupulous vendors, has come up with
various solutions, some good and others not. In fact, in an unexpected reversal of
fortunes, one of the new security problems is to find a “good” solution from among
thousands of solutions and an “expert” security opinion from among many different
views.
Are we reaching the limits of our efforts, as a community, to come up with a few
good and effective solutions to this security problem? There are many signs to
support an affirmative answer. It is clear that we are reaching the limits of
effectiveness of our reactive solutions. Richard D. Pethia gives the following
reasons [2]:
• The number of vulnerabilities in commercial off-the-shelf software is now at the
level that it is virtually impossible for any but the best resourced organizations to
keep up with the vulnerability fixes.
• The Internet now connects more than 109,000,000 computers and continues to grow at a rapid pace. At any point in time, there are hundreds of
thousands of connected computers that are vulnerable to one form of attack
or another.
• Attack technology has now advanced to the point where it is easy for attackers to
take advantage of these vulnerable machines and harness them together to launch
high-powered attacks.
• Many attacks are now fully automated, thus reducing the turnaround time even
further as they spread around cyberspace.
• The attack technology has become increasingly complex and in some cases
intentionally stealthy, thus reducing the turnaround time and increasing the time
it takes to discover and analyze the attack mechanisms in order to produce
antidotes.
• Internet users have become increasingly dependent on the Internet and now use
it for many critical applications so that a relatively minor attack has the potential
to cause huge damages.
Without being overly pessimistic, these factors, taken together, indicate that
there is a high probability that more attacks are likely and since they are getting
more complex and attacking more computers, they are likely to cause significant
devastating economic losses and service disruptions.
4.2.8 Social Engineering
According to John Palumbo, social engineering is an outside hacker’s use of psychological tricks on legitimate users of a computer system in order to gain the information
(usernames and passwords) one needs to gain access to the system [5].
Many have classified social engineering as a diversionary attack on people's intelligence that exploits two human weaknesses: first, no one wants to be considered ignorant, and second, people tend to trust. Ironically, these two weaknesses have made social engineering difficult to fight, because no one wants to admit falling for it. This has made social engineering a critical system security hole.
Many hackers have used it, and continue to use it, to get into protected systems. Kevin Mitnick, the notorious hacker, used it successfully and was arguably one of the most ingenious hackers of our time; he was definitely very gifted in his ability to socially engineer just about anybody [5].
Hackers use many approaches to social engineering, including the following [6]:
• Telephone. This is the most classic approach, in which hackers call up a targeted
individual in a position of authority or relevance and initiate a conversation with
the goal of gradually pulling information out of the target. This is done mostly to
help desks and main telephone switchboards. Caller ID cannot help because
hackers can bypass it through tricks and the target truly believes that the hacker
is actually calling from inside the corporation.
• Online. Hackers harvest a wealth of vital information online from careless users. Reliance on and heavy use of the Internet have left people with several online accounts. Currently, an average user has about four to five accounts, including one for home use, one for work, and an additional one or two for social or professional organizations. With many accounts, as probably any reader may concur, one is bound to forget some passwords, especially the least used ones. To overcome this problem, users mistakenly reuse one password on several accounts. Hackers know this, and they regularly target such individuals with clever baits, such as telling them that they have won a lottery or were computer-selected finalists in a sweepstakes. However, in order to collect the award, the user must fill in an online form, usually Web based, and this transmits the password to the hacker. Hackers have used hundreds of such tricks on unsuspecting users in order to get them to surrender their passwords.
• Dumpster diving is now a growing technique of information theft, not only in social engineering but even more so in identity theft. The technique, also known as trashing, involves an information thief scavenging through individual and company dumpsters for information. Dumpster diving can recover from dumpsters and trash cans individual Social Security numbers, bank account numbers, vital personal records, and a whole list of personal and work-related information that gives hackers the exact keys they need to unlock the network.
• In person is the oldest of the information-stealing techniques, predating computers. It involves a person physically walking into an organization's offices and casually checking out note boards, diving into bathroom trash cans and company hallway dumpsters, and eating lunch with employees while initiating conversations. In big companies, this needs to be done on only a few occasions before trusted friendships develop, and through such friendships, information can be passed on unknowingly.
• Snail mail is used in several ways and is not limited to social engineering; it has also figured in identity theft and a number of other crimes, and it has been in the news recently because of identity theft. It is done in two ways. In the first, the hacker picks a victim, goes to the post office, and files a change-of-address form redirecting the victim's mail to a new box number. This lets the hacker intercept all of the victim's snail mail and gather a great deal of information that may include the victim's bank and credit card account numbers; the hacker can then obtain access codes and PINs by claiming to have forgotten a password or PIN and requesting a reissue in the mail. In the second, the hacker drops a bogus survey in the victim's mailbox, offering the bait of a cash award for completing a "few simple" questions and mailing them in. The questions, in fact, request far more than simple information from an unsuspecting victim.
• Impersonation is also an old trick played on unsuspecting victims by criminals after a number of goodies. These days, the goodies are information. Impersonation is generally the acting out of a character role: the hacker plays a role and passes himself or herself off as the victim. In that role, the thief or hacker can make legitimate contacts that lead to the needed information. In large organizations with hundreds or thousands of employees scattered around the globe, it is very easy to impersonate a vice president or a chief operations officer. Since most employees always want to look good to their bosses, they will end up supplying the requested information to the imposter.
Overall, social engineering is a cheap but rather threatening security problem
that is very difficult to deal with.
4.3 Vulnerability Assessment
Vulnerability assessment is a process that works on a system to identify, track, and manage the repair of vulnerabilities on the system. The assortment of items checked by this process in a system under review varies depending on the organization; it may include all desktops, servers, routers, and firewalls. Most vulnerability assessment services provide system administrators with:
• Network mapping and system fingerprinting of all known vulnerabilities
• A complete vulnerability analysis and ranking of all exploitable weaknesses, based on potential impact and likelihood of occurrence, for all services on each host (a minimal sketch of such a ranking follows this list)
• Prioritized list of misconfigurations
In addition, at the end of the process, a final report is always produced detailing the findings and the best ways to overcome the identified vulnerabilities. This report consists of prioritized recommendations for mitigating or eliminating weaknesses and, based on the organization's operational schedule, recommendations for further reassessment of the system at given time intervals or on a regular basis.
4.3.1 Vulnerability Assessment Services
Due to the massive growth in the number of companies and organizations owning their own networks, the growth of vulnerability monitoring technologies, the increase in network intrusions and virus attacks, and the worldwide publicity such attacks receive, a growing number of companies offer system vulnerability services. These services, which target the internals and perimeter of the system as well as Web-based applications, and which provide a baseline against which to measure subsequent attacks, include vulnerability scanning, assessment and penetration testing, and application assessment.
4.3.1.1 Vulnerability Scanning
Vulnerability scanning services provide a comprehensive security review of the system, including both the perimeter and system internals. The aim of this kind of scanning is to spot critical vulnerabilities and gaps in the system's security practices. Comprehensive system scanning usually produces a number of both false positives and false negatives, and it is the job of the system administrator to find ways of dealing with them. The final report produced after each scan consists of strategic advice and prioritized recommendations to ensure that critical holes are addressed first. Depending on the level of the requested scan, system scanning can be scheduled by the system user or the service provider to run automatically and to report, by automated or periodic e-mail, to a designated user. The scan results can also be stored on a secure server for future review. A minimal sketch of the probing core of such a scan follows.
4.3.1.2 Vulnerability Assessment and Penetration Testing
This phase of vulnerability assessment is a hands-on testing of a system for identified and unidentified vulnerabilities. All known hacking techniques and tools are applied during this phase to reproduce real-world attack scenarios. One outcome of this real-life testing is that new and sometimes obscure vulnerabilities are found, attack processes and procedures are identified, and the sources and severity of vulnerabilities are categorized and prioritized based on user-provided risk ratings.
4.3.1.3 Application Assessment
As Web applications become more widespread and more entrenched in e-commerce and all other commercial and business areas, applications are slowly becoming the main interface between the user and the network. The increased demands on applications have resulted in new directions in the automation and dynamism of these applications. As we saw in Chap. 6, scripting in Web applications, for example, has opened a new security paradigm in system administration. Many organizations have become aware of these dangers and are making substantial progress in protecting their systems from attacks via Web-based applications. Assessing the security of system applications is, therefore, becoming a special skill requirement needed to secure critical systems.
4.3.2 Advantages of Vulnerability Assessment Services
Online vulnerability services have many advantages for system administrators. They can, and in practice always do, develop signatures and updates for new vulnerabilities and automatically include them in the next scan, eliminating the need for the system administrator to schedule periodic updates.
Reports from these services are very detailed, covering not only the vulnerabilities, their sources, and any false positives, but also system configuration information that may not be readily available to system administrators. This information alone goes a long way toward alerting security staff to additional avenues by which systems may be attacked. The reports are then encrypted and stored in secure databases accessible only with the proper user credentials, because they contain critically vital data on the security of the system and could be a pot of gold for hackers if found. This additional care and awareness adds security to the system.
Probably the best advantage to an overworked and often resource-strapped system administrator is the automated, regularly scheduled scan of all network resources. These services also provide a badly needed third-party "security eye," helping the administrator obtain an objective yet independent security evaluation of the system.
Exercises
1. What is a vulnerability? What do you understand by a system vulnerability?
2. Discuss four sources of system vulnerabilities.
3. What are the best ways to identify system vulnerabilities?
4. What is innovative misuse? What role does it play in the search for solutions to
system vulnerability?
5. What is incomplete implementation? Is it possible to deal with incomplete
implementation as a way of dealing with system vulnerabilities? In other words,
is it possible to completely deal with incomplete implementation?
6. What is social engineering? Why is it such a big issue yet so cheap to perform?
Is it possible to completely deal with it? Why or why not?
7. Some have described social engineering as being perpetuated by our internal
fears. Discuss those fears.
8. What is the role of software security testing in the process of finding solutions
to system vulnerabilities?
9. Some have sounded an apocalyptic note about the prospects of finding solutions to system vulnerabilities. Should we take them seriously? Support your response.
10. What is innovative misuse? What role does it play in the search for solutions to
system vulnerabilities?
Advanced Exercises
1. Why are vulnerabilities difficult to predict?
2. Discuss the sources of system vulnerabilities.
3. Is it possible to locate all vulnerabilities in a network? In other words, can one
make an authoritative list of those vulnerabilities? Defend your response.
4. Why are design flaws such a big issue in the study of vulnerability?
5. Part of the problem in design flaws involves issues associated with software
verification and validation (V&V). What is the role of V&V in system
vulnerability?
References
1. Pipkin D (2000) Information security: protecting the global enterprise. Prentice Hall PTR,
Upper Saddle River
2. Pethia RD, Information technology—essential but vulnerable: how prepared are we for attacks?
http://www.cert.org/congressional_testimony/Pethia_testimony_Sep26.html
3. Kizza JM (2003) Ethical and social issues in the information age, 2nd edn. Springer, New York
4. Hurley J, Hemmendinger E, Open source and Linux: 2002 poster children for security problems.
http://www.aberdeen.com/ab_abstracts/2002/11/11020005.htm
5. Palumbo J, Social engineering: what is it, why is so little said about it and what can be done?
SANS. http://www.sans.org/rr/social/social.php
6. Granger S. Social engineering fundamentals, part I: hacker tactics. http://www.securityfocus.com/infocus/1527
5 Cyber Crimes and Hackers
5.1 Introduction
The greatest threats to the security, privacy, and reliability of computer networks and other related information systems in general are cyber crimes committed by cyber criminals and, most importantly, hackers. Judging by the damage past cyber criminal and hacker attacks have caused to computer networks in businesses, governments, and individuals, with the resulting inconvenience and loss of productivity and credibility, one cannot fail to see a growing demand from the community that software and hardware companies create more secure products that can be used to identify threats and vulnerabilities, to fix problems, and to deliver security solutions.
The rise of the hacker factor, the unprecedented and phenomenal growth of the Internet, the latest developments in globalization, hardware miniaturization, and wireless and mobile technology, the mushrooming of connected computer networks, and society's ever-growing appetite for and dependency on computers have all greatly increased the threats that both hackers and cyber crimes pose to global communication and computer networks. These factors are creating serious social, ethical, legal, political, and cultural problems, involving, among others, identity theft, hacking, electronic fraud, intellectual property theft, and attacks on national critical infrastructure, and are generating heated debates on finding effective ways to deal with them, if not stop them.
Industry and governments around the globe are responding to these threats through a variety of approaches and collaborations, such as:
• Formation of organizations such as the Information Sharing and Analysis Centers (ISACs).
• Collaboration among industry portals and ISPs on how to deal with distributed denial-of-service attacks, including the establishment of Computer Emergency Response Teams (CERTs).
• Increasing the use of sophisticated tools and services by companies to deal with network vulnerabilities, including the formation of Private Sector Security Organizations (PSSOs) such as SecurityFocus, Bugtraq, and the International Chamber of Commerce's Cybercrime Unit.
• Setting up national strategies, such as the US National Strategy to Secure Cyberspace, an umbrella initiative covering efforts from various sectors of the national critical infrastructure grid, and the Council of Europe Convention on Cybercrime.
5.2 Cyber Crimes
According to the director of the US National Infrastructure Protection Center (NIPC), cyber crimes present the greatest danger to e-commerce and the general public [1]. The threat of crime using the Internet is real and growing, and it is likely to be the scourge of the twenty-first century. A cyber crime is a crime like any other, except that the illegal act must involve a connected computing system either as the object of the crime, an instrument used to commit the crime, or a repository of evidence related to the crime. Alternatively, one can define a cyber crime as an act of unauthorized intervention in the working of telecommunication networks, and/or the sanctioning of unauthorized access to the resources of the computing elements in a network, that leads to a threat to the system's infrastructure or to life, or that causes significant property loss.
Because of variations in jurisdictional boundaries, cyber acts are defined as illegal in different ways depending on the communities within those boundaries. A community defines an act as illegal if it falls within the domain of crimes that the legislature of its state or nation has specified and approved. Both the International Convention on Cyber Crimes and the European Convention on Cyber Crimes have outlined lists of these crimes, which include the following:
• Unlawful access to information
• Illegal interception of information
• Unlawful use of telecommunication equipment
• Forgery with the use of computer measures
• Intrusions of the public switched and packet network
• Network integrity violations
• Privacy violations
• Industrial espionage
• Pirated computer software
• Fraud using a computing system
• Internet/e-mail abuse
• Using computers or computer technology to commit murder, terrorism, pornography, and hacking
5.2.1 Ways of Executing Cyber Crimes
Because any crime classified as a cyber crime must, as defined above, be committed with the help of a computing resource, cyber crimes are executed in one of two ways: penetration and denial-of-service attacks.
5.2.1.1 Penetration
A penetration cyber attack is a successful unauthorized access to a protected system
resource, or a successful unauthorized access to an automated system, or a successful act of bypassing the security mechanisms of a computing system [2]. A penetration cyber attack can also be defined as any attack that violates the integrity and
confidentiality of a computing system’s host.
However defined, a penetration cyber attack involves breaking into a computing
system and using known security vulnerabilities to gain access to any cyberspace
resource. With full penetration, an intruder has full access to all that computing
system’s resources. Full penetration, therefore, allows an intruder to alter data files,
change data, plant viruses, or install damaging Trojan horse programs into the computing system. It is also possible for intruders, especially if the victim computer is
on a computer network, to use it as a launching pad to attack other network resources.
Penetration attacks can be local, where the intruder gains access to a computer on a
LAN on which the program is run, or global on a WAN such as the Internet, where
an attack can originate thousands of miles from the victim computer.
5.2.1.2 Distributed Denial of Service (DDoS)
A denial of service is an interruption of service resulting from system unavailability
or destruction. It prevents any part of a target system from functioning as planned.
This includes any action that causes unauthorized destruction, modification, or
delay of service. Denial of service can also be caused by intentional degradation or
blocking of computer or network resources [2]. These denial-of-service attacks, commonly known as distributed denial-of-service (DDoS) attacks, are a newer form of cyber attack. They target computers connected to the Internet. They are not penetration attacks, and therefore they do not change, alter, destroy, or modify system resources. However, they affect the system by diminishing its ability to function; hence, they are capable of degrading the system's performance, eventually bringing a system down without destroying its resources.
According to the Economist [3], the software tools used to carry out DDoS attacks first came to light in the summer of 1999, and the first security specialists' conference to discuss how to deal with them was held in November of the same year. Since then, there has been a growing trend in DDoS attacks, mainly as a result of the growing number, size, and scope of computer networks, which increase both an attacker's accessibility to networks and the number of potential victims. At the same time, as the victim base and the size of computer networks have grown, there has been little to no effort to implement spoof-prevention filters or take any other preventive action. In particular, security managers have implemented little, if any, system protection against these attacks.
Like penetration electronic attacks (e-attacks), DDoS attacks can also be either
local, where they can shut down LAN computers, or global, originating thousands
of miles away on the Internet, as was the case in the Canadian-generated DDoS
attacks. Attacks in this category include the following:
• IP spoofing is forging of an IP packet address. In particular, a source address in
the IP packet is forged. Since network routers use packet destination address to
route packets in the network, the only time a source address is used is by the
destination host to respond back to the source host. So forging the source IP
address causes the responses to be misdirected, thus creating problems in the
network. Many network attacks are a result of IP spoofing.
• SYN flooding: In Chap. 3, we discussed the three-way handshake used by the TCP protocol to initiate a connection between two network elements. During the handshake, the port door is left half open. A SYN flooding attack floods the target system with so many connection requests from spoofed source addresses that the victim server cannot complete the handshakes, because the source addresses are bogus. In the process, its connection memory gets hogged up, and the victim, overwhelmed by these half-open requests, can be brought down (a minimal packet-crafting sketch follows this list).
• Smurf attack: In this attack, the intruder sends a large number of spoofed ICMP Echo requests to broadcast IP addresses. Hosts on the broadcast network respond to these bogus requests with ICMP Echo replies, all directed to the spoofed source address. This significantly multiplies the traffic converging on the host whose address was spoofed.
• Buffer overflow is an attack in which the attacker floods a carefully chosen field, such as an address field, with more characters than it can accommodate. In malicious cases, these excess characters are actually executable code, which the attacker can cause the system to run, effectively giving the attacker control of the system. Because even someone with little knowledge of the system can use this kind of attack, buffer overflow has become one of the most serious classes of security threats.
• Ping of death: A system attacker sends IP packets that are larger than the 65,536
bytes allowed by the IP protocol. Many operating systems, including network
operating systems, cannot handle these oversized packets; so, they freeze and
eventually crash.
• Land.c attack: The land.c program sends TCP SYN packets whose source and destination IP addresses and port numbers are those of the victim.
• Teardrop.c attack uses a program that sends overlapping TCP/IP packet fragments. It exploits a bug in fragment reassembly and causes the victim system to crash or hang.
• Sequence number sniffing: In this attack, the intruder takes advantage of the
predictability of sequence numbers used in TCP implementations. The attacker
then uses a sniffed next sequence number to establish legitimacy.
Motives of DDoS Attack
DDoS attacks are not like penetration attacks where the intruders expect to gain
from such attacks; they are simply a nuisance to the system. As we pointed out
earlier, since these attacks do not penetrate systems, they do not affect the integrity of the resources other than denying access to them. This means that the intruders do not expect the material gains that would be expected from penetration attacks.
So, because of this, most DDoS attacks are generated with very specific goals.
Among them are:
• Preventing others from using a network connection, with such attacks as Smurf, UDP flood, and ping flood attacks
• Preventing others from using a host or a service by severely impairing or disabling the host or its IP stack, with such attacks as Land, Teardrop, Bonk, Boink, SYN flooding, and ping of death
• Notoriety, for computer-savvy individuals who want to prove their ability and competence in order to gain publicity.
5.2.2 Cyber Criminals
Who are the cyber criminals? They are ordinary users of cyberspace with a message. As the number of users swells, the number of criminals among them also
increases at almost the same rate. A number of studies have identified the following
groups as the most likely sources of cyber crimes [4]:
• Insiders: For a long time, system attacks were limited to in-house, employee-generated attacks on systems and theft of company property. In fact, disgruntled insiders are a major source of computer crimes because they do not need a great deal of knowledge about the victim computer system; in many cases, such insiders use the system every day, which gives them unrestricted access and the ability to damage the system and/or data. The 1999 Computer Security Institute/FBI report notes that 55 % of respondents reported malicious activity by insiders [5].
• Hackers: Hackers are actually computer enthusiasts who know a lot about computers and computer networks and use this knowledge with criminal intent. Since the mid-1980s, computer network hacking has been on the rise, mostly because of the widespread use of the Internet.
• Criminal groups: A number of cyber crimes are carried out by criminal groups
for different motives ranging from settling scores to pure thievery. For example,
such criminal groups with hacking abilities have broken into credit card companies to steal thousands of credit card numbers (see Chap. 3).
• Disgruntled ex-employees: Many studies have shown that disgruntled ex-employees also pose a serious threat to organizations as sources of cyber crimes, targeting their former employers over the employee–employer issues that led to the separation. In some cases, ex-employees simply use their knowledge of the system to attack the organization for purely financial gain.
• Economic espionage spies: The growth of cyberspace and e-commerce and the forces of globalization have created a new source of crime syndicates: organized economic spies who prowl the Internet looking for company secrets. As the price tag for original research skyrockets and competition in the marketplace becomes global, companies around the globe are ready to pay any amount for stolen commercial, marketing, and industrial secrets.
5.3 Hackers
The word hacker has changed meaning over the years as technology has changed. Currently, the word has two opposite meanings. One definition describes a computer enthusiast: an individual who enjoys exploring the details of computers and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary. The opposite definition describes a malicious or inquisitive meddler who tries to discover information by poking around [2].
Before acquiring its current derogatory meaning, the term hacking meant expert writing and modification of computer programs. Hackers were considered highly knowledgeable computer experts who could make a computer do wonders through programming. Today, however, hacking refers to the process of gaining unauthorized access to a computer system for a variety of purposes, including stealing and altering data and staging electronic demonstrations. For some time now, hacking as a political or social demonstration has been used during international crises. During a crisis period, hacking attacks and other Internet security breaches usually spike, in part because of sentiments over the crisis. For example, during the two Iraq wars, there were elevated levels of hacker activity. According to the Atlanta-based Internet Security Systems, around the start of the first Iraq war, there was a sharp increase in such activity of about 37 % over the fourth quarter of the year before, the largest quarterly spike the company had ever recorded [1].
5.3.1 History of Hacking
The history of hacking has taken as many twists and turns as the word hacking itself. One can say that the history of hacking actually began with the invention of the telephone in 1876 by Alexander Graham Bell, for it was this one invention that made internetworking possible. There is agreement among computer historians that the term hack was born at MIT. According to Slatalla, in the 1960s, MIT geeks had an insatiable curiosity about how things worked. However, in those days of colossal mainframe computers, "it was very expensive to run those slow-moving hunks of metal; programmers had limited access to the dinosaurs. So, the smarter ones created what they called 'hacks' – programming shortcuts – to complete computing tasks more quickly. Sometimes their shortcuts were more elegant than the original program" [6].
Although many early hack activities had motives, many people took them to be either highly admirable acts by expert computer enthusiasts or elaborate practical jokes, including the first recorded hack activity, in 1969, by Joe Engressia, commonly known as "The Whistler." Engressia, the grandfather of phone phreaking, was born blind and had perfect pitch, which he used to his advantage: he could whistle any tone he wanted into a phone, perfectly. He discovered phreaking while listening to the error messages produced when he called unconnected numbers. While listening to these messages, he would whistle into the phone and quite often got cut off. After getting cut off numerous times, he phoned AT&T to ask why whistling a tune into the receiver cut him off, and he was surprised when a phone company engineer explained the working of the 2,600-Hz tone. Joe learned how to phreak. It is said that phreakers across the world used to call Joe to tune their "blue boxes" [7].
By 1971, a Vietnam veteran, John Draper, commonly known as "Captain Crunch," took this practical whistling joke further, discovering that a free toy whistle from a cereal box, blown carefully into the receiver of a telephone, produces the precise 2,600-Hz tone needed to make free long-distance phone calls [8]. With this act, "phreaking," a cousin of hacking, was born and entered our language.
Three distinct terms began to emerge: hacker, cracker, and phreaker. Those who wanted the word hack to remain pure and innocent preferred to be called hackers, those who broke into computer systems were called crackers, and those targeting phones came to be known as phreakers. Following Captain Crunch's instructions, Al Gilbertson (not his real name) created the famous little "blue box." Gilbertson's box was essentially a super telephone operator because it gave anyone who used it free access to any telephone exchange. In late 1971, Ron Rosenbaum published an article on the existence and working of this little blue box in Esquire magazine. Its publication created explosive growth in the use of blue boxes and initiated a new class of kids into phreaking [9].
With the start of a limited national computer network, the ARPANET, in the 1970s, a limited form of break-ins from outsiders started appearing. Through the 1970s, a number of developments gave impetus to the hacking movement. The first of these was the publication of the Youth International Party Line newsletter by activist Abbie Hoffman, in which he erroneously advocated free phone calls, claiming that phone calls draw on an unlimited reservoir and that phreaking did not hurt anybody and should therefore be free. The newsletter, whose name was later changed to TAP, for Technical Assistance Program, by Hoffman's publishing partner, Al Bell, continued to publish complex technical details on how to make free calls [6].
The second was the creation of the bulletin boards. Throughout the 1970s, the hacker movement, although becoming more active, remained splintered. This came to an end in 1978, when two guys from Chicago, Randy Suess and Ward Christensen, created the first personal-computer bulletin-board system (BBS).
The third development was the debut of the personal computer (PC). In 1981, when IBM joined the PC wars, a new front in hacking was opened. The PC brought computing power to more people because it was cheap, easy to program, and somewhat more portable. On the heels of the PC came the movie "WarGames" in 1983. The science fiction movie, watched by millions, glamorized and popularized hacking.
The 1980s saw tremendous hacker activity, with the formation of gang-like hacking groups. Notorious individuals adopted hacker handles such as Kevin Mitnick ("The Condor"), Lewis De Payne ("Roscoe"), Ian Murphy ("Captain Zap"), Bill Landreth ("The Cracker"), "Lex Luther" (founder of the Legion of Doom), Chris Goggans ("Erik Bloodaxe"), Mark Abene ("Phiber Optik"), Adam Grant ("The Urvile"), Franklin Darden ("The Leftist"), Robert Riggs ("The Prophet"), Loyd Blankenship ("The Mentor"), Todd Lawrence ("The Marauder"), Scott Chasin ("Doc Holiday"), Bruce Fancher ("Death Lord"), Patrick K. Kroupa ("Lord Digital"), James Salsman ("Karl Marx"), Steven G. Steinberg ("Frank Drake"), and "Professor Falken" [10].
The notorious hacking groups of the 1970s and 1980s included the "414 Club," the "Legion of Doom," the Germany-based "Chaos Computer Club," the "NuPrometheus League," and the "Atlanta Three." All these groups targeted either phone companies, where they would get free phone calls, or computer systems, to steal credit card numbers and individual user account numbers.
During this period, a number of hacker publications were founded, including The Hacker Quarterly and Hacker'zine, and bulletin boards such as "The Phoenix Fortress" and "Plovernet" were created. These forums gave the hacker community a clearinghouse in which to share and trade hacking ideas.
Hacker activities became so worrisome that the FBI started actively tracking and arresting hackers, beginning with the arrest of Ian Murphy (Captain Zap) in 1981, followed by the arrest of Kevin Mitnick in the same year. It is also during this period that hacker culture and activities went global, with reported hacker attacks and activities in Australia, Germany, Argentina, and the United States. Ever since, we have been on a wild ride.
The first headline-making hacking incident involving malicious self-replicating code, one that got national and indeed global headlines, took place in 1988, when a Cornell graduate student created a program that crashed 6,000 computers and effectively shut down the Internet for 2 days [11]. Robert Morris's action forced the US government to form the federal Computer Emergency Response Team (CERT) to investigate similar and related attacks on the nation's computer networks. Law enforcement agencies started actively following Internet activity and sometimes eavesdropped on communication network traffic. This did not sit well with some activists, who formed the Electronic Frontier Foundation in 1990 to defend the rights of those investigated for alleged computer hacking.
The 1990s saw heightened hacking activity and serious computer network "near" meltdowns, including the "Michelangelo" scare of 1991–1992, a virus that was expected to crash computers on March 6, 1992, the artist's 517th birthday, but that passed without incident. In 1995, the notorious, self-styled hacker Kevin Mitnick was arrested by the FBI on charges of computer fraud that involved the stealing of thousands of credit card numbers. In the second half of the 1990s, hacking activities increased considerably, including the 1998 Solar Sunrise, a series of attacks targeting Pentagon computers that led the Pentagon to establish round-the-clock, online guard duty at major military computer sites, and a coordinated attack on Pentagon
computers by Ehud Tenenbaum, an Israeli teenager known as "The Analyzer," and an American teen. The close of the twentieth century saw heightened anxiety in the computing and computer user communities over both the millennium bug and the ever-rising rate of computer network break-ins. So, in 1999, President Clinton announced a $1.46 billion initiative to improve government computer security. The plan would establish a network of intrusion detection monitors for certain federal agencies and encourage the private sector to do the same [8]. The year 2000 probably saw the most costly and most powerful computer network attacks, including the "Melissa" virus, the "Love Bug," the "Killer Resume," and a number of devastating DDoS attacks. The following year, 2001, the elusive "Code Red" worm was released. The future of such malicious code is as unpredictable as the kinds of viruses and worms themselves.
The period since 1980 has seen rapid growth in hacking up to the present. As we observed in Sect. 3.2.4, until recently most hacker communities worked underground, forming global groups like those listed in Table 3.1. Today, hackers are no longer considered as harmful to computer networks as they used to be, and they are now being used by governments and organizations to do the opposite of what they were once known for: defending national critical networks and hardening company networks. In fact, hacker Web sites are growing.
5.3.2 Types of Hackers
There are several subgroups of hackers, distinguished by their hacking philosophies. The biggest subgroups are crackers, hacktivists, and cyberterrorists.
5.3.2.1 Crackers
A cracker is one who breaks security on a system. Crackers are hardcore hackers characterized more as professional security breakers and thieves. The term was coined in the mid-1980s by purist hackers who wanted to differentiate themselves from individuals with criminal motives whose sole purpose is to sneak through security systems. Purist hackers were concerned that journalists were misusing the term "hacker"; they worried that the mass media failed to understand the distinction between computer enthusiasts and computer criminals, calling both hackers. The distinction has, however, failed to take hold, and the two terms hack and crack are still often used interchangeably.
Even though the public still does not see the difference between hackers and
crackers, purist hackers are still arguing that there is a big difference between what
they do and what crackers do. For example, they say cyberterrorists, cyber vandals,
and all criminal hackers are not hackers but crackers by the above definition.
There is a movement now of reformed crackers who are turning their hacking
knowledge into legitimate use, forming enterprises to work for and with cybersecurity companies and sometimes law enforcement agencies to find and patch
potential security breaches before their former counterparts can take advantage of
them.
5.3.2.2 Hacktivists
Hacktivism is a marriage between pure hacking and activism. Hacktivists are conscious hackers with a cause, grown out of the old phreakers. They carry out their activism in electronic form in the hope of highlighting what they consider noble causes, such as exposing institutional unethical or criminal actions, as well as political and other causes. Hacktivism also includes acts of civil disobedience using cyberspace. The tactics used in hacktivism change with time and technology. Just as activists in the real world use different approaches to get their message across, hacktivists in cyberspace use several approaches, including automated e-mail bombs, Web defacing, virtual sit-ins, and computer viruses and worms [12].
Automated E-mail Bombs: E-mail bombs are used mainly for activist causes, such as social and political electronic and civil demonstrations, but they can also be, and have been, used in a number of cases for coercion, revenge, and harassment of individuals or organizations. The approach is to choose a selection of individuals or organizations and bombard them with thousands of automated e-mails, which usually jams and clogs the recipients' mailboxes. If several individuals are targeted on the same server, the bombardment may end up disabling the mail server. Political electronic demonstrations were mounted in a number of global conflicts, including the Kosovo and Iraq wars. And economic and social demonstrations took place to electronically and physically picket the new world economic order represented by the World Bank and the International Monetary Fund (IMF), meeting in Seattle, Washington, and Washington, DC, in the United States, and in Prague, Czech Republic, and Genoa, Italy.
Web Defacing: The other attention-getter for the hacktivist is Web defacing, a favorite form of hacktivism for nearly all causes, whether political, social, or economic. With this approach, the hacktivists penetrate the Web server and replace the selected site's content and links with whatever they want viewers to see, which may be political, social, or economic messages. An approach similar to Web defacing is to tamper with the domain name service (DNS) so that the victim's domain name resolves to a carefully selected IP address of a site hosting the content the hackers want viewers to see (a defensive monitoring sketch for this case follows below).
One contributing factor to Web defacing is the simplicity of doing it. Detailed information is freely available on the Web outlining the bugs and vulnerabilities in both Web software and Web server protocols, along with the exploits needed to penetrate a Web server and deface a victim's Web site. Defacing technology has, like all other technologies, been developing fast. It used to be that a hacker who wanted to deface a Web site would, remotely or otherwise, break into the server that held the Web pages, gain the access required to edit the page, and then alter it. Breaking into the Web server would be achieved through, for example, a remote exploit that gave the attacker access to the system; the hacktivist would then sniff connections between computers to reach further remote systems.
Newer scripts and Web server vulnerabilities now allow hackers to gain remote access to Web sites on Web servers without gaining prior access to the server itself. This is because these newer scripts exploit bugs that overwrite or append
to the existing page without ever obtaining a valid log-in and password combination or any other form of legitimate access. As such, the attacker can only overwrite or append to files on the system. Since a wide variety of Web sites offer both the hacking and security scripts and utilities required to commit these acts, it is only a matter of minutes before scripts are written, Web sites are selected, and a victim is hit.
As an example, in November 2001, a Web-defacing duo calling themselves Sm0ked Crew defaced The New York Times site. Sm0ked Crew had earlier hit the Web sites of big-name technology giants such as Hewlett-Packard, Compaq Computer, Gateway, Intel, AltaVista, and Disney's Go.com [13].
On the political front, in April 2003, during the second Iraq war, hundreds of sites were defaced by both antiwar and pro-war hackers and hacktivists; among the incidents were a temporary defacement of the White House's Web site and an attempt to shut down British Prime Minister Tony Blair's official site. In addition to the defacement of Web sites, at least nine viruses or "denial-of-service" attacks cropped up in the weeks leading up to the war [1].
Virtual Sit-Ins: A virtual sit-in or blockade is the cousin of a physical sit-in or blockade: an action of civil concern about an issue, whether social, economic, or political, and a way to call public attention to that issue. The process works through disruption of the normal operation of a victim site, denying or preventing access to it. This is done by the hacktivists generating thousands of digital messages directed at the site, either directly or through surrogates; in many of these civil disobedience cases, demonstrating hacktivists set up automated sites that generate messages directed at the victim site. Although dated, two incidents are typical. On April 20, 2001, a group calling itself the Electrohippies Collective held a planned virtual sit-in of Web sites associated with the Free Trade Area of the Americas (FTAA) conference. The sit-in, which started at 00:00 UTC, was to object to the FTAA conference and the entire FTAA process by generating an electronic record of public pressure in the server logs of the organizations concerned. Figure 5.1 shows a logo of an activist group against global warming.
On February 7, 2002, during the annual meeting of the World Economic Forum (WEF) in New York City, more than 160,000 demonstrators, organized by, among others, Ricardo Dominguez, co-founder of the Electronic Disturbance Theater (EDT), went online to stage a "virtual sit-in" at the WEF home page. Using downloaded software tools that constantly reloaded the target Web sites, the protestors replicated a denial-of-service attack on the site on the first day of the conference; by 10:00 AM of that day, the WEF site had collapsed, and it remained down until late night of the next day [14].
Fig. 5.1 A logo of an activist group to stop global warming
5.3.2.3 Computer Viruses and Worms
Perhaps the most widely used and easiest method available to hacktivists is sending viruses and worms. Both are forms of malicious code, although worm code may be less dangerous. Other differences include the fact that worms are usually more autonomous and can spread on their own once delivered, while a virus can propagate only by piggybacking on or embedding itself into other code. We will give a more detailed discussion of both viruses and worms in Chap. 14.
5.3.2.4 Cyberterrorists
Based on motives, cyberterrorists can be divided into two categories: terrorists and information warfare planners.
Terrorists. The World Trade Center attack in 2001 brought home the realization of the potential for terrorist attacks not only on organizations' digital infrastructure but also on the national critical infrastructure. Cyberterrorists who are terrorists have many motives, ranging from political and economic to religious and personal. Most often, the techniques of their terror are intimidation, coercion, or actual destruction of the target.
Information Warfare Planners. This category involves war planners who threaten to attack a target by disrupting the target's essential services, by electronically controlling and manipulating information across computer networks, or by destroying the information infrastructure.
5.3.3 Hacker Motives
Since the hacker world is closed to nonhackers, and no hacker likes to discuss his or her secrets with nonmembers of the hacker community, it is extremely difficult to list all the hacker motives accurately. From studies of attacked systems and some writings by former hackers who are willing to speak out, we learn quite a lot about this rather secretive community. For example, we have learned that hackers' motives can be put into two categories: those of the collective hacker community and those of individual members. As a group, hackers like to interact with others on bulletin boards, through electronic mail, and in person. They are curious about new technologies, adventurous in seeking to master them, and willing to stimulate their intellect by learning from other hackers in order to be accepted into more prestigious hacker communities. Most important, they have a common dislike for and resistance to authority.
Most of these collective motives are reflected in the hacker ethic. According to
Steven Levy, the hacker ethic has the following six tenets [1]:
• Access to computers and anything that might teach you something about the way
the world works should be unlimited and total. Always yield to the hands-on
imperative!
• All information should be free.
• Mistrust authority and promote decentralization.
• Hackers should be judged by their hacking, not bogus criteria such as degrees,
age, race, or position.
• You can create art and beauty on a computer.
• Computers can change your life for the better.
Collective hacker motives can also be reflected in the following three additional
principles (Doctor Crash, 1986) [10]:
• Hackers reject the notion that “businesses” are the only groups entitled to access
and use modern technology.
• Hacking is a major weapon in the fight against encroaching computer
technology.
• The high cost of computing equipment is beyond the means of most hackers,
which results in the perception that hacking and phreaking are the only recourse
to spreading computer literacy to the masses.
Apart from collective motives, individual hackers, just like other computer system users, have personal motives that drive their actions, among them the following [15]:
Vendetta and/or Revenge: Although a typical hacking incident is nonfinancial and, according to hacker profiles, done for recognition and fame, some incidents, especially among older hackers, stem from mundane grievances such as a promotion denied, a boyfriend or girlfriend taken, an ex-spouse given child custody, and other situations involving family and intimacy issues. These may result in hacker-generated attacks targeting the individual or the company that caused the displeasure. Also, social, political, and religious issues, especially issues of passion, can drive rebellion in people, which usually leads to revenge cyber attacks. Such mass computer attacks are also increasingly used as payback for what the attacker or attackers consider injustices that need to be avenged.
Jokes, Hoaxes, and Pranks: Even though it is extremely unlikely that serious hackers would start cyber attacks just for jokes, hoaxes, or pranks, there are less serious ones who can and have done so. Hoaxes are scare alerts started by one or more malicious people and passed on by innocent users who think that they are helping the community by spreading the warning. Most hoaxes concern viruses and worms, although
there are hoaxes that are computer-related folklore stories, urban legends, or true stories sent out as text messages. Although many virus hoaxes are false scares, some have a kernel of truth that often becomes greatly exaggerated, such as "The Good Times" and "The Great Salmon." Virus hoaxes infect mailing lists, bulletin boards, and Usenet newsgroups. Worried system administrators sometimes contribute to the scare by posting dire warnings to their employees, warnings that become hoaxes themselves.
The most common hoax has been, and still is, the report of a nonexistent virus. Almost every few weeks there is a new virus hoax, and the creator sometimes goes on to offer removal remedies which, if one is not careful, result in removing vital system programs such as the operating system or boot programs. Pranks usually appear as scare messages, often in the form of mass e-mail warnings of serious problems on a certain issue. Innocent people read such e-mails and get worried; if it is a health issue, they may end up calling their physicians or going to hospitals because of a prank.
Jokes, on the other hand, are not very common, for two reasons: first, it is difficult to create a good joke for an audience as large and diverse as cyberspace, and second, it is difficult to create a joke so clear that many people will appreciate it.
Terrorism: Although cyberterrorism had been going on at a low level, very few people were concerned about it until after the September 11, 2001, attack on the World Trade Center. Ever since, there has been a high degree of awareness, thanks to the Department of Homeland Security. We now realize that with globalization, we
live in a networked world and that there is a growing dependence on computer
networks. Our critical national infrastructure and large financial and business systems
are interconnected and interdependent on each other. Targeting any point in the
national network infrastructure may result in serious disruption of the working of
these systems and may lead to a national disaster. The potential for electronic warfare
is real, and national defense, financial, transportation, water, and power grid systems
are susceptible to an electronic attack unless and until the nation is prepared for it.
Political and Military Espionage: The growth of the global network of computers, with the dependence and intertwining of both commercial and defense-related business information systems, is creating fertile ground for both political and military espionage. Cyberspace is making the collection, evaluation, analysis, integration, and interpretation of information from around the globe easy and fast. Modern espionage focuses on military, policy, and decision-making information. For example, military superiority cannot be attained with advanced and powerful weaponry alone, unless one also controls the information that brings about the interaction and coordination among the central control, the ships and aircraft that launch the weapon, and the guidance system on the weapon. The military information needed to run such weapons is as important as the weapons themselves, so having advanced weaponry comes with the heavy price of safeguarding the information on the development and working of such systems. Nations are investing heavily in acquiring military secrets for such weaponry and in government policy issues. The increase in both political and military espionage has led to a boom in
counterintelligence, in which nations and private businesses pay to train people who will counter the flow of information to the highest bidder.
Business Espionage: One effect of globalization and of the interdependence of financial, marketing, and global commerce has been a rise in efforts to steal and sell business, commerce, and marketing information. As businesses become global and world markets become one global bazaar, the marketplace for business ideas and market strategies has become highly competitive and intense. This intense competition, and the expense involved, have led to an easier way out: business espionage. In fact, business information espionage is one of the most lucrative criminal careers today. Cyber sleuths target employees using a variety of techniques, including system break-ins, social engineering, sniffing, electronic surveillance of company executives' electronic communications, and monitoring of company employee chat rooms for information. Many companies now boast competitive or business intelligence units, sometimes disguised as marketing intelligence or research, that actually do business espionage. Likewise, business counterintelligence is also on the rise.
Hatred: The Internet communication medium is a paradox. It is the medium that has brought nations and races together, yet it is the same medium being used to divide nations and races through hatred. The global communication networks have given a new medium to the homegrown cottage industry of hate, which used to circulate only through fliers and word of mouth; these hate groups have embraced the Internet and gone global. Hackers who hate others based on a string of human attributes, which may include national origin, gender, race, or even mundane ones such as manner of speech, can target carefully selected systems where the victim is located and carry out attacks of vengeance, often rooted in ignorance.
Personal Gain/Fame/Fun/Notoriety: Serious hackers are usually profiled as
reclusive. Sometimes, the need to get out of this isolation and to look and be normal
and fit in drives them to try and accomplish feats that will bring them that sought
after fame and notoriety, especially within their hacker communities. However, such
fame and notoriety are often gained through feats of accomplishments of some challenging tasks. Such a task may be and quite often does involve breaking into a
revered system.
Ignorance: Although hackers are profiled as super-intelligent with a great love for computers, they still fall victim to what many people fall victim to: ignorance. They make decisions with little or no information and target the wrong system or the wrong person. At times such acts also occur when individuals, authorized or not, but ignorant of the workings of the system, stumble upon weaknesses or perform forbidden acts that result in the modification or destruction of system resources.
5.3.4 Hacking Topologies
We pointed out earlier that hackers are often computer enthusiasts with a very good
understanding of the working of computers and computer networks. They use this
knowledge to plan their system attacks. Seasoned hackers plan their attacks well in
advance, and their attacks do not affect unmarked members of the system. To achieve this kind of precision, they usually use specific attack patterns, or topologies. Using these topologies, hackers can select to target one victim among a sea of network hosts, a subnet of a LAN, or a global network. The attack pattern, the topology, is shaped by the following factors of the network configuration:
• Equipment availability – this is more important if the victim is just one host. The
underlying equipment to bring about an attack on only one host and not affect
others must be available. Otherwise, an attack is not possible.
• Internet access availability – similarly, it is imperative that a selected victim host
or network be reachable. To be reachable, the host or subnet configuration must
avail options for connecting to the Internet.
• The environment of the network – depending on the environment where the victim host or subnet or full network is, care must be taken to isolate the target unit
so that nothing else is affected.
• Security regime – it is essential for the hacker to determine what types of defenses are deployed around the victim unit. If the defenses are likely to present unusual obstacles, then a different topology, one that may make the attack a little easier, may be selected.
The pattern chosen, therefore, is primarily based on the type of victim(s), motive,
location, method of delivery, and a few other things. There are four of these patterns: one-to-one, one-to-many, many-to-many, and many-to-one [15].
5.3.4.1 One-to-One
These hacker attacks originate from one attacker and are targeted at a known victim.
They are personalized attacks where the attacker knows the victim, and sometimes
the victim may know the attacker. One-to-one attacks are characterized by the following motives:
• Hate: This is when the attacker causes physical, psychological, or financial damage to the victim because of the victim’s race, nationality, gender, or any other
social attributes. In most of these attacks, the victim is innocent.
• Vendetta: This is when the attacker believes he or she is the real victim, paying back for a wrong committed or an opportunity denied.
• Personal gain: This is when the attacker is driven by personal motives, usually
financial gain. Such attacks include theft of personal information from the victim, for ransom, or for sale.
• Joke: This is when the attacker, without any malicious intentions, simply wants
to send a joke to the victim. Most times, such jokes end up degrading and/or
dehumanizing the victim.
• Business espionage: This is when the victim is usually a business competitor.
Such attacks involve the stealing of business data, market plans, product blueprints, market analyses, and other data that have financial and business strategic
and competitive advantages (Figs. 5.2 and 5.3).
Fig. 5.2 Shows a one-to-one topology
Fig. 5.3 Shows a one-to-many topology
5.3.4.2 One-to-Many
These attacks are fueled by anonymity. In most cases, the attacker does not know any of the victims. Moreover, in all cases, the attackers assume that they will remain anonymous to the victims. This topology has been the technique of choice in the last 2–3 years because it is one of the easiest to carry out. The
motives that drive attackers to use this technique are as follows:
• Hate: The attacker may specifically select a cross section of a type of people he or she wants to hurt and deliver the payload to the most visible location where such people have access. Examples of attacks using this technique include a number of e-mail attacks sent to colleges and churches that are predominantly of one ethnic group.
• Personal satisfaction: This occurs when the hacker derives fun or satisfaction from other people's suffering. Examples include well-known e-mail attacks such as the "Love Bug," "Killer Resume," and "Melissa."
• Jokes/hoaxes are involved when the attacker is playing jokes or wants to
intimidate people.
5.3.4.3 Many-to-One
These attacks have so far been rare, but they have recently picked up momentum as DDoS attacks have once again gained favor in the hacker community. In a many-to-one attack, the attacker starts by using one host to spoof other hosts, the secondary victims, which are then used as the new source of an avalanche of attacks on a selected victim. These types of attacks need a high degree of coordination and, therefore, may require advanced planning and a good understanding of the infrastructure of the network. They also require a very well-executed selection process in choosing the secondary victims and then eventually the final victim. Attacks in this category are driven by:
• Personal vendetta: The attacker wants to create the maximum possible effect, usually damage, on the selected victim site.
• Hate: The attacker selects a site for no reason other than hate and bombards it in order to bring it down or destroy it.
• Terrorism: Attackers using this technique may also be driven by the need to inflict as much terror as possible. Terrorism may be related to or part of crimes like drug trafficking, theft where the aim is to destroy evidence after a successful attack, or even political terrorism.
• Attention and fame: In some extreme circumstances, what motivates this topology may be just a need for personal attention or fame. This may be the case if the targeted site is deemed to be a challenge or a hated site (Fig. 5.4).
Fig. 5.4 Shows a many-to-one topology
5.3.4.4 Many-to-Many
As with the previous topology, attacks using this topology are rare; however, there has been a recent increase in reported attacks using this technique. For example, in some of the recent DDoS cases, a select group of sites was chosen by the attackers as secondary victims. These are then used to bombard another select group of victims. The numbers involved in each group may vary from a few to several thousand. As was the case in the many-to-one topology, attackers using this technique need a good understanding of the network infrastructure and a good and precise selection process to pick the secondary victims and eventually select the final pool of victims. Attacks utilizing this topology are mostly driven by a number of motives, including:
• Attention and fame are sought when the attacker seeks publicity resulting from a
successful attack.
• Terrorism: Terrorism is usually driven by a desire to destroy something; this may
be a computer system or a site that may belong to financial institutions, public
safety systems, or a defense and communication infrastructure. Terrorism has
many faces including drug trafficking, political and financial terrorism, and the
usual international terrorism driven by international politics.
• Fun/hoax: This type of attack may also be driven by the personal gratification of becoming famous and having fun (Fig. 5.5).
Fig. 5.5 Shows a many-to-many topology
5.3.5 Hackers' Tools of System Exploitation
Earlier on, we discussed how hacking uses two types of system attack: DDoS and penetration. In a DDoS attack, there are a variety of ways of denying access to system resources, and we have already discussed those. Let us now look at the most widely used methods in system penetration attacks. System penetration is the most widely used method of hacker attack. Once in, a hacker has a wide variety of choices, including viruses, worms, and sniffers [15].
5.3.5.1 Viruses
Let us start by giving a brief description of a computer virus and defer a more
detailed description of it until Chap. 14. A computer virus is a program that infects
a chosen system resource such as a file and may even spread within the system and
beyond. Hackers have used various types of viruses in the past as tools, including
memory/resident, error-generating, program destroyers, system crushers, time theft,
hardware destroyers, Trojans, time bombs, trapdoors, and hoaxes. Let us give a brief
description of each and defer a more detailed study of each until Chap. 14.
Memory/Resident Virus: This is one of the most insidious, difficult-to-detect, fast-spreading, and extremely difficult-to-eradicate computer viruses; hackers use it to attack the central storage part of a computer system. Once in memory, the virus is able to attack any other program or data in the system. As we will see in Chap. 14, such viruses are of two types: transient, the category that includes viruses that are active only when the infected program is executing, and resident, a brand that attaches itself, via surrogate software, to a portion of memory and remains active long after the surrogate program has finished executing. Examples of memory resident viruses include all boot sector viruses, such as the Israel virus [16].
Error-Generating Virus: Hackers are fond of sending viruses that are difficult to discover and yet fast moving. Such viruses are deployed in executable code. Every time the software is executed, errors are generated. The errors vary from "hard" logical errors, resulting in complete system shutdown, to simple "soft" logical errors, which may cause momentary blips of the screen.
Data and Program Destroyers: These are serious software destroyers that attach themselves to a piece of software and then use it as a conduit or surrogate for growth, replication, and as a launch pad for later attacks on this and other programs and data. Once attached to a piece of software, they attack any data or program that the software may come in contact with, sometimes altering, deleting, or completely destroying the contents.
System Crusher: Hackers use system crusher viruses to completely disable the
system. This can be done in a number of ways. One way is to destroy the system
programs such as the operating system, compilers, loaders, linkers, and others. Another
approach is to self-replicate until the system is overwhelmed and crashes.
Computer Time Theft Virus: Hackers use this type of virus to steal system time
either by first becoming a legitimate user of the system or by preventing other legitimate users from using the system by first creating a number of system interruptions.
This effectively puts other programs scheduled to run into indefinite wait queues.
The intruder then gains the highest priority, like a superuser with full access to all
system resources. With this approach, system intrusion is very difficult to detect.
Hardware Destroyers: Although not very common, these killer viruses are used by hackers to selectively destroy a system device by embedding the virus into device microinstructions, or "mic," such as BIOS code and device drivers. Once embedded into the mic, they may alter it in ways that cause the devices to move into positions that normally result in physical damage. For example, there are viruses that are known to lock up keyboards, disable mice, and cause disk read/write heads to move to nonexistent sectors on the disk, thus causing the disk to crash.
Trojans: These are a class of viruses that hackers hide, just as in the legend of the Greek Trojan Horse, inside trusted programs such as compilers, editors, and other commonly used programs.
Logic/Time Bombs: Logic bombs are a timed and commonly used type of virus for penetrating systems, embedding themselves in the system's software and lying in wait until a trigger goes off. Trigger events can vary in type depending on the motive of the virus. Most triggers are timed events. There are various types of these viruses, including Columbus Day, Valentine's Day, Jerusalem-D, and Michelangelo, which was meant to activate on the 517th anniversary of Michelangelo's birth.
Trapdoors: These are probably among the virus tools most used by hackers. They find their way into the system through weak points and loopholes that are found through system scans. Quite often, software manufacturers, during software development and testing, intentionally leave trapdoors in their products, usually undocumented, as secret entry points into the programs so that modifications can be made to the programs at a later date. Trapdoors are also used by programmers as testing points. As is always the case, trapdoors can also be exploited by malicious people, including the programmers themselves. In a trapdoor attack, an intruder may deposit a virus-infected data file on a system instead of actually removing, copying, or destroying the existing data files.
Hoaxes: A very common form of virus, hoaxes most often do not originate from hackers but from system users. Though not physically harmful, hoaxes can be a disturbing nuisance to system users.
5.3.5.2 Worm
A worm is very similar to a virus. In fact, their differences are few. They are both
automated attacks, both self-generate or replicate new copies as they spread, and
both can damage any resource they attack. The main difference between them, however, is that while viruses always hide in software as surrogates, worms are standalone programs.
Hackers have been using worms as frequently as they have been using viruses to
attack computer systems.
5.3.5.3 Sniffer
A sniffer is a software script that sniffs around the target system looking for passwords and other specific information that usually leads to the identification of system exploits. Hackers use sniffers extensively for this purpose.
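To appreciate how little machinery a sniffer needs, consider the minimal sketch below. It is written in Python against the Linux raw-socket interface (an assumption; it must run with root privileges) and simply prints the first bytes of each captured frame. This is the same capture mechanism that tools like tcpdump are built on; a real sniffer would add header parsing to pick out passwords and other exploitable data.

#!/usr/bin/env python3
# Minimal sniffer sketch (Linux only; requires root privileges).
import socket

# AF_PACKET with protocol ETH_P_ALL (0x0003) captures every frame
# seen on the wire, regardless of protocol or destination.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                        socket.ntohs(0x0003))

for _ in range(10):                       # capture a few frames, then stop
    frame, _address = sniffer.recvfrom(65535)
    print(frame[:48].hex())               # dump the first 48 bytes of each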
5.3.6 Types of Attacks
Whatever their motives, hackers have a variety of techniques in their arsenal to carry
out their goals. Let us look at some of them here.
Social Engineering: This involves fooling the victim for fun and profit. Social engineering depends on trusting employees falling for cheap hacker "tricks," such as the attacker calling or e-mailing them while masquerading as a system administrator, for example, and getting their passwords, which eventually lets the intruder in. Social engineering is very hard to protect against. The only way to prevent it is through employee education and awareness.
Impersonation: This is stealing the access rights of authorized users. There are many ways an attacker such as a hacker can impersonate a legitimate user. For example, a hacker can capture a user telnet session using a network sniffer such as tcpdump or nitsniff. The hacker can then later log in as a legitimate user with the stolen log-in access rights of the victim.
Exploits: This involves exploiting a hole in software or operating systems. As is usually the case, many software products are brought to market either through a rush to finish or a lack of testing, leaving gaping loopholes. Badly written software is very common, even in large software projects such as operating systems. Hackers quite often scan network hosts for exploits and use them to enter systems.
Transitive Trust: This exploits host-to-host or network-to-network trust. Whether through the client–server three-way handshake or server-to-server next-hop relationships, there is always a trust relationship between two network hosts during any transmission. This trust relationship is quite often compromised by hackers in a variety of ways. For example, an attacker can easily perform an IP spoof or a sequence number attack between two transmitting elements and get away with information that compromises the security of the two communicating elements.
Data Attacks: Script programming has not only brought new dynamism into Web
development, but it has also brought a danger of hostile code into systems through
scripts. Current scripts can run both on the server, where they traditionally used to run, and on the client. In doing so, scripts can allow an intruder to deposit hostile code into the system, including Trojans, worms, or viruses. We will discuss
scripts in detail in the next chapter.
Infrastructure Weaknesses: Some of the greatest network infrastructure weaknesses are found in the communication protocols. Many hackers, by virtue of their
knowledge of the network infrastructure, take advantage of these loopholes and use
them as gateways to attack systems. Many times, whenever a loophole is found in
the protocols, patches are soon made available, but not many system administrators
follow through with patching the security holes. Hackers start by scanning systems
to find those unpatched holes. In fact, most of the system attacks from hackers use
known vulnerabilities that should have been patched.
Denial of Service: This is a favorite attack technique of many hackers, especially hacktivists. It consists of preventing the system from being used as planned by overwhelming the servers with traffic. The victim server is selected and then bombarded with packets bearing spoofed IP addresses. Many times, innocent hosts are forced to take part in the bombardment of the victim to increase the traffic on the victim until it is overwhelmed and eventually fails.
Active Wiretap: In an active wiretap, messages are intercepted during transmission. When the interception happens, two things may take place. First, the data in the intercepted package may be compromised by the introduction of new data, such as a change of source or destination IP address or a change in the packet sequence numbers. Second, the data may not be changed but instead copied for later use, as in the scanning and sniffing of packets. In either case, the confidentiality of data is compromised and the security of the network is put at risk.
5.4 Dealing with the Rising Tide of Cyber Crimes
Most system attacks take place before even experienced security experts have advance knowledge of them. Most security solutions are best practices, as we have seen so far, and we will continue to discuss them as either preventive or reactive. An effective plan must consist of three components: prevention, detection, and analysis and response.
5.4.1 Prevention
Prevention is probably the best system security policy, but only if we know what to protect the systems from. It has been, and continues to be, an uphill battle for the security community to predict what type of attack will occur the next time around. Although prevention is the best approach to system security, the future of system security cannot and should not rely on the guesses of a few security people, who have gotten it wrong sometimes and will continue to do so. One of the few bright spots in the protection of systems through prevention has been the fact that most attack signatures are repeat signatures. Although it is difficult, and we are constantly chasing hackers who are always ahead of us, we still need to do something. Among the possible approaches are the following:
• A security policy
• Risk management
• Perimeter security
• Encryption
• Legislation
• Self-regulation
• Mass education
We will discuss all these in detail in the chapters that follow.
5.4.2 Detection
In case prevention fails, the next best strategy should be early detection. Detecting cyber crimes before they occur requires a 24-h monitoring system that alerts security personnel whenever something unusual (something with a non-normal pattern, different from the usual pattern of traffic in and around the system) occurs. Detection systems must continuously capture, analyze, and report on the daily happenings in and around the network. In capturing, analyzing, and reporting, several techniques are used, including intrusion detection, vulnerability scanning, virus detection, and other ad hoc methods. We will look at these in the coming chapters.
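To make the idea of a "non-normal pattern" concrete, here is a toy sketch, in Python, of the kind of rule a monitoring system applies: it keeps a running average of per-minute packet counts (the numbers below are hypothetical) and raises an alert when a count far exceeds that average. Production intrusion detection systems, discussed in the coming chapters, are of course far more sophisticated.

#!/usr/bin/env python3
# Toy traffic anomaly detector: alert when a count is far above the
# running mean. The per-minute packet counts below are hypothetical.
counts = [980, 1010, 995, 1002, 990, 8700, 1005]   # 8700 is the spike

mean = None
for minute, count in enumerate(counts):
    if mean is not None and count > 3 * mean:      # crude threshold rule
        print(f"minute {minute}: ALERT, {count} packets (mean {mean:.0f})")
    # update the running mean with simple exponential smoothing
    mean = count if mean is None else 0.9 * mean + 0.1 * count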
5.4.3 Recovery
Whether or not prevention or detection solutions were deployed on the system, if a
security incident has occurred on a system, a recovery plan, as spelled out in the
security plan, must be followed.
5.5 Conclusion
Dealing with rising cyber crimes in general, and hacker activities in particular, in this fast-moving computer communication revolution in which everyone is likely to be affected is a major challenge not only for the people in the security community but for all of us. We must devise means that will stop the growth, stop the spiral, and protect the systems from attacks. We have our work cut out for us, and the fight may be tough in that we are chasing an enemy who seems, on many occasions, to know more than we do and is constantly ahead of us.
Preventing cyber crimes requires an enormous amount of effort and planning. The goal is to have advance information before an attack occurs. However, the challenge is to get this advance information. Moreover, getting this information in advance does not help very much unless we can quickly analyze it and plan an appropriate response in time to prevent the systems from being adversely affected. In real life, however, there is no such thing as the luxury of advance information before an attack.
Exercises
1. Define the following terms:
(i) Hacker
(ii) Hacktivist
(iii) Cracker
2. Why is hacking a big threat to system security?
3. What is the best way to deal with hacking?
4. Discuss the politics of dealing with hacktivism.
5. Following the history of hacking, can you say that hacking is getting under
control? Why or why not?
6. What kind of legislation can be effective to prevent hacking?
7. List and discuss the types of hacker crimes.
8. Discuss the major sources of computer crimes.
9. Why is crime reporting so low in major industries?
10. Insider abuse is a major crime category. Discuss ways to solve it.
Advanced Exercises
1. Devise a plan to compute the cost of computer crime.
2. What major crimes would you include in the preceding study?
3. From your study, identify the most expensive attacks.
4. Devise techniques to study the problem of non-reporting. Estimate the costs associated with it.
5. Study the reporting patterns of computer crimes reported by industry. Which industry reports best?
References
1. Cybercrime threat "real and growing". http://news.bbc.co.uk/2/hi/science/nature/978163.stm
2. Glossary of vulnerability testing terminology. http://www.ee.oulu.fi/research/ouspg/sage/glossary/
3. Anatomy of an attack. The Economist, 19–25 February 2000
4. Kizza JM (2003) Social and ethical issues in the information age, 2nd edn. Springer, New York
5. Freeh LJ, FBI congressional report on cybercrime. http://www.fbi.gov/congress00/cyber021600.htm
6. Slatalla M, A brief history of hacking. http://tlc.discovery.com/convergence/hackers/articles/history.html
7. Phone phreaking: the telecommunications underground. http://telephonetribute.com/phonephreaking.html
8. Timeline of hacking. http://fyi.cnn.com/fyi/interactive/school.tools/timelines/1999/computer.hacking/frameset.exclude.html
9. Rosenbaum R, Secrets of the little blue box. http://www.webcrunchers.com/crunch/esqart.html
10. The complete history of hacking. http://www.wbglinks.net/pages/history/
11. Denning PJ (1990) Computers under attack: intruders, worms and viruses. ACM Press, New York
12. Denning D, Activism, hacktivism, and cyberterrorism: the internet as a tool for influencing foreign policy. http://www.nautilus.og/info-policy/workshop/papers/denning.html
13. Lemos R, Online vandals smoke New York Times site. CNET News.com. http://news.com.com/2009–1001–252754.html
14. Shachtman N, Hacktivists stage virtual sit-in at WEF Web site. AlterNet. http://www.alternet.org/story.html?StoryID=12374
15. Kizza JM (2001) Computer network security and cyber ethics. McFarland, North Carolina
16. Forcht K (1994) Computer security management. Boyd & Fraser Publishing, Danvers
6 Scripting and Security in Computer Networks and Web Browsers
6.1 Introduction
The rapid growth of the Internet and its ability to offer services have made it the fastest-growing medium of communication today. Today's and tomorrow's business transactions, involving financial data, product development and marketing, storage of sensitive company information, and the creation, dissemination, sharing, and storing of information, are and will continue to be made online, most specifically on the Web. The automation and dynamic growth of an interactive Web have created a huge demand for a new type of Web programming to meet the growing demands of millions of Web service users around the world. Some services and requests are tedious and others are complex, yet the rate of growth in the number of requests, the amount of services requested in terms of bandwidth, and the quality of information requested all warrant a technology to automate the process. Script technology came to the rescue just in time. Scripting is a powerful automation technology on the Internet that makes the Web highly interactive.
Scripting technology is making the Web interactive and automated, as Web servers accept inputs from users and respond to those inputs. While scripting is making the Internet, and in particular the Web, alive and productive, it also introduces a huge security problem to an already security-burdened cyberspace. Hostile scripts embedded in Web pages, as well as in HTML-formatted e-mail, attachments, and applets, introduce a new security paradigm in cyberspace security. In particular, security problems are introduced in two areas: at the server and at the client. Before we look at security at both of these points, let us first understand the scripting standard.
6.2 Scripting
A program script is a logical sequence of line commands that causes the computer to accomplish some task. Many times we refer to such code as macros or batch files because it can be executed without user interaction. A script language is a
programming language through which you can write scripts. Scripts can be written in any programming language, or in a special-purpose language, as long as they are surrogated by another program that interprets and executes them on the fly, unlike compiled programs, which are run directly by the computer operating system.
Scripts are usually small programs written with a specific purpose in mind, to perform tasks very quickly and easily, many times in constrained and embedded environments with abstracted performance and safety. Unlike general-purpose programs written in general-purpose programming languages, they are in most cases not full-featured programs but tend to be "glue" programs that hold together other pieces of software.
Therefore, scripting languages are not your general-purpose programming languages. Their syntax, features, libraries, etc., are focused more on accomplishing small tasks quickly. Scripts can be either application scripts, if they are executed by an application program surrogate like a Microsoft spreadsheet, or command-line scripts, if they are executed from a command line like the Windows or Unix/Linux command line.
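As a minimal illustration of such a "glue" script, the Python sketch below (the log directory and size threshold are assumptions made for illustration) chains two existing pieces, the filesystem and a report file, into one small automated task that runs without user interaction:

#!/usr/bin/env python3
# A minimal "glue" script: summarize oversized log files in a report.
import glob
import os

report_lines = []
for path in sorted(glob.glob("/var/log/*.log")):   # assumed log location
    size = os.path.getsize(path)
    if size > 1_000_000:                           # flag logs over ~1 MB
        report_lines.append(f"{path}\t{size} bytes")

with open("big_logs_report.txt", "w") as report:
    report.write("\n".join(report_lines) + "\n")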
6.3 Scripting Languages
CGI scripts can be written in any programming language. Because of the need for quick execution of the scripts, both at the server and in the client browsers, and the advantage of not having to compile and store executable code at the server, it is becoming more and more common to use scripting languages, which are interpreted, instead of compiled languages like C and C++. The most common scripting languages include:
• JavaScript
• Perl
• Tcl/Tk
• Python
• VBA
• Windows Script Host
• Others, including specific mobile device scripting languages
There are basically two categories of scripting languages: those whose scripts run on the server side of the client–server programming model and those whose scripts run on the client side.
6.3.1 Server-Side Scripting Languages
Ever since the debut of the World Wide Web and the development of HTML to
specify and present information, there has been a realization that HTML documents
are too static.
There was a need to put dynamism into HTTP so that the interaction between the client and the server would become dynamic. This problem was easy to solve because the hardware on which Web server software runs has processing power, and many applications, such as e-mail, database manipulation, or calendaring, are already installed and ripe for utilization [1]. The CGI concept was born.
Among the many server-side scripting languages are Perl, PHP, ColdFusion, ASP, MySQL, Java servlets, and MivaScript.
6.3.1.1 Perl Scripts
Practical Extraction and Report Language (Perl) is an interpreted programming language that is also portable. It is used extensively in Web programming to make text processing interactive and dynamic. Developed in 1987 by Larry Wall, the language has become very popular. Although it is an interpreted language, Perl has many features and basic constructs and variables similar to those of C and C++. However, unlike C and C++, Perl has no pointers and no rigidly defined data types.
One of the security advantages of Perl over C, say, is that Perl does not use pointers, which a programmer can misuse to access unauthorized data. Perl also introduces a gateway into Internet programming between the client and the server. This gateway is a security gatekeeper that scrutinizes all data coming into the server to keep malicious code and data out. Perl does this by preventing Perl programs from writing to a variable in ways that would let that variable corrupt other variables.
Perl also has a version called taintperl that always checks data dependencies to prevent system commands from letting untrusted data or code into the server.
6.3.1.2 PHP
PHP (Hypertext Preprocessor) is a widely used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. It is also open source, which makes it very popular.
Just like Perl, PHP code is executed on the server, and the client receives only the results of running the PHP script. With PHP, you can do just about anything any other CGI program can do, such as collect form data, generate dynamic page content, or send and receive cookies.
6.3.2 Client-Side Scripting Languages
The World Wide Web (WWW) created the romance of representing portable information from a wide range of applications for a global audience. This was accomplished by the HyperText Markup Language (HTML), a simple markup language that revolutionized document representation on the Internet. But for a while, HTML documents were static. Scripting specifications were developed to extend HTML and make it more dynamic and interactive. Client-side scripting of HTML documents, and of objects embedded within HTML documents, has been developed to
bring dynamism and automation to user documents. Scripts including JavaScript
and VBScript are being used widely to automate client processes.
For a long time during the development of CGI programming, programmers noticed that much of what CGI does, such as maintaining state, filling out forms, error checking, or performing numeric calculations, can be handled on the client's side. Quite often, the client computer has quite a bit of CPU power sitting idle while the server is being bombarded with hundreds or thousands of CGI requests for the mundane jobs above. Programmers saw it as justifiable to shift the burden to the client, and this led to the birth of client-side scripting.
Among the many client-side scripting languages are DHTML/CSS, Java, JavaScript, and VBScript.
6.3.2.1 JavaScript
JavaScript is a programming language that performs client-side scripting, making Web pages more interactive. Client-side scripting means that the code works on the user's computer, not on the server side. It was developed at Netscape, in collaboration with Sun Microsystems, to bridge the gap between Web designers, who needed a dynamic and interactive Web environment, and Java programmers. It is an interpreted language like Perl, which means that the interpreter in the browser is all that is needed for a JavaScript script to be executed by the client. JavaScript's ability to run scripts in the client's browser lets the client run interactive Web scripts that do not need a server. This feature makes creating JavaScript scripts easy, because they are simply embedded into any HTML code. JavaScript has, therefore, become the de facto standard for enhancing and adding functionality to Web pages.
This convenience, however, creates a security threat, because if a browser can execute a JavaScript script at any time, hostile code can be injected into the script and the browser would run it from any client. This problem can be fixed only if browsers restrict an executing script to a limited set of commands. In addition, scripts run from a browser can introduce programming errors in the coding of the script itself into the client system, which may lead to a security threat in the system itself.
6.3.2.2 VBScript
Based in part on the popularity of the Visual Basic programming language and on the need to have a scripting language to counter JavaScript, Microsoft developed VBScript (the V and B stand for Visual Basic). VBScript has a syntax similar to that of the Visual Basic programming language. Since VBScript is based on Microsoft Visual Basic, and unlike JavaScript, which can run in many browsers, the VBScript interpreter is supported only in Microsoft Internet Explorer.
6.4 Scripting in Computer Networks
The use of scripting in computer networks started when network administrators realized that one of the best ways to tame the growing mountain of tasks required to have a well-functioning network was to use scripting and take advantage of the
automated nature of script execution via surrogate programs. This cut down on the attention needed for many, sometimes repeated, tasks in the running of a system network.
According to Allen Rouse [2], scripting lets you automate various network administration tasks, such as those that are performed every day or even several times a day and those performed widely throughout the network, like log-in scripts and registry-modification scripts in a widely distributed network of servers. There are many functions in the daily administration of a network that are performed by scripts, including the following [2] (a minimal sketch of one such task follows the list):
• Machine startup
• Machine shutdown
• User log-in and log-out
• Scripting basic TCP/IP networking on clients, including comparisons of GUI and command-line tools to analogous scripting techniques
• Extending these scripting techniques to remote and multiple computers
• IP address allocation with DHCP and static IP addresses
• DNS client management
• WINS client management
• TCP/IP filtering, Internetwork Packet Exchange (IPX), and other network protocols
• Managing system time and network settings in the registry
• Addition of new clients to a network
• Client–server information exchange
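As promised above, here is a minimal sketch of one such everyday chore: a Python script that sweeps a small subnet and reports which hosts answer to ping. The subnet, the host range, and the use of the Linux ping flags are assumptions made for illustration.

#!/usr/bin/env python3
# Admin automation sketch: ping-sweep a subnet and report live hosts.
import subprocess

SUBNET = "192.168.1."                  # hypothetical local subnet

for host in range(1, 21):              # probe addresses .1 through .20
    address = SUBNET + str(host)
    # "-c 1" sends one probe; "-W 1" waits one second (Linux ping flags)
    result = subprocess.run(["ping", "-c", "1", "-W", "1", address],
                            capture_output=True)
    print(address, "up" if result.returncode == 0 else "down")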
With all these tasks taken over by scripting, you may notice the tremendous advantages scripting brings to the table for system administration in time savings, network consistency, and flexibility.
By every account, scripting is on the rise with the changing technologies. There
is tremendous enthusiasm for growth in the four traditional areas of scripting that
include:
• System administration
• Graphical user interface (GUI)
• Internet information exchange (CGI) and the growing popularity of the browser
• Application and component frameworks like ActiveX and others
We will focus on the Internet information exchange in this chapter.
6.4.1 Introduction to the Common Gateway Interface (CGI)
One of the most useful areas of network performance in which scripting plays a vital role is the network client–server information exchange. This is done via the Common Gateway Interface, or CGI. CGI is a standard that specifies a data format that servers, browsers, and programs must use in order to exchange information.
A program written in any language that uses this standard to exchange data between
a Web server and a client’s browser is a CGI script. In other words, a CGI script is
an external gateway program to interface with information servers such as HTTP or
Web servers and client browsers. CGI scripts are great in that they allow the Web
servers to be dynamic and interactive with the client browser as the server receives
and accepts user inputs and responds to them in a measured and relevant way to
satisfy the user. Without CGI, the information the users would get from an information server would not be packaged based on the request but based on how it is stored
on the server.
CGI programs are of two types: those written in programming languages such as C/C++ and Fortran, which can be compiled to produce an executable module stored on the server, and scripts written in scripting languages such as Perl, Java, and Unix shell. For these CGI scripts, no compiled executable needs to be stored by the information server, as is the case with CGI programs. CGI scripts written in scripting languages are not compiled like those in nonscripting languages. Instead, they are text code that is interpreted by the interpreter on the information server or in the browser and run right away. The advantage of this is that you can copy your script with little or no change to any machine with the same interpreter and it will run. In addition, the scripts are easier to debug, modify, and maintain than a typical compiled program.
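To ground the discussion, here is a minimal CGI script sketch in Python, one of the many languages a server could use; the form field name is an assumption. The server executes the script for each request, and the client browser sees only the script's output: a header block, a blank line, and then the body.

#!/usr/bin/env python3
# Minimal CGI script sketch: echo a (hypothetical) form field "name".
# The server passes request data in environment variables (Sect. 6.4.2).
import html
import os
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["visitor"])[0]

print("Content-Type: text/html")   # header block
print()                            # blank line ends the headers
# html.escape() renders any embedded tags inert (see Sect. 6.5).
print(f"<html><body><p>Hello, {html.escape(name)}!</p></body></html>")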
Both CGI programs and scripts, when executed at the information server, help organize information for both the server and the client. For example, the server may want to retrieve information entered by visitors and use it to package a suitable output for the clients. In addition, CGI may be used to dynamically set field descriptions on a form and, in real time, inform the user of what data has been entered and what is yet to be entered. At the end, the form may even be returned to the user for proofreading before it is submitted.
CGI scripts go beyond dynamic form filling to automating a broad range of services in search engines and directories and taking on mundane jobs such as making downloads available, granting access rights to users, and confirming orders.
As we pointed out earlier, CGI scripts can be written in any programming language that an information server can execute. These include script languages such as Perl, JavaScript, Tcl, AppleScript, Unix shell, and VBScript, and nonscript languages such as C/C++, Fortran, and Visual Basic. There is dynamism in the languages themselves, so we may have new languages in the near future.
6.4.1.1 CGI Scripts in a Three-Way Handshake
As we discussed in Chap. 3, the communication between a server and a client opens
with the same etiquette we use when we meet a stranger. First, a trust relationship
must be established before any requests are made. This can be done in a number of
ways. Some people start with a formal “Hello, I’m …,” and then, “I need …” upon
which the stranger says “Hello, I’m …” and then, “Sure I can …” Others carry it
further to hugs, kisses, and all other ways people use to break the ice. If the stranger
is ready for a request, then this information is passed back to you in a form of an
acknowledgment to your first embrace. However, if the stranger is not ready to talk
to you, there is usually no acknowledgment of your initial advances and no further
communication may follow until the stranger’s acknowledgment comes through. At
this point, the stranger puts out a welcome mat and leaves the door open for you to
come in and start business. Now, it is up to the initiator of the communication to
start full communication.
When computers are communicating, they follow these etiquette patterns and
protocols, and we call this procedure a handshake. In fact, for computers it is called
a three-way handshake. A three-way handshake starts with the client sending a
packet, called a SYN (short for synchronization), which contains both the client and
server addresses together with some initial information for introductions. Upon
receipt of this packet by the server’s welcome open door, called a port, the server
creates a communication socket with the same port number such as the client
requested through which future communication with the client will go. After creating the communication socket, the server puts the socket in queue and informs the
client by sending an acknowledgment called a SYN-ACK. The server’s communication socket will remain open and in queue waiting for an ACK from the client and
data packets thereafter. The socket door remains half-open until the server sends the
client an ACK packet signaling full communication. During this time, however, the
server can welcome many more clients that want to communicate, and communication sockets will be opened for each.
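From an application's point of view, the operating system performs this handshake inside the socket calls. The minimal Python sketch below (the loopback address and port are arbitrary choices) shows both sides: the server's listen and accept correspond to the open port and queued socket described above, and the client's connect triggers the SYN, SYN-ACK, ACK exchange underneath.

#!/usr/bin/env python3
# Application view of the TCP three-way handshake: the kernel exchanges
# SYN / SYN-ACK / ACK inside connect() and accept().
import socket
import threading

HOST, PORT = "127.0.0.1", 8080            # arbitrary loopback endpoint

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(5)                          # the open "welcome door"; up to
                                          # five half-open peers may queue

def accept_one():
    conn, addr = server.accept()          # returns once a handshake completes
    print("server: connection from", addr)
    conn.close()

worker = threading.Thread(target=accept_one)
worker.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))              # kernel sends SYN; call returns
print("client: connected")                # after the SYN-ACK is ACKed
client.close()
worker.join()
server.close()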
The CGI script is a server-side program; it resides on the server side, where the server receives the client's SYN request for a service. The script then executes and lets the server and client start to communicate directly. In this position, the script is able to dynamically receive and pass data between the client and server. The client browser has no idea that the server is executing a script. When the server receives the script's output, it then adds the necessary protocol data and sends the packet or packets back to the client's browser. Figure 6.1 shows the position of a CGI script in a three-way handshake.
The CGI scripts reside on the server side, in fact on the computer on which the server runs, because a user on a client machine cannot execute the script in a browser on the server; one can view only the output of the script after it executes on the server and transmits the output to a browser on the client machine the user is on.
6.4.2 Server-Side Scripting: The CGI Interface
In the previous section, we stated that the CGI script is on the server side of the relationship between the client and the server. The scripts are stored on the server and are executed by the server to respond to client demands. There is, therefore, an interface that separates the server and the script. This interface, as shown in Fig. 6.2, consists of information supplied by the server to the script, including input variables extracted from the HTTP header sent by the client, and information passed from the script back to the server. Information from the server to the script and from the script back to the server is passed through environment variables and
through script command lines. Command-line inputs instruct a script to do certain tasks such as search and query.
Fig. 6.1 The position of a CGI script in a three-way handshake
Fig. 6.2 A client CGI script interface
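For a concrete view of the script's side of this interface, the sketch below simply prints a few of the standard CGI environment variables back to the client; the values, of course, depend on the particular request.

#!/usr/bin/env python3
# Sketch: dump some standard CGI environment variables that the
# server supplies to the script for each request.
import os

print("Content-Type: text/plain")
print()
for variable in ("REQUEST_METHOD", "QUERY_STRING", "CONTENT_LENGTH",
                 "REMOTE_ADDR", "HTTP_USER_AGENT"):
    print(variable, "=", os.environ.get(variable, "(not set)"))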
6.5 Computer Network Scripts and Security
As we have pointed out above, scripting is by all accounts on the rise with the changing technologies, and there is tremendous enthusiasm for growth in its four traditional areas: system administration, graphical user interfaces (GUI), Internet information exchange (CGI) and the growing popularity of the browser, and application and component frameworks like ActiveX.
As scripting grows, so will its associated security problems. Hackers are constantly developing and testing a repertoire of their own scripts that will compromise other scripts wherever they are: on the Web, in the computer network system, or in applications. The most common of such hacker techniques today is Web cross site scripting, or XSS (also abbreviated CSS). Cross site scripting allows attackers of Web sites to embed malicious scripts in dynamic, unsuspecting Web and network scripts. Although this is a threat to most scripts, we will focus our script security discussion on CGI scripts.
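One standard server-side defense against cross site scripting is to escape client-supplied text before echoing it into a generated page, so that embedded tags are displayed rather than executed. Here is a minimal Python sketch (the message is a hypothetical client posting):

#!/usr/bin/env python3
# Sketch: neutralize embedded tags in client-supplied text before it
# is echoed into a dynamically generated page.
import html

message = "Nice site! <SCRIPT>steal_cookies()</SCRIPT>"   # hypothetical input

safe = html.escape(message)    # "<" becomes "&lt;", ">" becomes "&gt;", etc.
print(f"<p>{safe}</p>")        # the browser now displays, not runs, the tag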
6.5.1 CGI Script Security
To an information server, the CGI script is like an open window to a private house through which passersby can enter the house to request services. It is an open gateway that allows anyone anywhere to run an executable program on your server and even send their own programs to run on your server. An open window like this on a server is not the safest thing to have, and security issues are involved. But since CGI scripting is the fastest-growing component of the Internet, it is a problem we have to contend with and meet head-on. CGI scripts present security problems to cyberspace in several ways, including:
• Program development: During program development, CGI scripts are written in a high-level programming language and compiled before being executed, or they are written in a scripting language and interpreted before they are executed. Either way, programming is more difficult than composing documents with HTML, for example. Because of the programming complexity and owing to a lack of program development discipline, errors introduced into the program are difficult to find, especially in noncompiled scripts.
• Transient nature of execution: When CGI scripts come into the server, they run
as separate processes from that of the host server. Although this is good because
it isolates the server from most script errors, the imported scripts may introduce
hostile code into the server.
• Cross-pollination: The hostile code introduced into the server by a transient
script can propagate into other server applications and can even be re-transmitted
to other servers by a script bouncing off this server or originating from this
server.
• Resource-guzzling: Scripts that are resource intensive could cause a security
problem to a server with limited resources.
• Remote execution: Since servers can send CGI scripts to execute on surrogate
servers, both the sending and receiving servers are left open to hostile code usually transmitted by the script.
In all these situations, a security threat occurs when someone breaks into a script.
Broken scripts are extremely dangerous.
Kris Jamsa gives the following security threats that can result from a broken script [4]:
• Giving an attacker access to the system's password file for decryption
• Mailing a map of the system, which gives the attacker more time offline to analyze the system's vulnerabilities
• Starting a login server on a high port and telneting in
• Beginning a distributed denial-of-service attack against the server
• Erasing or altering the server's log files
In addition to these, the following security threats are also possible [5]:
• Malicious code provided by one client for another client: This can happen, for example, at sites that host discussion groups, where one client can embed malicious HTML tags in a message intended for another client. According to the Computer Emergency Response Team (CERT), an attacker might post a message like

Hello message board. This is a message. <SCRIPT>malicious code</SCRIPT> This is the end of my message.

When a victim with scripts enabled in his or her browser reads this message, the malicious code may be executed unexpectedly. The many scripting tags that can be embedded in this way include <SCRIPT>,