OWASP Code Review Guide V2

CODE REVIEW GUIDE
2.0 RELEASE

Project leaders: Larry Conklin and Gary Robinson
Creative Commons (CC) Attribution
Free Version at: https://www.owasp.org

1 Introduction
   Foreword
   Acknowledgements
   Introduction
   How To Use The Code Review Guide

2 Secure Code Review
   Methodology

3 Technical Reference For Secure Code Review
   A1 Injection
   A2 Broken Authentication And Session Management
   A3 Cross-Site Scripting (XSS)
   A4 Insecure Direct Object Reference
   A5 Security Misconfiguration
   A6 Sensitive Data Exposure
   A7 Missing Function Level Access Control
   A8 Cross-Site Request Forgery (CSRF)
   A9 Using Components With Known Vulnerabilities
   A10 Unvalidated Redirects And Forwards
   HTML5
   Same Origin Policy
   Reviewing Logging Code
   Error Handling
   Reviewing Security Alerts
   Review For Active Defence
   Race Conditions
   Buffer Overruns
   Client Side JavaScript

Appendix
   Code Review Do's And Don'ts
   Code Review Checklist
   Threat Modeling Example
   Code Crawling

FOREWORD

By Eoin Keary,
Long Serving OWASP Global Board Member
The OWASP Code Review guide was originally born from the
OWASP Testing Guide. Initially code review was covered in the
Testing Guide, as it seemed like a good idea at the time. However, the topic of security code review is too big and evolved into
its own stand-alone guide.
I started the Code Review Project in 2006. This current edition
was started in April 2013 via the OWASP Project Reboot initiative and a grant from the United States Department of Homeland Security.
The OWASP Code Review team consists of a small, but talented,
group of volunteers who should really get out more often. The
volunteers have experience and a drive for the best practices
in secure code review in a variety of organizations, from small
start-ups to some of the largest software development organizations in the world.
It is common knowledge that more secure software can be produced and developed in a more cost-effective way when bugs
are detected early in the systems development lifecycle. Organizations with a proper code review function integrated into
the software development lifecycle (SDLC) produce remarkably better code from a security standpoint. To put it simply, “We
can’t hack ourselves secure”. Attackers have more time to find
vulnerabilities on a system than the time allocated to a defender. Hacking our way secure amounts to an uneven battlefield,
asymmetric warfare, and a losing battle.
By necessity, this guide does not cover all programming languages. It mainly focuses on C#/.NET and Java, but includes C/
C++, PHP and other languages where possible. However, the
techniques advocated in the book can be easily adapted to almost any code environment. Fortunately (or unfortunately), the
security flaws in web applications are remarkably consistent
across programming languages.
Eoin Keary, June 2017


Acknowledgements

APPRECIATION TO UNITED STATES DEPARTMENT OF
HOMELAND SECURITY
The OWASP community and the Code Review Guide project leaders wish to express their deep appreciation to the United States Department of Homeland Security for helping make this book possible through grant funding provided to OWASP. OWASP continues to be the preeminent organization for free, unbiased, unfettered application security.
We have seen a disturbing rise in threats and attacks on community institutions through application vulnerabilities; only by joining forces, and with unfettered information, can we help turn back the tide of these threats. The world now runs on software, and that software needs to be trustworthy. Our deepest appreciation and thanks to DHS for helping and sharing in this goal.

FEEDBACK
If you have any feedback for the OWASP Code Review team, or find any mistakes or possible improvements in this Code Review Guide, please contact us at:
owasp-codereview-project@owasp.org


ACKNOWLEDGEMENTS
Project Leaders
Larry Conklin

Gary Robinson

VERSION 2.0, 2017
Content Contributors
Larry Conklin
Gary Robinson
Johanna Curiel
Eoin Keary
Islam Azeddine Mennouchi
Abbas Naderi
Carlos Pantelides

Michael Hidalgo
Reviewers
Alison Shubert
Fernando Galves
Sytze van Koningsveld
Carolyn Cohen
Helen Gao
Jan Masztal

David Li
Lawrence J Timmins
Kwok Cheng
Ken Prole
David D’Amico
Robert Ferris
Lenny Halseth
Kenneth F. Belva

VERSION 1.0, 2007
Project Leader
Eoin Keary

Content Contributors
Jenelle Chapman
Andrew van der Stock
Paolo Perego
David Lowry
David Rook
Dinis Cruz
Jeff Williams

Reviewers
Jeff Williams
Rahin Jina


INTRODUCTION
Welcome to the second edition of the OWASP Code Review Guide Project. The second edition brings the
successful OWASP Code Review Guide up to date with current threats and countermeasures. This version also includes new content reflecting the OWASP community’s experiences of secure code review best practices.

CONTENTS
The Second Edition of the Code Review Guide has been developed to advise software developers and
management on the best practices in secure code review, and how it can be used within a secure software development life-cycle (S-SDLC). The guide begins with sections that introduce the reader to
secure code review and how it can be introduced into a company’s S-SDLC. It then concentrates on
specific technical subjects and provides examples of what a reviewer should look for when reviewing
technical code. Specifically the guide covers:
Overview
This section introduces the reader to secure code review and the advantages it can bring to a development organization. It gives an overview of secure code review techniques and describes how code review compares with other techniques for analyzing secure code.
Methodology
The methodology section goes into more detail on how to integrate secure review techniques into a development organization's S-SDLC and how the personnel reviewing the code can ensure they have the correct context to conduct an effective review. Topics include applying risk-based intelligence to security code reviews, using threat modelling to understand the application being reviewed, and understanding how external business drivers can affect the need for secure code review.
Technical Reference For Secure Code Review
Here the guide drills down into common vulnerabilities and technical controls, including XSS, SQL injection, session tracking, authentication, authorization, logging, and information leakage, giving code examples in various languages to guide the reviewer. This section can be used to learn the important aspects of the various controls, and as an on-the-job reference when conducting secure code reviews. We start with the OWASP Top 10 issues, describing technical aspects to consider for each of these issues. We then move on to other common application security issues not specific to the OWASP Top 10.


HOW TO USE THE CODE REVIEW GUIDE

The contents and the structure of the book have been carefully designed. Further, all the contributed chapters have been judiciously edited and integrated into a unifying framework that provides uniformity in structure and style.
This book is written to satisfy three different perspectives.
1. Management teams who wish to understand the reasons why code reviews are needed and why they are included in best practices for developing secure enterprise software in today's organizations. Senior management should thoroughly read sections one and two of this book. Management needs to consider the following items if secure coding is going to be part of the organization's software development lifecycle:
• Does the organization's project estimation allot time for code reviews?
• Does management have the capability to track the relevant metrics of code review and static analysis for each project and programmer?
• When in the project lifecycle should code reviews be done, and which changes to existing projects require a review of previously reviewed code?
2. Software leads who want to give meaningful feedback to peers in code review, with ample empirical artifacts on what to look for, to help create secure enterprise software for their organizations. They should consider:
• As a peer code reviewer, to use this book you must first decide on the type of code review you want to accomplish. Let's spend a few minutes going over each type of code review to help decide how this book can be of assistance to you.
• API/design code reviews. Use this book to understand how architecture designs can lead to security vulnerabilities. Also, if the API is a third-party API, what security controls are in place in the code to prevent security vulnerabilities?
• Maintainability code reviews. These types of code reviews are oriented towards the organization's internal best coding practices. This book does cover code metrics, which can help the code reviewer better understand which code to examine for security vulnerabilities if a section of code is overly complex.
• Integration code reviews. Again, these types of code reviews are oriented towards the organization's internal coding policies. Is the code being integrated into the project fully vetted and approved by IT management? Many security vulnerabilities are now introduced by using open source libraries which may bring in dependencies that are not secure.
• Testing code reviews. Agile and test-driven design have the programmer create unit tests to prove that code methods work as the programmer intended. This book is not a guide for testing software, but the code reviewer may want to pay attention to unit test cases to make sure all methods have appropriate exception handling and that code fails in a safe way. If possible, each security control in the code should have appropriate unit test cases (a minimal sketch of such a test is shown after this list).
3. Secure code reviewers who want an updated guide on how secure code reviews are integrated into the organization's secure software development lifecycle. This book will also work as a reference guide while code is in the review process. It provides a complete source of information needed by the code reviewer. It should be read first as a narrative about code reviews and second as a desktop reference guide.
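A minimal, hypothetical sketch of what such a security-control unit test might look like follows (the UsernameValidator control and its whitelist rule are invented for illustration and are not taken from this guide); it checks that the control accepts well-formed input, rejects a script injection attempt, and fails safely on missing input:

import static org.junit.Assert.*;
import org.junit.Test;
import java.util.regex.Pattern;

// Hypothetical security control: a simple whitelist-based input validator.
class UsernameValidator {
    private static final Pattern ALLOWED = Pattern.compile("^[a-zA-Z0-9_]{1,32}$");

    static boolean isValid(String username) {
        // Fail safely on missing input rather than throwing later in the call stack.
        return username != null && ALLOWED.matcher(username).matches();
    }
}

// Unit tests a reviewer might expect to see submitted alongside the control.
public class UsernameValidatorTest {

    @Test
    public void acceptsWellFormedUsername() {
        assertTrue(UsernameValidator.isValid("alice_01"));
    }

    @Test
    public void rejectsScriptInjectionAttempt() {
        assertFalse(UsernameValidator.isValid("<script>alert(1)</script>"));
    }

    @Test
    public void rejectsNullInputSafely() {
        assertFalse(UsernameValidator.isValid(null));
    }
}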


SECURE CODE REVIEW

Secure code review is probably the single-most effective technique for identifying security bugs early in the
system development lifecycle. When used together with automated and manual penetration testing, code
review can significantly increase the cost effectiveness of an application security verification effort.
This guide does not prescribe a process for performing a security code review. Rather, it provides guidance on
how the effort should be structured and executed. The guide also focuses on the mechanics of reviewing code
for certain vulnerabilities.
Manual secure code review provides insight into the “real risk” associated with insecure code. This contextual,
white-box approach is the single most important value. A human reviewer can understand the relevance of
a bug or vulnerability in code. Context requires human understanding of what is being assessed. With appropriate context we can make a serious risk estimate that accounts for both the likelihood of attack and the
business impact of a breach. Correct categorization of vulnerabilities helps with prioritizing remediation and fixing the right things, as opposed to wasting time fixing everything.
5.1 Why Does Code Have Vulnerabilities?
MITRE has catalogued circa 1000 different kinds of software weaknesses in the CWE project. These are all
different ways that software developers can make mistakes that lead to insecurity. Every one of these weaknesses is subtle and many are seriously tricky. Software developers are not taught about these weaknesses in
school and most do not receive any on the job training about these problems.
These problems have become so important in recent years because we continue to increase connectivity
and add technologies and protocols at an extremely fast rate. The ability to invent technology has seriously
outstripped the ability to secure it. Many of the technologies in use today simply have not received enough
(or any) security scrutiny.
There are many reasons why businesses are not spending the appropriate amount of time on security. Ultimately, these reasons stem from an underlying problem in the software market. Because software is essentially a black box, it is extremely difficult for a customer to tell the difference between good code and insecure
code. Without this visibility vendors are not encouraged to spend extra effort to produce secure code. Nevertheless, information security experts frequently get pushback when they advocate for security code review,
with the following (unjustified) excuses for not putting more effort into security:
“We never get hacked (that I know of ), we don’t need security”


“We have a firewall that protects our applications”
“We trust our employees not to attack our applications”
Over the last 10 years, the team involved with the OWASP Code Review Project has performed thousands of
application reviews, and found that every non-trivial application has had security vulnerabilities. If code has
not been reviewed for security holes, the likelihood that the application has problems is virtually 100%.
Still, there are many organizations that choose not to know about the security of their code. To them, consider
Rumsfeld’s cryptic explanation of what we actually know:
“...we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns
-- the ones we don’t know we don’t know.”- Donald Rumsfeld
If informed decisions are being made based on a measurement of risk in the enterprise, that is a position to be fully supported. However, if the risks are not understood, the company is not exercising due diligence, and is being irresponsible both to shareholders and customers.
5.2 What is Secure Code Review?
Code review aims to identify security flaws in the application related to its features and design, along with the
exact root causes. With the increasing complexity of applications and the advent of new technologies, the
traditional way of testing may fail to detect all the security flaws present in the applications. One must understand the code of the application, external components, and configurations to have a better chance of finding
the flaws. Such a deep dive into the application code also helps in determining exact mitigation techniques
that can be used to avert the security flaws.
It is the process of auditing the source code of an application to verify that the proper security and logical
controls are present, that they work as intended, and that they have been invoked in the right places. Code
review is a way of helping ensure that the application has been developed so as to be “self-defending” in its
given environment.
Secure code review allows a company to assure application developers are following secure development
techniques. A general rule of thumb is that a penetration test should not discover any additional application
vulnerabilities relating to the developed code after the application has undergone a proper security code
review. At the very least, only a few issues should be discovered.
All security code reviews are a combination of human effort and technology support. At one end of the spectrum is an inexperienced person with a text editor. At the other end of the scale is an expert security team with
advanced static analysis (SAST) tools. Unfortunately, it takes a fairly serious level of expertise to use the current
application security tools effectively. They also don’t understand dynamic data flow or business logic. SAST
tools are great for coverage and setting a minimum baseline.
Tools can be used to perform this task but they always need human verification. They do not understand
context, which is the keystone of security code review. Tools are good at assessing large amounts of code and
pointing out possible issues, but a person needs to verify every result to determine if it is a real issue, if it is
actually exploitable, and calculate the risk to the enterprise. Human reviewers are also necessary to fill in the significant blind spots which automated tools simply cannot check.


5.3 What is the difference between Code Review and Secure Code Review?
The Capability Maturity Model (CMM) is a widely recognized process model for measuring the development
processes of a software development organization. It ranges from ‘level 1’ where development processes are
ad hoc, unstable and not repeatable, to ‘level 5’ where the development processes are well organized, documented and continuously improving. It is assumed that a company’s development processes would start out
at level 1 when starting out (a.k.a start-up mode) and will become more defined, repeatable and generally
professional as the organization matures and improves. Introducing the ability to perform code reviews (note
this is not dealing with secure code review yet) comes in when an organization has reached level 2 (Repeatable) or level 3 (Defined).
Secure Code Review is an enhancement to the standard code review practice where the structure of the review process places security considerations, such as company security standards, at the forefront of the decision-making. Many of these decisions will be explained in this document; they attempt to ensure that the review process adequately covers security risks in the code base, for example by ensuring high-risk code is reviewed in more depth, that reviewers have the correct security context when reviewing the code, and that reviewers have the necessary skills and secure coding knowledge to effectively evaluate the code.
5.4 Determining the Scale of a Secure Source Code Review?
The level of secure source code review will vary depending on the business or regulatory needs of the software, the
size of the software development organization writing the applications and the skills of the personnel. Similar to
other aspects of software development such as performance, scalability and maintainability, security is a measure of
maturity in an application. Security is one of the non-functional requirements that should be built into every serious
application or tool that is used for commercial or governmental purposes.
If the development environment consists of one person programming as a hobby and writing a program to track
their weekly shopping in Visual Basic (CMM level 1), it is unlikely that that programmer will use all of the advice
within this document to perform extensive levels of secure code review. On the other extreme, a large organization
with thousands of developers writing hundreds of applications will (if they wish to be successful) take security very
seriously, just like they would take performance and scalability seriously.
Not every development organization has the necessity, or resources, to follow and implement all of the topics in
this document, but all organizations should be able to begin to write their development processes in a way that
can accommodate the processes and technical advice most important to them. Those processes should then
be extensible to accommodate more of the secure code review considerations as the organization develops and
matures.
In a start-up consisting of 3 people in a darkened room, there will not be a ‘code review team’ to send the code to,
instead it’ll be the bloke in the corner who read a secure coding book once and now uses it to prop up his monitor.
In a medium-sized company there might be 400 developers, some with security as an interest or specialty; however, the organization's processes might give the same amount of time to review a 3-line CSS change as it gives to a redesign of the flagship product's authentication code. Here the challenge is to increase the workforce's secure
coding knowledge (in general) and improve the processes through things like threat modelling and secure code
review.
For some larger companies with many thousands of developers, the need for security in the S-SDLC is at its greatest,
but process efficiency has an impact on the bottom line. Take an example of a large company with 5,000 developers. If a change is introduced to the process that results in each developer taking an extra 15 minutes a week to perform a task, suddenly that's an extra 1,250 hours each week for the company as a whole. Assuming a 40-hour week, this is the equivalent of roughly 30 extra full-time developers just to stay on track. The challenge here is to ensure the security changes to the lifecycle are efficient and do not impede the developers from performing their task.


Skilling a Workforce for Secure Code Review
There seems to be a catch-22 in the following sentiment: although many developers are not aware of or skilled in security, a company should implement peer secure code reviews amongst those same developers.
How does a workforce introduce the security skills needed to implement a secure code review methodology? Many security maturity models (e.g. BSIMM or OpenSAMM) discuss the concept of a core security team, who are skilled developers and skilled security subject matter experts (SMEs). In the early days of a company rolling out a secure code review process, the security SMEs will be central in the higher-risk reviews, using their experience and
knowledge to point out aspects of the code that could introduce risk.
As well as the core security team a further group of developers with an interest in security can act as team local
security SMEs, taking part in many secure code reviews. These satellites (as BSIMM calls them) will be guided by
the core security team on technical issues, and will help encourage secure coding.
Over time, an organization builds security knowledge within its core and satellite teams, which in turn spreads the security knowledge across all developers, since most code reviews will have a security SME taking part.
The ‘on-the-job’ training this gives to all developers is very important. Whereas an organization can send their developers on training courses (classroom or CBT) which will introduce them to common security topics and create
awareness, no training course can be 100% relevant to a developer’s job. In the secure code review process, each
developer who submits their code will receive security related feedback that is entirely relevant to them, since
the review is of the code they produced.
It must be remembered though, no matter what size the organization, the reason to perform secure code review is to catch more bugs and catch them earlier in the S-SDLC. It is quicker to conduct a secure code review
and find bugs that way, compared to finding the bugs in testing or in production. For the 5,000-person organization, how long will it take to find a bug in testing, investigate, re-code, re-review, re-release and re-test?
What if the code goes to production where project management and support will get involved in tracking the
issue and communicating with customers? Maybe 15 minutes a week will seem like a bargain.
5.5 We Can’t Hack Ourselves Secure
Penetration testing is generally a black-box point in time test and should be repeated on each release (or
build) of the source code to find any regressions. Many continuous integration tools (e.g. Jenkins/Hudson)
allow repeatable tests, including automated penetration tests, to be run against a built and installed version
of a product.
As source code changes, the value of the findings of an unmaintained penetration test degrades with time. There are also privacy, compliance, stability and availability concerns which may not be covered by penetration testing, but can be covered in code reviews. Data or information leakage in a cloud environment, for example, may not be discovered, or allowed, via a penetration test. Therefore penetration testing should be seen as an important tool in the arsenal, but alone it will not ensure product software is secure.
The common methods of identifying vulnerabilities in a software project are:
• Source Code Scanning using automated tools that run against a source code repository or module, finding string patterns deemed to potentially cause security vulnerabilities (a simplified sketch of this pattern-matching approach is shown after this list).


• Automated Penetration Testing (black/grey box) through penetration testing tools' automatic scans, where the tool is installed on the network with the web site being tested and runs a set of pre-defined tests against the web site URLs.
• Manual Penetration Testing, again using tools, but with the expertise of a penetration tester performing more
complicated tests.
• Secure Code Review with a security subject matter expert.
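As a simplified, hypothetical sketch of the string-pattern scanning approach listed first above (the patterns and the default source path are illustrative, and real SAST tools perform parsing and data-flow analysis far beyond this), a naive scanner might do no more than the following:

import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of naive string-pattern scanning: walk a source tree and
// flag lines matching patterns often associated with vulnerabilities. Every hit
// still needs human verification to decide whether it is a real, exploitable issue.
public class NaivePatternScanner {

    private static final List<Pattern> RISKY_PATTERNS = List.of(
            Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec"), // command execution
            Pattern.compile("createStatement\\(\\)"),             // possible dynamic SQL
            Pattern.compile("printStackTrace\\(\\)")              // information leakage
    );

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src");
        try (var paths = Files.walk(root)) {
            paths.filter(p -> p.toString().endsWith(".java"))
                 .forEach(NaivePatternScanner::scan);
        }
    }

    private static void scan(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                for (Pattern p : RISKY_PATTERNS) {
                    if (p.matcher(lines.get(i)).find()) {
                        System.out.printf("%s:%d possible issue: %s%n", file, i + 1, p.pattern());
                    }
                }
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}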
It should be noted that no one method will be able to identify all vulnerabilities that a software project might encounter; however, a defense-in-depth approach will reduce the risk of unknown issues being included in production software.
During a survey at AppSec USA 2015 the respondents rated which security method was the most effective in
finding:
1) General security vulnerabilities
2) Privacy issues
3) Business logic bugs
4) Compliance issues (such as HIPAA, PCI, etc.)
5) Availability issues
The results are shown in figure 1.

Figure 1: Survey relating detection methods to general vulnerability types (bar chart comparing Source Code Scanning Tools, Automated Scan, Manual Pen Test and Manual Code Review across the Vulnerabilities, Privacy, Business Logic, Compliance (HIPAA) and Availability categories).

Figure 2: Survey relating detection methods to OWASP Top 10 vulnerability types (bar chart comparing Source Code Scanning Tool, Automated Scan, Manual Pen Test and Manual Code Review across issues A1 to A10).

These surveys show that manual code review should be a component of a company’s secure lifecycle, as in
many cases it is as good, or better, than other methods of detecting security issues.
5.6 Coupling Source Code Review and Penetration Testing
The term “360 review” refers to an approach in which the results of a source code review are used to plan and
execute a penetration test, and the results of the penetration test are, in turn, used to inform additional source
code review.

Figure 3: Code Review and Penetration Testing Interactions (the code review feeds suspected and known vulnerabilities into the penetration test, while exploited vulnerabilities found by the penetration test feed back into the code review).


Knowing the internal code structure from the code review, and using that knowledge to form test cases and
abuse cases is known as white box testing (also called clear box and glass box testing). This approach can lead to
a more productive penetration test, since testing can be focused on suspected or even known vulnerabilities. Using knowledge of the specific frameworks, libraries and languages used in the web application, the penetration
test can concentrate on weaknesses known to exist in those frameworks, libraries and languages.
A white box penetration test can also be used to establish the actual risk posed by a vulnerability discovered
through code review. A vulnerability found during code review may turn out not to be exploitable during penetration test due to the code reviewer(s) not considering a protective measure (input validation, for instance).
While the vulnerability in this case is real, the actual risk may be lower due to the lack of exposure. However, there is still an advantage to adding the penetration test in case the protective measure is changed in the future and thereby exposes the vulnerability.
While vulnerabilities exploited during a white box penetration test (based on secure code review) are certainly
real, the actual risk of these vulnerabilities should be carefully analyzed. It is unrealistic that an attacker would
be given access to the target web application’s source code and advice from its developers. Thus, the risk that
an outside attacker could exploit the vulnerabilities found by the white box penetration tester is probably lower. However, if the web application organization is concerned with the risk of attackers with inside knowledge
(former employees or collusion with current employees or contractors), the real-world risk may be just as high.
The results of the penetration test can then be used to target additional areas for code review. Besides addressing the particular vulnerability exploited in the test, it is good practice to look for additional places where that same class of vulnerability is present, even if it was not explicitly exploited in the test. For instance, if output encoding is not used in one area of the application and the penetration test exploited that, it is quite possible that output encoding is also not used elsewhere in the application.
5.7 Implicit Advantages of Code Review to Development Practices
Integrating code review into a company’s development processes can have many benefits which will depend
upon the processes and tools used to perform code reviews, how well that data is backed up, and how those
tools are used. The days of bringing developers into a room and displaying code on a projector, whilst recording
the review results on a printed copy are long gone, today many tools exist to make code review more efficient
and to track the review records and decisions. When the code review process is structured correctly, the act of
reviewing code can be efficient and provide educational, auditable and historical benefits to any organization.
This section provides a list of benefits that a code review procedure can add to a development organization.
Provides an historical record
If any developer has joined a company, or moved teams within a company, and had to maintain or enhance a
piece of code written years ago, one of the biggest frustrations can be the lack of context the new developer has
on the old code. Various schools of opinion exist on code documentation, both within the code (comments) and
external to the code (design and functional documents, wikis, etc.). Opinions range from zero-documentation
tolerance through to near-NASA level documentation, where the size of the documentation far exceeds the size
of the code module.
Many of the discussions that occur during a code review, if recorded, would provide valuable information (context) to module maintainers and new programmers. From the writer describing the module along with some of their design decisions, to each reviewer's comments stating why they think one SQL query should be restructured or an algorithm changed, there is a development story unfolding in front of the reviewers' eyes which can be used by future coders on the module who were not involved in the review meetings.


Capturing those review discussions in a review tool automatically and storing them for future reference will provide the development organization with a history of the changes on the module which can be queried at a later time by new developers. These discussions can also contain links to any architectural/functional/design/test
specifications, bug or enhancement numbers.
Verification that the change has been tested
When a developer is about to submit code into the repository, how does the company know they have sufficiently tested it? Adding a description of the tests they have run (manually or automated) against the changed code
can give reviewers (and management) confidence that the change will work and not cause any regressions. Also, by declaring the tests the writer has run against their change, the author is allowing reviewers to review the tests and suggest further testing that may have been missed by the author.
In a development scenario where automated unit or component testing exists, the coding guidelines can require
that the developer include those unit/component tests in the code review. This again allows reviewers within this
environment to ensure the correct unit/component tests are going to be included in the environment, keeping
the quality of the continuous integration cycles.
Coding education for junior developers
After an employee learns the basics of a language and reads a few of the best practice books, how can they get good on-the-job skills to learn more? Besides buddy coding (which rarely happens and is never cost effective)
and training sessions (brown bag sessions on coding, tech talks, etc.) the design and code decisions discussed
during a code review can be a learning experience for junior developers. Many experienced developers admit to
this being a two way street, where new developers can come in with new ideas or tricks that the older developers
can learn from. Altogether this cross pollination of experience and ideas can only be beneficial to a development
organization.
Familiarization with code base
When a new feature is developed, it is often integrated with the main code base, and here code review can be a
conduit for the wider team to learn about the new feature and how its code will impact the product. This helps
prevent functional duplication where separate teams end up coding the same small piece of functionality.
This also applies for development environments with siloed teams. Here the code review author can reach out to
other teams to gain their insight, and allow those other teams to review their modules, and everyone then learns
a bit more about the company’s code base.
Pre-warning of integration clashes
In a busy code base there will be times (especially on core code modules) where multiple developers can write
code affecting the same module. Many people have had the experience of cutting the code and running the
tests, only to discover upon submission that some other change has modified the functionality, requiring the
author to recode and retest some aspects of their change. Spreading the word on upcoming changes via code
reviews gives a greater chance of a developer learning that a change is about to impact their upcoming commit,
and development timelines, etc., can be updated accordingly.

Secure Coding Guidelines Touch Point
Many development environments have coding guidelines which new code must adhere to. Coding guidelines
can take many forms. It's worth pointing out that security guidelines can be a particularly relevant touch point within a code review, as unfortunately the secure coding issues are understood only by a subset of the development team. Therefore it can be beneficial to include people with various technical expertise in the code reviews, i.e. someone from the security team (or that person in the corner who knows all the security stuff) can be invited as a technical subject matter expert to the review to check the code from their particular angle. This is where the OWASP Top 10 guidelines could be enforced.

5.8 Technical Aspects of Secure Code Review
Security code reviews are very specific to the application being reviewed. They may highlight some flaws that
are new or specific to the code implementation of the application, like insecure termination of execution flow,
synchronization errors, etc. These flaws can only be uncovered when we understand the application code flow
and its logic. Thus, security code review is not just about scanning the code for a set of known insecure code patterns; it also involves understanding the code implementation of the application and enumerating the flaws specific to it.
The application being reviewed might have been designed with some security controls in place, for example a centralized blacklist, input validation, etc. These security controls must be studied carefully to identify whether they are foolproof. Depending on the implementation of the control, the nature of attack or any specific attack vector that could be used to bypass it must be analyzed. Enumerating the weaknesses in the existing security controls is another important aspect of security code reviews.
There are various reasons why security flaws manifest in the application, like a lack of input validation or
parameter mishandling. In the process of a code review the exact root cause of flaws are exposed and the
complete data flow is traced. The term ‘source to sink analysis’ means to determine all possible inputs to the
application (source) and how they are being processed by it (sink). A sink could be an insecure code pattern
like a dynamic SQL query, a log writer, or a response to a client device.
Consider a scenario where the source is a user input. It flows through the different classes/components of the application and finally falls into a concatenated SQL query (a sink), with no proper validation applied to it along the path. In this case the application will be vulnerable to a SQL injection attack, as identified by the source-to-sink analysis. Such an analysis helps in understanding which vulnerable inputs can lead to the possibility of an exploit in the application.
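A minimal, hypothetical Java sketch of such a source-to-sink flow is shown below (the class, table and parameter names are illustrative); the first method lets the user-supplied value reach a concatenated SQL sink, while the second shows the parameterized form a reviewer would expect to see:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical illustration of a source-to-sink flow.
public class AccountLookup {

    // VULNERABLE: the 'username' source reaches a dynamic SQL sink unvalidated.
    public ResultSet findAccountUnsafe(Connection con, String username) throws SQLException {
        String query = "SELECT * FROM accounts WHERE owner = '" + username + "'";
        Statement stmt = con.createStatement();
        return stmt.executeQuery(query); // sink: concatenated SQL query
    }

    // SAFER: a parameterized query keeps the input as data, not as SQL.
    public ResultSet findAccountSafe(Connection con, String username) throws SQLException {
        PreparedStatement ps = con.prepareStatement("SELECT * FROM accounts WHERE owner = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }
}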
Once a flaw is identified, the reviewer must enumerate all the possible instances present in the application. This would not be a code review initiated by a code change; this would be a code scan initiated by management, based on a flaw being discovered and resources being committed to finding whether that flaw exists in other parts of the product. For example, an application can be vulnerable to XSS because of the use of unvalidated input in insecure display methods, such as scriptlets, the ‘response.write’ method, etc., in several places.
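As a hypothetical Java illustration of this display pattern (the servlet and parameter names are invented, and the hand-rolled encoder is a simplification; a vetted library such as the OWASP Java Encoder would normally be preferred), the commented-out line shows the vulnerable sink and the active line shows output encoding applied before display:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet used to illustrate the insecure and safer display patterns.
public class GreetingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        String name = req.getParameter("name"); // source: untrusted user input

        // VULNERABLE: unvalidated input written directly into the HTML response.
        // out.println("<p>Hello " + name + "</p>");

        // SAFER: encode the value for the HTML context before displaying it.
        out.println("<p>Hello " + escapeHtml(name) + "</p>");
    }

    // Simplified HTML encoder for illustration only.
    private static String escapeHtml(String s) {
        if (s == null) {
            return "";
        }
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;").replace("'", "&#x27;");
    }
}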
5.9 Code Reviews and Regulatory Compliance
Many organizations with responsibility for safeguarding the integrity, confidentiality and availability of their
software and data need to meet regulatory compliance. This compliance is usually mandatory rather than a
voluntary step taken by the organization.
Compliance regulations include:
• PCI (Payment Card Industry) standards
• Central bank regulations


• Auditing objectives
• HIPAA
Compliance is an integral part of the secure software development life-cycle, and code review is an important part of compliance, as many rules insist on the execution of code reviews in order to comply with certain regulations.
To execute proper code reviews that meet compliance rules it is imperative to use an approved methodology. Compliance requirements such as PCI, specifically requirement 6 (“Develop and maintain secure systems”), mandate code reviews, while PCI-DSS 3.0, which has been available since November 2013, sets out a series of requirements which apply to the development of software and to identifying vulnerabilities in code. The Payment Card Industry Data Security Standard (PCI-DSS) became a mandatory compliance step for companies processing credit card payments in June 2005. Performing code reviews on custom code has been a requirement since the first version of the standard.
The PCI standard contains several points relating to secure application development, but this guide will focus
solely on the points which mandate code reviews. All of the points relating to code reviews can be found in
requirement 6 “Develop and maintain secure systems and applications”.
5.10 PCI-DSS Requirements Related to Code Review
Specifically, requirement 6.3.2 mandates a code review of custom code: review custom code prior to release to production or customers in order to identify any potential coding vulnerability (using either manual or automated processes), to include at least the following:
• Code changes are reviewed by individuals other than the originating code author, and by individuals knowledgeable about code review techniques and secure coding practices.
• Code reviews ensure code is developed according to secure coding guidelines
• Appropriate corrections are implemented prior to release.
• Code review results are reviewed and approved by management prior to release.
Requirement 6.5 addresses common coding vulnerabilities in software-development processes as follows:
• Train developers in secure coding techniques, including how to avoid common coding vulnerabilities, and
understanding how sensitive data is handled in memory.
• Develop applications based on secure coding guidelines.
The PCI Council expanded option one to include internal resources performing code reviews. This added
weight to an internal code review and should provide an additional reason to ensure this process is performed
correctly.
The Payment Application Data Security Standard (PA-DSS) is a set of rules and requirements similar to PCI-DSS.
However, PA-DSS applies especially to software vendors and others who develop payment applications that
store, process, or transmit cardholder data as part of authorization or settlement, where these payment applications are sold, distributed, or licensed to third parties.


PA-DSS Requirements Related to Code Review
Requirements regarding code review also apply, as these are derived from PA-DSS requirement 5 (PCI, 2010):
5.2 Develop all payment applications (internal and external, and including web administrative access to product) based on secure coding guidelines.
5.1.4 Review of payment application code prior to release to customers after any significant change, to identify any potential coding vulnerability.
Note: This requirement for code reviews applies to all payment application components (both internal and
public-facing web applications), as part of the system development life cycle. Code reviews can be conducted
by knowledgeable internal personnel or third parties.


METHODOLOGY
Code review is the systematic examination of computer source code. Reviews are done in various forms and can be accomplished at various stages of each organization's S-SDLC. This book does not attempt to tell each organization how to implement code reviews, but this section does go over, in generic terms, the methodology of doing code reviews, from informal walkthroughs and formal inspections to tool-assisted code reviews.
6.1 Factors to Consider when Developing a Code Review Process
When planning to execute a security code review, there are multiple factors to consider since every code
review is unique to its context. In addition to the elements discussed in this section, one must consider any technical or business-related factors (business decisions such as deadlines and resources) that impact the analysis, as these factors may ultimately decide the course of the code review and the most effective way to execute it.
Risks
It is impossible to secure everything 100%, therefore it is essential to use a risk-based approach to prioritize which features and components must be securely reviewed. While this guide highlights some of the vital areas of design security, and peer programmers should review all code being submitted to a repository, not all code will receive the attention and scrutiny of a secure code review.
Purpose & Context
Computer programs have different purposes and consequently the grade of security will vary depending on the functionality being implemented. A payment web application will have higher security standards than a promotional website. Keep in mind what the business wants to protect. In the case of a payment application, data such as credit card details will have the highest priority, whereas in the case of a promotional website one of the most important things to protect would be the connection credentials to the web servers. This is another way to place context into a risk-based approach. Persons conducting the security review should be aware of these priorities.
Lines of Code
An indicator of the amount of work is the number of lines of code that must be reviewed. IDEs (Integrated Development Environments) such as Visual Studio or Eclipse contain features which allow the number of lines of code to be calculated, or on Unix/Linux there are simple tools like ‘wc’ that can count the lines. Programs written in object-oriented languages are divided into classes, and each class is equivalent to a page of code.
Generally line numbers help pinpoint the exact location of the code that must be corrected and are very useful when reviewing corrections done by a developer (such as the history in a code repository). The more lines of
code a program contains, the greater the chances that errors are present in the code.
Programming language
Programs written in type-safe languages (such as C# or Java) are less vulnerable to certain security bugs, such as buffer overflows, than languages like C and C++. When executing a code review, the kind of language will determine the types of expected bugs. Typically software houses tend towards a few languages that their programmers are experienced in; however, when a decision is made to create new code in a language new to the developers, management must be aware of the increased risk in securely reviewing that code due to the lack of in-house experience. Throughout this guide, sections explain the most common issues surrounding the specific programming language of the code to be reviewed; use these as a reference to spot specific security issues in the code.


Resources, Time & Deadlines
As ever, this is a fundamental factor. A proper code review for a complex program will take longer and it will
need higher analysis skills than a simple one. The risks involved if resources are not properly provided are
higher. Make sure that this is clearly assessed when executing a review.
6.2 Integrating Code Reviews in the S-SDLC
Code reviews exist in every formal Secure Software Development Lifecycle (S-SDLC), but code reviews also
vary widely in their level of formality. To confuse the subject more, code reviews vary in purpose and in relation to what the code reviewer is looking for, be it security, compliance, programming style, etc. Throughout
the S-SDLC (XP, Agile, RAD, BSIMM, CMMI, Microsoft ALM) there are points where an application security SME should be involved. The idea of integrating secure code reviews into an S-SDLC may sound daunting, as it adds another layer of complexity, or additional cost and time, to an already over-budget and time-constrained project. However, it has proven to be cost effective and provides an additional level of security that static analyzers cannot provide.
In some industries the drive for secure enhancements to a company’s S-SDLC may not be driven purely by the desire to produce better code; these industries have regulations and laws that demand a level of due care when writing software (e.g. the governmental and financial industries), and the fines levelled at a company that has not attempted to secure its S-SDLC will be far greater than the costs of adding security into the development lifecycle.
When integrating secure code reviews into the S-SDLC, the organization should create standards and policies that the secure code reviewer should adhere to. This will give the task the right level of importance, so it is not just looked at as a project task that needs to be checked off. Project time also needs to be assigned, so there is enough time to complete the task (and any remedial tasks that come out of the secure code review). Standards also allow management and security experts (e.g. CISOs, security architects) to direct employees on what secure coding is to be adhered to, and allow the employees to refer to the standard when review arbitration is necessary.
Code Review Reports
A standard report template will provide enough information to enable the code reviewer to classify and prioritize the software vulnerabilities based on the application's threat model. This report does not need to be pages in length; it can be document-based or incorporated into many automated code review tools. A report should provide the following information:
• Date of review.
• Application name, code modules reviewed.
• Developers and code reviewer names.
• Task or feature name (TFS, Git, Subversion, trouble ticket, etc.).


• A brief sentence or two classifying and prioritizing any software vulnerabilities found, and what, if any, remedial tasks need to be accomplished or follow-up is needed.
• Link to documents related to task/feature, including requirements, design, testing and threat
modeling documents.
• Code Review checklist if used, or link to organization Code Review Checklist. (see Appendix A)
• Testing the developer has carried out on the code. Preferably the unit or automated tests themselves can be part of the review submission.
• Any tools, such as FxCop, BinScope Binary Analyzer, etc., that were used prior to the code review.

Today most organizations have modified their S-SDLC process to incorporate agile practices. Because of this, the organization is going to need to look at its own internal development practices to best determine where and how often secure code reviews need to happen. If the project is late and over budget, this increases the chance that a software fix could introduce a security vulnerability, since the emphasis is now on getting the project to deployment quicker. Code reviews of code already in production may find software vulnerabilities, but understand that there is a race with hackers to find the bug, and the vulnerable software will remain in production while the remedial fix is being worked on.
6.3 When to Code Review
Once an organization decides to include code reviews as part of its internal development process, the next big question is to determine at what stages of the SDLC the code will be reviewed. This section talks about three possible ways to include code reviews. There are three stages in the SDLC at which code can be reviewed:
When code is about to be checked in (pre-commit)
The development organization can state in their process that all code has to be reviewed before the code can be
submitted to the source code repository. This has the disadvantage of slowing the check-in process down, as the review can take time; however, it has many advantages in that below-standard code is never placed in the code line, and management can be confident that (if processes are being followed) the submitted code is at the quality that
has been stipulated.
For example, processes may state that code to be submitted must include links to requirements and design documentation and necessary unit and automated tests. This way the reviewers will have context on the exact code modification being done (due to the documentation) and they will know how the developer has tested the code (due to
the tests). If the peer reviewers do not think the documentation is complete, or the tests are extensive enough, they
can reject the review, not because of the code itself, but because the necessary docs or tests are not complete. In an
environment using CI with automated tests running nightly, the development team as a whole will know the next day (following check-in) if the submitted code was of sufficient quality. Also, management knows that once a bug or feature is checked in, the developer has finished their task; there are no “I’ll finish those tests up next week” scenarios, which add risk to the development task.
When code has just been checked into a code base (post-commit)
Here the developer submits their code change, and then uses the code repository change-lists to send the code
diff for review. This has the advantage of being faster for the developer, as there’s no review gate to pass before they check in their code. The disadvantage is that, in practice, this method can lead to a lesser quality of code. A developer will be less inclined to fix smaller issues once the code has been checked in, usually with a mantra of “Well the code is in now, it’ll do.” There is also a risk of timing, as other developers could write other code fixes into the same module before the review is done or changes and tests have been written, meaning the developer not only has to implement the code changes from the peer or security review, but they also have to do so in a way that does not break other subsequent changes. Suddenly the developer has to re-test the subsequent fixes to ensure no regressions.
Some development organizations using the Agile methodology add a ‘security sprint’ into their processes. During the
security sprint the code can be security reviewed, and have security specific test cases (written or automated) added.
When code audits are done
Some organizations have processes to review code at certain intervals (e.g. yearly) or when a vulnerable piece of code is suspected of being repeated throughout the code base. Here static code analyzers, or simple string searches through the code (for specific vulnerability patterns), can speed up the process. These reviews are not connected to the submission of a feature or bug fix; they are triggered by process considerations and are likely to involve the review of an entire application or code base rather than a review of a single submission.
Who Should Perform Secure Code Reviews
Some organizations assume secure code review can be a job for a security or risk-analysis team member. However all developers need to understand the exposure points of their applications and what threats exist for their
applications.
Many companies have security teams that do not have members with coding backgrounds, which can make interactions with development teams challenging. Because of this, development teams are usually skeptical of security input and guidance. Security teams are usually willing to slow things down to ensure confidentiality and integrity controls are in place, while developers are faced with pressure from the business units they support to create and update code as quickly as possible. Unfortunately, the more critical the application is to operational or business needs, the more pressure there is to deploy the code to production.
It is best to weave secure code reviews into the SDLC processes so that development organizations do not see
security as a hindrance, but as an assistance. As mentioned previously, spreading secure coding SMEs throughout an organization (satellites in BSIMM terminology) allows the secure code review tasks to scale and reach
more development teams. As the process grows, more of the developers gain awareness of secure coding issues
(as they have reviews rejected on secure coding grounds) and the frequency of secure coding issues in code
reviews should drop.
6.4 Security Code Review for Agile and Waterfall Development
Today agile development is an umbrella term for a lot of practices that include programming, continuous integration, testing, project management, etc. There are many flavors of agile development, perhaps as many flavors
as there are practitioners. Agile development is a heterogeneous reference framework where the development
team can pick what practices they want to use.
Agile has some practices that could affect how and when code is reviewed, for example agile tries to keep code
review and testing as near as possible to the development phase. It is a common practice to define short development cycles (a.k.a. Iterations or Sprints). At the end of each cycle, all the code should be production quality
code. It can be incomplete, but it must add some value. That affects the review process, as reviewing should be
continuous. From the point of view of secure code review, it shouldn’t make a difference whether the development organization uses agile or waterfall development practices. Code review is aligned to the code submitted, not the
order of feature development vs testing, or the time patterns assigned to the coding task. In many organizations
the line between waterfall and agile is becoming blurred, with traditional waterfall departments introducing the
continuous integration (CI) aspects from agile, including nightly builds, automated testing, test driven development, etc.
6.5 A Risk Based Approach to Code Review
A development house will have various degrees of code changes being reviewed, from simple one line bug fixes
in backend scripts that run once a year, to large feature submissions in critical business logic. Typically the intensity of the code review varies based on the perceived risk that the change presents.
In the end, the scale of the code review comes down to the management of resources (skilled persons, company time, machines, etc.). It would not be scalable to bring in multiple security experts for every code change occurring on a product; the resources of those persons or teams would not be large enough to handle every change. Therefore companies can make a call on which changes are important and need to be closely scrutinized, and which ones can be allowed through with minimal inspection. This will allow management to better size the development cycle; if a change is going to be done in an area which is high risk, management knows to set aside sufficient time for code review and ensure persons with relevant skills will be available. The process
of deciding which changes need which level of code review is based on the risk level of the module the change
is within.
If the review intensity of code changes is based on the risk level of the module being changed, who should
decide the level of risk? Ultimately management is responsible for the output of a company, and thus they are
responsible for the risk associated with products sold by the company. Therefore it is up to management (or persons delegated by management) to create a reproducible measure or framework for deciding the risk associated
with a code change.
Decisions on the risk of a module or piece of code should be based on solid cost benefit analysis and it would be
irresponsible to decide all modules are high risk. Therefore management should meet with persons who have
an understanding of the code base and security issues faced by the products, and create a measure of risk for
various elements of code. Code could be split up into modules, directories, products, etc., each with a risk level
associated with it.
Various methods exist in the realm of risk analysis to assign risk to entities, and many books have been dedicated
to this type of discussion. The three main techniques for establishing risk are outlined in table 1 below.
Table 1: Options For Establishing Risk

Quantitative: Bring people together and establish a monetary value on the potential loss associated with the code. Gauge the likelihood that the code could be compromised. Use the dollar values produced from these calculations to determine the level of risk.

Qualitative: Bring people together and discuss opinions on what level of loss is associated with the modules, and on the likelihood of compromise. Qualitative analysis does not attempt to pin a monetary value on the loss, but tends towards the perception or opinion of the associated losses.

Delphi: Independently interview or question people on the losses and compromises of the modules, whilst letting them know the feedback will be anonymous. The expectation is that people will give more honest answers and will not be swayed by other people's arguments and answers.


Risk is the chance of something bad happening combined with the damage caused if it does occur. The criteria for deciding the risk profile of different code modules will be up to the management team responsible for delivering the changes; examples are provided in table 2.
Table 2: Common Criteria For Establishing The Risk Profile Of A Code Module

Ease of exposure: Is the code change in a piece of code directly exposed to the internet? Does an insider use the interface directly?

Value of loss: How much could be lost if the module has a vulnerability introduced? Does the module contain a critical password hashing mechanism, or is it a simple change to an HTML border on an internal test tool?

Regulatory controls: If a piece of code implements business logic associated with a standard that must be complied with, then these modules can be considered high risk, as the penalties for non-conformity can be high.

When levels of risk have been associated with products and modules, policies can be created determining what level of code review must be conducted. It could be that code changes in a level 1 risk module must be reviewed by three persons, including a security architect, whereas changes in a level 4 risk module only need a quick one-person peer review.
Other options (or criteria) for riskier modules can include demands on automated testing or static analysis, e.g. code changes in high risk code must achieve 80% code coverage with static analysis tools and include sufficient automated tests to ensure no regressions occur. These criteria can be demanded and checked as part of the code review to ensure the changed code is adequately tested.
Some companies logically split their code into differing repositories, with more sensitive code appearing in a
repository with a limited subset of developers having access. If the code is split in this fashion, then it must be
remembered that only developers with access to the riskier code should be able to conduct reviews of that
code.
Risk analysis could also be used during the code review to decide how to react to a code change that introduces risk into the product, as in table 3. In a typical risk analysis process, the team needs to decide whether to
accept, transfer, avoid or reduce the risks. When it comes to code reviews it is not possible to transfer the risk
as transferring risk normally means taking out insurance to cover the cost of exposure.
Table 3: Options For Handling Risks Identified In A Code Review

Reduce: This is the typical resolution path. When a code reviewer finds that the code change introduces risk into an element of business logic (or is simply a bug), the code will be changed to fix the bug or to reduce the risk.

Accept: When the code change introduces a risk but there is no other way to implement the business logic, the change can pass code review if the risk is considered acceptable. The risk and any workarounds or mitigating factors should be documented correctly so that it is not ignored.

Avoid: When the code change introduces a risk that is too great to be accepted, and it is not possible to reduce the risk by changing the code, then the team needs to consider not performing the change. Ideally this decision should be reached before the code review stage, but factors can arise during implementation that change the understood risk profile of a code module and prompt management to reconsider whether a change should go ahead.


6.6 Code Review Preparation
A security review of the application should uncover common security bugs as well as the issues specific to business
logic of the application. In order to effectively review a body of code it is important that the reviewers understand
the business purpose of the application and the critical business impacts. The reviewers should understand the attack
surface, identify the different threat agents and their motivations, and how they could potentially attack the application.
For the software developer whose code is being reviewed, a code review can feel like an audit, and developers may find it challenging not to take the feedback personally. A way to approach this is to create an atmosphere of collaboration between the reviewer, the development team, the business representatives, and any other vested interests. Portraying the image of an advisor and not a policeman is important to get co-operation from the development team.
The extent to which information gathering occurs will depend on the size of the organization, the skill set of the reviewers, and the criticality/risk of the code being reviewed. A small change to a CSS file in a 20-person start-up will not result in a full threat model and a separate secure review team. At the same time, a new single sign-on authentication module in a multi-billion dollar company will not be secure code reviewed by a person who once read an article on secure coding. Even within the same organization, high-risk modules or applications may get threat modeled, whereas the lower risk modules can be reviewed with a lesser emphasis on the reviewer understanding the security model of the module.
This section will present the basic items the reviewer (or review team) should attempt to understand about the application subjected to a secure code review. This can be used in smaller companies that don’t have the resources to
create a full security baseline, or on low risk code within larger companies. A later section goes into detail on threat
modeling, which would be used by larger companies on their highest risk code bases.
In an ideal world the reviewer would be involved in the design phase of the application, but this is rarely the case.
However regardless of the size of the code change, the engineer initiating the code review should direct reviewers
to any relevant architecture or design documents. The easiest way to do this is to include a link to the documents (assuming they’re stored in an online document repository) in the initial e-mail, or in the code review tool. The reviewer
can then verify that the key risks have been properly addressed by security controls and that those controls are used
in the right places.
To effectively conduct the review the reviewer should develop familiarity with the following aspects:
Application features and Business Rules
The reviewer should understand all the features currently provided by the application and capture all the business
restrictions/rules related to them. There is also a case for being mindful of potential future functionality that might be
on the roadmap for an application, thereby future-proofing the security decisions made during current code reviews.
What are the consequences of this system failing? Would the enterprise be seriously affected if the application cannot perform its functions as intended?
Context
All security is in context of what we are trying to secure. Recommending military standard security mechanisms on
an application that vends apples would be overkill and out of context. What type of data is being manipulated or
processed, and what would the damage to the company be if this data was compromised? Context is the “Holy Grail”
of secure code inspection and risk assessment.


Sensitive Data
The reviewer should also make a note of the data entities, such as account numbers and passwords, that are sensitive to the application. Categorizing the data entities based on their sensitivity will help the reviewer determine the impact of any kind of data loss in the application.
User roles and access rights
It is important to understand the types of users allowed to access the application. Is it externally facing or internal to "trusted" users? Generally an application that is accessible only to the internal users of an organization is exposed to different threats than one that is available to anyone on the Internet. Hence, knowing the users of the application and its deployed environment allows the reviewer to identify the threat agents correctly. In addition, the different privilege levels present in the application must also be understood; this helps the reviewer enumerate the security violations and privilege escalation attacks that could apply to the application.
Application type
This refers to understanding whether the application is a browser-based application, a desktop-based standalone application, a web service, a mobile application or a hybrid application. Different types of application face different kinds of security threats, and understanding the type of application helps the reviewer look for specific security flaws, determine the correct threat agents and highlight the controls suitable to the application.
Code
The language(s) used, and the features and issues of that language from a security perspective: the issues a programmer needs to look out for, and language best practices from a security and performance perspective.
Design
Generally web applications have a well-defined code layout if they are developed using the MVC design principle. Applications can have their own custom design or they may use a well-known design framework such as Struts or Spring. Where are the application properties/configuration parameters stored? How is the business class identified for any feature/URL? What types of classes are executed when processing a request (e.g. centralized controller, command classes, view pages)? How is the view rendered to the user for any request?
Company Standards and Guidelines
Many companies will have standards and guidelines dictated by management. This is how management (ultimately responsible for the organization's information security) controls what levels of security are applied to various functions, and how they should be applied. For example, if the company has a Secure Coding Guidelines document, reviewers should know and understand the guidelines and apply them during the code review.
6.7 Code Review Discovery and Gathering Information
The reviewers will need certain information about the application in order to be effective. Frequently, this information can be obtained by studying design documents, business requirements, functional specifications, test results,
and the like. However, in most real-world projects, the documentation is significantly out of date and almost never
has appropriate security information. If the development organization has procedures and templates for architecture
and design documents, the reviewer can suggest updates to ensure security is considered (and documented) at
these phases.
If the reviewers are initially unfamiliar with the application, one of the most effective ways to get started is to talk
with the developers and the lead architect for the application. This does not have to be a long meeting, it could be a
whiteboard session for the development team to share some basic information about the key security considerations and controls. A walkthrough of the actual running application is very helpful to give the reviewers a good idea about
how the application is intended to work. Also a brief overview of the structure of the code base and any libraries used
can help the reviewers get started.
If the information about the application cannot be gained in any other way, then the reviewers will have to spend
some time doing reconnaissance and sharing information about how the application appears to work by examining
the code. Preferably this information can then be documented to aid future reviews.
Security code review is not simply about the code structure. It is important to remember the data; the reason that
we review code is to ensure that it adequately protects the information and assets it has been entrusted with, such
as money, intellectual property, trade secrets, or lives. The context of the data that the application is intended to process is very important in establishing potential risk. If the application is developed using a well-known design framework, the answers to most of these questions will be pre-defined; if the design is custom, this information will aid the review process, mainly in capturing the data flow and internal validations. Knowing
the architecture of the application goes a long way in understanding the security threats that can be applicable to
the application.
A design is a blueprint of an application; it lays a foundation for its development. It illustrates the layout of the application and identifies the different application components needed for it. It is a structure that determines the execution flow of the application. Most application designs are based on the MVC concept, in which different components interact with each other in an ordered sequence to serve any user request. Design review should be an integral part of the secure software development process, and design reviews also help to implement the security requirements in a better way.
Collect all the required information about the proposed design, including flow charts, sequence diagrams, class diagrams and requirements documents, to understand the objective of the proposed design. The design is then studied thoroughly, mainly with respect to the data flow, the interactions between application components, and data handling. This is achieved through manual analysis and discussions with the design or technical architect's team. The design and the architecture of the application must be understood thoroughly in order to analyze the vulnerable areas that can lead to security breaches in the application.
After understanding the design, the next phase is to analyze the threats to the design. This involves observing the design from an attacker's perspective and uncovering the backdoors and insecure areas present in it. Table 4 below highlights some questions that can be asked of the architecture and design to aid secure code reviews.
Table 4: Example Design Questions During Secure Code Review

Data Flow
• Are user inputs used to directly reference business logic?
• Is there potential for data binding flaws?
• Is the execution flow correct in failure cases?

Authentication and access control
• Does the design implement access control for all resources?
• Are sessions handled correctly?
• What functionality can be accessed without authentication?

Existing security controls
• Are there any known weaknesses in third-party security controls?
• Is the placement of security controls correct?

Architecture
• Are connections to external servers secure?
• Are inputs from external sources validated?

Configuration files and data stores
• Is there any sensitive data in configuration files?
• Who has access to configuration or data files?


Code Review Checklist
Defining a generic checklist that the development team can fill out gives reviewers the desired context. The checklist is a good barometer for the level of security the developers have attempted or thought of. If security code review becomes a common requirement, then this checklist can be incorporated into a development procedure (e.g. document templates) so that the information is always available to code reviewers. See Appendix A for a sample code review checklist.
The checklist should cover the most critical security controls and vulnerability areas such as:
• Data Validation
• Authentication
• Session Management
• Authorization
• Cryptography
• Error Handling
• Logging
• Security Configuration
• Network Architecture
Every security requirement should be associated with the security control best suited to the design. Here, the exact changes or additions needed in the design to meet a requirement or mitigate a threat are identified. The list of security requirements and proposed controls can then be discussed with the development teams. The teams' queries should be addressed and the feasibility of incorporating the controls determined. Exceptions, if any, must be taken into account and alternate recommendations proposed. In this phase a final agreement on the security controls is reached. The final design incorporated by the development teams can be reviewed again and finalized for the further development process.
6.8 Static Code Analysis
Static Code Analysis is carried out during the implementation phase of S-SDLC. Static code analysis commonly
refers to running static code analysis tools that attempt to highlight possible vulnerabilities within the ‘static’
(non-running) source code.
Ideally, static code analysis tools would automatically find security flaws with few false positives, giving a high degree of confidence that the bugs they find are real flaws. However, this ideal is beyond the state of the art for many types of application security flaws. Thus, such tools frequently serve as aids for an analyst to help them zero in on security-relevant portions of code so they can find flaws more efficiently, rather than tools that find all flaws automatically.
Bugs may exist in the application due to insecure code, design or configuration. Automated analysis can be carried out on the application code to identify bugs through either of the following two options (a minimal sketch of the first option is shown after the list):


1. Static code scanner scripts based on a pattern search (in-house and open source).
2. Static code analyzers (commercial and open source).
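As a hedged illustration of the first option, the sketch below shows a minimal pattern-search scanner. The directory, file extension and regular expressions are assumptions for the example only; a real script would load a maintained pattern list (see the code crawling indicators in Appendix C) and report findings in a reviewer-friendly format.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;

public class SimpleCodeScanner {

    // Example patterns only; matches are pointers for manual review, not verdicts.
    private static final List<Pattern> PATTERNS = List.of(
            Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec"),    // OS command execution
            Pattern.compile("createStatement|executeQuery\\(.*\\+"), // possible SQL built by concatenation
            Pattern.compile("\\bMD5\\b|\\bSHA-1\\b")                 // weak hash algorithms
    );

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src"); // directory to scan (assumed)
        try (var files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(SimpleCodeScanner::scanFile);
        }
    }

    private static void scanFile(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                for (Pattern p : PATTERNS) {
                    if (p.matcher(lines.get(i)).find()) {
                        // Print file, line number and the pattern that matched for the reviewer.
                        System.out.printf("%s:%d matches %s%n", file, i + 1, p.pattern());
                    }
                }
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}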
Advantages and disadvantages of source code scanners are shown in tables 5 and 6.
Table 5: Advantages To Using Source Code Scanners

Reduction in manual effort: The types of patterns to be scanned for remain common across applications, and computers are better at such scans than humans. Scanners play a big role in automating the process of searching for vulnerabilities through large codebases.

Find all instances of a vulnerability: Scanners are very effective at identifying all the instances of a particular vulnerability, with their exact locations. This is helpful for larger code bases where tracing flaws through all the files is difficult.

Source to sink analysis: Some analyzers can trace the code and identify vulnerabilities through source to sink analysis. They identify possible inputs to the application and trace them throughout the code until they find them to be associated with an insecure code pattern. Such source to sink analysis helps developers understand the flaws better, as they get a complete root cause analysis of the flaw.

Elaborate reporting format: Scanners provide a detailed report on the observed vulnerabilities with exact code snippets, risk ratings and complete descriptions of the vulnerabilities. This helps the development teams to easily understand the flaws and implement the necessary controls.

Though code scanning scripts and open source tools can be efficient at finding insecure code patterns, they often
lack the capability of tracing the data flow. This gap is filled by static code analyzers, which identify the insecure code
patterns by partially (or fully) compiling the code and investigating the execution branches, allowing for source to
sink analysis. Static code analyzers and scanners are comprehensive options to complement the process of code
review.
Table 6: Disadvantages To Using Source Code Scanners

Business logic flaws remain untouched: Flaws related to the application's business logic, transactions, and sensitive data remain untouched by the scanners. Security controls that need to be implemented in the application specific to its features and design are often not pointed out by the scanners. This is considered the biggest limitation of static code analyzers.

Limited scope: Static code analyzers are often designed for specific frameworks or languages, and within that scope they can search for a certain set of vulnerable patterns. Outside of this scope they fail to address issues not covered by their search pattern repository.

Design flaws: Design flaws are not specific to the code structure, and static code analyzers focus on the code. A scanner/analyzer will not spot a design issue when looking at the code, whilst a human can often identify design issues when looking at their implementation.

False positives: Not all of the issues flagged by static code analyzers are truly issues, so the results from these tools need to be understood and triaged by an experienced programmer who understands secure coding. Anyone hoping that secure code checking can be automated and run at the end of the build will be disappointed; a good deal of manual intervention is still required with analyzers.

Choosing a static analysis tool
Choosing a static analysis tool is a difficult task since there are a lot of choices. The comparison criteria below could help an organization decide which tool is right for it, although this list is not exhaustive.


Some of the criteria for choosing a tool are:
• Does the tool support the programming language used?
• Is there a preference between commercial or free tools? Usually the commercial tools have more features and are
more reliable than the free ones, whilst their usability might differ.
• What type of analysis is being carried out? Is it security, quality, static or dynamic analysis?
The next step requires some work, since the choice is quite subjective. The best approach is to test a few tools to see if the team is satisfied with aspects such as the user experience, the reporting of vulnerabilities, the level of false positives, the customization options and the customer support. The choice should not be based on the number of features, but on the features needed and how they can be integrated into the S-SDLC. Also, before choosing the tool, the expertise of the targeted users should be clearly evaluated in order to choose an appropriate tool.
6.9 Application Threat Modeling
Threat modeling is an in-depth approach for analyzing the security of an application. It is a structured approach that
enables employees to identify, quantify, and address the security risks associated with an application. Threat modeling is not an approach to reviewing code, but it complements the secure code review process by providing context
and risk analysis of the application.
The inclusion of threat modeling in the S-SDLC can help to ensure that applications are being developed with security built-in from the very beginning. This, combined with the documentation produced as part of the threat modeling
process, can give the reviewer a greater understanding of the system, allows the reviewer to see where the entry
points to the application are (i.e. the attack surface) and the associated threats with each entry point (i.e. attack vectors).
The concept of threat modeling is not new but there has been a clear mind-set change in recent years. Modern threat
modeling looks at a system from a potential attacker’s perspective, as opposed to a defender’s viewpoint. Many companies have been strong advocates of the process over the past number of years, including Microsoft who has made
threat modeling a core component of their S-SDLC, which they claim to be one of the reasons for the increased security of their products in recent years.
When source code analysis is performed outside the S-SDLC, such as on existing applications, the results of the threat
modeling help in reducing the complexity of the source code analysis by promoting a risk based approach. Instead of
reviewing all source code with equal focus, a reviewer can prioritize the security code review of components whose
threat modeling has ranked with high risk threats.
The threat modeling process can be decomposed into 3 high level steps:
6.9.1. Step 1: Decompose the Application.
The first step in the threat modelling process is concerned with gaining an understanding of the application and how
it interacts with external entities. This involves creating use-cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets i.e. items/
areas that the attacker would be interested in, and identifying trust levels which represent the access rights that the
application will grant to external entities. This information is documented in the threat model document and it is also
used to produce data flow diagrams (DFDs) for the application. The DFDs show the different data paths through the
system, highlighting the privilege (trust) boundaries.


Items to consider when decomposing the application include:
External Dependencies
External dependencies are items external to the code of the application that may pose a threat to the application.
These items are typically still within the control of the organization, but possibly not within the control of the development team. The first area to look at when investigating external dependencies is how the application will be
deployed in a production environment.
This involves looking at how the application is or is not intended to be run. For example if the application is expected
to be run on a server that has been hardened to the organization’s hardening standard and it is expected to sit behind
a firewall, then this information should be documented.
Entry Points
Entry points (aka attack vectors) define the interfaces through which potential attackers can interact with the application or supply it with data. In order for a potential attacker to attack an application, entry points must exist. Entry
points in an application can be layered, for example each web page in a web application may contain multiple entry
points.
Assets
The system must have something that the attacker is interested in; these items/areas of interest are defined as assets.
Assets are essentially threat targets, i.e. they are the reason threats will exist. Assets can be both physical assets and
abstract assets. For example, an asset of an application might be a list of clients and their personal information; this is
a physical asset. An abstract asset might be the reputation of an organization.
Determining the Attack Surface
The attack surface is determined by analyzing the inputs, data flows and transactions. A major part of actually performing a security code review is performing an analysis of the attack surface. An application takes inputs and produces output of some kind. The first step is to identify all input to the code.
Inputs to the application may include the items listed below, and figure 4 describes an example process for identifying an application's input paths:
• Browser input
• Cookies
• Property files
• External processes
• Data feeds
• Service responses
• Flat files
• Command line parameters
• Environment variables

Methodology

Figure 4: Example process diagram for identifying input paths
[Figure: the process runs from initiation through identifying input paths, covering input parameters of several kinds (user, configuration, control and backend), identifying areas of late and dynamic binding and of configuration file references, following the path of each parameter through the code, identifying the attack surface, and transitional analysis.]

Trust Levels
Trust levels represent the access rights that the application will grant to external entities. The trust levels are cross-referenced with the entry points and assets. This allows a team to define the access rights or privileges required at each
entry point, and those required to interact with each asset.
Data flow analysis
Exploring the attack surface includes dynamic and static data flow analysis: where and when variables are set, how the variables are used throughout the workflow, and how attributes of objects and parameters might affect other data within the program. It determines whether the parameters, method calls, and data exchange mechanisms implement the required security.
Transaction analysis
Transaction analysis is needed to identify and analyze all transactions within the application, along with the relevant
security functions invoked.
The areas that are covered during transaction analysis are:
• Data/Input Validation of data from all untrusted sources
• Authentication
• Session Management
• Authorization
• Cryptography (data at rest and in transit)
• Error Handling /Information Leakage
• Logging /Auditing
Data Flow Diagrams
All of the information collected allows us to accurately model the application through the use of Data Flow Diagrams
(DFDs). The DFDs will allow the employee to gain a better understanding of the application by providing a visual
representation of how the application processes data. The focus of the DFDs is on how data moves through the
application and what happens to the data as it moves. DFDs are hierarchical in structure, so they can be used to
decompose the application into subsystems. The high level DFD will allow the employee to clarify the scope of the
application being modelled. The lower level iterations will allow more focus on the specific processes involved when
processing specific data.
There are a number of symbols that are used in DFDs for threat modelling, as shown in table 7 below:
Table 7: Threat Modeling Symbols

External Entity: The external entity shape is used to represent any entity outside the application that interacts with the application via an entry point.

Process: The process shape represents a task that handles data within the application. The task may process the data or perform an action based on the data.

Multiple Process: The multiple process shape is used to represent a collection of subprocesses. The multiple process can be broken down into its subprocesses in another DFD.

Data Store: The data store shape is used to represent locations where data is stored. Data stores do not modify the data, they only store data.

Data Flow: The data flow shape represents data movement within the application. The direction of the data movement is represented by the arrow.

Privilege Boundary: The privilege boundary shape is used to represent the change of privilege levels as the data flows through the application.

DFDs show how data moves logically through the system and allow the identification of data entering or leaving the system, along with the storage of data and the flow of control through these components. Trust boundaries show any location where the level of trust changes. Process components show where data is processed, such as web servers, application servers, and database servers. Entry points show where data enters the system (i.e. input fields, methods) and exit points are where it leaves the system (i.e. dynamic output, methods), respectively. Entry and exit points define a trust boundary.
6.9.2 Step 2: Determine and rank threats
Critical to the identification of threats is using a threat categorization methodology. A threat categorization
such as STRIDE can be used, or the Application Security Frame (ASF) that defines threat categories such as Auditing & Logging, Authentication, Authorization, Configuration Management, Data Protection in Storage and
Transit, Data Validation and Exception Management.
The goal of the threat categorization is to help identify threats both from the attacker (STRIDE) and the defensive perspective (ASF). DFDs produced in step 1 help to identify the potential threat targets from the attacker’s
perspective, such as data sources, processes, data flows, and interactions with users. These threats can be
identified further as the roots for threat trees; there is one tree for each threat goal.
From the defensive perspective, ASF categorization helps to identify the threats as weaknesses of security
controls for such threats. Common threat-lists with examples can help in the identification of such threats. Use
and abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such
protection exists.
The determination of the security risk for each threat can be determined using a value-based risk model such
as DREAD or a less subjective qualitative risk model based upon general risk factors (e.g. likelihood and impact).
The first step in the determination of threats is adopting a threat categorization. A threat categorization provides a set of threat categories with corresponding examples so that threats can be systematically identified in the application in a structured and repeatable manner.
STRIDE
Threat lists based on the STRIDE model are useful in the identification of threats with regards to the attacker
goals. For example, if the threat scenario is attacking the login, would the attacker brute force the password to
break the authentication? If the threat scenario is to try to elevate privileges to gain another user’s privileges,
would the attacker try to perform forceful browsing?
A threat categorization such as STRIDE is useful in the identification of threats by classifying attacker goals
such as shown in table 8.
Table 8: Explanation Of The STRIDE Attributes

Spoofing: "Identity spoofing" is a key risk for applications that have many users but provide a single execution context at the application and database level. In particular, users should not be able to become any other user or assume the attributes of another user.

Tampering: Users can potentially change data delivered to them, return it, and thereby potentially manipulate client-side validation, GET and POST results, cookies, HTTP headers, and so forth. The application should also carefully check data received from the user and validate that it is sane and applicable before storing or using it.

Repudiation: Users may dispute transactions if there is insufficient auditing or recordkeeping of their activity. For example, if a user says they did not make a financial transfer, and the functionality cannot track his/her activities through the application, then it is extremely likely that the transaction will have to be written off as a loss.

Information Disclosure: Users are rightfully wary of submitting private details to a system. Is it possible for an attacker to publicly reveal user data at large, whether anonymously or as an authorized user?

Denial of Service: Application designers should be aware that their applications may be subject to a denial of service attack. The use of expensive resources such as large files, complex calculations, heavy-duty searches, or long queries should be reserved for authenticated and authorized users, and not available to anonymous users.

Elevation of Privilege: If an application provides distinct user and administrative roles, then it is vital to ensure that the user cannot elevate his/her role to a higher privilege one.

It is vital that all possible attack vectors are evaluated from the attacker's point of view. For example, the login page allows sending authentication credentials, and the input data accepted by an entry point has to be validated for potentially malicious input aimed at vulnerabilities such as SQL injection, cross site scripting, and buffer overflows. Additionally, the data flow passing through that point has to be used to determine the threats to the entry points of the next components along the flow. If the following components can be regarded as critical (e.g. they hold sensitive data), that entry point can be regarded as more critical as well. In an end to end data flow, the input data (i.e. username and password) from a login page, passed on without validation, could be exploited for a SQL injection attack to manipulate a query for breaking the authentication or to modify a table in the database.
Exit points might serve as attack points to the client (e.g. XSS vulnerabilities) as well as for the realization of information disclosure vulnerabilities. In the case of exit points from components handling confidential data (e.g. data access components), any exit points lacking security controls to protect confidentiality and integrity can lead to disclosure of such confidential information to an unauthorized user.
In many cases threats enabled by exit points are related to the threats of the corresponding entry point. In the login
example, error messages returned to the user via the exit point might allow for entry point attacks, such as account harvesting (e.g. username not found), or SQL injection (e.g. SQL exception errors). From the defensive perspective,
the identification of threats driven by security control categorization such as ASF allows a threat analyst to focus on
specific issues related to weaknesses (e.g. vulnerabilities) in security controls. Typically the process of threat identification involves going through iterative cycles where initially all the possible threats in the threat list that apply to
each component are evaluated. At the next iteration, threats are further analyzed by exploring the attack paths, the
root causes (e.g. vulnerabilities) for the threat to be exploited, and the necessary mitigation controls (e.g. countermeasures).
Once common threats, vulnerabilities, and attacks are assessed, a more focused threat analysis should take into consideration use and abuse cases. By thoroughly analyzing the use scenarios, weaknesses can be identified that could
lead to the realization of a threat. Abuse cases should be identified as part of the security requirement engineering
activity. These abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such
protection exists. Finally, it is possible to bring all of this together by determining the types of threat to each component of the decomposed system. This can be done by repeating the techniques already discussed on a lower level
threat model, again using a threat categorization such as STRIDE or ASF, the use of threat trees to determine how the
threat can be exposed by vulnerability, and use and misuse cases to further validate the lack of a countermeasure to
mitigate the threat.
Microsoft DREAD threat-risk ranking model
In the Microsoft DREAD threat-risk ranking model, the technical risk factors for impact are Damage and Affected Users, while the ease of exploitation factors are Reproducibility, Exploitability and Discoverability. This risk factorization
allows the assignment of values to the different influencing factors of a threat.
To determine the ranking of a threat, the threat analyst has to answer basic questions for each factor of risk, for example:
Table 9: Explanation Of The DREAD Attributes

Damage: How big would the damage be if the attack succeeded? Can an attacker completely take over and manipulate the system? Can an attacker crash the system?

Reproducibility: How easy is it to reproduce the attack? Can the exploit be automated?

Exploitability: How much time, effort, and expertise is needed to exploit the threat? Does the attacker need to be authenticated?

Affected Users: If a threat were exploited, what percentage of users would be affected? Can an attacker gain administrative access to the system?

Discoverability: How easy is it for an attacker to discover this threat?

The impact mainly depends on the damage potential and the extent of the impact, such as the number of
components that are affected by a threat.
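As an illustration (the numeric scale here is an assumption for the example, not part of the guide): DREAD scores are commonly produced by rating each factor on a small scale and averaging the five values. A threat rated Damage 3, Reproducibility 2, Exploitability 2, Affected Users 3 and Discoverability 1 on a 1-3 scale would score (3 + 2 + 2 + 3 + 1) / 5 = 2.2, placing it towards the higher end of the ranking.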


These questions help in the calculation of the overall risk values by assigning qualitative values such as High, Medium and Low to the likelihood and impact factors. Using qualitative values, rather than numeric ones as in the DREAD model, helps avoid the ranking becoming overly subjective.
Likelihood
A more generic risk model takes into consideration the Likelihood (e.g. probability of an attack) and the Impact
(e.g. damage potential):
Risk = Likelihood x Impact
Note that this is a conceptual formula and is not expected to use actual values for likelihood and impact. The
likelihood or probability is defined by the ease of exploitation, which mainly depends on the type of threat and
the system characteristics, and by the possibility to realize a threat, which is determined by the existence of an
appropriate countermeasure.
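As an illustrative (and hypothetical) example of applying this qualitative formula: a flaw that is trivially reachable from the internet (high likelihood) in a module that only renders non-sensitive help text (low impact) might be ranked a medium risk, whereas the same likelihood of attack against a payment-processing module (high impact) would be ranked a high risk.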

6.9.3 Step 3: Determine countermeasures and mitigation.
A lack of protection against a threat might indicate a vulnerability whose risk exposure could be mitigated
with the implementation of a countermeasure. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort threats from the highest to the lowest risk, and prioritize the mitigation effort, such as by responding to such threats by applying the
identified countermeasures.
The risk mitigation strategy might involve evaluating these threats from the business impact that they pose
and establishing countermeasures (or design changes) to reduce the risk.
Other options might include accepting the risk, assuming the business impact is acceptable because of compensating controls, informing the user of the threat, removing the risk posed by the threat completely, or the
least preferable option, that is, to do nothing. If the risk identified is extreme, the functionality or product
could be discontinued, as the risk of something going wrong is greater than the benefit.
The purpose of the countermeasure identification is to determine if there is some kind of protective measure
(e.g. security control, policy measures) in place that can prevent each threat previously identified via threat
analysis from being realized. Vulnerabilities are then those threats that have no countermeasures.
Since each of these threats has been categorized either with STRIDE or ASF, it is possible to find appropriate countermeasures in the application within the given category. Each of the above steps is documented as it is carried out; the resulting set of documents is the threat model for the application. Detailed examples of how to carry out threat modeling are given in Appendix B.
Threat Profile
Once threats and corresponding countermeasures are identified it is possible to derive a threat profile with the
following criteria:


Table 10: Types Of Mitigated Threats

Non-mitigated threats: Threats which have no countermeasures and represent vulnerabilities that can be fully exploited and cause an impact.

Partially mitigated threats: Threats partially mitigated by one or more countermeasures, which represent vulnerabilities that can only partially be exploited and cause a limited impact.

Fully mitigated threats: These threats have appropriate countermeasures in place and do not expose a vulnerability or cause an impact.

6.10 Metrics and Code Review
Metrics measure the size and complexity of a piece of code. There is a long list of quality and security characteristics that can be considered when reviewing code (such as, but not limited to, correctness, efficiency, portability, maintainability, reliability and securability). No two code review sessions will be the same, so some judgment will be needed to decide the best path. Metrics can help decide the scale of a code review.
Metrics can also be recorded relating to the performance of the code reviewers, the accuracy of the review process, and the efficiency and effectiveness of the code review function.
Figure 5 describes the use of metrics throughout the code review process.
Some of the options for calculating the size of a review task include:
Lines of Code (LOC):
A count of the executable lines of code (commented-out code or blank lines are not counted). This gives a
rough estimate but is not particularly scientific.
Function Point:
The estimation of software size by measuring functionality: a combination of statements that perform a specific task, independent of the programming language used or the development methodology. In an object-oriented language a class could be a function point.
Defect Density:
The average occurrence of programming faults per Lines of Code (LOC). This gives a high level view of the code
quality but not much more. Fault density on its own does not give rise to a pragmatic metric. Defect density
would cover minor issues as well as major security flaws in the code; all are treated the same way. Security of
code cannot be judged accurately using defect density alone.
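As a simple worked example: 12 defects found in 3,000 lines of code gives a defect density of 4 defects per KLOC, whether those defects are cosmetic or exploitable.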
Risk Density:
Similar to defect density, but discovered issues are rated by risk (high, medium & low). In doing this we can
give insight into the quality of the code being developed via a [X Risk / LoC] or [Y Risk / Function Point] value
(X&Y being high, medium or low risks) as defined by internal application development policies and standards.
For example:
4 High Risk Defects per 1000 (Lines of Code)
2 Medium Risk Defects per 3 Function Points


Figure 5: The use of metrics throughout the code review process
[Figure: elements include code submitted for secure code review, a recommended code triage meeting, a check on whether the context of the code has been defined, definition of review criteria (project or vulnerability based) informed by standards, policies and guidelines, performing the review with reference to previous findings, recording findings in a code review database, communicating results to the team, resubmitting code for re-review, and developing and persisting metrics for trend analysis.]

Cyclomatic complexity (CC):
A static analysis metric used to assist in establishing risk and stability estimations for an item of code, such as a class, method, or even a complete system. It was defined by Thomas McCabe in the 1970s and it is easy to calculate and apply, hence its usefulness.
The McCabe cyclomatic complexity metric is designed to indicate a program's testability, understandability and maintainability. This is accomplished by measuring the control flow structure in order to predict the difficulty of understanding, testing, maintaining, etc. Once the control flow structure is understood, one can gain a sense of the extent to which the program is likely to contain defects. The cyclomatic complexity metric is intended to be independent of language and language format; it measures the number of linearly independent paths through a program module, which is also the minimum number of paths that should be tested.
By knowing the cyclomatic complexity of the product, one can focus on the modules with the highest complexity. These will most likely be on the paths the data takes, guiding the reviewer to potentially high risk locations for vulnerabilities. The higher the complexity, the greater the potential for bugs; the more bugs, the higher the probability of security flaws. Does cyclomatic complexity reveal security risk? One will not know until after a review of the security posture of the module, but the cyclomatic complexity metric provides a risk-based approach to where to begin to review and analyze the code. Securing an application is a complex task, and in many ways complexity is an enemy of security, as software complexity can make software bugs hard to detect. The complexity of software increases over time as the product is updated or maintained.
Cyclomatic complexity can be calculated as:
CC = number of decisions + 1
where a decision is any statement that branches execution, such as if/else, switch, case, catch, while, do, templated class calls, etc.
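As a hypothetical illustration (the Order class and its isPaid() method are invented for the example), the method below contains three decision points (a while, an if and an else if), giving CC = 3 + 1 = 4:

// Three decisions: while, if, else if  ->  CC = 3 + 1 = 4
int countPaidOrders(List<Order> orders) {
    int paid = 0;
    int i = 0;
    while (i < orders.size()) {      // decision 1
        Order o = orders.get(i);
        if (o == null) {             // decision 2: skip missing entries
            // nothing to count
        } else if (o.isPaid()) {     // decision 3
            paid++;
        }
        i++;
    }
    return paid;
}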
As the decision count increases, so do the complexity and the number of paths. Complex code leads to less
stability and maintainability.
The more complex the code, the higher risk of defects. A company can establish thresholds for cyclomatic
complexity for a module:
0-10: Stable code, acceptable complexity
11-15: Medium Risk, more complex
16-20: High Risk code, too many decisions for a unit of code.
Modules with a very high cyclomatic complexity are extremely complex and could be refactored into smaller
methods.
Bad Fix Probability:
This is the probability of an error being accidentally introduced into a program while trying to fix a previous error, known in some companies as a regression.
Cyclomatic Complexity: 1-10 == Bad Fix Probability: 5%
Cyclomatic Complexity: 20-30 == Bad Fix Probability: 20%
Cyclomatic Complexity: > 50 == Bad Fix Probability: 40%
Cyclomatic Complexity: approaching 100 == Bad Fix Probability: 60%
As the complexity of software increases, so does the probability of introducing new errors.


Inspection Rate:
This metric can be used to get a rough idea of the required duration of a code review. The inspection rate is the amount of code a reviewer can cover per unit of time; for example, a rate of 250 lines per hour could be a baseline. This rate should not be used as a measure of review quality, but simply to determine the duration of the task.
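For example, at a baseline of 250 lines per hour, a 2,000 line change would be budgeted at roughly 8 hours of review time.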
Defect Detection Rate:
This metric measures the defects found per unit of time. Again, it can be used to measure the performance of the code review team, but it should not be used as a quality measure. The defect detection rate would normally increase as the inspection rate (above) decreases.
Re-inspection Defect Rate:
The rate at which, upon re-inspection of the code, new defects are found, previously identified defects are found to remain, or new defects manifest through an attempt to address previously discovered defects (regressions).
6.11 Crawling Code
Crawling code is the practice of scanning a code base of the review target and interface entry points, looking
for key code pointers wherein possible security vulnerability might reside. Certain APIs are related to interfacing to the external world or file IO or user management, which are key areas for an attacker to focus on. In
crawling code we look for APIs relating to these areas. We also need to look for business logic areas which may
cause security issues, but generally these are bespoke methods which have bespoke names and cannot be detected directly, even though we may touch on certain methods due to their relationship with a certain key API.
We also need to look for common issues relating to a specific language; issues that may not be security related
but which may affect the stability/availability of the application in the case of extraordinary circumstances.
Other issues to check during a code review include areas such as a simple copyright notice in order to protect one's intellectual property. Generally these issues should be part of a company's Coding Guidelines (or Standard), and should be enforceable during a code review. For example, a reviewer can reject a code review because the code violates something in the Coding Guidelines, regardless of whether or not the code would work in its current state.
Crawling code can be done manually or in an automated fashion using automated tools. However, working purely manually is probably not effective, as (as can be seen below) there are plenty of indicators that can apply to a language. Tools as simple as grep or wingrep can be used. Other tools are available which search for keywords relating to a specific programming language. If a team is using a particular review tool that allows it to specify strings to be highlighted in a review (e.g. Python based review tools using the pygments syntax highlighter, or an in-house tool for which the team can change the source code), then they could add the relevant string indicators from the lists below and have them highlighted to reviewers automatically.
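For instance, on a Java code base a reviewer might run something like grep -rnE "Runtime\.exec|ProcessBuilder|createStatement" src/ to list candidate lines for manual inspection; the pattern list here is illustrative only and should be replaced with the language-specific indicators from Appendix C.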
The basis of the code review is to locate and analyze areas of code, which may have application security implications. Assuming the code reviewer has a thorough understanding of the code, what it is intended to do, and
the context in which it is to be used, firstly one needs to sweep the code base for areas of interest.
Appendix C gives practical examples of how to carry out code crawling in the following programming languages:
• .Net
• Java
• ASP
• C++/Apache

A1
INJECTION

7.1 Overview
What is Injection?
Injection attacks allow a malicious user to add or inject content and commands into an application in order to modify its behaviour. These types of attack are common and widespread; it is easy for an attacker to test whether a web site is vulnerable, and easy and quick to take advantage of it. Today they are very common in legacy applications that haven't been updated.
7.2 SQL Injection
The most common injection vulnerability is SQL injection. Injection vulnerabilities are also relatively easy to remediate and protect against. This category covers injection into SQL, LDAP, XPath, OS commands and XML parsers.
An injection vulnerability can lead to:
1. Disclosure/leaking of sensitive information.
2. Data integrity issues: SQL injection may modify data, add new data, or delete data.
3. Elevation of privileges.
4. Gaining access to the back-end network.
The root cause is that SQL commands are not protected from untrusted input, so the SQL parser is not able to distinguish between code and data.

String custQuery = "SELECT custName, address1 FROM cust_table WHERE custID = '"
    + request.getParameter("id") + "'";

Here the string literal is the code portion of the statement, while the concatenated request parameter is the data portion.

Using string concatenation to generate a SQL statement is very common in legacy applications where developers were not considering security. The issue is that this coding technique does not tell the parser which part of the statement is code and which part is data. In situations where user input is concatenated into the SQL statement, an attacker can modify the SQL statement by adding SQL code to the input data.
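For example, if an attacker supplies the value x' OR '1'='1 for the id parameter, the query above becomes SELECT custName, address1 FROM cust_table WHERE custID = 'x' OR '1'='1', which returns every row in the table instead of a single customer. If the database driver permits batched statements, a value ending in '; DROP TABLE cust_table; -- could be far more destructive.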
Such code accepts untrusted input without validation. There are several ways to mitigate injection vulnerabilities (whitelisting, regular expressions, etc.), but the five practices below are the most effective, and all five should be used together for a defense in depth approach.
1. HtmlEncode all user input.
2. Use static analysis tools. Most static analysis tools for languages like .NET, Java and Python are accurate. However, static analysis can become an issue when the injection originates from JavaScript and CSS.
3. Parameterize SQL queries. Use the SQL methods provided by the programming language or framework that parameterize the statements, so that the SQL parser can distinguish between code and data.

4. Use stored procedures. Stored procedures will generally help the SQL parser differentiate code and data. However, stored procedures can also be used to build dynamic SQL statements, allowing the code and data to become blended together and causing the statement to become vulnerable to injection.
5. Provide developer training on best practices for secure coding.
Blind SQL Injection
Typically SQL queries return search results that are presented to a user. However, there are cases where SQL queries happen behind the scenes and influence how the page is rendered, and unfortunately attackers can still glean information based on the error responses from various UI elements. Blind SQL injection is a type of attack that asks the database true or false questions and determines the answer based on the application's response.
Effectively the attacker uses SQL queries to determine which error responses are returned for valid SQL, and which responses are returned for invalid SQL. The attacker can then probe, for example, whether a table called "user_password_table" exists. Once they have that information, they could use an attack like the one described above to maliciously delete the table, or attempt to return information from the table (does the username "john" exist?). Blind SQL injections can also use timings instead of error messages, e.g. if invalid SQL takes 2 seconds to respond but valid SQL returns in 0.5 seconds, the attacker can use this difference to infer answers.
Parameterized SQL Queries
Parameterized SQL queries (sometimes called prepared statements) allow the SQL query string to be defined
in such a way that the client input can’t be treated as part of the SQL syntax.
Take the example in sample 7.1:
Sample 7.1
String query = "SELECT id, firstname, lastname FROM authors WHERE forename = ? and surname = ?";
PreparedStatement pstmt = connection.prepareStatement( query );
pstmt.setString( 1, firstname );
pstmt.setString( 2, lastname );

In this example the string 'query' is constructed in a way that does not rely on any client input, and the 'PreparedStatement' is constructed from that string. When the client input is to be entered into the SQL, the 'setString' function is used: the first question mark "?" is replaced by the string value of 'firstname', and the second question mark is replaced by the value of 'lastname'. When the 'setString' function is called, the value is bound purely as data, so any SQL syntax contained within the string value cannot change the meaning of the query. Most prepared statement APIs allow you to specify the type that should be entered, e.g. 'setInt', 'setBinary', etc.
Safe String Concatenation?
So does this mean you can’t use string concatenation at all in your DB handling code? It is possible to use string
concatenation safely, but it does increase the risk of an error, even without an attacker attempting to inject
SQL syntax into your application.
You should never use string concatenation in combination with the client input value itself. Take an example where the existence (not the value) of a client input variable "surname" is used to construct the SQL query of the prepared statement:

Sample 7.2
String query = "SELECT id, firstname, lastname FROM authors WHERE forename = ?";
if (lastname != null && lastname.length() != 0) {
    query += " and surname = ?";
}
query += ";";

PreparedStatement pstmt = connection.prepareStatement( query );
pstmt.setString( 1, firstname );
if (lastname != null && lastname.length() != 0) {
    pstmt.setString( 2, lastname );
}
Here the value of 'lastname' is not being used; only its existence is being evaluated. However, there is still a risk when the SQL statement is larger and has more complex business logic involved in creating it. Take the following example where the function will search based on firstname or lastname:
Sample 7.3
String query = "SELECT id, firstname, lastname FROM authors";

if (( firstname != null && firstname.length() != 0 ) && ( lastname != null && lastname.length() != 0 )) {
    query += " WHERE forename = ? AND surname = ?";
}
else if ( firstname != null && firstname.length() != 0 ) {
    query += " WHERE forename = ?";
}
else if ( lastname != null && lastname.length() != 0 ) {
    query += " WHERE surname = ?";
}

query += ";";

PreparedStatement pstmt = connection.prepareStatement( query );

This logic will be fine when either firstname or lastname is given; however, if neither were given then the SQL statement would have no WHERE clause and the entire table would be returned. This is not SQL injection (the attacker has done nothing to cause the situation except omit two values), but the end result is the same: information has been leaked from the database, despite the fact that a parameterized query was used.
For this reason, the advice is to avoid using string concatenation to create SQL query strings, even when using parameterized queries, especially if the concatenation involves building any items in the WHERE clause.
Using Flexible Parameterized Statements
Functional requirements often need the SQL query being executed to be flexible based on the user input, e.g. if the end user specifies a time span for their transaction search then this should be used, or they might wish to query based on either surname or forename, or both. In this case the safe string concatenation above could be used, however from a maintenance point of view this could invite future programmers to misunderstand the difference between safe concatenation and the unsafe version (using input string values directly).
One option for flexible parameterized statements is to use ‘if’ statements to select the correct query based on the
input values provided, for example:
Sample 7.4
String query;
PreparedStatement pstmt;

if ( (firstname != null && firstname.length() != 0) &&
     (lastname != null && lastname.length() != 0) ) {
    query = "SELECT id, firstname, lastname FROM authors WHERE forename = ? and surname = ?";
    pstmt = connection.prepareStatement( query );
    pstmt.setString( 1, firstname );
    pstmt.setString( 2, lastname );
}
else if (firstname != null && firstname.length() != 0) {
    query = "SELECT id, firstname, lastname FROM authors WHERE forename = ?";
    pstmt = connection.prepareStatement( query );
    pstmt.setString( 1, firstname );
}
else if (lastname != null && lastname.length() != 0) {
    query = "SELECT id, firstname, lastname FROM authors WHERE surname = ?";
    pstmt = connection.prepareStatement( query );
    pstmt.setString( 1, lastname );
}
else {
    throw new NameNotSpecifiedException();
}

PHP SQL Injection
An SQL injection attack consists of injecting SQL query fragments into the back-end database system via input supplied through the client interface of the web application. The consequence of a successful exploitation of an SQL injection varies from just reading data to modifying data or executing system commands. SQL injection in PHP remains the number one attack vector, and also the number one reason for data compromises, as shown in sample 7.5.
Example 1 :
Sample 7.5

The most common ways to prevent SQL injection in PHP are to use functions such as addslashes() and mysql_real_escape_string(), but these functions cannot prevent SQL injection in all cases.
addslashes():
addslashes() will only protect against SQL injection when the resulting value is wrapped in quotes within the query string. The following example would still be vulnerable:
Sample 7.6
$id = addslashes( $_GET['id'] );
$query = 'SELECT title FROM books WHERE id = ' . $id;

mysql_real_escape_string():
mysql_real_escape_string() is a little more powerful than addslashes(), as it calls MySQL's library function mysql_real_escape_string, which prepends backslashes to the following characters: \x00, \n, \r, \, ', " and \x1a. As with addslashes(), mysql_real_escape_string() will only protect the query if the escaped value is wrapped in quotes; a query that concatenates an unquoted value, as in sample 7.6, would still be vulnerable to SQL injection.
SQL injections occur when input to a web application is not controlled or sanitized before being executed against the back-end database.
The attacker tries to exploit this vulnerability by passing SQL commands in their input, creating an undesired response from the database, such as providing information that bypasses the authorization and authentication programmed into the web application. An example of vulnerable Java code is shown in sample 7.7.
Sample 7.7
HttpServletRequest request = ...;
String userName = request.getParameter("name");
Connection con = ...;
String query = "SELECT * FROM Users WHERE name = '" + userName + "'";
con.createStatement().execute(query);

The input parameter "name" is passed into the query string without any validation or verification. The query, which is intended to match only the row whose name equals the supplied username, can easily be misused to return something other than that single row. For example, an attacker can pass a value such as ' OR 1=1 -- , which makes the WHERE clause always evaluate to true and returns all user records, not only the record the specific user is entitled to.
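A parameterized version of the same lookup removes the problem. The following is a minimal sketch, written in the same fragment style as Sample 7.7 (the "..." placeholders are carried over from that sample):

HttpServletRequest request = ...;
String userName = request.getParameter("name");
Connection con = ...;

// The user-supplied value is bound as data and can never change the SQL syntax.
String query = "SELECT * FROM Users WHERE name = ?";
PreparedStatement pstmt = con.prepareStatement(query);
pstmt.setString(1, userName);
ResultSet rs = pstmt.executeQuery();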

.NET SQL Injection
.NET Framework 1.0 and 2.0 might be more vulnerable to SQL injection than later versions of .NET. Thanks to the proper implementation and use of design patterns already embedded in ASP.NET, such as MVC (also depending on the version), it is possible to create applications free from SQL injection; however, there might be times where a developer prefers to use SQL code directly in the code.
Example:
A developer creates a web page with three fields and a submit button, to search for employees by the fields 'name', 'lastname' and 'id'. The developer implements a string-concatenated SQL statement or stored procedure in the code, such as in sample 7.8.
Sample 7.8
SqlDataAdapter thisCommand = new SqlDataAdapter(
    "SELECT name, lastname FROM employees WHERE ei_id = '" + idNumber.Text + "'", thisConnection);

A stored procedure version that concatenates the same input, and is equally vulnerable, is shown in sample 7.9.
Sample 7.9
SqlDataAdapter thisCommand = new SqlDataAdapter(
    "SearchEmployeeSP '" + idNumber.Text + "'", thisConnection);

A hacker can then insert the following employee ID via the web interface: "123';DROP TABLE pubs --", causing the following statement to be executed:
SELECT name, lastname FROM employees WHERE ei_id = '123'; DROP TABLE pubs --'
The semicolon ";" signals to SQL that it has reached the end of a statement; the attacker uses this to continue the statement with the malicious SQL code
; DROP TABLE pubs;
while the "--" comments out the trailing quote left over from the original statement.
Parameter collections
Parameter collections such as SqlParameterCollection provide type checking and length validation. If you use a parameter collection, input is treated as a literal value, and SQL Server does not treat it as executable code, so a payload cannot be injected.
Using a parameter collection also lets you enforce type and length checks: values outside of the range trigger an exception. Make sure you handle the exception correctly. An example of the SqlParameterCollection is shown in sample 7.10.
Sample 7.10
using (SqlConnection conn = new SqlConnection(connectionString)) {
    DataSet dataObj = new DataSet();
    SqlDataAdapter sqlAdapter = new SqlDataAdapter( "StoredProc", conn);
    sqlAdapter.SelectCommand.CommandType = CommandType.StoredProcedure;
    sqlAdapter.SelectCommand.Parameters.Add("@usrId", SqlDbType.VarChar, 15);
    sqlAdapter.SelectCommand.Parameters["@usrId"].Value = UID.Text;
}

Hibernate Query Language (HQL)
Hibernate facilitates the storage and retrieval of Java domain objects via Object/Relational Mapping (ORM). It is a very common misconception that ORM solutions, like Hibernate, are SQL injection proof. Hibernate allows the use of "native SQL" and defines a proprietary query language called HQL (Hibernate Query Language); the former is prone to SQL injection and the latter is prone to HQL (or ORM) injection.
What to Review
• Always validate user input by testing type, length, format, and range.
• Test the size and data type of input and enforce appropriate limits.
• Test the content of string variables and accept only expected values. Reject entries that contain binary data, escape
sequences, and comment characters.
• When you are working with XML documents, validate all data against its schema as it is entered.
• Never build SQL statements directly from user input.
• Use stored procedures to validate user input; when not using stored procedures, use the SQL API provided by the platform (i.e. parameterized statements).
• Implement multiple layers of validation.
• Never concatenate user input that is not validated. String concatenation is the primary point of entry for script
injection.
You should review all code that calls EXECUTE or EXEC, and any SQL calls that can reach outside resources or the command line.
OWASP References
• https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet OWASP SQL Injection Prevention
Cheat Sheet
• https://www.owasp.org/index.php/Query_Parameterization_Cheat_Sheet OWASP Query Parameterization
Cheat Sheet
• https://www.owasp.org/index.php/Command_Injection OWASP Command Injection Article
• https://www.owasp.org/index.php/XXE OWASP XML eXternal Entity (XXE) Reference Article

• https://www.owasp.org/index.php/ASVS ASVS: Output Encoding/Escaping Requirements (V6)
• https://www.owasp.org/index.php/Testing_for_SQL_Injection_(OWASP-DV-005) OWASP Testing Guide: Chapter on SQL Injection Testing
External References
• http://cwe.mitre.org/data/definitions/77.html CWE Entry 77 on Command Injection
• http://cwe.mitre.org/data/definitions/89.html CWE Entry 89 on SQL Injection
• http://cwe.mitre.org/data/definitions/564.html CWE Entry 564 on Hibernate Injection
• Livshits and Lam, 2005 “Finding Security Vulnerabilities in Java Applications with Static Analysis” available at
https://www.usenix.org/legacy/event/sec05/tech/full_papers/livshits/livshits_html/#sec:sqlinjexample
• http://www.php.net/manual/en/book.pdo.php PDO
• https://technet.microsoft.com/en-us/library/ms161953(v=sql.105).aspx
7.3 JSON (JavaScript Object Notation)
JSON (JavaScript Object Notation) is an open standard format that uses easy-to-read text to transmit data between a server and web applications. JSON data can be used by a large number of programming languages and is becoming the de-facto standard in replacing XML.
JSON's main security concern is JSON text dynamically embedded in JavaScript; because of this, injection is a very real vulnerability. A vulnerable program may inadvertently run a malicious script, or store the malicious script in a database. This is a very real possibility when dealing with data retrieved from the Internet.
The code reviewer needs to make sure the JSON is not used with the JavaScript eval function; make sure JSON.parse(...) is used instead.
var parsed_object = eval("(" + json_text + ")"); // Red flag for the code reviewer.
var parsed_object = JSON.parse(json_text);       // Much better than using the JavaScript eval function.
The code reviewer should also check that the developer is not attempting to reject known bad patterns in text/string data; using regex or other devices for this is fraught with error and makes testing for correctness very hard. Allow only whitelisted alphanumeric keywords and carefully validated numbers.
Do not allow JSON data to construct dynamic HTML. Always use safe DOM features like innerText or createTextNode(...).
7.4 Object/Relational Mapping (ORM)
Object/Relational Mapping (ORM) facilitates the storage and retrieval of domain objects via a query layer such as HQL (Hibernate Query Language) or the .NET Entity Framework.
It is a very common misconception that ORM solutions, like Hibernate, are SQL injection proof. They are not. ORMs allow the use of "native SQL", which is prone to SQL injection, and their proprietary query languages, such as HQL, are prone to HQL (or ORM) injection. LINQ is not SQL and because of that is not prone to SQL injection; however, using ExecuteQuery or ExecuteCommand via LINQ bypasses LINQ's protection mechanisms and is vulnerable to SQL injection.

Bad Java Code Examples
List results = session.createQuery(“from Items as item where item.id = “ + currentItem.getId()).list();
NHibernate is the same as Hibernate except it is an ORM solution for the Microsoft .NET platform. NHibernate is also vulnerable to SQL injection if used with dynamic queries.
Bad .Net Code Example
string userName = ctx.GetAuthenticatedUserName();
String query = “SELECT * FROM Items WHERE owner = ‘”
+ userName + “’ AND itemname = ‘”
+ ItemName.Text + “’”;
List items = sess.CreateSQLQuery(query).List();

Code Reviewer Action
The code reviewer needs to make sure any data used in an HQL query uses HQL parameterized queries, so that the data is treated as data and not as code. Reviewers can also recommend the Criteria API, see https://docs.jboss.org/hibernate/orm/3.3/reference/en/html/querycriteria.html
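For instance, the bad Hibernate example shown earlier could be rewritten with a named parameter so that the id value is bound as data rather than concatenated into the query; this is a minimal sketch assuming the same 'Items' entity, 'session' and 'currentItem' objects:

List results = session.createQuery("from Items as item where item.id = :id")
                      .setParameter("id", currentItem.getId())
                      .list();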
7.5 Content Security Policy (CSP)
Content Security Policy (CSP) is a W3C specification offering the possibility to instruct the client browser from which locations and/or which types of resources are allowed to be loaded. To define a loading behavior, the CSP specification uses "directives", where a directive defines a loading behavior for a target resource type. CSP helps to detect and mitigate certain types of attacks, including Cross-Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement or distribution of malware.
Directives can be specified using an HTTP response header (a server may send more than one CSP HTTP header field with a given resource representation, and may send different CSP header field values with different representations of the same resource or with different resources) or an HTML meta tag. The HTTP headers below are defined by the specs:
• Content-Security-Policy : Defined by W3C Specs as standard header, used by Chrome version 25 and later,
Firefox version 23 and later, Opera version 19 and later.
• X-Content-Security-Policy : Used by Firefox until version 23, and Internet Explorer version 10 (which partially
implements Content Security Policy).
• X-WebKit-CSP : Used by Chrome until version 25
Risk
The risks with CSP come from two main sources:
• Policy misconfiguration.
• Too permissive policies.
What to Review
Code reviewer needs to understand what content security policies were required by application design and
how these policies are tested to ensure they are in use by the application.

Useful security-related HTTP headers
In most architectures these headers can be set in the web server's configuration without changing the actual application code. This offers a significantly faster and cheaper method for at least partial mitigation of existing issues, and an additional layer of defense for new applications.
Table 11: Security Related HTTP Headers
Header: Strict-Transport-Security (https://tools.ietf.org/html/rfc6797)
Description: HTTP Strict-Transport-Security (HSTS) enforces secure (HTTP over SSL/TLS) connections to the server. This reduces the impact of bugs in web applications leaking session data through cookies and external links, and defends against man-in-the-middle attacks. HSTS also disables the ability for users to ignore SSL negotiation warnings.
Example: Strict-Transport-Security: max-age=16070400; includeSubDomains

Header: X-Frame-Options / Frame-Options (https://tools.ietf.org/html/draft-ietf-websec-x-frame-options-01, https://tools.ietf.org/html/draft-ietf-websec-frame-options-00)
Description: Provides clickjacking protection. Values: deny - no rendering within a frame; sameorigin - no rendering if origin mismatch; allow-from: DOMAIN - allow rendering if framed by a frame loaded from DOMAIN.
Example: X-Frame-Options: deny

Header: X-XSS-Protection (http://blogs.msdn.com/b/ie/archive/2008/07/02/ie8-security-part-iv-the-xss-filter.aspx)
Description: This header enables the cross-site scripting (XSS) filter built into most recent web browsers. It is usually enabled by default anyway, so the role of this header is to re-enable the filter for this particular website if it was disabled by the user. This header is supported in IE 8+, and in Chrome (the anti-XSS filter was added in Chrome 4; it is unknown whether that version honored this header).
Example: X-XSS-Protection: 1; mode=block

Header: X-Content-Type-Options (https://blogs.msdn.microsoft.com/ie/2008/09/02/ie8-security-part-vi-beta-2-update/)
Description: The only defined value, "nosniff", prevents Internet Explorer and Google Chrome from MIME-sniffing a response away from the declared content-type. This also applies to Google Chrome when downloading extensions. This reduces exposure to drive-by download attacks and to sites serving user-uploaded content that, by clever naming, could be treated by MSIE as executable or dynamic HTML files.
Example: X-Content-Type-Options: nosniff

Header: Content-Security-Policy, X-Content-Security-Policy, X-WebKit-CSP (https://www.w3.org/TR/CSP/)
Description: Content Security Policy requires careful tuning and precise definition of the policy. If enabled, CSP has a significant impact on the way the browser renders pages (e.g. inline JavaScript is disabled by default and must be explicitly allowed in the policy). CSP prevents a wide range of attacks, including cross-site scripting and other cross-site injections.
Example: Content-Security-Policy: default-src 'self'

Header: Content-Security-Policy-Report-Only (https://www.w3.org/TR/CSP/)
Description: Like Content-Security-Policy, but only reports violations. Useful during implementation, tuning and testing efforts.
Example: Content-Security-Policy-Report-Only: default-src 'self'; report-uri http://loghost.example.com/reports.jsp

Note the Spring Security library can assist with these headers, see http://docs.spring.io/spring-security/site/
docs/current/reference/html/headers.html
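Where the headers cannot be set in the web server configuration, they can also be added in application code. The sketch below is a minimal Java servlet filter (the class name and header values are illustrative only; each policy must be tuned to the application) that sets several of the headers from Table 11:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SecurityHeadersFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Example values only; tune each header to the application's needs.
        response.setHeader("Strict-Transport-Security", "max-age=16070400; includeSubDomains");
        response.setHeader("X-Frame-Options", "deny");
        response.setHeader("X-XSS-Protection", "1; mode=block");
        response.setHeader("X-Content-Type-Options", "nosniff");
        response.setHeader("Content-Security-Policy", "default-src 'self'");
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}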

References
Apache: http://httpd.apache.org/docs/2.0/mod/mod_headers.html
IIS: http://technet.microsoft.com/pl-pl/library/cc753133(v=ws.10).aspx
7.6 Input Validation
Input validation is one of the most effective technical controls for application security. It can mitigate numerous vulnerabilities including cross-site scripting, various forms of injection, and some buffer overflows. Input validation is more than checking form field values.
All data from users needs to be considered untrusted. Remember, one of the top rules of secure coding is "Don't trust user input". Always validate user data with the full knowledge of what your application is trying to accomplish.
Regular expressions can be used to validate user input, but the more complicated the regular expressions are, the greater the chance they are not foolproof and contain errors for corner cases. Regular expressions are also very hard for QA to test, and complex regular expressions make it hard for the code reviewer to do a good review of them.
Data Validation
All external input to the system (and between systems/applications) should undergo input validation. The
validation rules are defined by the business requirements for the application. If possible, an exact match validator should be implemented. Exact match only permits data that conforms to an expected value. A “Known
good” approach (white-list), which is a little weaker, but more flexible, is common. Known good only permits
characters/ASCII ranges defined within a white-list.
Such a range is defined by the business requirements of the input field. Another approach to data validation is "known bad", which is a black list of "bad characters"; this approach is not future proof and needs maintenance. "Encode bad" is weaker still, as it simply encodes characters considered "bad" into a format that should not affect the functionality of the application.
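As an illustration of the exact match and known good approaches, the sketch below validates one field against a fixed set of expected values and another against a whitelist of permitted characters (the field names, value set and pattern are assumptions chosen for illustration):

import java.util.Set;
import java.util.regex.Pattern;

public class InputValidator {

    // Exact match: only values from a fixed, expected set are accepted.
    private static final Set<String> ALLOWED_SORT_ORDERS = Set.of("asc", "desc");

    // Known good (white-list): letters, digits, space and hyphen, 1 to 50 characters.
    private static final Pattern NAME_WHITELIST = Pattern.compile("^[a-zA-Z0-9 -]{1,50}$");

    public static boolean isValidSortOrder(String input) {
        return input != null && ALLOWED_SORT_ORDERS.contains(input);
    }

    public static boolean isValidName(String input) {
        return input != null && NAME_WHITELIST.matcher(input).matches();
    }
}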
Business Validation
Business validation is concerned with business logic. An understanding of the business logic is required prior to reviewing the code which performs it. Business validation could be used to limit the value range of a transaction entered by a user, or to reject input which does not make business sense. Reviewing code for business validation can also include checking for rounding errors or floating point issues, which may give rise to problems such as integer overflows that can dramatically damage the bottom line.
Canonicalization
Canonicalization is the process by which various equivalent forms of a name can be resolved to a single standard name, or the “canonical” name.
The most popular encodings are UTF-8, UTF-16, and so on (which are described in detail in RFC 2279). A single
character, such as a period/full-stop (.), may be represented in many different ways: ASCII 2E, Unicode C0 AE,
and many others.
With the myriad ways of encoding user input, a web application’s filters can be easily circumvented if they’re
not carefully built.

Bad Example
Sample 7.11
public static void main(String[] args) {
File x = new File(“/cmd/” + args[1]);
String absPath = x.getAbsolutePath();
}

Good Example
Sample 7.12
public static void main(String[] args) throws IOException {
    File x = new File("/cmd/" + args[1]);
    String canonicalPath = x.getCanonicalPath();
}

.NET Request Validation
One solution is to use .NET "Request Validation". Using request validation is a good start on validating user data, and it is useful, but the downside is that it is too generic and not specific enough to meet all of the requirements needed to fully trust user data.
You should never rely on request validation alone to secure your application against cross-site scripting attacks.
The following example shows how to use a static method in the Uri class to determine whether the Uri provided by a user is valid.
var isValidUri = Uri.IsWellFormedUriString(passedUri, UriKind.Absolute);
However, to sufficiently verify the Uri, you should also check to make sure it specifies http or https. The following example uses instance methods to verify that the Uri is valid.
var uriToVerify = new Uri(passedUri);
var isValidUri = uriToVerify.IsWellFormedOriginalString();
var isValidScheme = uriToVerify.Scheme == “http” || uriToVerify.Scheme == “https”;
Before rendering user input as HTML or including user input in a SQL query, encode the values to ensure malicious code is not included.
You can HTML encode the value in markup with the <%: %> syntax, as shown below.
<%: userInput %>
Or, in Razor syntax, you can HTML encode with @, as shown below.

@userInput
The next example shows how to HTML encode a value in code-behind.
var encodedInput = Server.HtmlEncode(userInput);
Managed Code and Non-Managed Code
Both Java and .NET have the concept of managed and unmanaged (native) code; managed code benefits from runtime protections, such as type and bounds checking, that native code does not. To keep some of these protections during the invocation of native code, do not declare a native method public. Instead, declare it private and expose the functionality through a public wrapper method. A wrapper can safely perform any necessary input validation prior to the invocation of the native method:
Java Sample code to call a Native Method with Data Validation in place
Sample 7.13
public final class NativeMethodWrapper {
    private native void nativeOperation(byte[] data, int offset, int len);

    public void doOperation(byte[] data, int offset, int len) {
        // copy mutable input
        data = data.clone();

        // validate input
        // Note: offset+len would be subject to integer overflow.
        // For instance if offset = 1 and len = Integer.MAX_VALUE,
        // then offset+len == Integer.MIN_VALUE, which is lower
        // than data.length.
        // Further, loops of the form
        //   for (int i = offset; i < offset + len; i++) { ... }
        // would be affected by the same overflow, so the check below
        // avoids computing offset + len directly.
        if (offset < 0 || len < 0 || offset > data.length - len) {
            throw new IllegalArgumentException();
        }
        nativeOperation(data, offset, len);
    }
}

Data validations checklist for the Code Reviewer.
• Ensure that a Data Validation mechanism is present.

• Ensure all input that can (and will) be modified by a malicious user, such as HTTP headers, input fields, hidden fields, drop down lists, and other web components, is properly validated.
• Ensure that the proper length checks on all input exist.
• Ensure that all fields, cookies, http headers/bodies, and form fields are validated.
• Ensure that the data is well formed and contains only known good chars if possible.
• Ensure that the data validation occurs on the server side.
• Examine where data validation occurs and if a centralized model or decentralized model is used.
• Ensure there are no backdoors in the data validation model.
• “Golden Rule: All external input, no matter what it is, will be examined and validated.”
Resources:
http://msdn.microsoft.com/en-us/library/vstudio/system.uri

A2
BROKEN AUTHENTICATION AND
SESSION MANAGEMENT

8.1 Overview
Web applications and web services both use authentication as the primary means of access control, from logins via user id and passwords. This control is essential to prevent confidential files, data, or web pages from being accessed by hackers or users who do not have the necessary access control level.
8.2 Description
Authentication is important, as it is the gateway to the functionality you are wishing to protect. Once a user is authenticated their requests will be authorized to perform some level of interaction with your application that non-authenticated users will be barred from. You cannot control how users manage their authentication information or tokens, but you can ensure there is no way to perform application functions without proper authentication occurring.
There are many forms of authentication with passwords being the most common. Other forms include client
certificates, biometrics, one time passwords over SMS or special devices, or authentication frameworks such as
Open Authorization (OAUTH) or Single Sign On (SSO).
Typically authentication is done once, when the user logs into a website, and successful authentication results in a web session being set up for the user (see Session Management). Further (and stronger) authentication can be subsequently requested if the user attempts to perform a high risk function, for example a bank user could be asked to confirm a 6 digit number that was sent to their registered phone number before money is allowed to be transferred.
Authentication is just as important within a company's firewall as outside it. Attackers should not be able to run free on a company's internal applications simply because they found a way in through the firewall. Also, separation of privilege (or duties) means someone working in accounts should not be able to modify code in a repository, and application managers should not be able to edit the payroll spreadsheets.
8.3 What to Review
When reviewing code modules which perform authentication functions, some common issues to look out for
include:
• Ensure the login page is only available over TLS. Some sites leave the login page as HTTP, but make the form submission URL HTTPS so that the user's username and password are encrypted when sent to the server. However, if the login page is not secured, a risk exists for a man-in-the-middle to modify the form submission URL to an HTTP URL, so that when the user enters their username and password they are sent in the clear.
• Make sure your usernames/user-ids are case insensitive. Many sites use email addresses for usernames, and email addresses are already case insensitive. Regardless, it would be very strange for user 'smith' and user 'Smith' to be different users; that could result in serious confusion.
• Ensure failure messages for invalid usernames or passwords do not leak information. If the error message
indicates the username was valid, but the password was wrong, then attackers will know that username exists.
If the password was wrong, do not indicate how it was wrong.
• Make sure that every character the user types in is actually included in the password.

• Do not log invalid passwords. Many times an e-mail address is used as the username, and those users will have a few passwords memorized but may forget which one they used on your web site. The first time they may use a password that is invalid for your site but valid for many other sites used by this user (identified by the username). If you log that username and password combination, and that log leaks out, this low level compromise of your site could negatively affect many other sites.
• Longer passwords provide a greater combination of characters and consequently make it more difficult for
an attacker to guess. Minimum length of the passwords should be enforced by the application. Passwords
shorter than 10 characters are considered to be weak. Passphrases should be encouraged. For more on password lengths see the OWASP Authentication Cheat Sheet.
• To prevent brute force attacks, implement temporary account lockouts or rate limit login responses. If a user
fails to provide the correct username and password 5 times, then lock the account for X minutes, or implement
logic where login responses take an extra 10 seconds. Be careful though, this could leak the fact that the username is valid to attackers continually trying random usernames, so as an extra measure, consider implementing the same logic for invalid usernames.
• For internal systems, consider forcing the users to change passwords after a set period of time, and store a
reference (e.g. hash) of the last 5 or more passwords to ensure the user is not simply re-using their old password.
• Password complexity should be enforced by making users choose password strings that include various type of
characters (e.g. upper- and lower-case letters, numbers, punctuation, etc.). Ideally, the application would indicate to
the user as they type in their new password how much of the complexity policy their new password meets. For more
on password complexity see the OWASP Authentication Cheat Sheet.
• It is common for an application to have a mechanism that provides a means for a user to gain access to their account in the event they forget their password. This is an example of web site functionality that is invoked by unauthenticated users (as they have not provided their password). Ensure interfaces like this are protected from misuse; if asking for a password reminder results in an e-mail or SMS being sent to the registered user, do not allow the password reset feature to be used by attackers to spam the user by constantly entering the username into this form. Please see the Forgot Password Cheat Sheet for details on this feature.
• It is critical for an application to store passwords using the right cryptographic technique (see the sketch below). Please see the Password Storage Cheat Sheet for details on this feature.
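As a brief illustration of what the reviewer should expect to see, the sketch below derives a salted PBKDF2 hash of a password before storage (the iteration count and key length are illustrative assumptions; consult the Password Storage Cheat Sheet for current recommendations):

import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {

    public static String hash(char[] password) throws GeneralSecurityException {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // Illustrative parameters: 100,000 iterations, 256-bit derived key.
        KeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] hash = factory.generateSecret(spec).getEncoded();

        // Store the salt and the hash together; never store the plain password.
        return Base64.getEncoder().encodeToString(salt) + ":"
                + Base64.getEncoder().encodeToString(hash);
    }
}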
When reviewing MVC .NET it is important to make sure the application transmits and receives data over a secure link. It is recommended that all web pages, not just login pages, use SSL/TLS.
We need to protect session cookies, which effectively serve as the user's credentials.
Sample 8.1
public static void RegisterGlobalFilters(GlobalFilterCollection filters) {
    filters.Add(new RequireHttpsAttribute());
}

In the global.asax file we can review the RegisterGlobalFilters method.

The attribute RequireHttpsAttribute() can be used to make sure the application runs over SSL/TLS; it is recommended that this be enabled for SSL/TLS sites.
• For high risk functions, e.g. banking transactions, user profile updates, etc., utilize multi-factor authentication
(MFA). This also mitigates against CSRF and session hijacking attacks. MFA is using more than one authentication
factor to logon or process a transaction:
• Something you know (account details or passwords)
• Something you have (tokens or mobile phones)
• Something you are (biometrics)
• If the client machine is in a controlled environment utilize SSL Client Authentication, also known as two-way
SSL authentication, which consists of both browser and server sending their respective SSL certificates during the TLS
handshake process. This provides stronger authentication as the server administrator can create and issue client certificates, allowing the server to only trust login requests from machines that have this client SSL certificate installed.
Secure transmission of the client certificate is important.
References
• https://www.owasp.org/index.php/Authentication_Cheat_Sheet
• http://csrc.nist.gov/publications/nistpubs/800-132/nist-sp800-132.pdf
• http://www.codeproject.com/Articles/326574/An-Introduction-to-Mutual-SSL-Authentication
• https://cwe.mitre.org/data/definitions/287.html Improper Authentication
• OWASP ASVS requirements areas for Authentication (V2)
8.4 Forgot Password
Overview
If your web site needs user authentication, then it will most likely require a user name and password to authenticate user access. However, as computer systems have increased in complexity, so has the complexity of authenticating users. As a result, the code reviewer needs to be aware of the benefits and drawbacks of user authentication, referred to as the "Direct Authentication" pattern in this section. This section emphasizes design patterns for when users forget their user id and/or password, what the code reviewer needs to consider when reviewing how a forgotten user id or password can be recovered, and how to do this in a secure manner.
General considerations
Notify the user by an out-of-band channel (phone/SMS, email) such that the user has to click a link in the email that takes them to your site and asks them to enter a new password.
Ask the user to enter login credentials they already have (Facebook, Twitter, Google, Microsoft Live, OpenID, etc.) to validate the user before allowing them to change their password.
Send a notification to the user to confirm registration or forgot password usage.
Send notifications that account information has been changed to the registered email address, and set an appropriate timeout value, i.e. if the user does not respond to the email within 48 hours then the user will be frozen out of the system until they re-affirm the password change.
• The identity and shared secret/password must be transferred using encryption to provide data confidentiality.

• A shared secret can never be stored in clear text format, even if only for a short time in a message queue.
• A shared secret must always be stored in hashed or encrypted format in a database.
• The organization storing the encrypted shared secret does not need the ability to view or decrypt users' passwords. A user's password must never be sent back to the user.
• If the client must cache the username and password for presentation for subsequent calls to a Web service
then a secure cache mechanism needs to be in place to protect user name and password.
• When reporting an invalid entry back to a user, neither the username nor the password should be identified as being the invalid item. User feedback/error messages must treat the user name and password as one item, "user credential", i.e. "The username or password you entered is incorrect."
8.5 CAPTCHA
Overview
CAPTCHA (an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is an access control technique.
CAPTCHA is used to prevent automated software from gaining access to webmail services like Gmail, Hotmail and Yahoo to create e-mail spam, automated postings to blogs, forums and wikis for the purpose of promotion (commercial and/or political) or harassment and vandalism, and automated account creation.
CAPTCHAs have proved useful and their use has been upheld in court. Circumventing CAPTCHA has been held in US courts to be a violation of the Digital Millennium Copyright Act anti-circumvention section 1201(a)(3) and European Directive 2001/29/EC.
General considerations
When reviewing CAPTCHA code, the reviewer needs to pay attention to the following rules to make sure the CAPTCHA is built on strong security principles.
• Do not allow the user to enter multiple guesses after an incorrect attempt.
• The software designer and code reviewer need to understand the statistics of guessing. For example, one CAPTCHA design shows four pictures (3 cats and 1 boat) and asks the user to pick the picture that is not in the same category as the others; automated software will have a 25% success rate by always picking the first picture. Also, depending on the fixed pool of CAPTCHA images, over time an attacker can build a database of correct answers and then gain 100% access.
• Consider passing a key to the server that carries an HMAC (hash-based message authentication code) of the answer, as sketched below.
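The HMAC suggestion above can be sketched as follows; this is a minimal illustration assuming a server-side secret key held outside the code, so that the expected answer is never exposed to the client in clear form:

import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class CaptchaHmac {

    // The secret key is an assumption here; in practice load it from protected configuration.
    public static String hmacOfAnswer(String answer, byte[] secretKey) throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(answer.toLowerCase().getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(tag);
    }
}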
Text-based CAPTCHAs should adhere to the following security design principles:
1. Randomize the CAPTCHA length: Don’t use a fixed length; it gives too much information to the attacker.
2. Randomize the character size: Make sure the attacker can’t make educated guesses by using several font
sizes / several fonts.
3. Wave the CAPTCHA: Waving the CAPTCHA increases the difficulty for the attacker.
4. Don't use a complex charset: using a large charset does not significantly improve the CAPTCHA scheme's security and really hurts human accuracy.
5. Use anti-recognition techniques as a means of strengthening CAPTCHA security: rotation, scaling and rotating some characters and using various font sizes will reduce recognition efficiency and increase security by making character width less predictable.
6. Keep the lines within the CAPTCHA: lines must cross only some of the CAPTCHA letters, so that it is impossible to tell whether it is a line or a character segment.
7. Use large lines: using lines that are not as wide as the character segments gives an attacker a robust discriminator and makes the line anti-segmentation technique vulnerable to many attack techniques.
8. CAPTCHA does create issues for web sites that must be ADA (Americans with Disabilities Act of 1990) compliant. The code reviewer may need to be aware of web accessibility as well as security when reviewing the CAPTCHA implementation where the web site is required to be ADA compliant by law.
References
• UNITED STATES of AMERICA vs KENNETH LOWSON, KRISTOFER KIRSCH, LOEL STEVENSON Federal Indictment.
February 23, 2010. Retrieved 2012-01-02.
• http://www.google.com/recaptcha/captcha
• http://www.ada.gov/anprm2010/web%20anprm_2010.htm
• Inaccessibility of CAPTCHA - Alternatives to Visual Turing Tests on the Web http://www.w3.org/TR/turingtest/
8.6 Out-of-Band Communication
Overview
The term 'out-of-band' is commonly used when a web application communicates with an end user over a channel separate from the HTTP requests/responses conducted through the user's web browser. Common examples include text/SMS, phone calls, e-mail and regular mail.
Description
The main reason an application would wish to communicate with the end user via these separate channels is security. A username and password combination could be sufficient authentication to allow a user to browse and use non-sensitive parts of a website, however more sensitive (or risky) functions could require a stronger form of authentication. A username and password could have been stolen through an infected computer, social engineering, a database leak or other attacks, meaning the web application cannot place too much trust in the assumption that a web session providing the valid username and password combination really belongs to the intended user.
Examples of sensitive operations could include:
• Changing password
• Changing account details, such as e-mail address, home address, etc
• Transferring funds in a banking application
• Submitting, modifying or cancelling orders
In these cases many applications will communicate with you via a channel other than the browsing session. Many large on-line stores will send you confirmation e-mails when you change account details or purchase items. This protects against the case where an attacker has the username and password: if they buy something, the legitimate user will get an e-mail and have a chance to cancel the order or alert the website that they did not modify the account.

When out-of-band techniques are performed for authentication it is termed two-factor authentication. There
are three ways to authenticate:
1. Something you know (e.g. password, passphrase, memorized PIN)
2. Something you have (e.g. mobile phone, cash card, RSA token)
3. Something you are (e.g. iris scan, fingerprint)
If a banking website allows users to initiate transactions online, it could use two-factor authentication by taking 1) the password used to log in and 2) a PIN sent over SMS to the user's registered phone, and then requiring the user to enter the PIN before completing the transaction. This requires something the user knows (the password) and something the user has (the phone that receives the PIN).
A 'chip-and-pin' banking card also uses two-factor authentication, by requiring users to have the card with them (something they have) and also to enter a PIN when performing a purchase (something they know). A 'chip-and-pin' card is of no use without the PIN number; likewise, knowing the PIN number is useless if you do not have the card.
What to Review
When reviewing code modules which perform out-of-band functions, some common issues to look out for
include:
1. Recognize the risk of the system being abused. Attackers would like to flood someone with SMS messages from your site, or send e-mails to random people. Ensure that:
2. When possible, only authenticated users can access links that cause an out-of-band feature to be invoked
(forgot password being an exception).
3. Rate limit the interface, thus users with infected machines, or hacked accounts, can’t use it to flood
out-of-band messages to a user.
4. Do not allow the feature to accept the destination from the user, only use registered phone numbers,
e-mails, addresses.
5. For high risk sites (e.g. banking) the users phone number can be registered in person rather than via the
web site.
6. Do not send any personal or authentication information in the out-of-band communication.
7. Ensure any PINs or passwords sent over out-of-band channels have a short life-span and are random.
8. A consideration can be to prevent SMS messages from being sent to the device currently conducting the browsing session (i.e. the device the user is browsing on, to which the SMS would be sent). However this can be hard to enforce. If possible, give users the choice of which band they wish to use. For banking sites, Zitmo malware on mobile devices (see references) can intercept the SMS messages; however, iOS devices have not been affected by this malware yet, so users could choose to have SMS PINs sent to their Apple devices, and when on Android they could use recorded voice messages to receive the PIN.
9. In typical deployments, specialized hardware/software separate from the web application will handle the out-of-band communication, including the creation of temporary PINs and possibly passwords. In this scenario there is no need to expose the PIN/password to your web application (increasing the risk of exposure); instead, the web application should query the specialized hardware/software with the PIN/password supplied by the end user, and receive a positive or negative response.
Many sectors including the banking sector have regulations requiring the use of two-factor authentication for certain
types of transactions. In other cases two-factor authentication can reduce costs due to fraud and re-assure customers
of the security of a website.
References
• https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet
• http://securelist.com/blog/virus-watch/57860/new-zitmo-for-android-and-blackberry/
8.7 Session Management
Overview
A web session is a sequence of network HTTP request and response transactions associated to the same user. Session
management or state is needed by web applications that require the retaining of information or status about each
user for the duration of multiple requests. Therefore, sessions provide the ability to establish variables – such as access
rights and localization settings – which will apply to each and every interaction a user has with the web application
for the duration of the session.
Description
Code reviewer needs to understand what session techniques the developers used, and how to spot vulnerabilities
that may create potential security risks. Web applications can create sessions to keep track of anonymous users after
the very first user request. An example would be maintaining the user language preference. Additionally, web applications will make use of sessions once the user has authenticated. This ensures the ability to identify the user on
any subsequent requests as well as being able to apply security access controls, authorized access to the user private
data, and to increase the usability of the application. Therefore, current web applications can provide session capabilities both pre and post authentication.
The session ID or token binds the user authentication credentials (in the form of a user session) to the user's HTTP traffic and the appropriate access controls enforced by the web application. The complexity of these three components (authentication, session management, and access control) in modern web applications, plus the fact that their implementation and binding rests in the web developer's hands (as web development frameworks do not provide strict relationships between these modules), makes the implementation of a secure session management module very challenging.
The disclosure, capture, prediction, brute force, or fixation of the session ID will lead to session hijacking (or
sidejacking) attacks, where an attacker is able to fully impersonate a victim user in the web application. Attackers can perform two types of session hijacking attacks, targeted or generic. In a targeted attack, the attacker’s
goal is to impersonate a specific (or privileged) web application victim user. For generic attacks, the attacker’s
goal is to impersonate (or get access as) any valid or legitimate user in the web application.
With the goal of implementing secure session IDs, the generation of identifiers (IDs or tokens) must meet the
following properties:
• The name used by the session ID should not be extremely descriptive nor offer unnecessary details about the
purpose and meaning of the ID.
• It is recommended to change the default session ID name of the web development framework to a generic
name, such as “id”.
• The session ID length must be at least 128 bits (16 bytes) (The session ID value must provide at least 64 bits
of entropy).

• The session ID content (or value) must be meaningless to prevent information disclosure attacks, where an
attacker is able to decode the contents of the ID and extract details of the user, the session, or the inner workings of the web application.
It is recommended to create cryptographically strong session IDs through the usage of cryptographic hash
functions such as SHA2 (256 bits).
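In practice the web framework's session manager should generate the identifier, but as an illustration of the length and entropy requirements above, the sketch below produces a 128-bit cryptographically random session ID:

import java.security.SecureRandom;
import java.util.Base64;

public class SessionIdGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    public static String newSessionId() {
        byte[] bytes = new byte[16]; // 128 bits of cryptographically strong randomness
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}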
What to Review
Require cookies when your application includes authentication. The code reviewer needs to understand what information is stored in the application's cookies. Risk management is needed to decide whether sensitive information is stored in the cookie, in which case the cookie must be restricted to SSL/TLS (the secure flag).
.Net ASPX web.config
Sample 8.2
<httpCookies requireSSL="true" httpOnlyCookies="true" />  <!-- cookies only sent over SSL/TLS -->
Java web.xml
Sample 8.3
<session-config>
    <cookie-config>
        <secure>true</secure>
    </cookie-config>
</session-config>
PHP.INI
Sample 8.4
void session_set_cookie_params ( int $lifetime [, string $path [, string $domain [, bool $secure = true [, bool $httponly = true ]]]] )

Session Expiration
In reviewing session handling code the reviewer needs to understand what expiration timeouts are needed by the web application, or whether default session timeouts are being used. Insufficient session expiration by the web application increases the exposure to other session-based attacks: for an attacker to be able to reuse a valid session ID and hijack the associated session, the session must still be active.
Remember for secure coding one of our goals is to reduce the attack surface of our application.
.Net ASPX
Sample 8.5
<sessionState timeout="15" />

In ASPX the developer can change the default timeout for a session. The configuration above in the web.config file sets the session timeout to 15 minutes; the default timeout for an ASPX session is 20 minutes.
Java
Sample 8.6
<session-config>
    <session-timeout>1</session-timeout>  <!-- timeout in minutes -->
</session-config>

PHP
PHP does not have a built-in session timeout mechanism; PHP developers will need to create their own custom session timeout.
Session Logout/Ending
Web applications should provide a mechanism that allows security-aware users to actively close their session once they have finished using the web application.
In .NET ASPX the Session.Abandon() method destroys all the objects stored in a Session object and releases their resources. If you do not call the Abandon method explicitly, the server destroys these objects when the session times out. You should call it when the user logs out. Session.Clear() removes all keys and values from the session but does not change the session ID; use it if you do not want the user to have to log in again but do want to reset all the session-specific data.
Session Attacks
Generally three sorts of session attacks are possible:
• Session Hijacking: stealing someone’s session-id, and using it to impersonate that user.
• Session Fixation: setting someone’s session-id to a predefined value, and impersonating them using that
known value
• Session Elevation: when the importance of a session is changed, but its ID is not.

Session Hijacking
• Mostly done via XSS attacks; mostly preventable with HTTP-Only session cookies (unless JavaScript code requires access to them).
• It's generally a good idea for JavaScript not to need access to session cookies, as preventing all flavors of XSS is usually the toughest part of hardening a system.
• Session IDs should be placed inside cookies, and not in URLs. URL information is stored in the browser's history and in HTTP Referers, and can be accessed by attackers.
• As cookies can be accessed by default from JavaScript, and preventing all flavors of XSS is usually the toughest part of hardening a system, there is an attribute called "HttpOnly" that forbids this access. The session cookie should have this attribute set. In any case, as there is no need to access a session cookie from the client, you should get suspicious about client side code that depends on this access.
• Geographical location checking can help detect simple hijacking scenarios. Advanced hijackers use the same IP (or range) as the victim.
• An active session should be warned when it is accessed from another location.
• An active user should be warned when they have an active session somewhere else (if the policy allows multiple sessions for a single user).
Session Fixation
• If the application sees a new session ID that is not present in its pool, it should be rejected and a new session ID should be advertised. This is the sole method to prevent fixation.
• All session IDs should be generated by the application and then stored in a pool to be checked against later. The application is the sole authority for session generation.
Session Elevation
• Whenever a session is elevated (login, logout, certain authorization), it should be rolled.
• Many applications create sessions for visitors as well (and not just authenticated users). They should
definitely roll the session on elevation, because the user expects the application to treat them securely after
they login.
• When a down-elevation occurs, the session information regarding the higher level should be flushed.
• Sessions should be rolled when they are elevated. Rolling means that the session-id should be changed, and
the session information should be transferred to the new id.
Server-Side Defenses for Session Management
.NET ASPX
Generating new session IDs on elevation supports session rolling and helps prevent session fixation and hijacking.
Sample 8.7
public class GuidSessionIDManager : SessionIDManager {

    public override string CreateSessionID(HttpContext context) {
        return Guid.NewGuid().ToString();
    }

    public override bool Validate(string id) {
        try {
            Guid testGuid = new Guid(id);
            if (id == testGuid.ToString())
                return true;
        } catch (Exception e) {
            throw e;
        }
        return false;
    }
}
Java
Sample 8.8
request.getSession(false).invalidate();
// and then create a new session with
request.getSession(true); // or simply request.getSession()

PHP.INI
Sample 8.9
session.use_trans_sid = 0
session.use_only_cookies = 1

References
• https://www.owasp.org/index.php/SecureFlag

A3
CROSS-SITE SCRIPTING (XSS)

9.1 Overview
What is Cross-Site Scripting (XSS)?
Cross-site scripting (XSS) is a type of coding vulnerability usually found in web applications. XSS enables attackers to inject malicious scripts into web pages viewed by other users, and may allow attackers to bypass access controls such as the same-origin policy. According to the OWASP Top 10 it is one of the most common vulnerabilities, and Symantec in its annual threat report found that XSS was the number two vulnerability found on web servers. The severity/risk of this vulnerability may range from a nuisance to a major security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigations implemented by the site's organization.
Description
There are three types of XSS: Reflected XSS (non-persistent), Stored XSS (persistent), and DOM-based XSS. Each type uses a different means of delivering a malicious payload, but the important takeaway is that the consequences are the same.
What to Review
Cross-site scripting flaws can be difficult to identify and remove from a web application. The best way to search for flaws is to perform an intensive code review and look for all places where user input from an HTTP request could possibly make its way into the HTML output.
The code reviewer needs to closely review:
1. That untrusted data is not transmitted in the same HTTP responses as HTML or JavaScript.
2. That when data is transmitted from the server to the client, untrusted data is properly encoded for the context of the HTTP response in which it appears. Do not assume data from the server is safe; best practice is to always check data.
3. When introduced into the DOM, untrusted data MUST be introduced using one of the following APIs:
a. Node.textContent
b. document.createTextNode
c. Element.setAttribute (second parameter only)
The code reviewer should also be aware that HTML tags (such as <script>, <img> and <iframe>) can be used to transmit malicious JavaScript.
Automated web application vulnerability scanners can help find cross-site scripting flaws. However, they cannot find all XSS vulnerabilities, so manual code reviews are important. Manual code reviews won't catch everything either, but a defense-in-depth approach is always the best approach, applied according to your level of risk.
OWASP Zed Attack Proxy (ZAP) is an easy to use integrated penetration-testing tool for finding vulnerabilities in web applications. ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually. It acts as a web proxy that you point your browser at so it can see the traffic going to a site, and it allows you to spider, scan, fuzz, and attack the application. Other scanners are available, both open source and commercial.
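Where untrusted request data is written into an HTML response, the reviewer should expect to see contextual output encoding. The following is a minimal, illustrative servlet sketch using the OWASP Java Encoder; the servlet and parameter names are assumptions, not taken from this guide.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.owasp.encoder.Encode;

// Illustrative servlet: echoes a request parameter into the HTML body safely.
public class GreetingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String name = req.getParameter("name"); // untrusted input
        resp.setContentType("text/html; charset=UTF-8");
        PrintWriter out = resp.getWriter();

        // Encode.forHtml() escapes the characters that are significant in an HTML body
        // context, so injected markup is rendered as text instead of being executed.
        out.println("<p>Hello, " + Encode.forHtml(name) + "</p>");
    }
}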


Use Microsoft's Anti-XSS library
Another level of help in preventing XSS is to use an Anti-XSS library.
Unfortunately, HtmlEncode or a validation feature alone is not enough to deal with XSS, especially if the user input needs to be added to JavaScript code, tag attributes, XML or a URL. In these cases a good option is the Anti-XSS library.
.NET ASPX
1. On ASPX .NET pages the code review should check that the web.config file does not turn off page validation.
2. The .NET Framework 4.0 does not allow page validation to be turned off in the page directive alone. Hence, if the programmer wants to turn off page validation, the developer will need to regress back to the 2.0 validation mode.
3. The code reviewer needs to make sure page validation is never turned off anywhere, and where it is, understand why and the risks it exposes the organization to: <%@ Page Language="C#" ValidateRequest="false" %>
.NET MVC
When MVC web apps are exposed to malicious XSS code, they will throw an error like the following one:
Figure 6: Example .Net XSS Framework Error

To avoid this vulnerability, make sure the following code is included:
<%= Server.HtmlEncode(stringValue) %>
The HtmlEncode method applies HTML encoding to a specified string. This is useful as a quick method of encoding form data and other client request data before using it in your web application. Encoding data converts potentially unsafe characters to their HTML-encoded equivalents (MSDN, 2013).
JavaScript and JavaScript Frameworks
Both JavaScript and JavaScript frameworks are now widely used in web applications. This makes it harder for the code reviewer to know which frameworks do a good job of preventing XSS flaws and which ones don't. The code reviewer should check whether any CVEs exist for the framework being used, and also check that the JavaScript framework is the latest stable version.


OWASP References
• OWASP XSS Prevention Cheat Sheet
• OWASP XSS Filter Evasion Cheat Sheet
• OWASP DOM based XSS Prevention Cheat Sheet
• Testing Guide: 1st 3 chapters on Data Validation Testing
• OWASP Zed Attack Proxy Project
External References
• https://www4.symantec.com/mktginfo/whitepaper/ISTR/21347932_GA-internet-security-threat-report-volume-20-2015-social_v2.pdf
• https://cwe.mitre.org/data/definitions/79.html
• http://webblaze.cs.berkeley.edu/papers/scriptgard.pdf
• http://html5sec.org
• https://cve.mitre.org
9.2 HTML Attribute Encoding
HTML attributes may contain untrusted data. It is important to determine if any of the HTML attributes on a given page contain data from outside the trust boundary.
Some HTML attributes are considered safer than others such as align, alink, alt, bgcolor, border, cellpadding,
cellspacing, class, color, cols, colspan, coords, dir, face, height, hspace, ismap, lang, marginheight, marginwidth,
multiple, nohref, noresize, noshade, nowrap, ref, rel, rev, rows, rowspan, scrolling, shape, span, summary, tabindex, title, usemap, valign, value, vlink, vspace, width.
When reviewing code for XSS we need to look for HTML attributes that are populated with untrusted data, such as the following:

Attacks may take the following format:
"><script>alert(document.cookie)</script>
What is Attribute encoding?
HTML attribute encoding replaces a subset of characters that are important to prevent a string of characters
from breaking the attribute of an HTML element.
This is because the nature of attributes, the data they contain, and how they are parsed and interpreted by a browser or HTML parser is different from how an HTML document and its elements are read (see the OWASP XSS Prevention Cheat Sheet). Except for alphanumeric characters, escape all characters with ASCII values less than
256 with the &#xHH; format (or a named entity if available) to prevent switching out of the attribute. The reason this rule is so broad is that developers frequently leave attributes unquoted. Properly quoted attributes
can only be escaped with the corresponding quote. Unquoted attributes can be broken out of with many
characters, including [space] % * + , - / ; < = > ^ and |.
Attribute encoding may be performed in a number of ways. Two resources are:
1. HttpUtility.HtmlAttributeEncode


http://msdn.microsoft.com/en-us/library/wdek0zbf.aspx
2. OWASP Java Encoder Project
https://www.owasp.org/index.php/OWASP_Java_Encoder_Project
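As a brief, illustrative sketch of attribute encoding in Java (the class, method and variable names below are hypothetical), untrusted data placed into a quoted attribute should pass through an attribute encoder such as the OWASP Java Encoder's Encode.forHtmlAttribute():

import org.owasp.encoder.Encode;

public class AttributeEncodingExample {

    // Builds an <input> element whose value attribute carries untrusted data.
    static String renderSearchBox(String untrustedTerm) {
        // Encode.forHtmlAttribute() escapes quotes and other characters that could
        // break out of the attribute value, keeping the payload inert.
        return "<input type=\"text\" name=\"q\" value=\""
                + Encode.forHtmlAttribute(untrustedTerm) + "\">";
    }

    public static void main(String[] args) {
        // A typical attribute break-out attempt is neutralised:
        System.out.println(renderSearchBox("\" onmouseover=\"alert(1)"));
    }
}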
HTML Entity
HTML elements which contain user-controlled data or data from untrusted sources should be reviewed for contextual output encoding. In the case of HTML entities we need to help ensure HTML entity encoding is performed:
Example HTML Entity containing untrusted data:
HTML Body Context UNTRUSTED DATA OR ...UNTRUSTED DATA  OR
UNTRUSTED DATA
HTML Entity Encoding is required & --> & < --> < > --> > “ --> " ‘ --> ' It is recommended to review where/if untrusted data is placed within entity objects. Searching the source code fro the following encoders may help establish if HTML entity encoding is being done in the application and in a consistent manner. OWASP Java Encoder Project https://www.owasp.org/index.php/OWASP_Java_Encoder_Project ” /> OWASP ESAPI http://code.google.com/p/owasp-esapi-java/source/browse/trunk/src/main/java/org/owasp/esapi/codecs/ HTMLEntityCodec.java String safe = ESAPI.encoder().encodeForHTML( request.getParameter( “input” ) ); JavaScript Parameters Untrusted data, if being placed inside a JavaScript function/code requires validation. Invalidated data may break out of the data context and wind up being executed in the code context on a users browser. Examples of exploitation points (sinks) that are worth reviewing for: A3 - Cross-Site Scripting (XSS) script> attack: ‘);/* BAD STUFF */ eval() Sample 9.1 var txtField = “A1”; var txtUserInput = “’test@google.ie’;alert(1);”; eval( “document.forms[0].” + txtField + “.value =” + A1); jquery Sample 9.2 var txtAlertMsg = “Hello World: “; var txtUserInput = “test The text in the example above has a number of issues. Firstly, it displays the HTTP request to the user in the form of Request.Url.ToString(). Assuming there has been no data validation prior to this point, we are vulnerable to cross site scripting attacks. Secondly, the error message and stack trace is displayed to the user using Server.GetLastError().ToString() which divulges internal information regarding the application. After the Page_Error is called, the Application_Error sub is called. When an error occurs, the Application_Error function is called. In this method we can log the error and redirect to another page. In fact catching errors in Application_Error instead of Page_Error would be an example of centralizing errors as described earlier. Error Handling Sample 20.7 <%@ Import Namespace=”System.Diagnostics” %> Above is an example of code in Global.asax and the Application_Error method. The error is logged and then the user is redirected. Non-validated parameters are being logged here in the form of Request.Path. Care must be taken not to log or display non-validated input from any external source. ‘’’’ Web.config has custom error tags which can be used to handle errors. This is called last and if Page_error or Application_error is called and has functionality, that functionality shall be executed first. If the previous two handling mechanisms do not redirect or clear (Response.Redirect or a Server.ClearError), this will be called and you shall be forwarded to the page defined in web.config in the customErrors section, which is configured as follows: Sample 20.8 ” defaultRedirect=””> ” redirect=””/> The “mode” attribute value of “On” means that custom errors are enabled whilst the “Off” value means that custom errors are disabled. The “mode” attribute can also be set to “RemoteOnly” which specifies that custom errors are shown only to remote clients and ASP.NET errors are shown to requests coming from the the local host. If the “mode” attribute is not set then it defaults to “RemoteOnly”. When an error occurs, if the status code of the response matches one of the error elements, then the relevent ‘redirect’ value is returned as the error page. If the status code does not match then the error page from the ‘defaultRedirect’ attribute will be displayed. 
If no value is set for ‘defaultRedirect’ then a generic IIS error page is returned. An example of the customErrors section completed for an application is as follows: 169 170 Error Handling Sample 20.9 What to Review: Error Handling in Apache In Apache you have two choices in how to return error messages to the client: 1. You can write the error status code into the req object and write the response to appear the way you want, then have you handler return ‘DONE’ (which means the Apache framework will not allow any further handlers/ filters to process the request and will send the response to the client. 2. Your handler or filter code can return pre-defined values which will tell the Apache framework the result of your codes processsing (essentially the HTTP status code). You can then configure what error pages should be returned for each error code. In the interest of centralizing all error code handling, option 2 can make more sense. To return a specific predefined value from your handler, refer to the Apache documentation for the list of values to use, and then return from the handler function as shown in the following example: Sample 20.10 static int my_handler(request_rec *r) { if ( problem_processing() ) { return HTTP_INTERNAL_SERVER_ERROR; } ... continue processing request ... } In the httpd.conf file you can then specify which page should be returned for each error code using the ‘ErrorDocument’ directive. The format of this directive is as follows: • ErrorDocument <3-digit-code> ... where the 3 digit code is the HTTP response code set by the handler, and the action is a local or external URL to be returned, or specific text to display. The following examples are taken from the Apache ErrorDocument documentation (https://httpd.apache.org/docs/2.4/custom-error.html) which contains more information and options on ErrorDocument directives: Error Handling Sample 20.11 ErrorDocument 500 “Sorry, our script crashed. Oh dear” ErrorDocument 500 /cgi-bin/crash-recover ErrorDocument 500 http://error.example.com/server_error.html ErrorDocument 404 /errors/not_found.html ErrorDocument 401 /subscription/how_to_subscribe.html What to Review: Leading Practice for Error Handling Code that might throw exceptions should be in a try block and code that handles exceptions in a catch block. The catch block is a series of statements beginning with the keyword catch, followed by an exception type and an action to be taken. Example: Java Try-Catch: Sample 20.12 public class DoStuff { public static void Main() { try { StreamReader sr = File.OpenText(“stuff.txt”); Console.WriteLine(“Reading line {0}”, sr.ReadLine()); } catch(MyClassExtendedFromException e) { Console.WriteLine(“An error occurred. Please leave to room”); logerror(“Error: “, e); } } } .NET Try–Catch Sample 20.13 public void run() { while (!stop) { try { // Perform work here } catch (Throwable t) { // Log the exception and continue WriteToUser(“An Error has occurred, put the kettle on”); logger.log(Level.SEVERE, “Unexception exception”, t); 171 172 Error Handling } } } C++ Try–Catch Sample 20.14 void perform_fn() { try { // Perform work here } catch ( const MyClassExtendedFromStdException& e) { // Log the exception and continue WriteToUser(“An Error has occurred, put the kettle on”); logger.log(Level.SEVERE, “Unexception exception”, e); } } In general, it is best practice to catch a specific type of exception rather than use the basic catch(Exception) or catch(Throwable) statement in the case of Java. 
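As a hedged illustration of the practice just described (catching a specific exception type, and always releasing resources), the following Java sketch uses hypothetical class and method names; Java 7's try-with-resources would achieve the same clean-up more concisely.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class AccountDao {

    private final DataSource pool;

    public AccountDao(DataSource pool) {
        this.pool = pool;
    }

    public void updateBalance(long accountId, long delta) {
        Connection conn = null;
        try {
            conn = pool.getConnection();
            // ... perform the update using conn ...
        } catch (SQLException e) {
            // Catch the specific exception type; log the detail server-side only
            // and show the user a generic message.
        } finally {
            // Always return the connection to the pool, even on failure,
            // otherwise repeated errors will exhaust the pool.
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException ignored) {
                    // nothing useful can be done here
                }
            }
        }
    }
}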
What to Review: The Order of Catching Exceptions Keep in mind that many languages will attempt to match the thrown exception to the catch clause even if it means matching the thrown exception to a parent class. Also remember that catch clauses are checked in the order they are coded on the page. This could leave you in the situation where a certain type of exception might never be handled correctly, take the following example where ‘non_even_argument’ is a subclass of ‘std::invalid_argument’: Sample 20.15 class non_even_argument : public std::invalid_argument { public: explicit non_even_argument (const string& what_arg); }; void do_fn() { try { // Perform work that could throw } catch ( const std::invalid_argument& e ) { Error Handling // Perform generic invalid argument processing and return failure } catch ( const non_even_argument& e ) { // Perform specific processing to make argument even and continue processing } } The problem with this code is that when a ‘non_even_argument is thrown, the catch branch handling ‘std::invalid_argument’ will always be executed as it is a parent of ‘non_even_argument’ and thus the runtime system will consider it a match (this could also lead to slicing). Thus you need to be aware of the hierarchy of your exception objects and ensure that you list the catch for the more specific exceptions first in your code. If the language in question has a finally method, use it. The finally method is guaranteed to always be called. The finally method can be used to release resources referenced by the method that threw the exception. This is very important. An example would be if a method gained a database connection from a pool of connections, and an exception occurred without finally, the connection object shall not be returned to the pool for some time (until the timeout). This can lead to pool exhaustion. finally() is called even if no exception is thrown. Sample 20.16 void perform_fn() { try { // Perform work here } } catch ( const MyClassExtendedFromStdException& e) { // Log the exception and continue WriteToUser(“An Error has occurred, put the kettle on”); logger.log(Level.SEVERE, “Unexception exception”, e); } A Java example showing finally() being used to release system resources. What to Review: Releasing resources and good housekeeping RAII is Resource Acquisition Is Initialization, which is a way of saying that when you first create an instance of a type, it should be fully setup (or as much as possible) so that it’s in a good state. Another advantage of RAII is how objects are disposed of, effectively when an object instance is no longer needed then it resources are automatically returned when the object goes out of scope (C++) or when it’s ‘using’ block is finished (C# ‘using’ directive which calls the Dispose method, or Java 7’s try-with-resources feature) RAII has the advantage that programmers (and users to libraries) don’t need to explicitly delete objects, the objects will be removed themselves, and in the process of removing themselves (destructor or Dispose) For Classic ASP pages it is recommended to enclose all cleaning in a function and call it into an error handling 173 174 Error Handling statement after an “On Error Resume Next”. Building an infrastructure for consistent error reporting proves more difficult than error handling. Struts provides the ActionMessages and ActionErrors classes for maintaining a stack of error messages to be reported, which can be used with JSP tags like to display these error messages to the user. 
To report a different severity of a message in a different manner (like error, warning, or information) the following tasks are required: 1. Register, instantiate the errors under the appropriate severity 2. Identify these messages and show them in a consistent manner. Struts ActionErrors class makes error handling quite easy: Sample 20.17 ActionErrors errors = new ActionErrors() errors.add(“fatal”, new ActionError(“....”)); errors.add(“error”, new ActionError(“....”)); errors.add(“warning”, new ActionError(“....”)); errors.add(“information”, new ActionError(“....”)); saveErrors(request,errors); // Important to do this Now that we have added the errors, we display them by using tags in the HTML page. Sample 20.18 References • For classic ASP pages you need to do some IIS configuration, follow http://support.microsoft.com/ kb/299981 for more information. • For default HTTP error page handling in struts (web.xml) see https://software-security.sans.org/ blog/2010/08/11/security-misconfigurations-java-webxml-files Reviewing Security Alerts REVIEWING SECURITY ALERTS How will your code and applications react when something has gone wrong? Many companies that follow secure design and coding principals do so to prevent attackers from getting into their network, however many companies do not consider designing and coding for the scenario where an attacker may have found a vulnerability, or has already exploited it to run code within a companies firewalls (i.e. within the Intranet). Many companies employ SIEM logging technologies to monitor network and OS logs for patterns that detect suspicions activity, this section aims to further encourage application layers and interfaces to do the same. 21.1 Description This section concentrates on: 1. Design and code that allows the user to react when a system is being attacked. 2. Concepts allowing applications to flag when they have been breached. When a company implements secure design and coding, it will have the aim of preventing attackers from misusing the software and accessing information they should not have access to. Input validation checks for SQL injections, XSS, CSRF, etc. should prevent attackers from being able to exploit these types of vulnerabilities against the software. However how should software react when an attacker is attempting to breach the defenses, or the protections have been breached? For an application to alert to security issues, it needs context on what is ‘normal’ and what constitutes a security issue. This will differ based on the application and the context within which it is running. In general applications should not attempt to log every item that occurs as the excessive logging will slow down the system, fill up disk or DB space, and make it very hard to filter through all the information to find the security issue. At the same time, if not enough information is monitored or logged, then security alerting will be very hard to do based on the available information. To achieve this balance an application could use its own risk scoring system, monitoring at a system level what risk triggers have been spotted (i.e. invalid inputs, failed passwords, etc.) and use different modes of logging. Take an example of normal usage, in this scenario only critical items are logged. However if the security risk is perceived to have increased, then major or security level items can be logged and acted upon. This higher security risk could also invoke further security functionality as described later in this section. 
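A minimal sketch of the risk-scoring idea described above, with hypothetical class names and thresholds: as suspicious events accumulate, the application switches its logging to a more detailed mode.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative risk-score driven logging; the weights and thresholds are assumptions.
public class RiskAwareLogging {

    private static final Logger LOG = Logger.getLogger("app.security");
    private final AtomicInteger riskScore = new AtomicInteger(0);

    // Called by validation/authentication code whenever a risk trigger is spotted.
    public void recordSuspiciousEvent(String description) {
        int score = riskScore.addAndGet(10);
        // In normal operation only WARNING and above is written; once the score
        // passes the threshold, finer-grained detail is captured as well.
        LOG.setLevel(score >= 50 ? Level.FINE : Level.WARNING);
        LOG.warning("Suspicious event: " + description + " (risk score=" + score + ")");
    }
}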
Take an example where an online form (post authentication) allows a user to enter a month of the year. Here the UI is designed to give the user a drop down list of the months (January through to December). In this case the logged in user should only ever enter one of 12 values, since they typically should not be entering any text, instead they are simply selecting one of the pre-defined drop down values. If the server receiving this form has followed secure coding practices, it will typically check that the form field matches one of the 12 allowed values, and then considers it valid. If the form field does not match, it returns an error, and may log a message in the server. This prevents the attacker from exploiting this particular field, however this is unlikely to deter an attacker and they would move onto other form fields. 175 176 Reviewing Security Alerts In this scenario we have more information available to us than we have recorded. We have returned an error back to the user, and maybe logged an error on the server. In fact we know a lot more; an authenticated user has entered an invalid value which they should never have been able to do (as it’s a drop down list) in normal usage. This could be due to a few reasons: • There’s a bug in the software and the user is not malicious. • An attacker has stolen the users login credentials and is attempting to attack the system. • A user has logged in but has a virus/trojan which is attempting to attack the system. • A user has logged in but is experiencing a man-in-the-middle attack. • A user is not intending to be malicious but has somehow changed the value with some browser plugin, etc. If it’s the first case above, then the company should know about it to fix their system. If it’s case 2, 3 or 3 then the application should take some action to protect itself and the user, such as reducing the functionality available to the user (i.e. no PII viewable, can’t change passwords, can’t perform financial transactions) or forcing further authentication such as security questions or out-of-band authentication. The system could also alert the user to the fact that the unexpected input was spotted and advise them to run antivirus, etc., thus stopping an attack when it is underway. Obviously care must be taken in limiting user functionality or alerting users encase it’s an honest mistake, so using a risk score or noting session alerts should be used. For example, if everything has been normal in the browsing session and 1 character is out of place, then showing a red pop-up box stating the user has been hacked is not reasonable, however if this is not the usual IP address for the user, they have logged in at an unusual time, and this is the 5th malformed entry with what looks like an SQL injection string, then it would be reasonable for the application to react. This possible reaction would need to be stated in legal documentation. In another scenario, if an attacker has got through the application defenses extracted part of the applications customer database, would the company know? Splitting information in the database into separate tables makes sense from an efficiency point of view, but also from a security view, even putting confidential information into a separate partition can make it harder for the attacker. However if the attacker has the information it can be hard to detect and applications should make steps to aid alerting software (e.g. SIEM systems). 
Many financial institutions use risk scoring systems to look at elements of the user’s session to give a risk score, if Johnny always logs in at 6pm on a Thursday from the same IP, then we have a trusted pattern. If suddenly Johnny logs in at 2:15am from an IP address on the other side of the world, after getting the password wrong 7 times, then maybe he’s jetlagged after a long trip, or perhaps his account has been hacked. Either way, asking him for out-of-band authentication would be reasonable to allow Johnny to log in, or to block an attacker from using Johnny’s account. If the application takes this to a larger view, it can determine that on a normal day 3% of the users log on in what would be considered a riskier way, i.e. different IP address, different time, etc. If on Thursday it sees this number rise to 23% then has something strange happened to the user base, or has the database been hacked? This type of information can be used to enforce a blanket out-of-band authentication (and internal investigation of the logs) for the 23% of ‘riskier’ users, thereby combining the risk score for the user with the overall risk score for the application. Reviewing Security Alerts Another good option is ‘honey accounts’ which are usernames and passwords that are never given out to users. These accounts are added just like any other user, and stored in the DB, however they are also recorded in a special cache and checked on login. Since they are never given to any user, no user should ever logon with them, however if one of those accounts are used, then the only way that username password combination could be known is if an attacker got the database, and this information allows the application to move to a more secure state and alert the company that the DB has been hacked. What to Review When reviewing code modules from a security alerting point of view, some common issues to look out for include: • Will the application know if it’s being attacked? Does it ignore invalid inputs, logins, etc. or does it log them and monitor this state to capture a cumulative perception of the current risk to the system? • Can the application automatically change its logging level to react to security threats? Is changing security levels dynamic or does it require a restart? • Does the SDLC requirements or design documentation capture what would constitute a security alert? Has this determination been peer reviewed? Does the testing cycle run through these scenarios? • Does the system employ ‘honey accounts’ such that the application will know if the DB has been compromised? • Is there a risk based scoring system that records the normal usage of users and allows for determination or reaction if the risk increases? • If a SIEM system is being used, have appropriate triggers been identified? Has automated tests been created to ensure those trigger log messages are not accidentally modified by future enhancements or bug fixes? • Does the system track how many failed login attempts a user has experienced? Does the system react to this? • Does certain functionality (i.e. transaction initiation, changing password, etc) have different modes of operation based on the current risk score the application is currently operating within? • Can the application revert back to ‘normal’ operation when the security risk score drops to normal levels? • How are administrators alerted when security risk score rises? Or when a breach has been assumed? At an operational level, is this tested regularly? 
How are changes of personnel handled? 177 178 Reviewing for Active Defense REVIEW FOR ACTIVE DEFENSE Attack detection undertaken at the application layer has access to the complete context of an interaction and enhanced information about the user. If logic is applied within the code to detect suspicious activity (similar to an application level IPS) then the application will know what is a high-value issue and what is noise. Input data are already decrypted and canonicalized within the application and therefore application-specific intrusion detection is less susceptible to advanced evasion techniques. This leads to a very low level of attack identification false positives, providing appropriate detection points are selected. The fundamental requirements are the ability to perform four tasks: 1. Detection of a selection of suspicious and malicious events. 2. Use of this knowledge centrally to identify attacks. 3. Selection of a predefined response. 4. Execution of the response. 22.1 Description Applications can undertake a range of responses that may include high risk functionality such as changes to a user’s account or other changes to the application’s defensive posture. It can be difficult to detect active defense in dynamic analysis since the responses may be invisible to the tester. Code review is the best method to determine the existence of this defense. Other application functionality like authentication failure counts and lock-out, or limits on rate of file uploads are ‘localized’ protection mechanisms. This sort of standalone logic is ‘not’ active defense equivalents in the context of this review, unless they are rigged together into an application-wide sensory network and centralized analytical engine. It is not a bolt-on tool or code library, but instead offers insight to an approach for organizations to specify or develop their own implementations – specific to their own business, applications, environments, and risk profile – building upon existing standard security controls. What to Review In the case where a code review is being used to detect the presence of a defense, its absence should be noted as a weakness. Note that active defense cannot defend an application that has known vulnerabilities, and therefore the other parts of this guide are extremely important. The code reviewer should note the absence of active defense as a vulnerability. The purpose of code review is not necessarily to determine the efficacy of the active defense, but could simply be to determine if such capability exists. Detection points can be integrated into presentation, business and data layers of the application. Applicationspecific intrusion detection does not need to identify all invalid usage, to be able to determine an attack. There is no need for “infinite data” or “big data” and therefore the location of “detection points” may be very sparse within source code. Reviewing for Active Defense A useful approach for identifying such code is to find the name of a dedicated module for detecting suspicious activity (such as OWASP AppSensor). Additionally a company can implement a policy of tagging active defense detection points based on Mitre’s Common Attack Pattern Enumeration and Classifcation (CAPEC), strings such as CAPEC-212, CAPEC-213, etc. The OWASP AppSensor detection point type identifiers and CAPEC codes will often have been used in configuration values (e.g. 
[https://code.google.com/p/appsensor/source/browse/trunk/AppSensor/src/test/ resources/.esapi/ESAPI.properties?r=53 in ESAPI for Java]), parameter names and security event classification. Also, examine error logging and security event logging mechanisms as these may be being used to collect data that can then be used for attack detection. Identify the code or services called that perform this logging and examine the event properties recorded/sent. Then identify all places where these are called from. An examination of error handling code relating to input and output validation is very likely to reveal the presence of detection points. For example, in a whitelist type of detection point, additional code may have been added adjacent, or within error handling code flow: In some situations attack detection points are looking for blacklisted input, and the test may not exist otherwise, if ( var !Match this ) { Error handling Record event for attack detection } so brand new code is needed. Identification of detection points should also have found the locations where events are recorded (the “event store”). If detection points cannot be found, continue to review the code for execution of response, as this may provide insight into the existence of active defense. The event store has to be analysed in real time or very frequently, in order to identify attacks based on predefined criteria. The criteria should be defined in configuration settings (e.g. in configuration files, or read from another source such as a database). A process will examine the event store to determine if an attack is in progress, typically this will be attempting to identify an authenticated user, but it may also consider a single IP address, range of IP addresses, or groups of users such as one or more roles, users with a particular privilege or even all users. Once an attack has been identified, the response will be selected based on predefined criteria. Again an examination of configuration data should reveal the thresholds related to each detection point, groups of detection points or overall thresholds. The most common response actions are user warning messages, log out, account lockout and administrator notification. However, as this approach is connected into the application, the possibilities of response actions are limited only by the coded capabilities of the application. Search code for any global includes that poll attack identification/response identified above. Response actions (again a user, IP address, group of users, etc) will usually be initiated by the application, but in some cases other applications (e.g. alter a fraud setting) or infrastructure components (e.g. block an IP address range) may also be involved. Examine configuration files and any external communication the application performs. 179 180 Reviewing for Active Defense The following types of responses may have been coded: • Logging increased • Administrator notification • Other notification (e.g. other system) • Proxy • User status change • User notification • Timing change • Process terminated (same as traditional defenses) • Function disabled • Account log out • Account lock out • Collect data from user. Other capabilities of the application and related system components can be repurposed or extended, to provide the selected response actions. Therefore review the code associated with any localised security measures such as account lock out. 
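A minimal sketch of such a whitelist detection point follows; the class name, event label and log category are assumptions rather than AppSensor identifiers. Invalid input is rejected as usual, and the same branch records a security event for later correlation.

import java.util.logging.Logger;
import java.util.regex.Pattern;

public class MonthParameterValidator {

    private static final Logger SECURITY_EVENTS = Logger.getLogger("security.events");

    private static final Pattern WHITELIST = Pattern.compile(
            "^(January|February|March|April|May|June|July|August|September|October|November|December)$");

    // Illustrative whitelist detection point (echoing the month drop-down example earlier):
    // invalid input is rejected as normal, and the same code path records a security event.
    public boolean isValid(String month, String userId) {
        if (month == null || !WHITELIST.matcher(month).matches()) {
            // Sanitise before logging so the log itself cannot be polluted by the attacker.
            String safeValue = (month == null) ? "null" : month.replaceAll("[^A-Za-z0-9]", "_");
            // In an AppSensor-style design this would go to a central event store,
            // not just a log category, so an analysis engine can correlate events.
            SECURITY_EVENTS.warning("input-whitelist-violation user=" + userId + " value=" + safeValue);
            return false;
        }
        return true;
    }
}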
References • The guidance for adding active response to applications given in theOWASP_AppSensor_Project • Category: OWASP Enterprise Security API • https://code.google.com/p/appsensor/ AppSensor demonstration code Race Conditions RACE CONDITIONS Race Conditions occur when a piece of code does not work as it is supposed to (like many security issues). They are the result of an unexpected ordering of events, which can result in the finite state machine of the code to transition to a undefined state, and also give rise to contention of more than one thread of execution over the same resource. Multiple threads of execution acting or manipulating the same area in memory or persisted data which gives rise to integrity issues. 23.1 Description With competing tasks manipulating the same resource, we can easily get a race condition as the resource is not in step-lock or utilises a token based system such as semaphores. For example if there are two processes (Thread 1, T1) and (Thread 2, T2). The code in question adds 10 to an integer X. The initial value of X is 5. X = X + 10 With no controls surrounding this code in a multithreaded environment, the code could experience the following problem: T1 places X into a register in thread 1 T2 places X into a register in thread 2 T1 adds 10 to the value in T1’s register resulting in 15 T2 adds 10 to the value in T2’s register resulting in 15 T1 saves the register value (15) into X. T1 saves the register value (15) into X. The value should actually be 25, as each thread added 10 to the initial value of 5. But the actual value is 15 due to T2 not letting T1 save into X before it takes a value of X for its addition. This leads to undefined behavior, where the application is in an unsure state and therefore security cannot be accurately enforced. What to Review • In C#.NET look for code which used multithreaded environments: o Thread o System.Threading o ThreadPool o System.Threading.Interlocked • In Java code look for o java.lang.Thread o start() o stop() o destroy() o init() o synchronized 181 182 Race Conditions o wait() o notify() o notifyAll() • For classic ASP multithreading is not a directly supported feature, so this kind of race condition could be present only when using COM objects. • Static methods and variables (one per class, not one per object) are an issue particularly if there is a shared state among multiple threads. For example, in Apache, struts static members should not be used to store information relating to a particular request. The same instance of a class can be used by multiple threads, and the value of the static member cannot be guaranteed. • Instances of classes do not need to be thread safe as one is made per operation/request. Static states must be thread safe. o References to static variables, these must be thread locked. o Releasing a lock in places other then finally{} may cause issues. o Static methods that alter static state. References • http://msdn2.microsoft.com/en-us/library/f857xew0(vs.71).aspx Buffer Overruns BUFFER OVERRUNS A buffer is an amount of contiguous memory set aside for storing information. For example if a program has to remember certain things, such as what your shopping cart contains or what data was inputted prior to the current operation. This information is stored in memory in a buffer. 
Languages like C, C++ (which many operating systems are written in), and Objective-C are extremely efficient, however they allow code to access process memory directly (through memory allocation and pointers) and intermingle data and control information (e.g. in the process stack). If a programmer makes a mistake with a buffer and allows user input to run past the allocated memory, the user input can overwrite program control information and allow the user to modify the execution of the code. Note that Java, C#.NET, Python and Ruby are not vulnerable to buffer overflows due to the way they store their strings in char arrays, of which the bounds are automatically checked by the frameworks, and the fact that they do not allow the programmer direct access to the memory (the virtual machine layer handles memory instead). Therefore this section does not apply to those languages. Note however that native code called within those languages (e.g. assembly, C, C++) through interfaces such as JNI or ‘unsafe’ C# sections can be susceptible to buffer overflows. 24.1 Description To allocate a buffer the code declares a variable of a particular size: • char myBuffer[100]; // large enough to hold 100 char variables • int myIntBuf[5]; // large enough to hold 5 integers • Widget myWidgetArray[17]; // large enough to hold 17 Widget objects As there is no automatic bounds checking code can attempt to add a Widget at array location 23 (which does not exist). When the code does this, the complier will calculate where the 23rd Widget should be placed in memory (by multiplying 23 x sizeof(Widget) and adding this to the location of the ‘myWidgetArray’ pointer). Any other object, or program control variable/register, that existed at this location will be overwritten. Arrays, vectors, etc. are indexed starting from 0, meaning the first element in the container is at ‘myBuffer[0]’, this means the last element in the container is not at array index 100, but at array index 99. This can often lead to mistakes and the ‘off by one’ error, when loops or programming logic assume objects can be written to the last index without corrupting memory. In C, and before the C++ STL became popular, strings were held as arrays of characters: • char nameString[10]; This means that the ‘nameString’ array of characters is vulnerable to array indexing problems described above, and when many of the string manipulation functions (such as strcpy, strcat, described later) are used, the possibility of writing beyond the 10th element allows a buffer overrun and thus memory corruption. As an example, a program might want to keep track of the days of the week. The programmer tells the computer to store a space for 7 numbers. This is an example of a buffer. But what happens if an attempt to add 183 184 Buffer Overruns 8 numbers is performed? Languages such as C and C++ do not perform bounds checking, and therefore if the program is written in such a language, the 8th piece of data would overwrite the program space of the next program in memory, and would result in data corruption. This can cause the program to crash at a minimum or a carefully crafted overflow can cause malicious code to be executed, as the overflow payload is actual code. 
What to Review: Buffer Overruns Sample 24.1 void copyData(char *userId) { char smallBuffer[10]; // size of 10 strcpy (smallBuffer, userId); } int main(int argc, char *argv[]) { char *userId = “01234567890”; // Payload of 12 when you include the ‘\n’ string termination // automatically added by the “01234567890” literal copyData (userId); // this shall cause a buffer overload } C library functions such as strcpy (), strcat (), sprintf () and vsprintf () operate on null terminated strings and perform no bounds checking. gets() is another function that reads input (into a buffer) from stdin until a terminating newline or EOF (End of File) is found. The scanf () family of functions also may result in buffer overflows. Using strncpy(), strncat() and snprintf() functions allows a third ‘length’ parameter to be passed which determines the maximum length of data that will be copied/etc. into the destination buffer. If this is correctly set to the size of the buffer being written to, it will prevent the target buffer being overflowed. Also note fgets() is a replacement for gets(). Always check the bounds of an array before writing it to a buffer. The Microsoft C runtime also provides additional versions of many functions with an ‘_s’ suffix (strcpy_s, strcat_s, sprintf_s). These functions perform additional checks for error conditions and call an error handler on failure. The C code below is not vulnerable to buffer overflow as the copy functionality is performed by ‘strncpy’ which specifies the third argument of the length of the character array to be copied, 10. Sample 24.2 void copyData(char *userId) { char smallBuffer[10]; // size of 10 strncpy(smallBuffer, userId, sizeof(smallBuffer)); // only copy first 10 elements smallBuffer[10] = 0; // Make sure it is terminated. } int main(int argc, char *argv[]) { char *userId = “01234567890”; // Payload of 11 copyData (userId); } Buffer Overruns Modern day C++ (C++11) programs have access to many STL objects and templates that help prevent security vulnerabilities. The std::string object does not require the calling code have any access to underlying pointers, and automatically grows the underlying string representation (character buffer on the heap) to accommodate the operations being performed. Therefore code is unable to cause a buffer overflow on a std::string object. Regarding pointers (which can be used in other ways to cause overflows), C++11 has smart pointers which again take away any necessity for the calling code to user the underlying pointer, these types of pointers are automatically allocated and destroyed when the variable goes out of scope. This helps to prevent memory leaks and double delete errors. Also the STL containers such as std::vector, std::list, etc., all allocate their memory dynamically meaning normal usage will not result in buffer overflows. Note that it is still possible to access these containers underlying raw pointers, or reinterpret_cast the objects, thus buffer overflows are possible, however they are more difficult to cause. Compliers also help with memory issues, in modern compilers there are ‘stack canaries’ which are subtle elements placed in the complied code which check for out-of-bound memory accesses. These can be enabled when compiling the code, or they could be enabled automatically. There are many examples of these stack canaries, and for some system many choices of stack canaries depending on an organizations appetite for security versus performance. 
Apple also have stack canaries for iOS code as Objective-C is also susceptible to buffer overflows. In general, there are obvious examples of code where a manual code reviewer can spot the potential for overflows and off-by-one errors, however other memory leaks (or issues) can be harder to spot. Therefore manual code review should be backed up by memory checking programs available on the market. What to Review: Format Function Overruns A format function is a function within the ANSI C specification that can be used to tailor primitive C data types to human readable form. They are used in nearly all C programs to output information, print error messages, or process strings. Table 23: Format Function Overruns Format String Relevant Input %x Hexadecimal values (unsigned int) %s Strings ((const) (unsigned) char*) %n Integer %d Decimal %u Unsigned decimal (unsigned int) Some format parameters: The %s in this case ensures that value pointed to by the parameter ‘abc’ is printed as an array of characters. For example: char* myString = “abc”; printf (“Hello: %s\n”, abc); 185 186 Buffer Overruns Through supplying the format string to the format function we are able to control the behaviour of it. So supplying input as a format string makes our application do things it’s not meant to. What exactly are we able to make the application do? If we supply %x (hex unsigned int) as the input, the ‘printf’ function shall expect to find an integer relating to that format string, but no argument exists. This cannot be detected at compile time. At runtime this issue shall surface. For every % in the argument the printf function finds it assumes that there is an associated value on the stack. In this way the function walks the stack downwards reading the corresponding values from the stack and printing them to the user. Using format strings we can execute some invalid pointer access by using a format string such as: • printf (“%s%s%s%s%s%s%s%s%s%s%s%s”); Worse again is using the ‘%n’ directive in ‘printf()’. This directive takes an ‘int*’ and ‘writes’ the number of bytes so far to that location. Where to look for this potential vulnerability. This issue is prevalent with the ‘printf()’ family of functions, ‘’printf(),fprintf(), sprintf(), snprintf().’ Also ‘syslog()’ (writes system log information) and setproctitle(const char *fmt, ...); (which sets the string used to display process identifier information). What to Review: Integer Overflows Data representation for integers will have a finite amount of space, for example a short in many languages is 16 bits twos complement number, which means it can hold a maximum number of 32,767 and a minimum number of -32,768. Twos complement means that the very first bit (of the 16) is a representation of whether the number of positive or negative. If the first bit is ‘1’, then it is a negative number. The representation of some boundary numbers are given in table 24. Table 24: Integer Overflows Number Representation 32,766 0111111111111110 32,767 0111111111111111 -32,768 1000000000000000 -1 1111111111111111 If you add 1 to 32,766, it adds 1 to the representation giving the representation for 32,767 shown above. However if you add one more again, it sets the first bit (a.k.a. most significant bit), which is then interpreted by the system as -32,768. If you have a loop (or other logic) which is adding or counting values in a short, then the application could experience this overflow. 
Note also that subtracting values below -32,768 also means the number will wrap around to a high positive, which is called underflow. Buffer Overruns Sample 24.3 #include int main(void){ int val; val = 0x7fffffff; /* 2147483647*/ printf(“val = %d (0x%x)\n”, val, val); printf(“val + 1 = %d (0x%x)\n”, val + 1 , val + 1); /*Overflow the int*/ return 0; } The binary representation of 0x7fffffff is 1111111111111111111111111111111; this integer is initialized with the highest positive value a signed long integer can hold. Here when we add 1 to the hex value of 0x7fffffff the value of the integer overflows and goes to a negative number (0x7fffffff + 1 = 80000000) In decimal this is (-2147483648). Think of the problems this may cause. Compilers will not detect this and the application will not notice this issue. We get these issues when we use signed integers in comparisons or in arithmetic and also when comparing signed integers with unsigned integers. Sample 24.4 int myArray[100]; int fillArray(int v1, int v2){ if(v2 > sizeof(myArray) / sizeof(int) -1 ){ return -1; /* Too Big */ } myArray [v2] = v1; return 0; } Here if v2 is a massive negative number the “if” condition shall pass. This condition checks to see if v2 is bigger than the array size. If the bounds check was not performed the line “myArray[v2] = v1” could have assigned the value v1 to a location out of the bounds of the array causing unexpected results. References • See the OWASP article on buffer overflow attacks. 187 188 Buffer Overruns • See the OWASP Testing Guide on how to test for buffer overflow vulnerabilities. • See Security Enhancements in the CRT: http://msdn2.microsoft.com/en-us/library/8ef0s5kh(VS.80).aspx JavaScript has several known security vulnerabilities, with HTML5 and JavaScript becoming more prevalent in web sites today and with more web sites moving to responsive web design with its dependence on JavaScript CLIENT SIDE JavaScript the code reviewer needs to understand what vulnerabilities to look for. JavaScript is fast becoming a significant point of entry of hackers to web application. For that reason we have included in the A1 Injection sub section. The most significant vulnerabilities in JavaScript are cross-site scripting (XSS) and Document Object Model, DOM-based XSS. Detection of DOM-based XSS can be challenging. This is caused by the following reasons. • JavaScript is often obfuscated to protect intellectual property. • JavaScript is often compressed out of concerned for bandwidth. In both of these cases it is strongly recommended the code review be able to review the JavaScript before it has been obfuscated and or compressed. This is a huge point of contention with QA software professionals because you are reviewing code that is not in its production state. Another aspect that makes code review of JavaScript challenging is its reliance of large frameworks such as Microsoft .NET and Java Server Faces and the use of JavaScript frameworks, such as JQuery, Knockout, Angular, Backbone. These frameworks aggravate the problem because the code can only be fully analyzed given the source code of the framework itself. These frameworks are usually several orders of magnitude larger then the code the code reviewer needs to review. Because of time and money most companies simple accept that these frameworks are secure or the risks are low and acceptable to the organization. Because of these challenges we recommend a hybrid analysis for JavaScript. 
Manual source to sink validation when necessary, static analysis with black-box testing and taint testing. First use a static analysis. Code Reviewer and the organization needs to understand that because of event-driven behaviors, complex dependencies between HTML DOM and JavaScript code, and asynchronous communication with the server side static analysis will always fall short and may show both positive, false, false –positive, and positive-false findings. Black-box traditional methods detection of reflected or stored XSS needs to be preformed. However this approach will not work for DOM-based XSS vulnerabilities. Taint analysis needs to be incorporated into static analysis engine. Taint Analysis attempts to identify variables that have been ‘tainted’ with user controllable input and traces them to possible vulnerable functions also known Buffer Overruns as a ‘sink’. If the tainted variable gets passed to a sink without first being sanitized it is flagged as vulnerability. Second the code reviewer needs to be certain the code was tested with JavaScript was turned off to make sure all client sided data validation was also validated on the server side. Sample 25.1 Code examples of JavaScript vulnerabilities. Sample 25.2 var url = document.location.url; var loginIdx = url.indexOf(‘login’); var loginSuffix = url.substring(loginIdx); url = ‘http://mySite/html/sso/’ + loginSuffix; document.location.url = url; Explanation: An attacker can send a link such as “http://hostname/welcome.html#name=”); Cybercriminal may controlled the following DOM elements including document.url,document.location,document.referrer,window.location Source: document.location Sink: windon.location.href Results: windon.location.href = http://www.BadGuysSite; - Client code open redirect. Source: document.url Storage: windows.localstorage.name 189 190 Buffer Overruns Sink: elem.innerHTML Results: elem.innerHTML = =Stored DOM-based Cross-site Scripting eval() is prone to security threats, and thus not recommended to be used. Consider these points: 1. Code passed to the eval is executed with the privileges of the executer. So, if the code passed can be affected by some malicious intentions, it leads to running malicious code in a user’s machine with your website’s privileges. 2. A malicious code can understand the scope with which the code passed to the eval was called. 3. You also shouldn’t use eval() or new Function() to parse JSON data. The above if used may raise security threats. JavaScript when used to dynamically evaluate code will create a potential security risk. eval(‘alert(“Query String ‘ + unescape(document.location.search) + ‘”);’); eval(untrusted string); Can lead to code injection or client-side open redirect. JavaScripts “new function” also may create a potential security risk. Three points of validity are required for JavaScript 1. Have all the logic server-side, JavaScript validation be turned off on the client 2. Check for all sorts of XSS DOM Attacks, never trust user data, know your source and sinks (i.e. look at all variables that contain user supplied input). 3. Check for insecure JavaScript libraries and update them frequently. References: • http://docstore.mik.ua/orelly/web/jscript/ch20_04.html • https://www.owasp.org/index.php/Static_Code_Analysis • http://www.cs.tau.ac.il/~omertrip/fse11/paper.pdf • http://www.jshint.com 191 APENDIX 192 Code Review Do’s And Dont’s CODE REVIEW DO’S AND DONT’S At work we are professions. 
But we need to make sure that even as professionals that when we do code reviews besides the technical aspects of the code reviews we need to make sure we consider the human side of code reviews. Here is a list of discussion points that code reviewers; peer developers need to take into consideration. This list is not comprehensive but a suggestion starting point for an enterprise to make sure code reviews are effective and not disruptive and a source of discourse. If code reviews become a source of discourse within an organization the effectives of finding security, functional bugs will decline and developers will find a way around the process. Being a good code reviewer requires good social skills, and is a skill that requires practice just like learning to code. • You don’t have to find fault in the code to do a code review. If you always fine something to criticize your comments will loose credibility. • Do not rush a code review. Finding security and functionality bugs is important but other developers or team members are waiting on you so you need to temper your do not rush with the proper amount urgency. • When reviewing code you need to know what is expected. Are you reviewing for security, functionality, maintainability, and/or style? Does your organization have tools and documents on code style or are you using your own coding style? Does your organization give tools to developers to mark unacceptable coding standards per the organizations own coding standards? • Before beginning a code review does your organization have a defined way to resolve any conflicts that may come up in the code review by the developer and code reviewer? • Does the code reviewer have a define set of artifacts that need to be produce as the result of the code review? • What is the process of the code review when code during the code review needs to be changed? • Is the code reviewer knowledgeable about the domain knowledge of the code that is being reviewed? Ample evidence abounds that code reviews are most effective if the code reviewer is knowledgeable about the domain of the code I.e. Compliance regularizations for industry and government, business functionality, risks, etc. Agile Software Development Lifecycle Code Review Do’s And Dont’s AGILE SOFTWARE DEVELOPMENT LIFECYCLE INTERACTION FASE 1 DESIGN CODE TEST DEPLOY INTERACTION FASE INTERACTION FASE 4 2 DESIGN CODE TEST DEPLOY DESIGN CODE TEST DEPLOY INTERACTION FASE 3 DESIGN CODE TEST DEPLOY 193 194 Code Review Do’s And Dont’s Integrating security into the agile sdlc process flow is difficult. The organization will need constant involvement from security team and or a dedication to security with well-trained coders on every team. SAST - AdHoc Static Analysis - Developer Initiated 6 DEVELOPMENT YES: FIX CODE SAST PASS OR FAIL? 5 YES: FAIL 1 CHECK-IN START 7 CHECK-IN CODE NO/FALSE POSITIVE SAST PASS OR FAIL? DEV CHECKMARX / SAST TUNED RULES CONTROLLED BY APPSEC 2 11 INVOKE SAST NO: REQUEST FIX/ ESCALATE SAST 3 4 PASS/FAIL NO PASS SECURITY 8 ADJUST RULEBASE YES NO FALSE POSITIVE YES FINISH/ SIGNED OFF SIGNOFF SIGN/OFF? RECORD METRICS ISR Continuous Integration and Test Driven Development The term ‘Continuous Integration’ originated with the Extreme Programming development process. Today it is one of the best practices of SDLC Agile. CI requires developers to check code into source control management Code Review Do’s And Dont’s application (scm) several times a day. An automated build server and application verify each check-in. 
The advantage is that team members can detect build problems early in the software development process. The disadvantage of CI for the code reviewer is that, while the code may build properly, software security vulnerabilities may still exist. A code review may be part of the check-in process, but that review may only ensure the code meets the minimum standards of the organization; it is not a secure code review with a risk-based assessment of where additional review time needs to be spent. The second disadvantage is that, because the organization is moving quickly with an agile process, a design flaw that allows security vulnerabilities may be introduced, and those vulnerabilities may get deployed. Red flags for the code reviewer are:

1. No user stories address security vulnerabilities based on risk.
2. User stories do not openly describe sources and sinks.
3. No risk assessment for the application has been done.

The term 'Test Driven Development' (TDD), like CI, originated with the Extreme Programming development process, and like CI it is one of the best practices of the agile SDLC. TDD requires developers to rely on the repetition of a very short development cycle. First, the developer writes an automated test case that defines a needed improvement or new piece of functionality; this initial test fails. Next, the developer writes the minimum amount of code needed to make the test pass. Finally, the developer refactors the new code to the organization's accepted coding standard (a minimal sketch of this test-first cycle follows the figure below).

Figure: Breaking down process areas - within a sprint (30 days maximum), stories are pulled from the product backlog into the sprint backlog; continuous integration provides automated builds and tests with immediate feedback on issues (24 hours maximum per integration period), and TDD means creating the test first and coding until it passes, resulting in a functioning product.
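The following is a minimal, framework-agnostic sketch of the test-first cycle described above. The isValidUsername function, the assert helper and the validation rule are assumptions made for illustration; a real project would normally use a unit testing framework.

// Step 1: write the test first. It fails because isValidUsername does not exist yet.
function assert(condition, message) {
    if (!condition) { throw new Error(message); }
}

function testUsernameValidation() {
    assert(isValidUsername('alice') === true, 'alphanumeric names are accepted');
    assert(isValidUsername('<script>') === false, 'markup characters are rejected');
}

// Step 2: write the minimum code needed to make the test pass.
function isValidUsername(name) {
    return /^[A-Za-z0-9_]{1,32}$/.test(name);   // whitelist validation: 1-32 word characters
}

// Step 3: run the test, then refactor to the organization's coding standard while keeping it green.
testUsernameValidation();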
CODE REVIEW CHECKLIST

Each item is recorded as Pass or Fail during the review.

General: Are there backdoor/unexposed business logic classes?
Business Logic and Design: Are there unused configurations related to business logic?
Business Logic and Design: If request parameters are used to identify business logic methods, is there a proper mapping of user privileges to the methods/actions allowed to them?
Business Logic and Design: Check whether unexposed instance variables are present in form objects that get bound to user inputs. If present, check whether they have default values.
Business Logic and Design: Check whether unexposed instance variables present in form objects that get bound to user inputs get initialized before form binding.
Authorization: Is the placement of authentication and authorization checks correct?
Authorization: Is execution stopped/terminated for an invalid request, i.e. when an authentication/authorization check fails?
Authorization: Are the checks implemented correctly? Is there any backdoor parameter?
Authorization: Is the check applied on all the required files and folders within the web root directory?
Authorization: Are security checks placed before processing inputs?
Business Logic and Design: Is there any default configuration such as Access-ALL?
Business Logic and Design: Does the configuration get applied to all files and users?
Authorization: In case of container-managed authentication - is the authentication based on web methods only?
Authorization: In case of container-managed authentication - does the authentication get applied to all resources?
Session Management: Does the design handle sessions securely?
Authorization: Is a password complexity check enforced on the password?
Cryptography: Is the password stored in an encrypted format?
Authorization: Is the password disclosed to the user or written to a file, logs, or the console?
Cryptography: Are database credentials stored in an encrypted format?
Business Logic and Design: Does the design rely on weak data stores such as flat files?
Business Logic and Design: Does the centralized validation get applied to all requests and all inputs?
Business Logic and Design: Does the centralized validation check block all special characters?
Business Logic and Design: Is any special kind of request skipped from validation?
Business Logic and Design: Does the design maintain an exclusion list of parameters or features exempt from validation?
Input Validation: Are all untrusted inputs validated? Input data should be constrained and validated for type, length, format, and range.
Cryptography: Is the data sent over an encrypted channel? Does the application use HTTPClient for making external connections?
Session Management: Does the design involve session sharing between components/modules? Is the session validated correctly on both ends?
Business Logic and Design: Does the design use any elevated OS/system privileges for external connections/commands?
Business Logic and Design: Are there any known flaws in the APIs/technologies used (e.g. DWR)?
Business Logic and Design: Does the framework provide any built-in security controls, such as <%: %> in ASP.NET MVC? Is the application taking advantage of these controls?
Business Logic and Design: Are privileges reduced whenever possible?
Business Logic and Design: Is the program designed to fail gracefully?
Logging and Auditing: Are logs recording personal information, passwords or other sensitive information?
Logging and Auditing: Do audit logs record connection attempts (both successes and failures)?
Logging and Auditing: Is there a process in place to review audit logs for unintended/malicious behavior?
Cryptography: Is all PII and sensitive information sent over the network in encrypted form?
Authorization: Does the application design call for server authentication (an anti-spoofing measure)?
Authorization: Does the application support password expiration?
Cryptography: Does the application use custom schemes for hashing and/or cryptography?
Cryptography: Are the cryptographic functions used by the application the most recent versions of these protocols, patched, with a process in place to keep them updated?
General: Are external libraries, tools and plugins used by the application the most recent versions, patched, with a process in place to keep them updated?
General: Classes that contain security secrets (such as passwords) are only accessible through protected APIs.
Cryptography: Keys are not held in code.
General: Plain-text secrets are not stored in memory for extended periods of time.
General: Array bounds are checked.
User Management and Authentication: User and role based privileges are documented.
General: All sensitive information used by the application has been identified.
User Management and Authentication: Authentication cookies are not persisted.
User Management and Authentication: Authentication cookies are encrypted.
User Management and Authentication: Authentication credentials are not passed by HTTP GET.
User Management and Authentication: Authorization checks are granular (page and directory level).
User Management and Authentication: Authorization is based on clearly defined roles.
User Management and Authentication: Authorization works properly and cannot be circumvented by parameter manipulation.
User Management and Authentication: Authorization cannot be bypassed by cookie manipulation.
Session Management: No session parameters are passed in URLs.
Session Management: Session cookies expire in a reasonably short time.
Session Management: Session cookies are encrypted.
Session Management: Session data is validated.
Session Management: The session id is complex.
Session Management: Session storage is secure.
Session Management: Session inactivity timeouts are enforced.
Data Management: Data is validated on the server side.
Data Management: HTTP headers are validated for each request.
Business Logic and Design: Are all entry points and trust boundaries identified by the design and included in the risk analysis report?
Data Management: Is all XML input data validated against an agreed schema?
Data Management: Does output that contains untrusted, user-supplied input have the correct type of encoding applied (URL encoding, HTML encoding)?
Data Management: Has the correct encoding been applied to all data output by the application?
Web Services: The web service documentation protocol is disabled if the application does not need dynamic generation of WSDL.
Web Services: Web service endpoint addresses in the Web Services Description Language (WSDL) are checked for validity.
Web Services: Web service protocols that are unnecessary are disabled (HTTP GET and HTTP POST).
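Several of the session management items above - cookies not persisted, not readable by script, sent only over TLS, and expiring quickly - can be verified by looking at how the session cookie is set. The following sketch assumes a plain Node.js HTTP handler and a hypothetical sessionId value; it only illustrates the cookie attributes a reviewer should expect to find.

// Minimal Node.js sketch: setting a session cookie with the attributes a reviewer should look for.
var http = require('http');

http.createServer(function (req, res) {
    var sessionId = 'opaque-random-value';   // assumed: generated by a cryptographically secure RNG
    res.setHeader('Set-Cookie',
        'SESSIONID=' + sessionId +
        '; HttpOnly'        +   // not readable by client-side script
        '; Secure'          +   // only sent over HTTPS
        '; SameSite=Strict' +   // not sent on cross-site requests
        '; Max-Age=1800'    +   // short, enforced expiry (30 minutes)
        '; Path=/');
    res.end('ok');
}).listen(8080);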
THREAT MODELING EXAMPLE

Step 1: Decompose the Application

The goal of decomposing the application is to gain an understanding of the application and how it interacts with external entities. Information gathering and documentation achieve this goal. The information gathering process is carried out using a clearly defined structure, which ensures the correct information is collected. This structure also defines how the information should be documented to produce the threat model.

Figure 13

General Information

The first item in the threat model is the general information relating to the threat model. This must include the following:

1. Application Name: The name of the application
2. Application Version: The version of the application
3. Description: A high level description of the application
4. Document Owner: The owner of the threat modeling document
5. Participants: The participants involved in the threat modeling process for this application
6. Reviewer: The reviewer(s) of the threat model

Figure 14: Threat model information - Owner: David Lowry; Participants: David Rook; Reviewer: Eoin Keary; Version 2.0.

Description

The college library website is the first implementation of a website to provide librarians and library patrons (students and college staff) with online services. As this is the first implementation of the website, the functionality will be limited. There will be three users of the application:

1. Students
2. Staff
3. Librarians

Staff and students will be able to log in and search for books, and staff members can request books. Librarians will be able to log in, add books, add users, and search for books.

Figure 15

Entry Points

Entry points should be documented as follows:

1. ID: A unique ID assigned to the entry point. This will be used to cross-reference the entry point with any threats or vulnerabilities that are identified. In the case of layered entry points, a major.minor notation should be used.
2. Name: A descriptive name identifying the entry point and its purpose.
3. Description: A textual description detailing the interaction or processing that occurs at the entry point.
4. Trust Levels: The level of access required at the entry point is documented here. These will be cross-referenced with the trust levels defined later in the document.

Figure 16

Assets

Assets are documented in the threat model as follows:

1. ID: A unique ID is assigned to identify each asset. This will be used to cross-reference the asset with any threats or vulnerabilities that are identified.
2. Name: A descriptive name that clearly identifies the asset.
3. Description: A textual description of what the asset is and why it needs to be protected.
4. Trust Levels: The level of access required to access the asset is documented here. These will be cross-referenced with the trust levels defined in the next step.

Figure 17

Trust Levels

Trust levels are documented in the threat model as follows:

1. ID: A unique number is assigned to each trust level. This is used to cross-reference the trust level with the entry points and assets.
2. Name: A descriptive name that allows identification of the external entities that have been granted this trust level.
3. Description: A textual description of the trust level detailing the external entity that has been granted the trust level.

By applying the understanding gained of the college library website architecture and design, the data flow diagram can be created as shown in Figure 18.

Figure 18: Data flow diagram of the college library website - users and librarians send requests across the user/web server boundary to the web pages on disk; the web server makes SQL query calls across the web server/database boundary to the college library database and its database files, and data flows back in the responses.

Specifically, the user login data flow diagram will appear as in Figure 19.

Figure 19: Example application threat model of the user login - the user's login request crosses the user/web server boundary to the login process on the web server, which invokes an authenticate-user function; the SQL query and its result cross the web server/database boundary to the college library database and database files, and the login response is returned to the user.

Threat Modeling Example: Step 2a - Threat Categorization

The first step in the determination of threats is adopting a threat categorization.
A threat categorization provides a set of threat categories with corresponding examples so that threats can be systematically identified in the application in a structured and repeatable manner.

STRIDE

A list of generic threats organized into these categories, with examples and the affected security controls, is provided in the following table:

Threat Modeling Example: Step 2b - Ranking of Threats

Threats tend to be ranked from the perspective of risk factors. By determining the risk factor posed by the various identified threats, it is possible to create a prioritized list of threats to support a risk mitigation strategy, such as deciding which threats have to be mitigated first. Different risk factors can be used to determine which threats can be ranked as High, Medium, or Low risk. In general, threat risk models use different factors to model risk.

Microsoft DREAD threat-risk ranking model

By referring to the college library website it is possible to document sample threats related to the use cases, such as:

Threat: Malicious user views confidential information of students, faculty members and librarians.

1. Damage potential: Threat to reputation as well as financial and legal liability: 8
2. Reproducibility: Fully reproducible: 10
3. Exploitability: Requires being on the same subnet or having compromised a router: 7
4. Affected users: Affects all users: 10
5. Discoverability: Can be found out easily: 10

Overall DREAD score: (8 + 10 + 7 + 10 + 10) / 5 = 9

In this case, a score of 9 on a 10-point scale is certainly a high-risk threat.
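For reviewers who want to reproduce the arithmetic, the overall DREAD score is simply the average of the five factor ratings. The sketch below encodes the example above; the function and object names are illustrative only.

// DREAD: the overall score is the average of the five factor ratings (each 0-10).
function dreadScore(t) {
    return (t.damage + t.reproducibility + t.exploitability +
            t.affectedUsers + t.discoverability) / 5;
}

var confidentialityThreat = {
    damage: 8, reproducibility: 10, exploitability: 7,
    affectedUsers: 10, discoverability: 10
};

console.log(dreadScore(confidentialityThreat));   // 9 - high risk on a 10-point scale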
CODE CRAWLING

This appendix gives practical examples of how to carry out code crawling in the following programming languages:

• .NET
• Java
• ASP
• C++/Apache

Searching for Code in .NET

First, one needs to be familiar with the tools that can be used to perform text searching; following this, one needs to know what to look for. One could scan through the code looking for common patterns or keywords such as "User", "Password", "Pswd", "Key", "Http", etc. This can be performed using the "Find in Files" tool in Visual Studio or using findstr as follows:

findstr /s /m /i /d:c:\projects\codebase\sec "http" *.*

HTTP Request Strings

Requests from external sources are obviously a key area of a security code review. We need to ensure that all HTTP requests received are validated for composition, maximum and minimum length, and that the data falls within the realms of the parameter whitelist. The bottom line is that this is a key area to look at and ensure security is enabled.

STRING TO SEARCH:
request.accesstypes, request.httpmethod, request.cookies, request.url, request.browser, request.querystring, request.certificate, request.urlreferrer, request.files, request.item, request.rawurl, request.useragent, request.headers, request.form, request.servervariables, request.TotalBytes, request.BinaryRead, request.userlanguages

HTML Output

Here we are looking for responses to the client. Responses which go unvalidated, or which echo external input without data validation, are key areas to examine. Many client-side attacks result from poor response validation.

STRING TO SEARCH:
response.write, HttpUtility, HtmlEncode, innerText, innerHTML, <%=, UrlEncode

SQL & Database

Locating where a database may be involved in the code is an important aspect of the code review. Looking at the database code will help determine whether the application is vulnerable to SQL injection. One aspect of this is to verify that the code uses SqlParameter, OleDbParameter, or OdbcParameter (System.Data.SqlClient). These are typed, and treat parameters as literal values rather than executable code in the database.

STRING TO SEARCH:
exec sp_, execute sp_, exec xp_, exec @, execute @, executestatement, executeSQL, setfilter, executeQuery, GetQueryResultInXML, adodb, sqloledb, sql server driver, Server.CreateObject, .Provider, System.Data.sql, ADODB.recordset, New OleDbConnection, ExecuteReader, DataSource, SqlCommand, Microsoft.Jet, SqlDataReader, SqlDataAdapter, StoredProcedure, select from, insert, update, delete from where, delete

Cookies

Cookie manipulation can be key to various application security exploits, such as session hijacking/fixation and parameter manipulation. One should examine any code relating to cookie functionality, as this has a bearing on session security.

STRING TO SEARCH:
System.Net.Cookie, HTTPOnly, document.cookie

HTML Tags

Many of the HTML tags below can be used for client-side attacks such as cross-site scripting. It is important to examine the context in which these tags are used and to examine any relevant data validation associated with the display and use of such tags within a web application.

STRING TO SEARCH:
HtmlEncode, URLEncode