
OCA/OCP
Oracle Database 11g
All-in-One
Exam Guide
(Exams 1Z0-051, 1Z0-052, and 1Z0-053)

McGraw-Hill is an independent entity from Oracle Corporation. This
publication and CD may be used in assisting students to prepare for the
OCA exams 1Z0-051 and 1Z0-052, and the OCP exam 1Z0-053. Neither
Oracle Corporation nor The McGraw-Hill Companies warrant that use of
this publication will ensure passing the relevant exam.


About the Authors
John Watson (Oxford, UK) works for BPLC Management Consultants, teaching
and consulting throughout Europe and Africa. He was with Oracle University for
several years in South Africa, and before that worked for a number of companies,
government departments, and NGOs in England and Europe. He is OCP
qualified in both database and application server administration. John is the
author of several books and numerous articles on technology and has 25 years
of experience in IT.
Roopesh Ramklass (South Africa), OCP, is an Oracle specialist who has
worked in a range of contexts. He was part of Oracle’s Support team and taught
at Oracle University in South Africa for many years. As an independent consultant
and manager of his own consulting business, he designed and developed software
and training courses based on a wide spectrum of Oracle technologies, including
the database, application server, and business intelligence products. Roopesh is a
co-author of the OCA Oracle Database 11g: SQL Fundamentals I Exam Guide (Oracle
Press, 2008) and has more than 12 years of experience in the IT industry.

About the Technical Editors
Gavin Powell (Cartersville, GA) is a consultant and technical writer with 20 years
of experience in the IT industry. He has worked as a programmer, developer,
analyst, data modeler, and database administrator in numerous industries.
Bruce Swart (South Africa) works for 2Cana Solutions and has over 14 years
of experience in IT. While maintaining a keen interest in teaching others, he has
performed several roles, including developer, analyst, team leader, administrator,
project manager, consultant, and lecturer. He is OCP qualified in both database
and developer roles. He has taught at Oracle University in South Africa for several
years and has also spoken at numerous local Oracle User Group conferences. His
passion is helping others achieve greatness.
April Wells (Austin, TX) is an experienced Oracle DBA who holds multiple
DBA OCP certifications. She currently manages Oracle databases and Oracle data
warehouses at NetSpend Corporation in Austin, Texas. Previously, April worked
for Oracle Corporation in Austin, Texas, as on-site support at Dell; at Corporate
Systems in Amarillo, Texas; and at U.S. Steel in Pennsylvania and Minnesota.


OCA/OCP
Oracle Database 11g
All-in-One
Exam Guide
(Exams 1Z0-051, 1Z0-052, and 1Z0-053)

John Watson
Roopesh Ramklass

New York • Chicago • San Francisco • Lisbon
London • Madrid • Mexico City • Milan • New Delhi
San Juan • Seoul • Singapore • Sydney • Toronto


Copyright © 2010 by The McGraw-Hill Companies, Inc. All rights reserved. Except as permitted under the United States Copyright Act of 1976,
no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the
prior written permission of the publisher.
ISBN: 978-0-07-162921-8
MHID: 0-07-162921-1
The material in this eBook also appears in the print version of this title: ISBN: 978-0-07-162918-8, MHID: 0-07-162918-1.
All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we
use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such
designations appear in this book, they have been printed with initial caps.
McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs.
To contact a representative please e-mail us at bulksales@mcgraw-hill.com.
Information has been obtained by Publisher from sources believed to be reliable. However, because of the possibility of human or mechanical error
by our sources, Publisher, or others, Publisher does not guarantee the accuracy, adequacy, or completeness of any information included in this
work and is not responsible for any errors or omissions or the results obtained from the use of such information. Oracle Corporation does not make
any representations or warranties as to the accuracy, adequacy, or completeness of any information contained in this Work, and is not responsible
for any errors or omissions.
TERMS OF USE
This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to the work. Use of
this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and retrieve one copy of the work,
you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon, transmit, distribute, disseminate,
sell, publish or sublicense the work or any part of it without McGraw-Hill’s prior consent. You may use the work for your own noncommercial and
personal use; any other use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply with these terms.
THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES AS TO THE
ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE WORK, INCLUDING ANY
INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM
ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work
will meet your requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you
or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has no
responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or its licensors be
liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or inability to use the work, even
if any of them has been advised of the possibility of such damages. This limitation of liability shall apply to any claim or cause whatsoever whether
such claim or cause arises in contract, tort or otherwise.

Disclaimer:
This eBook does not include the ancillary media that was
packaged with the original printed version of the book.


GET YOUR FREE SUBSCRIPTION TO ORACLE MAGAZINE
Oracle Magazine is essential gear for today’s information technology professionals.
Stay informed and increase your productivity with every issue of Oracle Magazine.
Inside each free bimonthly issue you’ll get:

• Up-to-date information on Oracle Database, Oracle Application Server,
Web development, enterprise grid computing, database technology,
and business trends
• Third-party news and announcements
• Technical articles on Oracle and partner products, technologies,
and operating environments
• Development and administration tips
• Real-world customer stories

If there are other Oracle users at
your location who would like to
receive their own subscription to
Oracle Magazine, please photocopy this form and pass it along.

Three easy ways to subscribe:
1 Web

Visit our Web site at oracle.com/oraclemagazine.
You'll find a subscription form there, plus much more.

2 Fax

Complete the questionnaire on the back of this card
and fax the questionnaire side only to +1.847.763.9638.

3 Mail

Complete the questionnaire on the back of this card
and mail it to P.O. Box 1263, Skokie, IL 60076-8263.

Copyright © 2008, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.


Want your own FREE subscription?
To receive a free subscription to Oracle Magazine, you must fill out the entire card, sign it, and date
it (incomplete cards cannot be processed or acknowledged). You can also fax your application to
+1.847.763.9638. Or subscribe at our Web site at oracle.com/oraclemagazine
Yes, please send me a FREE subscription to Oracle Magazine.
From time to time, Oracle Publishing allows our partners
exclusive access to our e-mail addresses for special promotions and announcements. To be included in this program,
please check this circle. If you do not wish to be included, you
will only receive notices about your subscription via e-mail.
Oracle Publishing allows sharing of our postal mailing list with
selected third parties. If you prefer your mailing address not to
be included in this program, please check this circle.
If at any time you would like to be removed from either mailing list, please contact
Customer Service at +1.847.763.9635 or send an e-mail to oracle@halldata.com.
If you opt in to the sharing of information, Oracle may also provide you with
e-mail related to Oracle products, services, and events. If you want to completely
unsubscribe from any e-mail communication from Oracle, please send an e-mail to:
unsubscribe@oracle-mail.com with the following in the subject line: REMOVE [your
e-mail address]. For complete information on Oracle Publishing’s privacy practices,
please visit oracle.com/html/privacy/html


LICENSE AGREEMENT
THIS PRODUCT (THE “PRODUCT”) CONTAINS PROPRIETARY SOFTWARE, DATA AND INFORMATION (INCLUDING
DOCUMENTATION) OWNED BY THE McGRAW-HILL COMPANIES, INC. (“McGRAW-HILL”) AND ITS LICENSORS. YOUR
RIGHT TO USE THE PRODUCT IS GOVERNED BY THE TERMS AND CONDITIONS OF THIS AGREEMENT.
LICENSE: Throughout this License Agreement, “you” shall mean either the individual or the entity whose agent opens this package. You
are granted a non-exclusive and non-transferable license to use the Product subject to the following terms:
(i) If you have licensed a single user version of the Product, the Product may only be used on a single computer (i.e., a single CPU). If you
licensed and paid the fee applicable to a local area network or wide area network version of the Product, you are subject to the terms of the
following subparagraph (ii).
(ii) If you have licensed a local area network version, you may use the Product on unlimited workstations located in one single building
selected by you that is served by such local area network. If you have licensed a wide area network version, you may use the Product on
unlimited workstations located in multiple buildings on the same site selected by you that is served by such wide area network; provided,
however, that any building will not be considered located in the same site if it is more than five (5) miles away from any building included in
such site. In addition, you may only use a local area or wide area network version of the Product on one single server. If you wish to use the
Product on more than one server, you must obtain written authorization from McGraw-Hill and pay additional fees.
(iii) You may make one copy of the Product for back-up purposes only and you must maintain an accurate record as to the location of the
back-up at all times.
COPYRIGHT; RESTRICTIONS ON USE AND TRANSFER: All rights (including copyright) in and to the Product are owned by
McGraw-Hill and its licensors. You are the owner of the enclosed disc on which the Product is recorded. You may not use, copy, decompile,
disassemble, reverse engineer, modify, reproduce, create derivative works, transmit, distribute, sublicense, store in a database or retrieval
system of any kind, rent or transfer the Product, or any portion thereof, in any form or by any means (including electronically or otherwise)
except as expressly provided for in this License Agreement. You must reproduce the copyright notices, trademark notices, legends and logos
of McGraw-Hill and its licensors that appear on the Product on the back-up copy of the Product which you are permitted to make hereunder.
All rights in the Product not expressly granted herein are reserved by McGraw-Hill and its licensors.
TERM: This License Agreement is effective until terminated. It will terminate if you fail to comply with any term or condition of this
License Agreement. Upon termination, you are obligated to return to McGraw-Hill the Product together with all copies thereof and to purge
all copies of the Product included in any and all servers and computer facilities.
DISCLAIMER OF WARRANTY: THE PRODUCT AND THE BACK-UP COPY ARE LICENSED “AS IS.” McGRAW-HILL, ITS
LICENSORS AND THE AUTHORS MAKE NO WARRANTIES, EXPRESS OR IMPLIED, AS TO THE RESULTS TO BE OBTAINED
BY ANY PERSON OR ENTITY FROM USE OF THE PRODUCT, ANY INFORMATION OR DATA INCLUDED THEREIN AND/OR
ANY TECHNICAL SUPPORT SERVICES PROVIDED HEREUNDER, IF ANY (“TECHNICAL SUPPORT SERVICES”).
McGRAW-HILL, ITS LICENSORS AND THE AUTHORS MAKE NO EXPRESS OR IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR USE WITH RESPECT TO THE PRODUCT.
McGRAW-HILL, ITS LICENSORS, AND THE AUTHORS MAKE NO GUARANTEE THAT YOU WILL PASS ANY
CERTIFICATION EXAM WHATSOEVER BY USING THIS PRODUCT. NEITHER McGRAW-HILL, ANY OF ITS LICENSORS NOR
THE AUTHORS WARRANT THAT THE FUNCTIONS CONTAINED IN THE PRODUCT WILL MEET YOUR REQUIREMENTS OR
THAT THE OPERATION OF THE PRODUCT WILL BE UNINTERRUPTED OR ERROR FREE. YOU ASSUME THE ENTIRE RISK
WITH RESPECT TO THE QUALITY AND PERFORMANCE OF THE PRODUCT.
LIMITED WARRANTY FOR DISC: To the original licensee only, McGraw-Hill warrants that the enclosed disc on which the Product is
recorded is free from defects in materials and workmanship under normal use and service for a period of ninety (90) days from the date of
purchase. In the event of a defect in the disc covered by the foregoing warranty, McGraw-Hill will replace the disc.
LIMITATION OF LIABILITY: NEITHER McGRAW-HILL, ITS LICENSORS NOR THE AUTHORS SHALL BE LIABLE FOR ANY
INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES, SUCH AS BUT NOT LIMITED TO, LOSS OF ANTICIPATED PROFITS
OR BENEFITS, RESULTING FROM THE USE OR INABILITY TO USE THE PRODUCT EVEN IF ANY OF THEM HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL APPLY TO ANY CLAIM OR
CAUSE WHATSOEVER WHETHER SUCH CLAIM OR CAUSE ARISES IN CONTRACT, TORT, OR OTHERWISE. Some states do
not allow the exclusion or limitation of indirect, special or consequential damages, so the above limitation may not apply to you.
U.S. GOVERNMENT RESTRICTED RIGHTS: Any software included in the Product is provided with restricted rights subject to
subparagraphs (c), (1) and (2) of the Commercial Computer Software-Restricted Rights clause at 48 C.F.R. 52.227-19. The terms of this
Agreement applicable to the use of the data in the Product are those under which the data are generally made available to the general public
by McGraw-Hill. Except as provided herein, no reproduction, use, or disclosure rights are granted with respect to the data included in the
Product and no right to modify or create derivative works from any such data is hereby granted.
GENERAL: This License Agreement constitutes the entire agreement between the parties relating to the Product. The terms of any Purchase
Order shall have no effect on the terms of this License Agreement. Failure of McGraw-Hill to insist at any time on strict compliance with
this License Agreement shall not constitute a waiver of any rights under this License Agreement. This License Agreement shall be construed
and governed in accordance with the laws of the State of New York. If any provision of this License Agreement is held to be contrary to law,
that provision will be enforced to the maximum extent permissible and the remaining provisions will remain in full force and effect.

Thank you, Silvia, for helping me do this (and for giving me a reason for living).
—John

Ameetha, a more loving and caring companion to share this journey through life,
I could not have found.
—Roopesh


CONTENTS AT A GLANCE

Part I      Oracle Database 11g Administration
Chapter 1   Architectural Overview of Oracle Database 11g
Chapter 2   Installing and Creating a Database
Chapter 3   Instance Management
Chapter 4   Oracle Networking
Chapter 5   Oracle Storage
Chapter 6   Oracle Security

Part II     SQL
Chapter 7   DDL and Schema Objects
Chapter 8   DML and Concurrency
Chapter 9   Retrieving, Restricting, and Sorting Data Using SQL
Chapter 10  Single-Row and Conversion Functions
Chapter 11  Group Functions
Chapter 12  SQL Joins
Chapter 13  Subqueries and Set Operators

Part III    Advanced Database Administration
Chapter 14  Configuring the Database for Backup and Recovery
Chapter 15  Back Up with RMAN
Chapter 16  Restore and Recover with RMAN
Chapter 17  Advanced RMAN Facilities
Chapter 18  User-Managed Backup, Restore, and Recovery
Chapter 19  Flashback
Chapter 20  Automatic Storage Management
Chapter 21  The Resource Manager
Chapter 22  The Scheduler
Chapter 23  Moving and Reorganizing Data
Chapter 24  The AWR and the Alert System
Chapter 25  Performance Tuning
Chapter 26  Globalization
Chapter 27  The Intelligent Infrastructure

Appendix    About the CD
Glossary
Index

CONTENTS

Introduction

Part I
Chapter 1

..............................................

xxix

Oracle Database 11g Administration
Architectural Overview of Oracle Database 11g

..............

3

Exam Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Oracle Product Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Oracle Server Family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Oracle Development Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Oracle Applications
...................................
Prerequisite Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Oracle Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
SQL Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Operating System Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Single-Instance Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Single-Instance Database Architecture . . . . . . . . . . . . . . . . . . . . .
Distributed Systems Architectures . . . . . . . . . . . . . . . . . . . . . . . .
Instance Memory Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Database Buffer Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Log Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Shared Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Large Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Java Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Streams Pool
.....................................
Instance Process Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
SMON, the System Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
PMON, the Process Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . .
DBWn, the Database Writer . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
LGWR, the Log Writer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
CKPT, the Checkpoint Process . . . . . . . . . . . . . . . . . . . . . . . . . . .
MMON, the Manageability Monitor . . . . . . . . . . . . . . . . . . . . . .

3
4
4
8
10
11
11
12
13
13
13
16
19
20
21
23
26
26
27
28
29
30
30
32
33
34

ix

OCA/OCP Oracle Database 11g All-in-One Exam Guide

x

Chapter 2

MMNL, the Manageability Monitor Light . . . . . . . . . . . . . . . . . .
MMAN, the Memory Manager . . . . . . . . . . . . . . . . . . . . . . . . . . .
ARCn, the Archiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RECO, the Recoverer Process . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Some Other Background Processes . . . . . . . . . . . . . . . . . . . . . . .
Database Storage Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Physical Database Structures . . . . . . . . . . . . . . . . . . . . . . . . .
The Logical Database Structures . . . . . . . . . . . . . . . . . . . . . . . . . .
The Data Dictionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Two-Minute Drill . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Single-Instance Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Instance Memory Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Instance Process Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Database Storage Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Self Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Self Test Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

35
35
35
36
37
40
41
45
46
49
49
49
49
49
50
52

Installing and Creating a Database

..........................

55

Exam Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Identify the Tools for Administering an Oracle Database . . . . . . . . . . .
The Oracle Universal Installer . . . . . . . . . . . . . . . . . . . . . . . . . . .
Database Creation and Upgrade Tools . . . . . . . . . . . . . . . . . . . .
Tools for Issuing Ad Hoc SQL: SQL*Plus and SQL Developer
.
Oracle Enterprise Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Other Administration Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Plan an Oracle Database Installation . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choice of Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Hardware and Operating System Resources . . . . . . . . . . . . . . . .
Optimal Flexible Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . .
Environment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Install the Oracle Software by Using the Oracle Universal
Installer (OUI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Create a Database by Using the Database Configuration Assistant . . .
The Instance, the Database, and the Data Dictionary . . . . . . . . .
Using the DBCA to Create a Database . . . . . . . . . . . . . . . . . . . . .
The Scripts and Other Files Created by the DBCA . . . . . . . . . . .
The DBCA’s Other Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Two-Minute Drill . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Identify the Tools for Administering an Oracle Database . . . . .
Plan an Oracle Database Installation . . . . . . . . . . . . . . . . . . . . .
Install the Oracle Software by Using the Oracle Universal
Installer (OUI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Create a Database by Using the Database Configuration
Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Self Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Self Test Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

55
56
56
60
61
66
67
68
68
69
71
72
74
77
78
79
84
90
91
91
92
92
92
92
96

Contents

Chapter 3  Instance Management  . . . . . .  99
  Exam Objectives  . . . . . .  99
  Set Database Initialization Parameters  . . . . . .  101
    Static and Dynamic Parameters and the Initialization Parameter File  . . . . . .  101
    The Basic Parameters  . . . . . .  103
  Describe the Stages of Database Startup and Shutdown  . . . . . .  108
    Starting and Connecting to Database Control  . . . . . .  108
    Starting the Database Listener  . . . . . .  110
    Starting SQL*Plus  . . . . . .  112
    Database Startup and Shutdown  . . . . . .  112
  Use the Alert Log and Trace Files  . . . . . .  121
  Use Data Dictionary and Dynamic Performance Views  . . . . . .  123
    The Data Dictionary Views  . . . . . .  123
    The Dynamic Performance Views  . . . . . .  124
  Two-Minute Drill  . . . . . .  126
    Describe the Stages of Database Startup and Shutdown  . . . . . .  126
    Set Database Initialization Parameters  . . . . . .  126
    Use the Alert Log and Trace Files  . . . . . .  127
    Use Data Dictionary and Dynamic Performance Views  . . . . . .  127
  Self Test  . . . . . .  127
  Self Test Answers  . . . . . .  130

Chapter 4  Oracle Networking  . . . . . .  133

  Exam Objectives  . . . . . .  133
  Configure and Manage the Oracle Network  . . . . . .  134
    Oracle Net and the Client-Server Paradigm  . . . . . .  134
    A Word on Oracle Net and Communication Protocols  . . . . . .  136
    Establishing a Session  . . . . . .  136
    Creating a Listener  . . . . . .  139
    Database Registration  . . . . . .  140
    Techniques for Name Resolution  . . . . . .  143
    The Listener Control Utility  . . . . . .  145
    Configuring Service Aliases  . . . . . .  149
    Filenames and the TNSADMIN Environment Variable  . . . . . .  151
    Database Links  . . . . . .  153
  Use the Oracle Shared Server Architecture  . . . . . .  157
    The Limitations of Dedicated Server Architecture  . . . . . .  157
    The Shared Server Architecture  . . . . . .  159
    Configuring Shared Server  . . . . . .  161
    When to Use the Shared Server  . . . . . .  162
  Two-Minute Drill  . . . . . .  165
    Configure and Manage the Oracle Network  . . . . . .  165
    Use the Oracle Shared Server Architecture  . . . . . .  165
  Self Test  . . . . . .  166
  Self Test Answers  . . . . . .  169

Chapter 5  Oracle Storage  . . . . . .  171

  Exam Objectives  . . . . . .  171
  Overview of Tablespaces and Datafiles  . . . . . .  172
    The Oracle Data Storage Model  . . . . . .  172
    Segments, Extents, Blocks, and Rows  . . . . . .  174
    File Storage Technologies  . . . . . .  178
  Create and Manage Tablespaces  . . . . . .  180
    Tablespace Creation  . . . . . .  180
    Altering Tablespaces  . . . . . .  186
    Dropping Tablespaces  . . . . . .  191
    Oracle-Managed Files (OMF)  . . . . . .  191
  Space Management in Tablespaces  . . . . . .  194
    Extent Management  . . . . . .  194
    Segment Space Management  . . . . . .  196
  Two-Minute Drill  . . . . . .  197
    Overview of Tablespaces and Datafiles  . . . . . .  197
    Create and Manage Tablespaces  . . . . . .  198
    Space Management in Tablespaces  . . . . . .  198
  Self Test  . . . . . .  198
  Self Test Answers  . . . . . .  201

Chapter 6  Oracle Security  . . . . . .  203

  Exam Objectives  . . . . . .  203
  Create and Manage Database User Accounts  . . . . . .  204
    User Account Attributes  . . . . . .  205
    Authentication Methods  . . . . . .  209
    Creating Accounts  . . . . . .  213
  Grant and Revoke Privileges  . . . . . .  216
    System Privileges  . . . . . .  216
    Object Privileges  . . . . . .  219
  Create and Manage Roles  . . . . . .  223
    Creating and Granting Roles  . . . . . .  223
    Predefined Roles  . . . . . .  224
    Enabling Roles  . . . . . .  225
  Create and Manage Profiles  . . . . . .  229
    Password Management  . . . . . .  229
    Resource Limits  . . . . . .  230
    Creating and Assigning Profiles  . . . . . .  231
  Database Security and Principle of Least Privilege  . . . . . .  234
    Public Privileges  . . . . . .  234
    Security-Critical Instance Parameters  . . . . . .  235
  Work with Standard Database Auditing  . . . . . .  240
    Auditing SYSDBA Activity  . . . . . .  241
    Database Auditing  . . . . . .  241
    Value-Based Auditing with Triggers  . . . . . .  244
    Fine-Grained Auditing (FGA)  . . . . . .  245
  Two-Minute Drill  . . . . . .  248
    Create and Manage Database User Accounts  . . . . . .  248
    Grant and Revoke Privileges  . . . . . .  248
    Create and Manage Roles  . . . . . .  249
    Create and Manage Profiles  . . . . . .  249
    Database Security and Principle of Least Privilege  . . . . . .  249
    Work with Standard Database Auditing  . . . . . .  249
  Self Test  . . . . . .  249
  Self Test Answers  . . . . . .  253

Part II  SQL

Chapter 7  DDL and Schema Objects  . . . . . .  259

  Exam Objectives  . . . . . .  259
  Categorize the Main Database Objects  . . . . . .  260
    Object Types  . . . . . .  260
    Naming Schema Objects  . . . . . .  261
    Object Namespaces  . . . . . .  262
  List the Data Types That Are Available for Columns  . . . . . .  263
  Create a Simple Table  . . . . . .  266
    Creating Tables with Column Specifications  . . . . . .  267
    Creating Tables from Subqueries  . . . . . .  268
    Altering Table Definitions after Creation  . . . . . .  269
    Dropping and Truncating Tables  . . . . . .  270
  Create and Use Temporary Tables  . . . . . .  273
  Indexes  . . . . . .  275
    Why Indexes Are Needed?  . . . . . .  275
    Types of Index  . . . . . .  276
    Creating and Using Indexes  . . . . . .  281
    Modifying and Dropping Indexes  . . . . . .  282
  Constraints  . . . . . .  283
    The Types of Constraint  . . . . . .  283
    Defining Constraints  . . . . . .  286
    Constraint State  . . . . . .  288
    Constraint Checking  . . . . . .  289
  Views  . . . . . .  290
    Why Use Views at All?  . . . . . .  291
    Simple and Complex Views  . . . . . .  293
    CREATE VIEW, ALTER VIEW, and DROP VIEW  . . . . . .  294
  Synonyms  . . . . . .  295
  Sequences  . . . . . .  298
    Creating Sequences  . . . . . .  298
    Using Sequences  . . . . . .  300
  Two-Minute Drill  . . . . . .  303
    Categorize the Main Database Objects  . . . . . .  303
    List the Data Types That Are Available for Columns  . . . . . .  303
    Create a Simple Table  . . . . . .  304
    Create and Use Temporary Tables  . . . . . .  304
    Constraints  . . . . . .  304
    Indexes  . . . . . .  304
    Views  . . . . . .  304
    Synonyms  . . . . . .  305
    Sequences  . . . . . .  305
  Self Test  . . . . . .  305
  Self Test Answers  . . . . . .  311

Chapter 8  DML and Concurrency  . . . . . .  315

  Exam Objectives  . . . . . .  315
  Data Manipulation Language (DML) Statements  . . . . . .  316
    INSERT  . . . . . .  316
    UPDATE  . . . . . .  320
    DELETE  . . . . . .  323
    TRUNCATE  . . . . . .  325
    MERGE  . . . . . .  326
    DML Statement Failures  . . . . . .  328
  Control Transactions  . . . . . .  330
    Database Transactions  . . . . . .  330
    Executing SQL Statements  . . . . . .  331
    Transaction Control: COMMIT, ROLLBACK, SAVEPOINT, SELECT FOR UPDATE  . . . . . .  335
  Identify and Administer PL/SQL Objects  . . . . . .  340
    Stored and Anonymous PL/SQL  . . . . . .  340
    PL/SQL Objects  . . . . . .  341
  Monitor and Resolve Locking Conflicts  . . . . . .  346
    Shared and Exclusive Locks  . . . . . .  346
    The Enqueue Mechanism  . . . . . .  347
    Lock Contention  . . . . . .  347
    Deadlocks  . . . . . .  350
  Overview of Undo  . . . . . .  351
  Transactions and Undo Data  . . . . . .  352
  Managing Undo  . . . . . .  354
    Error Conditions Related to Undo  . . . . . .  354
    Parameters for Undo Management, and Retention Guarantee  . . . . . .  355
    Sizing and Monitoring the Undo Tablespace  . . . . . .  356
    Creating and Managing Undo Tablespaces  . . . . . .  358
  Two-Minute Drill  . . . . . .  359
    Describe Each Data Manipulation Language (DML) Statement  . . . . . .  359
    Control Transactions  . . . . . .  360
    Manage Data Using DML  . . . . . .  360
    Identify and Administer PL/SQL Objects  . . . . . .  360
    Monitor and Resolve Locking Conflicts  . . . . . .  360
    Overview of Undo  . . . . . .  360
    Transactions and Undo Data  . . . . . .  361
    Managing Undo  . . . . . .  361
  Self Test  . . . . . .  361
  Self Test Answers  . . . . . .  365

Chapter 9  Retrieving, Restricting, and Sorting Data Using SQL  . . . . . .  367

  Exam Objectives  . . . . . .  367
  List the Capabilities of SQL SELECT Statements  . . . . . .  368
    Introducing the SQL SELECT Statement  . . . . . .  368
    The DESCRIBE Table Command  . . . . . .  369
    Capabilities of the SELECT Statement  . . . . . .  370
    Data Normalization  . . . . . .  371
  Create the Demonstration Schemas  . . . . . .  375
    The HR and WEBSTORE Schemas  . . . . . .  375
    Demonstration Schema Creation  . . . . . .  379
  Execute a Basic SELECT Statement  . . . . . .  381
    Syntax of the Primitive SELECT Statement  . . . . . .  382
    Rules Are Meant to Be Followed  . . . . . .  383
    SQL Expressions and Operators  . . . . . .  386
    NULL Is Nothing  . . . . . .  390
  Limit the Rows Retrieved by a Query  . . . . . .  392
    The WHERE Clause  . . . . . .  392
    Comparison Operators  . . . . . .  395
    Boolean Operators  . . . . . .  400
    Precedence Rules  . . . . . .  402
  Sort the Rows Retrieved by a Query  . . . . . .  403
    The ORDER BY Clause  . . . . . .  403
  Ampersand Substitution  . . . . . .  405
    Substitution Variables  . . . . . .  406
    Define and Verify  . . . . . .  409
  Two-Minute Drill  . . . . . .  412
    List the Capabilities of SQL SELECT Statements  . . . . . .  412
    Execute a Basic SELECT Statement  . . . . . .  412
    Limit the Rows Retrieved by a Query  . . . . . .  413
    Sort the Rows Retrieved by a Query  . . . . . .  413
    Ampersand Substitution  . . . . . .  413
  Self Test  . . . . . .  414
  Self Test Answers  . . . . . .  416

Chapter 10  Single-Row and Conversion Functions  . . . . . .  419

  Exam Objectives  . . . . . .  419
  Describe and Use Character, Number, and Date Functions in SQL  . . . . . .  420
    Defining a Function  . . . . . .  420
    Types of Functions  . . . . . .  420
    Using Case Conversion Functions  . . . . . .  421
    Using Character Manipulation Functions  . . . . . .  423
    Using Numeric Functions  . . . . . .  427
    Working with Dates  . . . . . .  429
  Describe Various Types of Conversion Functions Available in SQL  . . . . . .  434
    Conversion Functions  . . . . . .  434
  Use the TO_CHAR, TO_NUMBER, and TO_DATE Conversion Functions  . . . . . .  436
    Using the Conversion Functions  . . . . . .  436
  Apply Conditional Expressions in a SELECT Statement  . . . . . .  444
    Nested Functions  . . . . . .  444
    Conditional Functions  . . . . . .  445
  Two-Minute Drill  . . . . . .  453
    Describe Various Types of Functions Available in SQL  . . . . . .  453
    Use Character, Number, and Date Functions in SELECT Statements  . . . . . .  453
    Describe Various Types of Conversion Functions Available in SQL  . . . . . .  454
    Use the TO_CHAR, TO_NUMBER, and TO_DATE Conversion Functions  . . . . . .  454
    Apply Conditional Expressions in a SELECT Statement  . . . . . .  454
  Self Test  . . . . . .  454
  Self Test Answers  . . . . . .  457

Chapter 11  Group Functions  . . . . . .  459

  Exam Objectives  . . . . . .  459
  The Group Functions  . . . . . .  460
    Definition of Group Functions  . . . . . .  460
    Using Group Functions  . . . . . .  461
  Group Data Using the GROUP BY Clause  . . . . . .  465
    Creating Groups of Data  . . . . . .  465
    The GROUP BY Clause  . . . . . .  466
    Grouping by Multiple Columns  . . . . . .  468
    Nested Group Functions  . . . . . .  470
  Include or Exclude Grouped Rows Using the HAVING Clause  . . . . . .  471
    Restricting Group Results  . . . . . .  472
    The HAVING Clause  . . . . . .  473
  Two-Minute Drill  . . . . . .  475
    Describe the Group Functions  . . . . . .  475
    Identify the Available Group Functions  . . . . . .  475
    Group Data Using the GROUP BY Clause  . . . . . .  475
    Include or Exclude Grouped Rows Using the HAVING Clause  . . . . . .  476
  Self Test  . . . . . .  476
  Self Test Answers  . . . . . .  478

Chapter 12  SQL Joins  . . . . . .  481

  Exam Objectives  . . . . . .  481
  Write SELECT Statements to Access Data from More Than One Table Using Equijoins and Nonequijoins  . . . . . .  482
    Types of Joins  . . . . . .  482
    Joining Tables Using SQL:1999 Syntax  . . . . . .  487
    Qualifying Ambiguous Column Names  . . . . . .  487
    The NATURAL JOIN Clause  . . . . . .  489
    The Natural JOIN USING Clause  . . . . . .  492
    The Natural JOIN ON Clause  . . . . . .  492
    N-Way Joins and Additional Join Conditions  . . . . . .  495
    Nonequijoins  . . . . . .  496
  Join a Table to Itself Using a Self-Join  . . . . . .  498
    Joining a Table to Itself Using the JOIN ... ON Clause  . . . . . .  498
  View Data That Does Not Meet a Join Condition by Using Outer Joins  . . . . . .  500
    Inner Versus Outer Joins  . . . . . .  500
    Left Outer Joins  . . . . . .  501
    Right Outer Joins  . . . . . .  503
    Full Outer Joins  . . . . . .  503
  Generate a Cartesian Product of Two or More Tables  . . . . . .  505
    Creating Cartesian Products Using Cross Joins  . . . . . .  506
  Two-Minute Drill  . . . . . .  508
    Write SELECT Statements to Access Data from More Than One Table Using Equijoins and Nonequijoins  . . . . . .  508
    Join a Table to Itself Using a Self-Join  . . . . . .  509
    View Data That Does Not Meet a Join Condition Using Outer Joins  . . . . . .  509
    Generate a Cartesian Product of Two or More Tables  . . . . . .  509
  Self Test  . . . . . .  510
  Self Test Answers  . . . . . .  512

Chapter 13  Subqueries and Set Operators  . . . . . .  515
  Exam Objectives  . . . . . .  515
  Define Subqueries  . . . . . .  516
  Describe the Types of Problems That the Subqueries Can Solve  . . . . . .  517
    Use of a Subquery Result Set for Comparison Purposes  . . . . . .  517
    Generate a Table from Which to SELECT  . . . . . .  518
    Generate Values for Projection  . . . . . .  518
    Generate Rows to Be Passed to a DML Statement  . . . . . .  519
  List the Types of Subqueries  . . . . . .  520
    Single- and Multiple-Row Subqueries  . . . . . .  520
    Correlated Subqueries  . . . . . .  521
  Write Single-Row and Multiple-Row Subqueries  . . . . . .  524
  Describe the Set Operators  . . . . . .  525
    Sets and Venn Diagrams  . . . . . .  525
    Set Operator General Principles  . . . . . .  526
  Use a Set Operator to Combine Multiple Queries into a Single Query  . . . . . .  529
    The UNION ALL Operator  . . . . . .  529
    The UNION Operator  . . . . . .  530

    The INTERSECT Operator  . . . . . .  530
    The MINUS Operator  . . . . . .  531
    More Complex Examples  . . . . . .  531
  Control the Order of Rows Returned  . . . . . .  533
  Two-Minute Drill  . . . . . .  533
    Define Subqueries  . . . . . .  533
    Describe the Types of Problems That the Subqueries Can Solve  . . . . . .  533
    List the Types of Subqueries  . . . . . .  534
    Write Single-Row and Multiple-Row Subqueries  . . . . . .  534
    Describe the Set Operators  . . . . . .  534
    Use a Set Operator to Combine Multiple Queries into a Single Query  . . . . . .  534
    Control the Order of Rows Returned  . . . . . .  534
  Self Test  . . . . . .  535
  Self Test Answers  . . . . . .  539

Part III  Advanced Database Administration

Chapter 14  Configuring the Database for Backup and Recovery  . . . . . .  543
  Exam Objectives  . . . . . .  543
  Backup and Recovery Issues  . . . . . .  544
  Categories of Failures  . . . . . .  546
    Statement Failure  . . . . . .  546
    User Process Failure  . . . . . .  547
    Network Failure  . . . . . .  548
    User Errors  . . . . . .  549
    Media Failure  . . . . . .  551
    Instance Failure  . . . . . .  552
  Instance Recovery  . . . . . .  552
    The Mechanics of Instance Recovery  . . . . . .  553
    The Impossibility of Database Corruption  . . . . . .  554
    Tuning Instance Recovery  . . . . . .  555
    The MTTR Advisor and Checkpoint Auto-Tuning  . . . . . .  555
    Checkpointing  . . . . . .  557
  Preparing the Database for Recoverability  . . . . . .  558
    Protecting the Controlfile  . . . . . .  558
    Protecting the Online Redo Log Files  . . . . . .  560
    Archivelog Mode and the Archiver Process  . . . . . .  563
    Protecting the Archive Redo Log Files  . . . . . .  566
  The Flash Recovery Area  . . . . . .  567
    Recovery Files  . . . . . .  567
    Configure the Flash Recovery Area  . . . . . .  568
    Flash Recovery Area Space Usage  . . . . . .  569
  Two-Minute Drill  . . . . . .  570
    Identify the Types of Failure That Can Occur in an Oracle Database  . . . . . .  570
    Describe Ways to Tune Instance Recovery  . . . . . .  571

    Identify the Importance of Checkpoints, Redo Log Files, and Archived Log Files  . . . . . .  571
    Configure ARCHIVELOG Mode  . . . . . .  572
    Configure Multiple Archive Log File Destinations to Increase Availability  . . . . . .  572
    Overview of the Flash Recovery Area  . . . . . .  572
    Configure the Flash Recovery Area  . . . . . .  572
    Use the Flash Recovery Area
  Self Test
  Self Test Answers  . . . . . .  575

Chapter 15  Back Up with RMAN  . . . . . .  577

Exam Objectives
Backup Concepts and Terminology
Using the RMAN BACKUP Command to Create Backups
Server-Managed Consistent Backups
Server-Managed Open Backups
Incremental Backups
Image Copies
Protect Your Backups
Parallelizing Backup Operations
Encrypting Backups
Configuring RMAN Defaults
Managing and Monitoring RMAN Backups
The LIST, REPORT, and DELETE Commands
Archival Backups
The Dynamic Performance Views
Crosschecking Backups
Two-Minute Drill
Create Consistent Database Backups
Back Up Your Database Without Shutting It Down
Create Incremental Backups
Automate Database Backups
Manage Backups, View Backup Reports, and Monitor the Flash Recovery Area
Define, Apply, and Use a Retention Policy
Create Image File Backups
Create a Whole Database Backup
Enable Fast Incremental Backup
Create Duplex Backups and Back Up Backup Sets
Create an Archival Backup for Long-Term Retention
Create a Multisection, Compressed, and Encrypted Backup
Report On and Maintain Backups
Configure Backup Settings
Allocate Channels to Use in Backing Up
Configure Backup Optimization
Self Test
Self Test Answers

Chapter 16  Restore and Recover with RMAN

Exam Objectives
The Data Recovery Advisor
The Health Monitor and the ADR
The Capabilities and Limitations of the DRA
Using the Data Recovery Advisor
Database Restore and Recovery
Complete Recovery from Data File Loss Using RMAN
Recovery of Datafiles in Noarchivelog Mode
Recovery of a Noncritical File in Archivelog Mode
Recovering from Loss of a Critical Datafile
Incomplete Recovery
Autobackup and Restore of the Controlfile
Using Image Copies for Recovery
Block Recovery
Detection of Corrupt Blocks
Block Media Recovery
The BLOCK RECOVER Command
Two-Minute Drill
Describe the Data Recovery Advisor
Use the Data Recovery Advisor to Perform Recovery (Controlfile, Redo Log File, and Datafile)
Perform Complete Recovery from a Critical or Noncritical Data File Loss Using RMAN
Perform Incomplete Recovery Using RMAN
Recover Using Incrementally Updated Backups
Switch to Image Copies for Fast Recovery
Recover Using a Backup Control File
Perform Block Media Recovery
Self Test
Self Test Answers

Chapter 17  Advanced RMAN Facilities

Exam Objectives
The Recovery Catalog
The Need for a Recovery Catalog
Creating and Connecting to the Catalog
The Virtual Private Catalog
Protecting and Rebuilding the Catalog
Stored Scripts
Using RMAN to Create Databases
Tablespace Point-in-Time Recovery (TSPITR)
The TSPITR Methodology
Perform Automated TSPITR
RMAN Performance and Monitoring
Monitoring RMAN Sessions and Jobs

Tuning RMAN
Tuning the BACKUP Command
Configure RMAN for Asynchronous I/O
Two-Minute Drill
Identify Situations That Require an RMAN Recovery Catalog
Create and Configure a Recovery Catalog
Synchronize the Recovery Catalog
Create and Use RMAN Stored Scripts
Back Up the Recovery Catalog
Create and Use a Virtual Private Catalog
Create a Duplicate Database
Use a Duplicate Database
Restore a Database onto a New Host
Perform Disaster Recovery
Identify the Situations That Require TSPITR
Perform Automated TSPITR
Monitor RMAN Sessions and Jobs
Tune RMAN
Configure RMAN for Asynchronous I/O
Self Test
Self Test Answers

Chapter 18  User-Managed Backup, Restore, and Recovery

Exam Objectives
Backup and Recovery in One Page
User-Managed Database Backup
Backup in Noarchivelog Mode
Backup in Archivelog Mode
Backup of the Password and Parameter Files
Media Failure That Does Not Affect Datafiles
Recovery from Loss of a Multiplexed Controlfile
Recovery from Loss of a Multiplexed Online Redo Log File
Recovery from Loss of a Tempfile
Recovery from Loss of Datafiles
Recovery of Datafiles in Noarchivelog Mode
Recovery of a Noncritical Datafile in Archivelog Mode
Recovering a Critical Datafile in Archivelog Mode
User-Managed Incomplete Recovery
Two-Minute Drill
Recover from a Lost TEMP File
Recover from a Lost Redo Log Group
Recover from the Loss of a Password File
Perform User-Managed Complete Database Recovery
Perform User-Managed Incomplete Database Recovery
Perform User-Managed Backups
Identify the Need for Backup Mode
Back Up and Recover a Controlfile

Self Test
Self Test Answers

Chapter 19  Flashback

Exam Objectives
The Different Flashback Technologies
Flashback Database
Flashback Query, Transaction, and Table
Flashback Drop
Flashback Data Archive
When to Use Flashback Technology
Flashback Database
Flashback Database Architecture
Configuring Flashback Database
Monitoring Flashback Database
Using Flashback Database
Limiting the Amount of Flashback Data Generated
Flashback Drop
The Implementation of Flashback Drop
Using Flashback Drop
Managing the Recycle Bin
Flashback Query
Basic Flashback Query
Flashback Table Query
Flashback Versions Query
Flashback Transaction Query
Flashback and Undo Data
The Flashback Data Archive
Two-Minute Drill
Restore Dropped Tables from the Recycle Bin
Perform Flashback Query
Use Flashback Transaction
Perform Flashback Table Operations
Configure and Monitor Flashback Database and Perform Flashback Database Operations
Set Up and Use a Flashback Data Archive
Self Test
Self Test Answers

Chapter 20  Automatic Storage Management

Exam Objectives
The Purpose of a Logical Volume Manager
RAID Levels
Volume Sizes
Choice of RAID Level
ASM Compared with Third-Party LVMs
The ASM Architecture
The Cluster Synchronization Service
The ASM Disks and Disk Groups
The ASM Instance
The RDBMS Instance
The ASM Files
Creating Raw Devices
Creating, Starting, and Stopping an ASM Instance
Creating ASM Disk Groups
Creating and Using ASM Files
ASM and RMAN
The ASMCMD Utility
Two-Minute Drill
Describe Automatic Storage Management (ASM)
Set Up Initialization Parameter Files for ASM and Database Instances
Start Up and Shut Down ASM Instances
Administer ASM Disk Groups
Self Test
Self Test Answers

Chapter 21  The Resource Manager

Exam Objectives
The Need for Resource Management
The Resource Manager Architecture
Consumer Groups
Resource Manager Plans
Resource Manager Configuration Tools
Managing Users and Consumer Groups
Resource Manager Plans
CPU Method
Use of the Ratio CPU Method
The Active Session Pool Method
Limiting the Degree of Parallelism
Controlling Jobs by Execution Time
Terminating Sessions by Idle Time
Restricting Generation of Undo Data
Automatic Consumer Group Switching
Adaptive Consumer Group Mapping
Two-Minute Drill
Understand the Database Resource Manager
Create and Use Database Resource Manager Components
Self Test
Self Test Answers

Chapter 22  The Scheduler

Exam Objectives
The Scheduler Architecture
Scheduler Objects
Jobs
Programs
Schedules
Job Classes
Windows
Privileges
Creating and Scheduling Jobs
A Self-Contained Job
Using Programs and Schedules
Event-Driven Jobs
Job Chains
Lightweight Jobs
Using Classes, Windows, and the Resource Manager
Using Job Classes
Using Windows
Two-Minute Drill
Create a Job, Program, and Schedule
Use a Time-Based or Event-Based Schedule for Executing Scheduler Jobs
Create Lightweight Jobs
Use Job Chains to Perform a Series of Related Tasks
Create Windows and Job Classes
Use Advanced Scheduler Concepts to Prioritize Jobs
Self Test
Self Test Answers

Chapter 23  Moving and Reorganizing Data

Exam Objectives
SQL*Loader
External Tables
Directories
Using External Tables
Data Pump
Data Pump Architecture
Directories and File Locations
Direct Path or External Table Path?
Using Data Pump Export and Import
Capabilities
Using Data Pump with the Command-Line Utilities
Using Data Pump with Database Control
Tablespace Export and Import
Resumable Space Allocation

Segment Reorganization
Row Chaining and Migration
Segment Shrink
Two-Minute Drill
Describe and Use Methods to Move Data (Directory Objects, SQL*Loader, External Tables)
Explain the General Architecture of Oracle Data Pump
Use Data Pump Export and Import to Move Data Between Oracle Databases
Describe the Concepts of Transportable Tablespaces and Databases
Manage Resumable Space Allocation
Reclaim Wasted Space from Tables and Indexes by Using the Segment Shrink Functionality
Self Test
Self Test Answers

Chapter 24  The AWR and the Alert System

Exam Objectives
The Automatic Workload Repository
Gathering AWR Statistics
Managing the AWR
Statistics, Metrics, and Baselines
The DBMS_WORKLOAD_REPOSITORY Package
The Database Advisory Framework
The Automatic Database Diagnostic Monitor
The Advisors
Automatic Maintenance Jobs
Using the Server-Generated Alert System
Alert Condition Monitoring and Notifications
Setting Thresholds
The Notification System
Two-Minute Drill
Use and Manage the Automatic Workload Repository
Use the Advisory Framework
Manage Alerts and Thresholds
Self Test
Self Test Answers

Chapter 25  Performance Tuning

Exam Objectives
Managing Memory
PGA Memory Management
SGA Memory Management
Automatic Memory Management
The Memory Advisors

The SQL Tuning Advisor
The Capabilities of the SQL Tuning Advisor
Using the SQL Tuning Advisor with Enterprise Manager
The SQL Tuning Advisor API: the DBMS_SQLTUNE Package
The SQL Access Advisor
Using the SQL Access Advisor with Database Control
Using the SQL Access Advisor with DBMS_ADVISOR
Identifying and Fixing Invalid and Unusable Objects
Invalid Objects
Unusable Indexes
Database Replay
Database Replay Workload Capture
Database Replay Workload Preprocessing
Launch the Replay
Database Replay Analysis and Reporting
Two-Minute Drill
Use Automatic Memory Management
Use Memory Advisors
Troubleshoot Invalid and Unusable Objects
Implement Automatic Memory Management
Manually Configure SGA Parameters
Configure Automatic PGA Memory Management
Use the SQL Tuning Advisor
Use the SQL Access Advisor to Tune a Workload
Understand Database Replay
Self Test
Self Test Answers

Chapter 26  Globalization

Exam Objectives
Globalization Requirements and Capabilities
Character Sets
Language Support
Territory Support
Other NLS Settings
Using Globalization Support Features
Choosing a Character Set
Changing Character Sets
Globalization Within the Database
Globalization at the Instance Level
Client-Side Environment Settings
Session-Level Globalization Settings
Statement Globalization Settings
Languages and Time Zones
Linguistic Sorting and Selection
The Locale Builder
Using Time Zones

937
938
938
940
942
944
944
945
946
947
948
948
950
951
952
953
954
954

Two-Minute Drill . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  957
   Customize Language-Dependent Behavior for the Database
      and Individual Sessions . . . . . . . . . . . . . . . . . . . . . . . .  957
   Work with Database and NLS Character Sets . . . . . . . . . . . . . .  958
Self Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  958
Self Test Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  961

Chapter 27  The Intelligent Infrastructure . . . . . . . . . . . . . . . .  965
Exam Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  965
The Enterprise Manager Support Workbench . . . . . . . . . . . . . . . .  966
   The Automatic Diagnostic Repository (ADR) . . . . . . . . . . . . . .  966
   Problems and Incidents . . . . . . . . . . . . . . . . . . . . . . . . . .  967
   The ADR Command-Line Interface (ADRCI) . . . . . . . . . . . . . . .  967
   The Support Workbench . . . . . . . . . . . . . . . . . . . . . . . . . .  968
Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  971
   Types of Patch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  971
   Integration with MetaLink and the Patch Advisor . . . . . . . . . . .  971
   Applying Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  972
Two-Minute Drill . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  978
   Use the Enterprise Manager Support Workbench . . . . . . . . . . . .  978
   Manage Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  978
   Set Up the Automatic Diagnostic Repository . . . . . . . . . . . . . .  979
Self Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  979
Self Test Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  980

Appendix  About the CD . . . . . . . . . . . . . . . . . . . . . . . . . . .  983
System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  984
Installing and Running MasterExam . . . . . . . . . . . . . . . . . . . .  984
   MasterExam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  984
Electronic Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  984
Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  984
Removing Installation(s) . . . . . . . . . . . . . . . . . . . . . . . . . . .  985
Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  985
   LearnKey Technical Support . . . . . . . . . . . . . . . . . . . . . . .  985

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  987

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003


INTRODUCTION

There is an ever-increasing demand for staff with IT industry certification. The
benefits to employers are significant: they can be confident that staff have a proven
level of competence. The benefits to individuals, in terms of demand for
their services, are equally great. Many employers now require technical staff to
have certifications, and many IT purchasers will not buy from firms that do not have
certified staff. The Oracle certifications are among the most sought after. But apart
from rewards in a business sense, knowing that you are among a relatively small
pool of elite Oracle professionals and have proved your competence is a personal
reward well worth attaining.
Your studies of the fascinating world of Oracle database administration are about
to begin—you can continue these studies for the rest of your working life. Enjoy!

Oracle Certification
There are several Oracle certification tracks—this book is concerned with the Oracle
Database Administration certification track, specifically for release 11g of the database.
There are three levels of DBA certification: Certified Associate (OCA), Certified
Professional (OCP), and Certified Master (OCM). The OCA qualification is based on
two examinations; the OCP qualification requires passing a third examination. These
examinations can be taken at any Prometric Center and typically consist of 60 or 70
questions to be completed in 90 minutes, with 60–70 percent correct needed as the
passing score. The OCM qualification requires completing a further two-day evaluation
at an Oracle testing center, involving simulations of complex environments and use of
advanced techniques that are not covered in this book.
To prepare for the OCA/OCP examinations, you can attend Oracle University
instructor-led training courses, you can study Oracle University online learning
material, or you can read this book. In all cases, you should also refer to the Oracle
Documentation Library for details of syntax. This book will be a valuable addition
to other study methods, but it is also sufficient by itself. It has been designed with
the examination objectives in mind, though it also includes a great deal of information
that will be useful in the course of your work.

OCA/OCP Oracle Database 11g All-in-One Exam Guide
However, it is not enough to buy the book, place it under your pillow, and assume
that knowledge will permeate the brain by a process of osmosis: you must read it
thoroughly, work through the exercises and sample questions, and experiment further
with various commands. As you become more familiar with the Oracle environment,
you will realize that there is one golden rule: when in doubt, try it out.
In a multitude of cases, you will find that a simple test that takes a couple of
minutes can save hours of speculation and poring through manuals. If anything is
ever unclear, construct an example and see what happens.
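For instance, a quick experiment in SQL*Plus settles most questions about how a function or expression behaves. The sketch below assumes nothing beyond a working session and the dummy table DUAL, which every Oracle database provides:

```sql
-- Unsure what a date format mask produces? Test it rather than
-- speculating: DUAL returns exactly one row, so the expression
-- is evaluated once and the result displayed immediately.
SELECT TO_CHAR(SYSDATE, 'Day DD Month YYYY') AS today
FROM   dual;
```

A couple of minutes of this kind of testing is usually faster, and more convincing, than searching the documentation.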
This book was developed using Windows and Linux, but to carry out the exercises
and your further investigations, you can use any platform that is supported for Oracle.

In This Book
This book is organized to serve as an in-depth preparation for the OCA and OCP Oracle
Database 11g examinations. All the official certification objectives are carefully covered
in the book. There are three parts, which in effect build up a case study of configuring a
database application from nothing to a fully functional system. Part I assumes no prior
knowledge or software installed and goes through the basics of installing the Oracle
software and creating a database. Then Part II covers the SQL language, using it to
create and use an application in the database created in Part I. Part III deals with the
maintenance phase of running the database application (matters such as backup and
tuning), and some more advanced database capabilities.

On the CD-ROM
The CD-ROM contains the entire contents of the book in electronic form, as well as
practice tests that simulate each of the real Oracle Database 11g OCA/OCP certification
tests. For more information on the CD-ROM, please see the appendix.

Exam Readiness Checklist
At the end of this introduction, you will find an Exam Readiness Checklist. We
constructed this table to allow you to cross-reference the official exam objectives
with the certification objectives as we present and cover them in this book. The
checklist gives each objective exactly as Oracle Corporation states it, together with
the study guide chapter in which the objective is covered.
There is no need to sit the three examinations in order. You can take them whenever
you please, but you will probably gain the highest marks if you sit all three after
completing the book. This is because the content of the exams builds up slowly, and
there is considerable overlap of objectives between the exams. Topics dealt with later
will revisit and reinforce topics dealt with previously.

In Every Chapter
This book includes a set of chapter components that call your attention to important
items, reinforce important points, and provide helpful exam-taking hints. Take a look
at what you’ll find in every chapter:

• Opening bullets at the beginning of every chapter are the official exam
objectives (by number) covered in the chapter. Because the exams have
overlapping objectives, any one chapter may cover objectives from more
than one of the exams.
• Exam Tips call attention to information about, and potential pitfalls in, the exam.
• Exercises are interspersed throughout the chapters; they allow you to get the
hands-on experience you need in order to pass the exams. They help you
master skills that are likely to be an area of focus on the exam. Don’t just read
through the exercises; they are hands-on practice that you should be comfortable
completing. Learning by doing is an effective way to increase your competency
with a product.
• Tips describe the issues that come up most often in real-world settings. They
provide a valuable perspective on certification- and product-related topics.
They point out common mistakes and address questions that have arisen from
on-the-job discussions and experience.
• The Two-Minute Drill at the end of every chapter is a checklist of the exam
objectives covered in the chapter. You can use it for a quick, last-minute review
before the test.
• The Self Test offers questions similar to those found on the certification exam.
The answers to these questions, as well as explanations of the answers, can
be found at the end of each chapter. By taking the Self Test after completing
each chapter, you’ll reinforce what you’ve learned from that chapter, while
becoming familiar with the structure of the exam questions.

Some Pointers
Once you’ve finished reading this book, set aside some time to do a thorough review.
You might want to return to the book several times and make use of all the methods it
offers for reviewing the material:
• Reread all the Two-Minute Drills, or have someone quiz you. You also can
use the drills as a way to do a quick cram before the exam. You might want to
make some flash cards out of 3 × 5 index cards that have the Two-Minute Drill
material on them.
• Reread all the Exam Tips. Remember that these notes are based on the
exams. The authors have tried to draw your attention to what you should
expect—and what you should be on the lookout for.
• Retake the Self Tests. It is a good idea to take the Self Test right after you’ve
read the chapter because the questions help reinforce what you’ve just learned.
• Complete the Exercises. Did you do the chapter exercises when you read each
chapter? If not, do them! These exercises are designed to cover exam topics, and
there’s no better way to get to know this material than by practicing. Be sure
you understand why you are performing each step in each exercise. If there is
something you are not completely clear about, reread that section in the chapter.


Exam Readiness Checklist:
Exams 1Z0-051, 1Z0-052, and 1Z0-053
Examination 1Z0-051, Oracle Database 11g:
SQL Fundamentals I, Objectives
051       Oracle Database 11g: SQL Fundamentals I
051.1     Retrieving Data Using the SQL SELECT Statement
051.1.1   List the capabilities of SQL SELECT statements (Chapter 9)
051.1.2   Execute a basic SELECT statement (Chapter 9)
051.2     Restricting and Sorting Data
051.2.1   Limit the rows that are retrieved by a query (Chapter 9)
051.2.2   Sort the rows that are retrieved by a query (Chapter 9)
051.2.3   Use ampersand substitution to restrict and sort output at runtime (Chapter 9)
051.3     Using Single-Row Functions to Customize Output
051.3.1   Describe various types of functions available in SQL (Chapter 10)
051.3.2   Use character, number, and date functions in SELECT statements (Chapter 10)
051.4     Using Conversion Functions and Conditional Expressions
051.4.1   Describe various types of conversion functions that are available in SQL (Chapter 10)
051.4.2   Use the TO_CHAR, TO_NUMBER, and TO_DATE conversion functions (Chapter 10)
051.4.3   Apply conditional expressions in a SELECT statement (Chapter 10)
051.5     Reporting Aggregated Data Using the Group Functions
051.5.1   Identify the available group functions (Chapter 11)
051.5.2   Describe the use of group functions (Chapter 11)
051.5.3   Group data by using the GROUP BY clause (Chapter 11)
051.5.4   Include or exclude grouped rows by using the HAVING clause (Chapter 11)
051.6     Displaying Data from Multiple Tables
051.6.1   Write SELECT statements to access data from more than one table
          using equijoins and nonequijoins (Chapter 12)
051.6.2   Join a table to itself by using a self-join (Chapter 12)
051.6.3   View data that generally does not meet a join condition by using
          outer joins (Chapter 12)
051.6.4   Generate a Cartesian product of all rows from two or more tables (Chapter 12)
051.7     Using Subqueries to Solve Queries
051.7.1   Define subqueries (Chapter 13)
051.7.2   Describe the types of problems that the subqueries can solve (Chapter 13)
051.7.3   List the types of subqueries (Chapter 13)
051.7.4   Write single-row and multiple-row subqueries (Chapter 13)
051.8     Using the Set Operators
051.8.1   Describe set operators (Chapter 13)
051.8.2   Use a set operator to combine multiple queries into a single query (Chapter 13)
051.8.3   Control the order of rows returned (Chapter 13)
051.9     Manipulating Data
051.9.1   Describe each data manipulation language (DML) statement (Chapter 8)
051.9.2   Insert rows into a table (Chapter 8)
051.9.3   Update rows in a table (Chapter 8)
051.9.4   Delete rows from a table (Chapter 8)
051.9.5   Control transactions (Chapter 8)
051.10    Using DDL Statements to Create and Manage Tables
051.10.1  Categorize the main database objects (Chapter 7)
051.10.2  Review the table structure (Chapter 7)
051.10.3  List the data types that are available for columns (Chapter 7)
051.10.4  Create a simple table (Chapter 7)
051.10.5  Explain how constraints are created at the time of table creation (Chapter 7)
051.10.6  Describe how schema objects work (Chapter 7)
051.11    Creating Other Schema Objects
051.11.1  Create simple and complex views (Chapter 7)
051.11.2  Retrieve data from views (Chapter 7)
051.11.3  Create, maintain, and use sequences (Chapter 7)
051.11.4  Create and maintain indexes (Chapter 7)
051.11.5  Create private and public synonyms (Chapter 7)

Examination 1Z0-052, Oracle Database 11g:
Administration I, Objectives
052       Oracle Database 11g: Administration Workshop I
052.1     Exploring the Oracle Database Architecture
052.1.1   Explain the memory structures (Chapter 1)
052.1.2   Describe the process structures (Chapter 1)
052.1.3   Overview of storage structures (Chapter 1)
052.2     Preparing the Database Environment
052.2.1   Identify the tools for administering an Oracle database (Chapter 2)
052.2.2   Plan an Oracle database installation (Chapter 2)
052.2.3   Install the Oracle software by using Oracle Universal Installer (OUI) (Chapter 2)
052.3     Creating an Oracle Database
052.3.1   Create a database by using the Database Configuration Assistant (DBCA) (Chapter 2)
052.4     Managing the Oracle Instance
052.4.1   Setting database initialization parameters (Chapter 3)
052.4.2   Describe the stages of database startup and shutdown (Chapter 3)
052.4.3   Using alert log and trace files (Chapter 3)
052.4.4   Using data dictionary and dynamic performance views (Chapter 3)
052.5     Configuring the Oracle Network Environment
052.5.1   Configure and manage the Oracle network (Chapter 4)
052.5.2   Using the Oracle Shared Server architecture (Chapter 4)
052.6     Managing Database Storage Structures
052.6.1   Overview of tablespace and datafiles (Chapter 5)
052.6.2   Create and manage tablespaces (Chapter 5)
052.6.3   Space management in tablespaces (Chapter 5)
052.7     Administering User Security
052.7.1   Create and manage database user accounts (Chapter 6)
052.7.2   Grant and revoke privileges (Chapter 6)
052.7.3   Create and manage roles (Chapter 6)
052.7.4   Create and manage profiles (Chapter 6)
052.8     Managing Schema Objects
052.8.1   Create and modify tables (Chapter 7)
052.8.2   Manage constraints (Chapter 7)
052.8.3   Create indexes (Chapter 7)
052.8.4   Create and use temporary tables (Chapter 7)
052.9     Managing Data and Concurrency
052.9.1   Manage data using DML (Chapter 8)
052.9.2   Identify and administer PL/SQL objects (Chapter 8)
052.9.3   Monitor and resolve locking conflicts (Chapter 8)
052.10    Managing Undo Data
052.10.1  Overview of undo (Chapter 8)
052.10.2  Transactions and undo data (Chapter 8)
052.10.3  Managing undo (Chapter 8)
052.11    Implementing Oracle Database Security
052.11.1  Database security and the principle of least privilege (Chapter 6)
052.11.2  Work with standard database auditing (Chapter 6)
052.12    Database Maintenance
052.12.1  Use and manage optimizer statistics (Chapter 24)
052.12.2  Use and manage Automatic Workload Repository (AWR) (Chapter 24)
052.12.3  Use advisory framework (Chapter 24)
052.12.4  Manage alerts and thresholds (Chapter 24)
052.13    Performance Management
052.13.1  Use Automatic Memory Management (Chapter 25)
052.13.2  Use Memory Advisors (Chapter 25)
052.13.3  Troubleshoot invalid and unusable objects (Chapter 25)
052.14    Backup and Recovery Concepts
052.14.1  Identify the types of failure that can occur in an Oracle database (Chapter 14)
052.14.2  Describe ways to tune instance recovery (Chapter 14)
052.14.3  Identify the importance of checkpoints, redo log files, and archived
          log files (Chapter 14)
052.14.4  Overview of flash recovery area (Chapter 14)
052.14.5  Configure ARCHIVELOG mode (Chapter 14)
052.15    Performing Database Backups
052.15.1  Create consistent database backups (Chapter 15)
052.15.2  Back up your database without shutting it down (Chapter 15)
052.15.3  Create incremental backups (Chapter 15)
052.15.4  Automate database backups (Chapter 15)
052.15.5  Manage backups, view backup reports, and monitor the flash
          recovery area (Chapter 15)
052.16    Performing Database Recovery
052.16.1  Overview of Data Recovery Advisor (Chapter 16)
052.16.2  Use Data Recovery Advisor to perform recovery (control file, redo log
          file and data file) (Chapter 16)
052.17    Moving Data
052.17.1  Describe and use methods to move data (directory objects,
          SQL*Loader, external tables) (Chapter 23)
052.17.2  Explain the general architecture of Oracle Data Pump (Chapter 23)
052.17.3  Use Data Pump Export and Import to move data between Oracle
          databases (Chapter 23)
052.18    Intelligent Infrastructure Enhancements
052.18.1  Use the Enterprise Manager Support Workbench (Chapter 27)
052.18.2  Managing patches (Chapter 27)

Examination 1Z0-053, Oracle Database 11g:
Administration II, Objectives
053       Oracle Database 11g: Administration Workshop II
053.1     Database Architecture and ASM
053.1.1   Describe Automatic Storage Management (ASM) (Chapter 20)
053.1.2   Set up initialization parameter files for ASM and database instances (Chapter 20)
053.1.3   Start up and shut down ASM instances (Chapter 20)
053.1.4   Administer ASM disk groups (Chapter 20)
053.2     Configuring for Recoverability
053.2.1   Configure multiple archive log file destinations to increase availability (Chapter 14)
053.2.2   Define, apply, and use a retention policy (Chapter 17)
053.2.3   Configure the Flash Recovery Area (Chapter 14)
053.2.4   Use Flash Recovery Area (Chapter 14)
053.3     Using the RMAN Recovery Catalog
053.3.1   Identify situations that require RMAN recovery catalog (Chapter 17)
053.3.2   Create and configure a recovery catalog (Chapter 17)
053.3.3   Synchronize the recovery catalog (Chapter 17)
053.3.4   Create and use RMAN stored scripts (Chapter 17)
053.3.5   Back up the recovery catalog (Chapter 17)
053.3.6   Create and use a virtual private catalog (Chapter 17)
053.4     Configuring Backup Specifications
053.4.1   Configure backup settings (Chapter 15)
053.4.2   Allocate channels to use in backing up (Chapter 15)
053.4.3   Configure backup optimization (Chapter 15)
053.5     Using RMAN to Create Backups
053.5.1   Create image file backups (Chapter 15)
053.5.2   Create a whole database backup (Chapter 15)
053.5.3   Enable fast incremental backup (Chapter 15)
053.5.4   Create duplex backup and back up backup sets (Chapter 15)
053.5.5   Create an archival backup for long-term retention (Chapter 15)
053.5.6   Create a multisection, compressed, and encrypted backup (Chapter 15)
053.5.7   Report on and maintain backups (Chapter 15)
053.6     Performing User-Managed Backup and Recovery
053.6.1   Recover from a lost TEMP file (Chapter 18)
053.6.2   Recover from a lost redo log group (Chapter 18)
053.6.3   Recover from the loss of password file (Chapter 18)
053.6.4   Perform user-managed complete database recovery (Chapter 18)
053.6.5   Perform user-managed incomplete database recovery (Chapter 18)
053.6.6   Perform user-managed and server-managed backups (Chapter 18)
053.6.7   Identify the need of backup mode (Chapter 18)
053.6.8   Back up and recover a control file (Chapter 18)
053.7     Using RMAN to Perform Recovery
053.7.1   Perform complete recovery from a critical or noncritical data file loss
          using RMAN (Chapter 16)
053.7.2   Perform incomplete recovery using RMAN (Chapter 16)
053.7.3   Recover using incrementally updated backups (Chapter 16)
053.7.4   Switch to image copies for fast recovery (Chapter 16)
053.7.5   Restore a database onto a new host (Chapter 17)
053.7.6   Recover using a backup control file (Chapter 16)
053.7.7   Perform disaster recovery (Chapter 17)
053.8     Using RMAN to Duplicate a Database
053.8.1   Creating a duplicate database (Chapter 17)
053.8.2   Using a duplicate database (Chapter 17)
053.9     Performing Tablespace Point-in-Time Recovery
053.9.1   Identify the situations that require TSPITR (Chapter 17)
053.9.2   Perform automated TSPITR (Chapter 17)
053.10    Monitoring and Tuning RMAN
053.10.1  Monitoring RMAN sessions and jobs (Chapter 17)
053.10.2  Tuning RMAN (Chapter 17)
053.10.3  Configure RMAN for asynchronous I/O (Chapter 17)
053.11    Using Flashback Technology
053.11.1  Restore dropped tables from the recycle bin (Chapter 19)
053.11.2  Perform Flashback Query (Chapter 19)
053.11.3  Use Flashback Transaction (Chapter 19)
053.12    Additional Flashback Operations
053.12.1  Perform Flashback Table operations (Chapter 19)
053.12.2  Configure, monitor Flashback Database, and perform Flashback
          Database operations (Chapter 19)
053.12.3  Set up and use a Flashback Data Archive (Chapter 19)
053.13    Diagnosing the Database
053.13.1  Set up Automatic Diagnostic Repository (Chapter 27)
053.13.2  Using Support Workbench (Chapter 27)
053.13.3  Perform Block Media Recovery (Chapter 16)
053.14    Managing Memory
053.14.1  Implement Automatic Memory Management (Chapter 25)
053.14.2  Manually configure SGA parameters (Chapter 25)
053.14.3  Configure automatic PGA memory management (Chapter 25)
053.15    Managing Database Performance
053.15.1  Use the SQL Tuning Advisor (Chapter 25)
053.15.2  Use the SQL Access Advisor to tune a workload (Chapter 25)
053.15.3  Understand Database Replay (Chapter 25)
053.16    Space Management
053.16.1  Manage resumable space allocation (Chapter 23)
053.16.2  Describe the concepts of transportable tablespaces and databases (Chapter 23)
053.16.3  Reclaim wasted space from tables and indexes by using the segment
          shrink functionality (Chapter 23)
053.17    Managing Resources
053.17.1  Understand the database resource manager (Chapter 21)
053.17.2  Create and use database resource manager components (Chapter 21)
053.18    Automating Tasks with the Scheduler (Chapter 22)
053.18.1  Create a job, program, and schedule (Chapter 22)
053.18.2  Use a time-based or event-based schedule for executing Scheduler jobs (Chapter 22)
053.18.3  Create lightweight jobs (Chapter 22)
053.18.4  Use job chains to perform a series of related tasks (Chapter 22)
053.19    Administering the Scheduler
053.19.1  Create windows and job classes (Chapter 22)
053.19.2  Use advanced Scheduler concepts to prioritize jobs (Chapter 22)
053.20    Globalization
053.20.1  Customize language-dependent behavior for the database and
          individual sessions (Chapter 26)
053.20.2  Working with database and NLS character sets (Chapter 26)

PART I
Oracle Database 11g
Administration

■ Chapter 1  Architectural Overview of Oracle Database 11g
■ Chapter 2  Installing and Creating a Database
■ Chapter 3  Instance Management
■ Chapter 4  Oracle Networking
■ Chapter 5  Oracle Storage
■ Chapter 6  Oracle Security


CHAPTER 1
Architectural Overview of
Oracle Database 11g

Exam Objectives
In this chapter you will learn to
• 052.1.1 Explain the Memory Structures
• 052.1.2 Describe the Process Structures
• 052.1.3 Identify the Storage Structures

This guide is logically structured to enable a thorough understanding of the Oracle
server product and the fundamentals of SQL (Structured Query Language, pronounced
sequel). The authors seek to relate your learning to the real world as much as
possible, and to concretize some of the abstract concepts that follow, by introducing
a hypothetical scenario that will be systematically expanded as you progress through the book. This
approach involves nominating you as the DBA in charge of setting up an online store.
You will appreciate the various roles a DBA is expected to fulfill as well as some of the
technology areas with which a DBA is expected to be familiar.
The nonexaminable discussion of the Oracle product stack is followed by considering
several prerequisites for fully understanding the tasks involved in setting up an Oracle 11g
database system. This discussion leads into the examinable objectives in this chapter,
which are the Single-Instance Architecture and the Memory, Process, and Storage
Structures.

Oracle Product Stack
No Oracle guide is complete without contextualizing the product under study. This
section discusses the three core product families currently available from Oracle
Corporation. End users of Oracle technology typically use a subset of the available
products that have been clustered into either the server, development tools, or
applications product families.

Oracle Server Family
The three primary groupings of products within the server technology family consist of
the database, application server, and enterprise manager suites. These form the basic
components for Oracle’s vision of grid computing. The concept underlying the Grid is
virtualization. End users request a service (typically from a web-based application), but
they neither know nor need to know the source of that service. Simplistically, the
database server is accessible to store data, the application server hosts the infrastructure
for the service being requested by the end user, and the enterprise manager product
provides administrators with the management interface. The platforms or physical
servers involved in supplying the service are transparent to the end user. Virtualization
allows resources to be optimally used, by provisioning servers to the areas of greatest
requirement in a manner transparent to the end user.

Database Server
The database server comprises Oracle instances and databases with many features like
Streams, Partitioning, Warehousing, Replication, and Real Application Clusters (RAC),
but ultimately it provides a reliable, mature, robust, high-performance, enterprise-quality data store, built on an object-relational database system. Historically, one of
the projects undertaken in the late 1970s to animate the relational theory proposed
by Dr. E.F. Codd resulted in the creation of a relational database management system
(RDBMS) that later became known as the Oracle Server. The Oracle Server product
is well established in the worldwide database market, and the product is central to

Figure 1-1  The indirect connection between a user and a database

Oracle Corporation’s continued growth, providing the backbone for many of its other
products and offerings. This book is dedicated to describing the essential features of
the Oracle Server and the primary mechanisms used to interact with it. It covers the
aspects that are measured in the certification exams, but by no means explores the
plethora of features available in the product.
An Oracle database is a set of files on disk. It exists until these files are deleted.
There are no practical limits to the size and number of these files, and therefore no
practical limits to the size of a database. Access to the database is through the Oracle
instance. The instance is a set of processes and memory structures: it exists on the
CPU(s) and in the memory of the server node, and its existence is temporary. An
instance can be started and stopped. Users of the database establish sessions against
the instance, and the instance then manages all access to the database. It is absolutely
impossible in the Oracle environment for any user to have direct contact with the
database. An Oracle instance with an Oracle database makes up an Oracle server.
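The distinction can be observed from a live session. As a sketch (assuming a connection with privileges to query the dynamic performance views, which are covered in detail in later chapters), the instance and the database are described by two different views:

```sql
-- The instance: a transient set of memory structures and processes.
SELECT instance_name, status FROM v$instance;

-- The database: a persistent set of files on disk.
SELECT name, open_mode FROM v$database;
```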
The processing model implemented by the Oracle server is that of client-server
processing, often referred to as two-tier. In the client-server model, the generation of the
user interface and much of the application logic is separated from the management of
the data. For an application developed using SQL (as all relational database applications
will be), this means that the client tier generates the SQL commands, and the server
tier executes them. This is the basic client-server split, usually with a local area
network dividing the two tiers. The network communications protocol used between
the user process and the server process is Oracle’s proprietary protocol, Oracle Net.
The client tier consists of two components: the users and the user processes. The
server tier has three components: the server processes that execute the SQL, the instance,
and the database itself. Each user interacts with a user process. Each user process
interacts with a server process, usually across a local area network. The server processes
interact with the instance, and the instance with the database. Figure 1-1 shows this
relationship diagrammatically. A session is a user process in communication with a
server process. There will usually be one user process per user and one server process
per user process. The user and server processes that make up sessions are launched on
demand by users and terminated when no longer required; this is the logon and logoff cycle. The instance processes and memory structures are launched by the database
administrator and persist until the administrator deliberately terminates them; this is
the database startup and shutdown cycle.
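The session model can also be seen directly. As a sketch (again assuming query privileges on the dynamic performance views), each row of V$SESSION represents one session: a user process, identified by its program name, in communication with a server process:

```sql
-- One row per session. USERNAME is null for the instance's own
-- background processes, so exclude those to see only user sessions.
SELECT sid, username, program
FROM   v$session
WHERE  username IS NOT NULL;
```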

The user process can be any client-side software that is capable of connecting to an
Oracle server process. Throughout this book, two user processes will be used extensively:
SQL*Plus and SQL Developer. These are simple processes provided by Oracle for
establishing sessions against an Oracle server and issuing ad hoc SQL. What the user
process actually is does not matter to the Oracle server at all. When an end user fills in
a form and clicks a SUBMIT button, the user process will generate an INSERT statement
(detailed in Chapter 8) and send it to a server process for execution against the
instance and the database. As far as the server is concerned, the INSERT statement
might just as well have been typed into SQL*Plus as what is known as ad hoc SQL.
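For example, the statement generated by such a SUBMIT button might look like the following (the table, columns, and values here are invented purely for illustration; the INSERT and COMMIT statements themselves are detailed in Chapter 8):

```sql
-- A hypothetical statement sent by the user process to the server
-- process when a customer submits an order form:
INSERT INTO orders (order_id, customer_id, order_date)
VALUES (1001, 42, SYSDATE);
COMMIT;
```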
Never forget that all communication with an Oracle server follows this client-server
model. The separation of user code from server code dates back to the earliest releases
of the database and is unavoidable. Even if the user process is running on the same
machine as the server (as is the case if, for example, one is running a database on one’s
own laptop PC for development or training purposes), the client-server split is still
enforced, and network protocols are still used for the communications between the
two processes. Applications running in an application server environment (described
in the next section) also follow the client-server model for their database access.

Application Server
The emergence of the Web as the de facto standard platform for delivering
applications to end users has given rise to the need for application servers. An application
server allows client-side software, traditionally installed on end-user computers, to
be replaced by applications hosted and executing from a centralized location. The
application user interface is commonly exposed to users via their web browsers.
These applications may make use of data stored in one or more database servers.
Oracle Application Server provides a platform for developing, deploying, and
managing web applications. A web application can be defined as any application with
which users communicate via HTTP. Web applications usually run in at least three
tiers: a database tier manages access to the data, the client tier (often implemented via
a web browser) handles the local window management for communications with the
users, and an application tier in the middle executes the program logic that generates
the user interface and the SQL calls to the database.
It is possible for an application to use a one-to-one mapping of end-user session
to database session: each user will establish a browser-based session against the
application server, and the application server will then establish a session against
the database server on the user’s behalf. However, this model has been proven to be
highly inefficient when compared to the connection pooling model. With connection
pooling, the application server establishes a relatively small number of persistent
database sessions and makes them available on demand (queuing requests if
necessary) to a relatively large number of end-user sessions against the application
server. Figure 1-2 illustrates the three-tier architecture using connection pooling.
From the point of view of the database, it makes no difference whether a SQL
statement comes from a client-side process such as SQL*Plus or Microsoft Access or
from a pooled session to an application server. In the former case, the user process
runs on one machine; in the latter, the user process is divided into two tiers:
an application tier that generates the user interface and a client tier that displays it.

Chapter 1: Architectural Overview of Oracle Database 11g


TIP DBAs often find themselves pressed into service as Application Server
administrators. Be prepared for this. There is a separate OCP curriculum for
Application Server, which may well be worth studying for.

Enterprise Manager
The increasing size and complexity of IT installations can make management of each
component quite challenging. Management tools can make the task easier, and
consequently increase staff productivity.
Oracle Enterprise Manager comes in three forms:
• Database Control
• Application Server Control
• Grid Control
Oracle Enterprise Manager Database Control is a graphical tool for managing one
database, which may be a Real Application Clusters (RAC) clustered database. RAC
databases are covered in more advanced books; they are mentioned here because
they can be managed through the tool. Database Control has facilities for real-time
management and monitoring, for running scheduled jobs such as backup operations,
and for reporting alert conditions interactively and through e-mail. A RAC database
will have a Database Control process running on each node where there is a database
instance; these processes communicate with each other, so that each has a complete
picture of the state of the RAC.
Oracle Enterprise Manager Application Server Control is a graphical tool for managing
one or more application server instances. The technology for managing multiple instances
is dependent on the version. Up to and including Oracle Application Server 10g release 2,
multiple application servers were managed as a farm, with a metadata repository (typically
residing in an Oracle database) as the central management point. This is an excellent
management model and offers superb capabilities for deploying and maintaining
applications, but it is proprietary to Oracle. From Application Server 10g release 3
onward, the technology is based on J2EE clustering, which is not proprietary to Oracle.

Figure 1-2 The connection pooling model

OCA/OCP Oracle Database 11g All-in-One Exam Guide

Both Database Control and Application Server Control consist of a Java process
running on the server machine, which listens for HTTP or HTTPS connection requests.
Administrators connect to these processes from a browser. Database Control then
connects to the local database server, and Application Server Control connects to the
local application server.
Oracle Enterprise Manager Grid Control globalizes the management environment.
A management repository (residing in an Oracle database) and one or more management
servers manage the complete environment: all the databases and application servers,
wherever they may be. Grid Control can also manage the nodes, or machines, on
which the servers run, and (through plug-ins) a wide range of third-party products.
Each managed node runs an agent process, which is responsible for monitoring the
managed targets on the node: executing jobs against them and reporting status,
activity levels, and alert conditions back to the management server(s).
Grid Control provides a holistic view of the environment and, if well configured,
can significantly enhance the productivity of administration staff. It becomes possible
for one administrator to manage effectively hundreds or thousands of targets. The
inherent management concept is management by exception. Instead of logging on to
each target server to check for errors or problems, Grid Control provides a summary
graphic indicating the availability of targets in an environment. The interface supports
homing in on the targets that are generating exceptions, using drill-down web links,
thereby assisting with rapid problem identification.
EXAM TIP Anything that can be done with OEM can also be done
through SQL statements. The OCP examinations test the use of SQL for
administration work extensively. It is vital to be familiar with command-line
techniques.
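For example, a tablespace that could be created through a Database Control wizard can equally be created from the SQL*Plus command line. The tablespace name, file path, and size below are illustrative only:

```sql
-- Create a tablespace with one datafile; everything Database Control
-- does ultimately resolves to statements such as this
CREATE TABLESPACE example_ts
  DATAFILE '/u01/app/oracle/oradata/orcl/example01.dbf' SIZE 100M;
```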

Oracle Development Tools
Oracle provides several tools for developing applications and utility programs, and
supports a variety of languages. The programming languages that are parsed and
executed internally within the Oracle Server are Structured Query Language (SQL),
Procedural SQL (PL/SQL), and Java. Oracle development technologies written
externally to the database include products found in Oracle Developer Suite (Forms,
Reports, and Discoverer), Oracle Application Server, and other third-generation
languages (3GLs). There is also a wide variety of third-party tools and environments
that can be used for developing applications that will connect to an Oracle database;
in particular .NET from Microsoft, for which Oracle provides a comprehensive
developers’ toolkit.

Internal Languages
SQL is used for data-related activities but cannot be used on its own for developing
complete applications. It has no real facilities for developing user interfaces, and it
also lacks the procedural structures needed for advanced data manipulation. The
other two languages available within the database fill these gaps. They are PL/SQL
and Java. PL/SQL is a 3GL proprietary to Oracle. It supports the regular procedural
constructs (such as conditional branching based on if-then-else and iterative looping)
and facilities for user interface design. SQL calls may be embedded in the PL/SQL
code. Thus, a PL/SQL application might use SQL to retrieve one or more rows from
the database, perform various actions based on their content, and then issue more
SQL to write rows back to the database. Java offers a similar capability to embed SQL
calls within the Java code. This is industry-standard technology: any Java programmer
should be able to write code that will work with an Oracle database (or indeed with
any other Java-compliant database).
All Oracle DBAs must be fully acquainted with SQL and PL/SQL. This is assumed,
and required, knowledge.
Knowledge of Java is not assumed and indeed is rarely required. A main reason
for this is that bespoke Java applications are now rarely run within the database. Early
releases of Oracle's application server could not run some of the industry-standard
Java application components, such as servlets and Enterprise JavaBeans (EJBs). To get
around this serious divergence from standards, Oracle implemented a Java engine
within the database that did conform to the standards. However, from Oracle Application
Server release 9i, it has been possible to run servlets and EJBs where they should be
run: on the application server middle tier. Because of this, it has become less common
to run Java within the database.
The DBA is likely to spend a large amount of time tuning and debugging SQL and
PL/SQL. Oracle's model for the division of responsibility here is clear: the database
administrator identifies code with problems and passes it to the developers for fixing.
But in many cases, developers lack the skills (or perhaps the inclination) to do this,
and the database administrator has to fill this role.

TIP All DBAs must be fully acquainted with SQL and with PL/SQL. Knowledge
of Java and other languages is not usually required but is often helpful.

External Languages
Other languages are available for developing client-server applications that run
externally to the database. The most commonly used are C and Java, but it is possible
to use most of the mainstream 3GLs. For most languages, Oracle provides the OCI
(Oracle Call Interface) libraries that let code written in these languages connect to
an Oracle database and invoke SQL commands.
Applications written in C or other procedural languages make use of the OCI
libraries to establish sessions against the database server. These libraries are proprietary
to Oracle. This means that any code using them will be specifically written for Oracle,
and would have to be substantially rewritten before it could run against any other
database. Java applications can avoid this problem. Oracle provides database connectivity
for both thick and thin Java clients.
A thick Java client is Oracle aware. It uses the supplied OCI class library to connect
to the database. This means that the application can make use of all the database's
capabilities, including features that are unique to the Oracle environment. Java
thick-client applications can exploit the database to the full. But they can never work with
a third-party product, and they require the OCI client software to be installed.
A thin Java client is not aware of the database against which it is running: it works
with a virtual database defined according to the Java standard, and it lets the container
within which it is running map this virtual database onto the Oracle database. This
gives the application portability across database versions and providers: a thin Java
client application could be deployed in a non-Oracle environment without any
changes. But any Oracle features that are not part of the Java Database Connectivity
(JDBC) standard will not be available.
The choice between thick and thin Java clients should be made by a team of
informed individuals and influenced by a number of factors, including performance;
the need for Oracle-specific features; corporate standards; application portability; and
programmer productivity. Oracle's JDeveloper tool can be used to develop both
thick- and thin-client Java applications.
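The pattern described earlier for PL/SQL (retrieve rows with SQL, branch on their content procedurally, then issue more SQL) can be sketched as an anonymous block. The table and column names here are invented purely for illustration:

```sql
-- Anonymous PL/SQL block: embedded SQL plus procedural control
DECLARE
  v_salary employees.salary%TYPE;
BEGIN
  -- Retrieve a value with embedded SQL
  SELECT salary INTO v_salary
  FROM   employees
  WHERE  employee_id = 100;

  -- Branch procedurally on the content
  IF v_salary < 5000 THEN
    -- Issue more SQL to write a change back to the database
    UPDATE employees
    SET    salary = salary * 1.1
    WHERE  employee_id = 100;
  END IF;

  COMMIT;
END;
/
```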

Oracle Developer Suite
Some organizations do not want to use a 3GL to develop database applications.
Oracle Corporation provides rapid application development tools as part of the
Oracle Developer Suite. Like the languages, these application development tools end
up doing the same thing: constructing SQL statements that are sent to the database
server for execution.
Oracle Forms Developer builds applications that run on an Oracle Application
Server middle tier and display in a browser. Forms applications are generally quick to
develop and are optimized for interfacing with database objects. Specialized triggers
and components support feature-rich web-based database applications.
Oracle Reports is a tool for generating and formatting reports, either on demand
or according to a schedule. Completed reports can be cached for distribution. Oracle
Reports, like Forms, is a full development environment and requires a programmer to
generate specialized reports. The huge advantage provided by Oracle Reports is that
the output is infinitely customizable and end users can get exactly what they requested.
Oracle Discoverer is a tool for ad hoc report generation that empowers end users
to develop reports themselves. Once Oracle Discoverer, which runs on an Oracle
Application Server middle tier, has been appropriately configured, programmer input
is not needed, since the end users do their own development.

Oracle Applications
The number of Oracle Applications products has increased substantially in recent
years due to a large number of corporate acquisitions, but two remain predominant.
The Oracle E-Business Suite is a comprehensive suite of applications based around an
accounting engine, and Oracle Collaboration Suite is a set of office automation tools.
Oracle E-Business Suite, based around a core set of financial applications, includes
facilities for accounting; human resources; manufacturing; customer relationship
management; customer services; and much more. All the components share a
common data model. The current release has a user interface written with Oracle
Developer Forms and Java; it runs on Oracle Application Server and stores data in
an Oracle database.

Oracle Collaboration Suite includes (among other things) servers for e-mail, diary
management, voicemail and fax, web conferencing, and (perhaps most impressive)
file serving. There is complete integration between the various components. The
applications run on Oracle Application Servers, and can be accessed through a web
interface from browsers or made available on mobile wireless devices.

Exercise 1-1: Investigate DBMSs in Your Environment
This is a paper-based exercise, with no specific solution.
Identify the applications, application servers, and databases used in your
environment. Then, concentrating on the databases, try to get a feeling for how big
and busy they are. Consider the number of users; the volatility of the data; the data
volumes. Finally, consider how critical they are to the organization: how much
downtime or data loss can be tolerated for each application and database? Is it
possible to put a financial figure on this?
The result of this exercise should indicate the criticality of the DBA's role.

Prerequisite Concepts
The Oracle Database Server product may be installed on a wide variety of hardware
platforms and operating systems. Most companies prefer one of the popular Unix
operating systems or Microsoft Windows. Increasingly, information technology
graduates who opt to pursue a career in the world of Oracle Server technologies lack
exposure to Unix; if you are in such a position, you are strongly advised to consider
courses on Unix fundamentals, shell scripting, and system administration.
In smaller organizations, a DBA may very well concurrently fulfill the roles of system
administrator and database administrator (and sometimes even software developer).
As organizations grow in size, IT departments become very segmented and specialized,
and it is common to have separate Operating Systems, Security, Development, and
DBA departments. In fact, larger organizations often have DBA teams working only
with specific operating systems.
This section discusses several basic concepts you need to know to get up and
running with an installation of the Oracle database. The actual installation is covered
in Chapter 2.

Oracle Concepts
The Oracle Database Server comprises two primary components called the Instance
and the Database. It is easy to get confused, since the term “Database” is often used
synonymously with the term “Server.” The instance component refers to a set of
operating system processes and memory structures initialized upon startup, while the
database component refers to the physical files used for data storage and database
operation. You must therefore expect your Oracle Server installation to consume
memory, process, and disk resources on your server. Oracle supplies many tools you
may use when interacting with the database, the most common of which are: Oracle
Universal Installer (OUI), which is used to install and remove Oracle software;
Database Configuration Assistant (DBCA), which may be used to create, modify, or
delete databases; and SQL*Plus and SQL Developer, which provide interfaces for
writing and executing SQL. These tools are described in Chapter 2.

SQL Concepts
SQL is a powerful language integral to working with Oracle databases. We introduce
the concepts of tables, rows, columns, and basic SQL queries here to support your
learning as you perform the basic DBA tasks. A complete and thorough discussion
of these concepts is detailed in Part 2 of this guide.

Tables, Rows, and Columns
Data in an Oracle database is primarily stored in two-dimensional relational tables.
Each table consists of rows containing data that is segmented across each column. A
table may contain many rows but has a fixed number of columns. Data about the
Oracle Server itself is stored in a special set of tables known as data dictionary tables.
Figure 1-3 shows the DICTIONARY table comprising two columns called TABLE_NAME
and COMMENTS. Thirteen rows of data have been retrieved from this table.
Relational tables conform to certain rules that constrain and define the data. At
the column level, each column must be of a certain data type, such as numeric,
date-time, or character. The character data type is the most generic and allows the storage
of any character data. At the row level, each row usually has some uniquely identifying
characteristic: this could be the value of one column, such as the TABLE_NAME column
shown in the example just given, whose value cannot be repeated in different rows.
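A query against this view, of the kind shown in Figure 1-3, can be typed into SQL*Plus or SQL Developer as follows:

```sql
-- Retrieve every column of the DICTIONARY view, restricting the rows
-- to those whose TABLE_NAME begins with the characters V$SYS
SELECT *
FROM   dictionary
WHERE  table_name LIKE 'V$SYS%';
```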

Figure 1-3 Querying the DICTIONARY table

Basic Queries
Figure 1-3 introduces a classic SQL query executed using the SQL Developer tool
supplied by Oracle. There are many tools that provide a SQL interface to the database,
the most common of which is SQL*Plus. Although the details of SQL queries are
discussed in Part 2, they are generally intuitive, and for your immediate needs it is
sufficient to interpret the query in Figure 1-3 as follows. The keywords in the statement
are SELECT, FROM, WHERE, and LIKE. The asterisk in the first line instructs Oracle
to retrieve all columns from the table called DICTIONARY. Therefore both columns,
called TABLE_NAME and COMMENTS respectively, are retrieved. The second line
contains a conditional WHERE clause that restricts the rows retrieved to only those
which have a data value beginning with the characters “V$SYS” in the TABLE_NAME
column.

Operating System Concepts
The database installation will consume physical disk storage, and you are encouraged
to start considering the hardware you have earmarked for your installation. The two
primary disk space consumers are Oracle program files and Oracle database datafiles.
The program files are often referred to as the Oracle binaries, since they collectively
represent the compiled C programs essential for creating and maintaining databases.
Once the Oracle 11g binaries are installed, they consume about 3GB of disk space, but
this usage remains relatively stable. The datafiles, however, host the actual rows of
data and shrink and grow as the database is used. The default seed database, which is
relatively empty, consumes about 2GB of disk space. Another important hardware
consideration is memory (RAM). You will require a minimum of 512MB of RAM,
but at least 1GB is needed for a usable system.
Most Unix platforms require preinstallation tasks, which involve ensuring that
operating system users, groups, patches, kernel parameters, and swap space are
adequately specified. Consult with an operating system specialist if you are unfamiliar
with these tasks. The superuser (or root) privilege is required to modify these operating
system parameters. Commands for checking these resources are described in Chapter 2.

Single-Instance Architecture
In this book, you will deal largely with the most common database environment: one
instance on one computer, opening a database stored on local disks. The more complex
distributed architectures, involving multiple instances and multiple databases, are
beyond the scope of the OCP examination (though not the OCM qualification), but
you may realistically expect to see several high-level summary questions on distributed
architectures.

Single-Instance Database Architecture
The instance consists of memory structures and processes. Its existence is transient, in
your RAM and on your CPU(s). When you shut down the running instance, all trace
of its existence goes away at the same time. The database consists of physical files, on
disk. Whether running or stopped, these remain. Thus the lifetime of the instance is
only as long as it exists in memory: it can be started and stopped. By contrast, the
database, once created, persists indefinitely—until you deliberately delete the files
that are associated with the database.
The processes that make up the instance are known as background processes
because they are present and running at all times while the instance is active. These
processes are for the most part completely self-administering, though in some cases
it is possible for the DBA to influence the number of them and their operation.
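The background processes currently running can be listed by querying a dynamic performance view. A sketch follows; it assumes a suitably privileged session, and the PADDR predicate is the conventional way to filter out process slots that are not in use:

```sql
-- List the background processes active in this instance
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00';
```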
The memory structures, which are implemented in shared memory segments
provided by the operating system, are known as the system global area, or SGA. This is
allocated at instance startup and released on shutdown. Within certain limits, the SGA
in the 11g instance and the components within it can be resized while the instance is
running, either automatically or in response to the DBA’s instructions.
User sessions consist of a user process running locally to the user machine
connecting to a server process running locally to the instance on the server machine.
The technique for launching the server processes, which are started on demand for
each session, is covered in Chapter 4. The connection between user process and server
process is usually across a local area network and uses Oracle’s proprietary Oracle Net
protocol layered on top of an industry-standard protocol (usually TCP). The user
process–to–server process split implements the client-server architecture: user processes
generate SQL; server processes execute SQL. The server processes are sometimes
referred to as foreground processes, in contrast with the background processes that make
up the instance. Associated with each server process is an area of nonsharable memory,
called the program global area, or PGA. This is private to the session, unlike the system
global area, which is available to all the foreground and background processes. Note
that background processes also have a PGA. The size of any one session’s PGA will vary
according to the memory needs of the session at any one time; the DBA can define an
upper limit for the total of all the PGAs, and Oracle manages the allocation of this to
sessions dynamically.
TIP You will sometimes hear the term shadow process. Be cautious of using
this. Some people use it to refer to foreground processes; others use it for
background processes.
Memory management in 11g can be completely automated: the DBA need do
nothing more than specify an overall memory allocation for both the SGA and the
PGA and let Oracle manage this memory as it thinks best. Alternatively, the DBA can
determine memory allocations. There is an in-between technique, where the DBA
defines certain limits on what the automatic management can do.
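A minimal sketch of the fully automatic approach follows; the value shown is illustrative only, and the statement assumes it does not exceed the MEMORY_MAX_TARGET limit set for the instance:

```sql
-- One overall memory budget; Oracle distributes it between the SGA
-- and the aggregate PGA as it thinks best
ALTER SYSTEM SET memory_target = 800M SCOPE = BOTH;
```

Setting MEMORY_TARGET to zero and sizing SGA_TARGET and PGA_AGGREGATE_TARGET separately corresponds to the manual and in-between techniques just described.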
EXAM TIP SGA memory is shared across all background and foreground
processes; PGA memory can be accessed only by the foreground process of
the session to which it has been allocated. Both SGA and PGA memory can
be automatically managed.

The physical structures that make up an Oracle database are the datafiles, the redo
log, and the controlfile. Within the visible physical structure of the datafiles lie the logical
structures seen by end users (developers, business analysts, data warehouse architects,
and so on). The Oracle architecture guarantees abstraction of the logical from the
physical: there is no need for a programmer to know the physical location of any data,
since they only interact with logical structures, such as tables. Similarly, it is impossible
for a system administrator to know what data resides in any physical structure: the
operating system files, not their contents, are all that is visible. It is only you, the database
administrator, who is permitted (and required) to see both sides of the story.
Data is stored in datafiles. There is no practical limit to the number or size of datafiles,
and the abstraction of logical storage from physical storage means that datafiles can be
moved or resized and more datafiles can be added without end users being aware of this.
The relationship between physical and logical structures is maintained and documented
in the data dictionary, which contains metadata describing the whole database. By
querying certain views in the data dictionary, the DBA can determine precisely where
every part of every table is located.
The data dictionary is a set of tables stored within the database. There is a recursive
problem here: the instance needs to be aware of the physical and logical structure of the
database, but the information describing this is itself within the database. The solution
to this problem lies in the staged startup process, which is detailed in Chapter 3.
A requirement of the RDBMS standard is that the database must not lose data. This
means that it must be backed up, and furthermore that any changes made to data
between backups must be captured in such a manner that they can be applied to a
restored backup. This is the forward recovery process. Oracle implements the capture
of changes through the redo log. The redo log is a sequential record of all change vectors
applied to data. A change vector is the alteration made by a DML (Data Manipulation
Language: INSERT, UPDATE, or DELETE) statement. Whenever a user session makes any
changes, the data itself in the data block is changed, and the change vector is written out
to the redo log, in a form that makes it repeatable. Then in the event of damage to a
datafile, a backup of the file can be restored and Oracle will extract the relevant change
vectors from the redo log and apply them to the data blocks within the file. This ensures
that work will never be lost—unless the damage to the database is so extensive as to lose
not only one or more datafiles, but also either their backups or the redo log.
The controlfile stores the details of the physical structures of the database and is the
starting point for the link to the logical structures. When an instance opens a database,
it begins by reading the controlfile. Within the controlfile is information the instance
can then use to connect to the rest of the database, and the data dictionary within it.
The architecture of a single-instance database can be summarized as consisting of
four interacting components:
• A user interacts with a user process.
• A user process interacts with a server process.
• A server process interacts with an instance.
• An instance interacts with a database.
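As an aside on the data dictionary views mentioned in this section: the DBA can map a table's logical storage to the operating system files that hold it with a query along these lines. The HR.EMPLOYEES sample-schema table is used purely for illustration:

```sql
-- Which datafiles hold the extents of one table?
SELECT e.segment_name,
       f.file_name,
       e.block_id,
       e.blocks
FROM   dba_extents e
       JOIN dba_data_files f ON f.file_id = e.file_id
WHERE  e.owner        = 'HR'
AND    e.segment_name = 'EMPLOYEES';
```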


Figure 1-4 The indirect connection between a user and a database

Figure 1-4 represents this graphically.
It is absolutely impossible for any client-side process to have any contact with the
database: all access must be mediated by server-side processes. The client-server split is
between the user process, which generates SQL, and the server process, which executes it.

Distributed Systems Architectures
In the single-instance environment, one instance opens one database. In a distributed
environment, there are various possibilities for grouping instances and databases.
Principally:
• Real Application Clusters (RAC), where multiple instances open one database
• Streams, where multiple Oracle servers propagate transactions between
each other
• Dataguard, where a primary database updates a standby database
Combinations of these options can result in a system that can achieve the goals of
100 percent uptime and no data loss, with limitless scalability and performance.

Real Application Clusters (RAC)
RAC provides amazing capabilities for performance, fault tolerance, and scalability
(and possibly cost savings) and is integral to Oracle's concept of the Grid. With
previous releases, RAC (or its precursor, Oracle Parallel Server) was an expensive
add-on option, but from database release 10g onward, RAC is bundled with the Standard
Edition license. This is an indication of how much Oracle Corporation wants to push
users toward the RAC environment. Standard Edition RAC is limited to a certain
number of computers and a certain number of CPUs and cores per computer, but
even within these limitations it gives access to a phenomenally powerful environment.
RAC is an extra-cost option for the Enterprise Edition, where the scalability becomes
effectively limitless: bounded only by the clustering capacity of the underlying
operating system and hardware.
A RAC database can be configured for 100 percent uptime. One instance can be
brought down (either for planned maintenance, or perhaps because the computer
on which it is running crashes) and the database will remain accessible through a
surviving instance on another machine. Sessions against the failed instance can be
reestablished against a surviving instance without the end user being aware of any
disruption.
Transparent scalability comes from the ability to add instances, running on
different machines, to a RAC dynamically. They will automatically take on some
of the workload without users needing to be aware of the fact that now more
instances are available.
Some applications will have a performance benefit from running on a RAC.
Parallel processing can improve the performance of some work, such as long-running
queries and large batch updates. In a single-instance database, assigning multiple
parallel execution servers to such jobs will help—but they will all be running in one
instance on one machine. In a RAC database, the parallel execution servers can run on
different instances, which may get around some of the bottlenecks inherent in
single-instance architecture. Other work, such as processing the large number of small
transactions typically found in an OLTP system, will not gain a performance benefit.

TIP Don't convert to RAC just because you can. You need to be certain of
what you want to achieve before embarking on what is a major exercise that
may not be necessary.

Streams
There are various circumstances that make it desirable to transfer data from one
database to another. Fault tolerance is one: if an organization has two (or more)
geographically separated databases, both containing identical data and both available
at all times for users to work on, then no matter what goes wrong at one site, work
should be able to continue uninterrupted at the other. Keeping the databases
synchronized will have to be completely automatic, and all changes made at either site
will need to be propagated in real or near-real time to the other site. Another reason is
tuning: the two databases can be configured for different types of work, such as a
transaction processing database and a data warehouse. Another reason could be
maintenance of a data warehouse. Data sets maintained by an OLTP database will need
to be propagated to the warehouse database, and subsequently these copies will need
to be periodically refreshed with changes. The data might then be pushed further out,
perhaps to a series of data marts, each with a subset of the warehouse. Streams is a
facility for capturing changes made to tables and applying them to remote copies of
the tables.
Streams can be bidirectional: identical tables at two or more sites, with all user
transactions executed at each site broadcast to and applied at the other sites. This is
the streaming model needed for fault tolerance. An alternative model is used in the
data warehouse example, where data sets (and ensuing changes made to them) are
extracted from tables in one database and pushed out to tables in another database.
In this model, the flow of information is more likely to be unidirectional, and the
table structures may well not be identical at the downstream sites.

Data Guard
Data Guard systems have one primary database against which transactions are
executed, and one or more standby databases used for fault tolerance or for query
processing. The standbys are instantiated from a backup of the primary, and updated
(possibly in real time) with all changes applied to the primary.
Standbys come in two forms. A physical standby is byte-for-byte identical with the
primary, for the purpose of zero data loss. Even if the primary is totally destroyed, all
data will be available on the standby. The change vectors applied to the primary are
propagated to the physical standby in the form of redo records, and applied as though
a restored backup database were being recovered. A logical standby contains the same
data as the primary, but with possibly different data structures, typically to facilitate
query processing. The primary database may have data structures (typically indexes)
optimized for transaction processing, while the logical standby may have structures
optimized for data warehouse type work. Change vectors that keep the logical standby
in synch with the primary are propagated in the form of SQL statements, using the
Streams mechanism.
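The role and protection mode of a database can be checked from the V$DATABASE view (a sketch; on a database with no standby configured, DATABASE_ROLE will simply report PRIMARY):

```sql
-- DATABASE_ROLE distinguishes a primary from a physical or logical standby.
select database_role, protection_mode, protection_level
from   v$database;
```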
Exercise 1-2: Determine if the Database Is Single Instance or Part
of a Distributed System In this exercise, you will run queries to determine
whether the database is a self-contained system, or if it is part of a larger distributed
environment. Either SQL Developer or SQL*Plus may be used. If you do not have
access to an Oracle database yet to practice this exercise, you can skip to Chapter 2,
complete an installation, and return to this exercise.
1. Connect to the database as user SYSTEM.
2. Determine if the instance is part of a RAC database:
select parallel from v$instance;

This will return NO if it is a single-instance database.
3. Determine if the database is protected against data loss by a standby database:
select protection_level from v$database;

This will return UNPROTECTED if the database is indeed unprotected.
4. Determine if Streams has been configured in the database:
select * from dba_streams_administrator;

This will return no rows, if Streams has never been configured.

Chapter 1: Architectural Overview of Oracle Database 11g

Instance Memory Structures
An Oracle instance consists of a block of shared memory known as the system global
area, or SGA, and a number of background processes. The SGA contains three
mandatory data structures:
• The database buffer cache
• The log buffer
• The shared pool
It may, optionally, also contain
• A large pool
• A Java pool
• A Streams pool
These memory structures are depicted in Figure 1-5, and the three primary
structures are detailed in the sections that follow.
User sessions also need memory on the server side. This is nonsharable and is
known as the program global area, or PGA. Each session will have its own, private PGA.
Managing the size of these structures can be largely automatic, or the DBA can
control the sizing himself. It is generally good practice to use the automatic
management.
EXAM TIP Which SGA structures are required, and which are optional? The
database buffer cache, log buffer, and shared pool are required; the large pool,
Java pool, and Streams pool are optional.

Figure 1-5 The key memory structures present in the SGA

The Database Buffer Cache
The database buffer cache is Oracle’s work area for executing SQL. When updating data,
users’ sessions don’t directly update the data on disk. The data blocks containing the
data of interest are first copied into the database buffer cache (if they are not already
there). Changes (such as inserting new rows and deleting or modifying existing rows)
are applied to these copies of the data blocks in the database buffer cache. The blocks
will remain in the cache for some time afterward, until the buffer they are occupying
is needed for caching another block.
When querying data, the data also goes via the cache. The session works out which
blocks contain the rows of interest and copies them into the database buffer cache (if
they are not already there); the relevant rows are then transferred into the session’s
PGA for further processing. And again, the blocks remain in the database buffer cache
for some time afterward.
Take note of the term block. Datafiles are formatted into fixed-sized blocks. Table
rows, and other data objects such as index keys, are stored in these blocks. The database
buffer cache is formatted into memory buffers each sized to hold one block. Unlike
blocks, rows are of variable length; the length of a row will depend on the number of
columns defined for the table, whether the columns actually have anything in them,
and if so, what. Depending on the size of the blocks (which is chosen by the DBA)
and the size of the rows (which is dependent on the table design and usage), there
may be several rows per block or possibly a row may stretch over several blocks. The
structure of a data block will be described in the section “Database Storage Structures”
later in this chapter.
Ideally, all the blocks containing data that is frequently accessed will be in the
database buffer cache, therefore minimizing the need for disk I/O. As a typical use of
the database buffer cache, consider a sales rep in the online store retrieving a customer
record and updating it, with these statements:
select customer_id, customer_name from customers;
update customers set customer_name='Sid' where customer_id=100;
commit;

To execute the SELECT statement submitted by the user process, the session’s
server process will scan the buffer cache for the data block that contains the relevant
row. If it finds it, a buffer cache hit has occurred. In this example, assume that a buffer
cache miss occurred and the server process reads the data block containing the relevant
row from a datafile into a buffer, before sending the results to the user process, which
formats the data for display to the sales rep.
The user process then submits the UPDATE statement and the COMMIT statement
to the server process for execution. Provided that the block with the row is still available
in the cache when the UPDATE statement is executed, the row will be updated in the
buffer cache. In this example, the buffer cache hit ratio will be 50 percent: two accesses
of a block in the cache, but only one read of the block from disk. A well-tuned database
buffer cache can result in a cache hit ratio well over 90 percent.
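One traditional (if crude) way of estimating the overall hit ratio since instance startup is from the cumulative counters in V$SYSSTAT; a sketch:

```sql
-- Hit ratio = 1 - (physical reads / logical reads).
-- Logical reads are the sum of 'db block gets' and 'consistent gets'.
select 1 - phy.value / (db.value + con.value) as cache_hit_ratio
from   v$sysstat phy, v$sysstat db, v$sysstat con
where  phy.name = 'physical reads'
and    db.name  = 'db block gets'
and    con.name = 'consistent gets';
```

Treat such ratios with caution: a high figure does not guarantee good performance; it is only a rough indicator.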
A buffer storing a block whose image in the cache is not the same as the image on
disk is often referred to as a dirty buffer. A buffer will be clean when a block is first copied into it: at that point, the block image in the buffer is the same as the block image on disk. The buffer will become dirty when the block in it is updated. Eventually, dirty buffers must be written back to the datafiles, at which point the buffer will be clean again. Even after being written to disk, the block remains in memory; it is possible that the buffer will not be overwritten with another block for some time.
Note that there is no correlation between the frequency of updates to a buffer (or the number of COMMITs) and when it gets written back to the datafiles. The write to the datafiles is done by the database writer background process.
The size of the database buffer cache is critical for performance. The cache should be sized adequately for caching all the frequently accessed blocks (whether clean or dirty), but not so large that it caches blocks that are rarely needed. An undersized cache will result in excessive disk activity, as frequently accessed blocks are continually read from disk, used, overwritten by other blocks, and then read from disk again. An oversized cache is not so bad (so long as it is not so large that the operating system has to swap pages of virtual memory in and out of real memory) but can cause problems; for example, startup of an instance is slower if it involves formatting a massive database buffer cache.

TIP Determining the optimal size of the database buffer cache is application
specific and a matter of performance tuning. It is impossible to give anything
but the vaguest guidelines without detailed observations, but it is probably
true to say that the majority of databases will operate well with a cache sized
in hundreds of megabytes up to a few gigabytes. Very few applications will
perform well with a cache smaller than this, and not many will need a cache
of hundreds of gigabytes.

The database buffer cache is allocated at instance startup time. Prior to release 9i of the database it was not possible to resize the database buffer cache subsequently without restarting the database instance, but from 9i onward it can be resized up or down at any time. This resizing can be either manual or (from release 10g onward) automatic according to workload, if the automatic mechanism has been enabled.

TIP The size of the database buffer cache can be adjusted dynamically and can
be automatically managed.

The Log Buffer
The log buffer is a small, short-term staging area for change vectors before they are written to the redo log on disk. A change vector is a modification applied to something; executing DML statements generates change vectors applied to data. The redo log is the database's guarantee that data will never be lost. Whenever a data block is changed, the change vectors applied to the block are written out to the redo log, from which they can be extracted and applied to datafile backups if it is ever necessary to restore a datafile.
Redo is not written directly to the redo log files by session server processes. If it
were, the sessions would have to wait for disk I/O operations to complete whenever
they executed a DML statement. Instead, sessions write redo to the log buffer, in
memory. This is much faster than writing to disk. The log buffer (which may contain
change vectors from many sessions, interleaved with each other) is then written out to
the redo log files. One write of the log buffer to disk may therefore be a batch of many
change vectors from many transactions. Even so, the change vectors in the log buffer
are written to disk in very nearly real time—and when a session issues a COMMIT
statement, the log buffer write really does happen in real time. The writes are done
by the log writer background process, the LGWR.
The log buffer is small (in comparison with other memory structures) because it is
a very short-term storage area. Change vectors are inserted into it and are streamed to
disk in near real time. There is no need for it to be more than a few megabytes at the
most, and indeed making it much bigger than the default value can be seriously bad
for performance. The default is determined by the Oracle server and is based on the
number of CPUs on the server node.
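The default (or any explicitly configured value) can be seen in the LOG_BUFFER initialization parameter; for example:

```sql
-- The value is reported in bytes.
select value from v$parameter where name = 'log_buffer';
```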
It is not possible to create a log buffer smaller than the default. If you attempt to,
it will be set to the default size anyway. It is possible to create a log buffer larger than
the default, but this is often not a good idea. The problem is that when a COMMIT
statement is issued, part of the commit processing involves writing the contents of the
log buffer to the redo log files on disk. This write occurs in real time, and while it is
in progress, the session that issued the COMMIT will hang. Commit processing is a
critical part of the Oracle architecture. The guarantee that a committed transaction
will never be lost is based on this: the commit-complete message is not returned to
the session until the data blocks in the cache have been changed (which means that
the transaction has been completed) and the change vectors have been written to the
redo log on disk (and therefore the transaction could be recovered if necessary). A
large log buffer means that potentially there is more to write when a COMMIT is
issued, and therefore it may take a longer time before the commit-complete message
can be sent, and the session can resume work.
TIP Raising the log buffer size above the default may be necessary for some
applications, but as a rule start tuning with the log buffer at its default size.
The log buffer is allocated at instance startup, and it cannot be resized without
restarting the instance. It is a circular buffer. As server processes write change vectors
to it, the current write address moves around. The log writer process writes the vectors
out in batches, and as it does so, the space they occupied becomes available and can
be overwritten by more change vectors. It is possible that at times of peak activity,
change vectors will be generated faster than the log writer process can write them
out. If this happens, all DML activity will cease (for a few milliseconds) while the
log writer clears the buffer.
The process of flushing the log buffer to disk is one of the ultimate bottlenecks in
the Oracle architecture. You cannot do DML faster than the LGWR can flush the change
vectors to the online redo log files.

TIP If redo generation is the limiting factor in a database's performance, the
only option is to go to RAC. In a RAC database, each instance has its own log
buffer, and its own LGWR. This is the only way to parallelize writing redo data
to disk.

EXAM TIP The size of the log buffer is static, fixed at instance startup. It
cannot be automatically managed.

The Shared Pool
The shared pool is the most complex of the SGA structures. It is divided into dozens of substructures, all of which are managed internally by the Oracle server. This discussion of the architecture will briefly cover only four of the shared pool components:
• The library cache
• The data dictionary cache
• The PL/SQL area
• The SQL query and PL/SQL function result cache
Several other shared pool structures are described in later chapters. All the structures within the shared pool are automatically managed. Their size will vary according to the pattern of activity against the instance, within the overall size of the shared pool. The shared pool itself can be resized dynamically, either in response to the DBA's instructions or through being managed automatically.

EXAM TIP The shared pool size is dynamic and can be automatically
managed.

The Library Cache
The library cache is a memory area for storing recently executed code, in its parsed form. Parsing is the conversion of code written by programmers into something executable, and it is a process which Oracle does on demand. By caching parsed code in the shared pool, it can be reused, greatly improving performance. Parsing SQL code takes time. Consider a simple SQL statement:
select * from products where product_id=100;

Before this statement can be executed, the Oracle server has to work out what it means, and how to execute it. To begin with, what is products? Is it a table, a synonym, or a view? Does it even exist? Then the "*"—what are the columns that make up the products table (if it is a table)? Does the user have permission to see the table? Answers to these questions and many others have to be found by querying the data dictionary.
TIP The algorithm used to find SQL in the library cache is based on the ASCII
values of the characters that make up the statement. The slightest difference
(even something as trivial as SELECT instead of select) means that the
statement will not match but will be parsed again.
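One practical consequence of this matching is that applications should use bind variables rather than hard-coded literals, so that repeated executions present identical statement text. A SQL*Plus sketch, reusing the hypothetical products table from the example above:

```sql
-- Both executions below share a single parsed statement in the library
-- cache, because the text is identical; only the bind value differs.
variable prodid number
execute :prodid := 100
select * from products where product_id = :prodid;
execute :prodid := 101
select * from products where product_id = :prodid;
```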
Having worked out what the statement actually means, the server has to decide
how best to execute it. Is there an index on the product_id column? If so, would it
be quicker to use the index to locate the row, or to scan the whole table? More queries
against the data dictionary? It is quite possible for a simple one-line query against a
user table to generate dozens of queries against the data dictionary, and for the parsing
of a statement to take many times longer than eventually executing it. The purpose of
the library cache of the shared pool is to store statements in their parsed form, ready
for execution. The first time a statement is issued, it has to be parsed before
execution—the second time, it can be executed immediately. In a well-designed
application, it is possible that statements may be parsed once and executed millions
of times. This saves a huge amount of time.
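Library cache efficiency can be observed through the V$LIBRARYCACHE view; a GETHITRATIO close to 1 for the SQL AREA namespace suggests that statements are being found in their parsed form and reused:

```sql
select namespace, gets, gethitratio, pins, pinhitratio, reloads
from   v$librarycache;
```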

The Data Dictionary Cache
The data dictionary cache is sometimes referred to as the row cache. Whichever term
you prefer, it stores recently used object definitions: descriptions of tables, indexes,
users, and other metadata definitions. Keeping such definitions in memory in the
SGA, where they are immediately accessible to all sessions, rather than each session
having to read them repeatedly from the data dictionary on disk, enhances parsing
performance.
The data dictionary cache stores object definitions so that when statements do
have to be parsed, they can be parsed quickly—without having to query the data
dictionary. Consider what happens if these statements are issued consecutively:
select sum(order_amount) from orders;
select * from orders where order_no=100;

Both statements must be parsed because they are different statements—but parsing
the first SELECT statement will have loaded the definition of the orders table and its
columns into the data dictionary cache, so parsing the second statement will be faster
than it would otherwise have been, because no data dictionary access will be needed.
TIP Shared pool tuning is usually oriented toward making sure that the library
cache is the right size. This is because the algorithms Oracle uses to allocate
memory in the SGA are designed to favor the dictionary cache, so if the
library cache is correct, then the dictionary cache will already be correct.
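Dictionary cache activity can be observed in a similar way through the V$ROWCACHE view, where a high ratio of GETMISSES to GETS would indicate that object definitions are frequently not being found in memory:

```sql
select parameter, gets, getmisses
from   v$rowcache
order by gets desc;
```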

The PL/SQL Area
Stored PL/SQL objects are procedures, functions, packaged procedures and functions,
object type definitions, and triggers. These are all stored in the data dictionary, as
source code and also in their compiled form. When a stored PL/SQL object is invoked
by a session, it must be read from the data dictionary. To prevent repeated reading, the
objects are then cached in the PL/SQL area of the shared pool.

The first time a PL/SQL object is used, it must be read from the data dictionary tables on disk, but subsequent invocations will be much faster, because the object will already be available in the PL/SQL area of the shared pool.

TIP PL/SQL can be issued from user processes, rather than being stored
in the data dictionary. This is called anonymous PL/SQL. Anonymous PL/SQL
cannot be cached and reused but must be compiled dynamically. It will
therefore always perform worse than stored PL/SQL. Developers should
be encouraged to convert all anonymous PL/SQL into stored PL/SQL.

The SQL Query and PL/SQL Function Result Cache
The result cache is a new feature of release 11g. In many applications, the same query is executed many times, by either the same session or many different sessions. Creating a result cache lets the Oracle server store the results of such queries in memory. The next time the query is issued, rather than running the query the server can retrieve the cached result.
The result cache mechanism is intelligent enough to track whether the tables against which the query was run have been updated. If this has happened, the query results will be invalidated, and the next time the query is issued, it will be rerun. There is therefore no danger of ever receiving an out-of-date cached result.
The PL/SQL result cache uses a similar mechanism. When a PL/SQL function is executed, its return value can be cached, ready for the next time the function is executed. If the parameters passed to the function, or the tables that the function queries, are different, the function will be reevaluated; otherwise, the cached value will be returned.
By default, use of the SQL query and PL/SQL function result cache is disabled, but if enabled programmatically, it can often dramatically improve performance. The cache is within the shared pool, and unlike the other memory areas described previously, it does afford the DBA some control, as a maximum size can be specified.

Sizing the Shared Pool
Sizing the shared pool is critical for performance. It should be large enough to cache all the frequently executed code and frequently needed object definitions (in the library cache and the data dictionary cache) but not so large that it caches statements that have only been executed once. An undersized shared pool cripples performance because server sessions repeatedly have to grab space in it for parsing statements, which are then overwritten by other statements and therefore have to be parsed again when they are reexecuted. An oversized shared pool can impact badly on performance because it takes too long to search it. If the shared pool is less than the optimal size, performance will degrade. But there is a minimum size below which statements will fail.
Memory in the shared pool is allocated according to an LRU (least recently used) algorithm. When the Oracle server needs space in the shared pool, it will overwrite the object that has been unused for the longest time. If the object is later needed again, it will have to be reloaded—possibly displacing another object in the shared pool.
TIP Determining the optimal size is a matter for performance tuning, but it
is probably safe to say that most databases will need a shared pool of several
hundred megabytes. Some applications will need more than a gigabyte, and
very few will perform adequately with less than a hundred megabytes.
The shared pool is allocated at instance startup time. Prior to release 9i of the
database it was not possible to resize the shared pool subsequently without restarting
the database instance, but from 9i onward it can be resized up or down at any time.
This resizing can be either manual or (from release 10g onward) automatic according
to workload, if the automatic mechanism has been enabled.
EXAM TIP The shared pool size is dynamic and can be automatically managed.
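As an illustration of manual management (the size shown is arbitrary; with automatic shared memory management enabled there is normally no need to set this at all):

```sql
-- Manually resize the shared pool; takes effect immediately, no restart needed.
alter system set shared_pool_size = 256m;
```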

The Large Pool
The large pool is an optional area that, if created, will be used automatically by
various processes that would otherwise take memory from the shared pool. One
major use of the large pool is by shared server processes, described in Chapter 4 in the
section “Use the Oracle Shared Server Architecture.” Parallel execution servers will also
use the large pool, if there is one. In the absence of a large pool, these processes will
use memory on the shared pool. This can cause contention for the shared pool, which
may have negative results. If shared servers or parallel servers are being used, a large
pool should always be created. Some I/O processes may also make use of the large
pool, such as the processes used by the Recovery Manager when it is backing up to a
tape device.
Sizing the large pool is not a matter of performance tuning. If a process needs large
pool memory and that memory is not available, the process will fail with an error. Allocating
more memory than is needed will not make statements run faster. Furthermore, if a
large pool exists, it will be used: it is not possible for a statement to start off by using
the large pool, and then revert to the shared pool if the large pool is too small.
From 9i release 2 onward it is possible to create and to resize a large pool after
instance startup. With earlier releases, it had to be defined at startup and was a fixed
size. From release 10g onward, creation and sizing of the large pool can be completely
automatic.
EXAM TIP The large pool size is dynamic and can be automatically managed.

The Java Pool
The Java pool is only required if your application is going to run Java stored procedures within the database: it is used for the heap space needed to instantiate the Java objects. However, a number of Oracle options are written in Java, so the Java pool is considered standard nowadays. Note that Java code is not cached in the Java pool: it is cached in the shared pool, in the same way that PL/SQL code is cached.
The optimal size of the Java pool is dependent on the Java application, and how many sessions are running it. Each session will require heap space for its objects. If the Java pool is undersized, performance may degrade due to the need to continually reclaim space. In an EJB (Enterprise JavaBean) application, an object such as a stateless session bean may be instantiated and used, and then remain in memory in case it is needed again: such an object can be reused immediately. But if the Oracle server has had to destroy the bean to make room for another, then it will have to be reinstantiated next time it is needed. If the Java pool is chronically undersized, then the applications may simply fail.
From 10g onward it is possible to create and to resize a Java pool after instance startup; this creation and sizing can be completely automatic. With earlier releases, it had to be defined at startup and was a fixed size.

EXAM TIP The Java pool size is dynamic and can be automatically managed.

The Streams Pool
The Streams pool is used by Oracle Streams. This is an advanced tool that is beyond the scope of the OCP examinations or this book, but for completeness a short description follows.
The mechanism used by Streams is to extract change vectors from the redo log and to reconstruct statements that were executed from these—or statements that would have the same net effect. These statements are executed at the remote database. The processes that extract changes from redo and the processes that apply the changes need memory: this memory is the Streams pool. From database release 10g it is possible to create and to resize the Streams pool after instance startup; this creation and sizing can be completely automatic. With earlier releases it had to be defined at startup and was a fixed size.

EXAM TIP The Streams pool size is dynamic and can be automatically
managed.

Exercise 1-3: Investigate the Memory Structures of the Instance In
this exercise, you will run queries to determine the current sizing of various memory
structures that make up the instance. Either SQL Developer or SQL*Plus may be used.
1. Connect to the database as user SYSTEM.
2. Show the current, maximum, and minimum sizes of the SGA components
that can be dynamically resized:
select COMPONENT,CURRENT_SIZE,MIN_SIZE,MAX_SIZE
from v$sga_dynamic_components;
This illustration shows the result on an example database:

The example shows an instance without Streams, hence a Streams pool of
size zero. Neither the large pool nor the Java pool has changed since instance
startup, but there have been changes made to the sizes of the shared pool and
database buffer cache. Only the default pool of the database buffer cache has
been configured; this is usual, except in highly tuned databases.
3. Determine how much memory has been, and is currently, allocated to
program global areas:
select name,value from v$pgastat
where name in ('maximum PGA allocated','total PGA allocated');
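As an optional further step, the V$SGAINFO view (available from release 10g) summarizes all the SGA components, including the fixed areas, and shows which of them are resizable:

```sql
select name, bytes, resizeable from v$sgainfo;
```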

Instance Process Structures
The instance background processes are the processes that are launched when the
instance is started and run until it is terminated. There are five background processes
that have a long history with Oracle; these are the first five described in the sections
that follow: System Monitor (SMON), Process Monitor (PMON), Database Writer
(DBWn), Log Writer (LGWR), and Checkpoint Process (CKPT). A number of others
have been introduced with the more recent releases; notable among these are
Manageability Monitor (MMON) and Memory Manager (MMAN). There are also
some that are not essential but will exist in most instances. These include Archiver
(ARCn) and Recoverer (RECO). Others will exist only if certain options have been
enabled. This last group includes the processes required for RAC and Streams.
Additionally, some processes exist that are not properly documented (or are not
documented at all). The processes described here are those that every OCP candidate
will be expected to know.
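You can list the background processes actually running in your own instance with a query such as the following (rows in V$BGPROCESS with a nonzero process address represent started processes):

```sql
select name, description
from   v$bgprocess
where  paddr <> '00';
```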

Figure 1-6 provides a high-level description of the typical interaction of several key processes and SGA memory structures. The server process is representative of the server side of a client-server connection, with the client component consisting of a user session and user process described earlier. The server process interacts with the datafiles to fetch a data block into the buffer cache. This may be modified by some DML, dirtying the block in the buffer cache. The change vector is copied into the circular log buffer, which is flushed in almost real time by the log writer process (LGWR) to the online redo log files. If the database is configured in archivelog mode, the archiver process (ARCn) copies the online redo log files to an archive location. Eventually, some condition may cause the database writer process (DBWn) to write the dirty block to one of the datafiles. The mechanics of the background processes and their interaction with various SGA structures are detailed in the sections that follow.
There is a platform variation that must be cleared up before discussing processes. On Linux and Unix, all the Oracle processes are separate operating system processes, each with a unique process number. On Windows, there is one operating system process (called ORACLE.EXE) for the whole instance: the Oracle processes run as separate threads within this one process.

Figure 1-6 Typical interaction of instance processes and the SGA

SMON, the System Monitor
SMON initially has the task of mounting and opening a database. The steps involved in this are described in detail in Chapter 3. In brief, SMON mounts a database by locating and validating the database controlfile. It then opens a database by locating and validating all the datafiles and online log files. Once the database is opened and in use, SMON is responsible for various housekeeping tasks, such as coalescing free space in datafiles.
PMON, the Process Monitor
A user session is a user process that is connected to a server process. The server process
is launched when the session is created and destroyed when the session ends. An
orderly exit from a session involves the user logging off. When this occurs, any work
done will be completed in an orderly fashion, and the server process will be terminated.
If the session is terminated in a disorderly manner (perhaps because the user’s PC is
rebooted), then the session will be left in a state that must be cleared up. PMON
monitors all the server processes and detects any problems with the sessions. If a
session has terminated abnormally, PMON will destroy the server process, return its
PGA memory to the operating system’s free memory pool, and roll back any incomplete
transaction that may have been in progress.
EXAM TIP If a session terminates abnormally, what will happen to an active
transaction? It will be rolled back, by the PMON background process.
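One way to see which background processes (PMON among them) are actually running in your instance is to query the V$BGPROCESS view. As a sketch (the output varies with the instance configuration):

select name, description from v$bgprocess where paddr <> '00' order by name;

Rows where PADDR is not '00' correspond to processes that are currently started.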

DBWn, the Database Writer
Always remember that sessions do not as a general rule write to disk. They write data
(or changes to existing data) to buffers in the database buffer cache. It is the database
writer that subsequently writes the buffers to disk. It is possible for an instance to
have several database writers (up to a maximum of twenty), which will be called
DBW0, DBW1, and so on: hence the use of the term DBWn to refer to “the” database
writer. The default is one database writer per eight CPUs, rounded up.
TIP How many database writers do you need? The default number may well
be correct. Adding more may help performance, but usually you should look at
tuning memory first. As a rule, before you optimize disk I/O, ask why there is
any need for disk I/O.
DBWn writes dirty buffers from the database buffer cache to the datafiles—but
it does not write the buffers as they become dirty. On the contrary: it writes as few
buffers as possible. The general idea is that disk I/O is bad for performance, so don’t
do it unless it really is needed. If a block in a buffer has been written to by a session,
there is a reasonable possibility that it will be written to again—by that session, or a
different one. Why write the buffer to disk, if it may well be dirtied again in the near
future? The algorithm DBWn uses to select dirty buffers for writing to disk (which will
clean them) will select only buffers that have not been recently used. So if a buffer is
very busy, because sessions are repeatedly reading or writing it, DBWn will not write
it to disk. There could be hundreds or thousands of writes to a buffer before DBWn
cleans it. It could be that in a buffer cache of a million buffers, a hundred thousand of
them are dirty—but DBWn might only write a few hundred of them to disk at a time.
These will be the few hundred that no session has been interested in for some time.
DBWn writes according to a very lazy algorithm: as little as possible, as rarely as
possible. There are four circumstances that will cause DBWn to write: no free buffers,
too many dirty buffers, a three-second timeout, and when there is a checkpoint.

Chapter 1: Architectural Overview of Oracle Database 11g

31

First, when there are no free buffers. If a server process needs to copy a block into
the database buffer cache, it must find a free buffer. A free buffer is a buffer that is
neither dirty (updated, and not yet written back to disk) nor pinned (a pinned buffer
is one that is being used by another session at that very moment). A dirty buffer must
not be overwritten because if it were changed, data would be lost, and a pinned buffer
cannot be overwritten because the operating system’s memory protection mechanisms
will not permit this. If a server process takes too long (this length of time is internally
determined by Oracle) to find a free buffer, it signals the DBWn to write some dirty
buffers to disk. Once this is done, these will be clean, free, and available for use.
Second, there may be too many dirty buffers—“too many” being another internal
threshold. No one server process may have had a problem finding a free buffer, but
overall, there could be a large number of dirty buffers: this will cause DBWn to write
some of them to disk.
Third, there is a three-second timeout: every three seconds, DBWn will clean a few
buffers. In practice, this event may not be significant in a production system because
the two previously described circumstances will be forcing the writes, but the timeout
does mean that even if the system is idle, the database buffer cache will eventually be
cleaned.
Fourth, there may be a checkpoint requested. The three reasons already given will
cause DBWn to write a limited number of dirty buffers to the datafiles. When a
checkpoint occurs, all dirty buffers are written. This could mean hundreds of thousands
of them. During a checkpoint, disk I/O rates may hit the roof, CPU usage may go to 100
percent, end user sessions may experience degraded performance, and people may start
complaining. Then when the checkpoint is complete (which may take several minutes),
performance will return to normal. So why have checkpoints? The short answer is, don’t
have them unless you have to.
EXAM TIP What does DBWn do when a transaction is committed? It does
absolutely nothing.
The only moment when a checkpoint is absolutely necessary is as the database is
closed and the instance is shut down—a full description of this sequence is given in
Chapter 3. A checkpoint writes all dirty buffers to disk: this synchronizes the buffer
cache with the datafiles, the instance with the database. During normal operation,
the datafiles are always out of date, as they may be missing changes (committed and
uncommitted). This does not matter, because the copies of blocks in the buffer cache
are up to date, and it is these that the sessions work on. But on shutdown, it is necessary
to write everything to disk. Automatic checkpoints only occur on shutdown, but a
checkpoint can be forced at any time with this statement:
alter system checkpoint;

PART I

EXAM TIP What will cause DBWn to write? No free buffers, too many dirty
buffers, a three-second timeout, or a checkpoint.
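The cumulative effect of this lazy writing can be observed in the V$SYSSTAT view. Treat this as an illustrative sketch, since statistic names can vary slightly between releases:

select name, value from v$sysstat where name like 'DBWR%' order by name;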

Note that from release 8i onward, checkpoints do not occur on log switch (log
switches are discussed in Chapter 14).
The checkpoint described so far is a full checkpoint. Partial checkpoints occur more
frequently; they force DBWn to write all the dirty buffers containing blocks from just
one or more datafiles rather than the whole database: when a datafile or tablespace is
taken offline; when a tablespace is put into backup mode; when a tablespace is made
read only. These are less drastic than full checkpoints and occur automatically
whenever the relevant event happens.
To conclude, the DBWn writes on a very lazy algorithm: as little as possible, as
rarely as possible—except when a checkpoint occurs, when all dirty buffers are written
to disk, as fast as possible.

LGWR, the Log Writer
LGWR writes the contents of the log buffer to the online log files on disk. A write of
the log buffer to the online redo log files is often referred to as flushing the log buffer.
When a session makes any change (by executing INSERT, UPDATE, or DELETE
commands) to blocks in the database buffer cache, before it applies the change to the
block it writes out the change vector that it is about to apply to the log buffer. To
avoid loss of work, these change vectors must be written to disk with only minimal
delay. To this end, the LGWR streams the contents of the log buffer to the online redo
log files on disk in very nearly real-time. And when a session issues a COMMIT, the
LGWR writes in real-time: the session hangs, while LGWR writes the buffer to disk.
Only then is the transaction recorded as committed, and therefore nonreversible.
LGWR is one of the ultimate bottlenecks in the Oracle architecture. It is impossible
to perform DML faster than LGWR can write the change vectors to disk. There are
three circumstances that will cause LGWR to flush the log buffer: if a session issues
a COMMIT; if the log buffer is one-third full; if DBWn is about to write dirty buffers.
First, the write-on-commit. To process a COMMIT, the server process inserts a
commit record into the log buffer. It will then hang, while LGWR flushes the log
buffer to disk. Only when this write has completed is a commit-complete message
returned to the session, and the server process can then continue working. This is the
guarantee that transactions will never be lost: every change vector for a committed
transaction will be available in the redo log on disk and can therefore be applied to
datafile backups. Thus, if the database is ever damaged, it can be restored from backup
and all work done since the backup was made can be redone.
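The volume of redo generated and written can be seen in V$SYSSTAT. A sketch (these statistic names exist in current releases, but treat the query as illustrative):

select name, value from v$sysstat
where name in ('redo size', 'redo writes', 'user commits');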
TIP It is in fact possible to prevent the LGWR write-on-commit. If this is
done, sessions will not have to wait for LGWR when they commit: they issue
the command and then carry on working. This will improve performance
but also means that work can be lost. It becomes possible for a session to
COMMIT, then for the instance to crash before LGWR has saved the change
vectors. Enable this with caution! It is dangerous, and hardly ever necessary.
There are only a few applications where performance is more important than
data loss.
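For reference, from release 10.2 onward the behavior described in the TIP is exposed through options on the COMMIT statement itself. A hedged sketch only, not a recommendation:

commit write batch nowait;

This asks the session to continue without waiting for LGWR, with the data-loss risk described above.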

Second, when the log buffer is one-third full, LGWR will flush it to disk. This is
done primarily for performance reasons. If the log buffer is small (as it usually should
be) this one-third-full trigger will force LGWR to write the buffer to disk in very nearly
real time even if no one is committing transactions. The log buffer for many applications
will be optimally sized at only a few megabytes. The application will generate enough
redo to fill one third of this in a fraction of a second, so LGWR will be forced to
stream the change vectors to disk continuously, in very nearly real time. Then, when
a session does COMMIT, there will be hardly anything to write: so the COMMIT will
complete almost instantaneously.
Third, when DBWn needs to write dirty buffers from the database buffer cache to
the datafiles, before it does so it will signal LGWR to flush the log buffer to the online
redo log files. This is to ensure that it will always be possible to reverse an uncommitted
transaction. The mechanism of transaction rollback is fully explained in Chapter 8.
For now, it is necessary to know that it is entirely possible for DBWn to write an
uncommitted transaction to the datafiles. This is fine, so long as the undo data needed
to reverse the transaction is guaranteed to be available. Generating undo data also
generates change vectors. As these will be in the redo log files before the datafiles are
updated, the undo data needed to roll back a transaction (should this be necessary)
can be reconstructed if necessary.
Note that it can be said that there is a three-second timeout that causes LGWR
to write. In fact, the timeout is on DBWn—but because LGWR will always write just
before DBWn, in effect there is a three-second timeout on LGWR as well.

EXAM TIP When will LGWR flush the log buffer to disk? On COMMIT; when
the buffer is one-third full; just before DBWn writes.

CKPT, the Checkpoint Process
The purpose of the CKPT changed dramatically between release 8 and release 8i of the
Oracle database. In release 8 and earlier, checkpoints were necessary at regular intervals
to make sure that in the event of an instance failure (for example, if the server machine
should be rebooted) the database could be recovered quickly. These checkpoints were
initiated by CKPT. The process of recovery is repairing the damage done by an instance
failure; it is fully described in Chapter 14.
After a crash, all change vectors referring to dirty buffers (buffers that had not
been written to disk by DBWn at the time of the failure) must be extracted from
the redo log, and applied to the data blocks. This is the recovery process. Frequent
checkpoints would ensure that dirty buffers were written to disk quickly, thus
minimizing the amount of redo that would have to be applied after a crash and
therefore minimizing the time taken to recover the database. CKPT was responsible
for signaling regular checkpoints.
From release 8i onward, the checkpoint mechanism changed. Rather than letting
DBWn get a long way behind and then signaling a checkpoint (which forces DBWn to
catch up and get right up to date, with a dip in performance while this is going on)
from 8i onward the DBWn performs incremental checkpoints instead of full checkpoints.
The incremental checkpoint mechanism instructs DBWn to write out dirty buffers at a
constant rate, so that there is always a predictable gap between DBWn (which writes
blocks on a lazy algorithm) and LGWR (which writes change vectors in near real
time). Incremental checkpointing results in much smoother performance and more
predictable recovery times than the older full checkpoint mechanism.
TIP The faster the incremental checkpoint advances, the quicker recovery
will be after a failure. But performance will deteriorate due to the extra disk
I/O, as DBWn has to write out dirty buffers more quickly. This is a conflict
between minimizing downtime and maximizing performance.
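This trade-off is controlled by the FAST_START_MTTR_TARGET initialization parameter, which expresses the desired crash recovery time in seconds; the value 60 below is for illustration only:

alter system set fast_start_mttr_target=60;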
The CKPT no longer has to signal full checkpoints, but it does have to keep track
of where in the redo stream the incremental checkpoint position is, and if necessary
instruct DBWn to write out some dirty buffers in order to push the checkpoint
position forward. The current checkpoint position, also known as the RBA (the redo
byte address), is the point in the redo stream at which recovery must begin in the
event of an instance crash. CKPT continually updates the controlfile with the current
checkpoint position.
EXAM TIP When do full checkpoints occur? Only on request, or as part of an
orderly database shutdown.

MMON, the Manageability Monitor
MMON is a process that was introduced with database release 10g and is the enabling
process for many of the self-monitoring and self-tuning capabilities of the database.
The database instance gathers a vast number of statistics about activity and
performance. These statistics are accumulated in the SGA, and their current values can
be interrogated by issuing SQL queries. For performance tuning and also for trend
analysis and historical reporting, it is necessary to save these statistics to long-term
storage. MMON regularly (by default, every hour) captures statistics from the SGA and
writes them to the data dictionary, where they can be stored indefinitely (though by
default, they are kept for only eight days).
Every time MMON gathers a set of statistics (known as a snapshot), it also launches
the Automatic Database Diagnostic Monitor, the ADDM. The ADDM is a tool that
analyses database activity using an expert system developed over many years by many
DBAs. It observes two snapshots (by default, the current and previous snapshots) and
makes observations and recommendations regarding performance. Chapter 5
describes the use of ADDM (and other tools) for performance tuning.
EXAM TIP By default, MMON gathers a snapshot and launches the ADDM
every hour.
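The snapshots MMON has gathered can be listed from the DBA_HIST_SNAPSHOT view. A sketch (the eight-day retention mentioned above limits what you will see by default):

select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot order by snap_id;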

As well as gathering snapshots, MMON continuously monitors the database and
the instance to check whether any alerts should be raised. Use of the alert system is
covered in the second OCP exam and discussed in Chapter 24. Some alert conditions
(such as warnings when limits on storage space are reached) are enabled by default;
others can be configured by the DBA.

MMNL, the Manageability Monitor Light
MMNL is a process that assists the MMON. There are times when MMON’s scheduled
activity needs to be augmented. For example, MMON flushes statistical information
accumulated in the SGA to the database according to an hourly schedule by default. If
the memory buffers used to accumulate this information fill before MMON is due to
flush them, MMNL will take responsibility for flushing the data.

MMAN, the Memory Manager
MMAN is a process that was introduced with database release 10g. It enables the
automatic management of memory allocations.
Prior to release 9i of the database, memory management in the Oracle environment
was far from satisfactory. The PGA memory associated with session server processes
was nontransferable: a server process would take memory from the operating system’s
free memory pool and never return it—even though it might only have been needed
for a short time. The SGA memory structures were static: defined at instance startup
time, and unchangeable unless the instance was shut down and restarted.
Release 9i changed that: PGAs can grow and shrink, with the server passing out
memory to sessions on demand while ensuring that the total PGA memory allocated
stays within certain limits. The SGA and the components within it (with the notable
exception of the log buffer) can also be resized, within certain limits. Release 10g
automated the SGA resizing: MMAN monitors the demand for SGA memory
structures and can resize them as necessary.
Release 11g takes memory management a step further: all the DBA need do is set
an overall target for memory usage, and MMAN will observe the demand for PGA
memory and SGA memory, and allocate memory to sessions and to SGA structures
as needed, while keeping the total allocated memory within a limit set by the DBA.
TIP The automation of memory management is one of the major technical
advances of the later releases, automating a large part of the DBA’s job and
giving huge benefits in performance and resource utilization.

ARCn, the Archiver
This is an optional process as far as the database is concerned, but usually required
by the business. Without one or more ARCn processes (there can be from one
to thirty, named ARC0, ARC1, and so on) it is possible to lose data in the event of a
failure. The process and purpose of launching ARCn to create archive log files is
described in detail in Chapter 14. For now, only a summary is needed.
All change vectors applied to data blocks are written out to the log buffer (by the
sessions making the changes) and then to the online redo log files (by the LGWR).
There are a fixed number of online redo log files of a fixed size. Once they have been
filled, LGWR will overwrite them with more redo data. The time that must elapse
before this happens is dependent on the size and number of the online redo log files,
and the amount of DML activity (and therefore the amount of redo generated) against
the database. This means that the online redo log only stores change vectors for recent
activity. In order to preserve a complete history of all changes applied to the data, the
online log files must be copied as they are filled and before they are reused. The ARCn
process is responsible for doing this. Provided that these copies, known as archive redo
log files, are available, it will always be possible to recover from any damage to the
database by restoring datafile backups and applying change vectors to them extracted
from all the archive log files generated since the backups were made. Then the final
recovery, to bring the backup right up to date, will come by using the most recent
change vectors from the online redo log files.
EXAM TIP LGWR writes the online log files; ARCn reads them. In normal
running, no other processes touch them at all.
Most production transactional databases will run in archive log mode, meaning that
ARCn is started automatically and that LGWR is not permitted to overwrite an online
log file until ARCn has successfully archived it to an archive log file.
TIP The progress of the ARCn processes and the state of the destination(s)
to which they are writing must be monitored. If archiving fails, the database
will eventually hang. This monitoring can be done through the alert system.
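The archiving configuration and the state of the destinations can be inspected with queries such as these (a sketch; the destination states you see depend on configuration):

select log_mode from v$database;
select dest_name, status, destination from v$archive_dest
where status <> 'INACTIVE';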

RECO, the Recoverer Process
A distributed transaction involves updates to two or more databases. Distributed
transactions are designed by programmers and operate through database links.
Consider this example:
update orders set order_status='COMPLETE' where customer_id=1000;
update orders@mirror set order_status='COMPLETE' where customer_id=1000;
commit;

The first update applies to a row in the local database; the second applies to a row in a
remote database identified by the database link MIRROR.
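Such a link might have been created with a statement like the following; the username, password, and TNS alias shown are invented for illustration:

create database link mirror
connect to app_user identified by app_pass
using 'mirror_db';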
The COMMIT command instructs both databases to commit the transaction,
which consists of both statements. A full description of commit processing appears
in Chapter 8. Distributed transactions require a two-phase commit. The commit in each
database must be coordinated: if one were to fail and the other were to succeed, the
data overall would be in an inconsistent state. A two-phase commit prepares each
database by instructing its LGWRs to flush the log buffer to disk (the first phase), and

once this is confirmed, the transaction is flagged as committed everywhere (the second
phase). If anything goes wrong anywhere between the two phases, RECO takes action
to cancel the commit and roll back the work in all databases.

Some Other Background Processes
It is unlikely that processes other than those already described will be examined, but
for completeness descriptions of the remaining processes usually present in an instance
follow. Figure 1-7 shows a query that lists all the processes running in an instance on
a Windows system. There are many more processes that may exist, depending on what
options have been enabled, but those shown in the figure will be present in most
instances.

Figure 1-7 The background processes typically present in a single instance

The processes not described in previous sections are
• CJQ0, J000 These manage jobs scheduled to run periodically. The job queue
coordinator, CJQn, monitors the job queue and sends jobs to one of several
job queue processes, Jnnn, for execution. The job scheduling mechanism is
measured in the second OCP examination and covered in Chapter 22.
• D000 This is a dispatcher process that will send SQL calls to shared server
processes, Snnn, if the shared server mechanism has been enabled. This is
described in Chapter 4.
• DBRM The database resource manager is responsible for setting resource
plans and other Resource Manager–related tasks. Using the Resource Manager
is measured in the second OCP examination and covered in Chapter 21.
• DIA0 The diagnosability process zero (only one is used in the current
release) is responsible for hang detection and deadlock resolution. Deadlocks,
and their resolution, are described in Chapter 8.
• DIAG The diagnosability process (not number zero) performs diagnostic
dumps and executes oradebug commands (oradebug is a tool for investigating
problems within the instance).
• FBDA The flashback data archiver process archives the historical rows of
tracked tables into flashback data archives. This is a facility for ensuring that
it is always possible to query data as it was at a time in the past.
• PSP0 The process spawner has the job of creating and managing other
Oracle processes, and is undocumented.
• QMNC, Q000 The queue manager coordinator monitors queues in the
database and assigns Qnnn processes to enqueue and dequeue messages to
and from these queues. Queues can be created by programmers (perhaps as
a means for sessions to communicate) and are also used internally. Streams,
for example, use queues to store transactions that need to be propagated to
remote databases.
• SHAD These appear as TNS V1–V3 processes on a Linux system. They are
the server processes that support user sessions. In the figure there is only one,
dedicated to the one user process that is currently connected: the user who
issued the query.
• SMCO, W000 The space management coordinator process coordinates
the execution of various space management–related tasks, such as proactive
space allocation and space reclamation. It dynamically spawns slave processes
(Wnnn) to implement the task.
• VKTM The virtual keeper of time is responsible for keeping track of time and
is of particular importance in a clustered environment.
Exercise 1-4: Investigate the Processes Running in Your Instance In
this exercise you will run queries to see what background processes are running on
your instance. Either SQL Developer or SQL*Plus may be used.
1. Connect to the database as user SYSTEM.
2. Determine what processes are running, and how many of each:
select program from v$session order by program;
select program from v$process order by program;
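As a supplementary sketch, the two views can be joined on the process address to pair each session with its process:

select s.sid, s.program as session_program, p.program as process_program
from v$session s join v$process p on s.paddr = p.addr
order by s.sid;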

These queries produce similar results: each process must have a session
(even the background processes), and each session must have a process.
The processes that can occur multiple times will have a numeric suffix,
except for the processes supporting user sessions: these will all have the
same name.
3. Demonstrate the launching of server processes as sessions are made, by
counting the number of server processes (on Linux or any Unix platform) or
the number of Oracle threads (on Windows). The technique is different on
the two platforms, because on Linux/Unix, the Oracle processes are separate
operating system processes, but on Windows they are threads within one
operating system process.
A. On Linux, run this command from an operating system prompt:
ps -ef|grep oracle|wc -l

This will count the number of processes running that have the string
oracle in their name; this will include all the session server processes
(and possibly a few others).
Launch a SQL*Plus session, and rerun the preceding command. You
can use the host command to launch an operating system shell from within
the SQL*Plus session. Notice that the number of processes has increased.
Exit the session, rerun the command, and you will see that the number
has dropped down again. The illustration shows this fact:

Observe in the illustration how the number of processes changes from 4
to 5 and back again: the difference is the launching and terminating of the
server process supporting the SQL*Plus session.
B. On Windows, launch the task manager. Configure it to show the number of
threads within each process: from the View menu, choose Select Columns
and tick the Thread Count check box. Look for the ORACLE.EXE process, and
note the number of threads. In the next illustration, this is currently at 33.

Launch a new session against the instance, and you will see the thread count
increment. Exit the session, and it will decrement.

Database Storage Structures
The Oracle database provides complete abstraction of logical storage from physical.
The logical data storage is in segments. There are various segment types; a typical
segment is a table. The segments are stored physically in datafiles. The abstraction of
the logical storage from the physical storage is accomplished through tablespaces. The
relationships between the logical and physical structures, as well as their definitions,
are maintained in the data dictionary.
There is a full treatment of database storage, logical and physical, in Chapter 5.

The Physical Database Structures

There are three file types that make up an Oracle database, plus a few others that exist
externally to the database and are, strictly speaking, optional. The required files are
the controlfile, the online redo log files, and the datafiles. The external files that will
usually be present (there are others, needed for advanced options) are the initialization
parameter file, the password file, the archive redo log files, and the log and trace files.

EXAM TIP What three file types must be present in a database? The
controlfile, the online redo log files, and any number of datafiles.

The Controlfile
First a point of terminology: some DBAs will say that a database can have multiple
controlfiles, while others will say that it has one controlfile, of which there may be
multiple copies. This book will follow the latter terminology, which conforms to
Oracle Corporation’s use of phrases such as “multiplexing the controlfile,” which
means to create multiple copies.
The controlfile is small but vital. It contains pointers to the rest of the database:
the locations of the online redo log files and of the datafiles, and of the more recent
archive log files if the database is in archive log mode. It also stores information
required to maintain database integrity: various critical sequence numbers and
timestamps, for example. If the Recovery Manager tool (described in Chapters 15, 16,
and 17) is being used for backups, then details of these backups will also be stored in
the controlfile. The controlfile will usually be no more than a few megabytes big, but
your database can’t survive without it.
Every database has one controlfile, but a good DBA will always create multiple
copies of the controlfile so that if one copy is damaged, the database can quickly be
repaired. If all copies of the controlfile are lost, it is possible (though perhaps
awkward) to recover, but you should never find yourself in that situation. You don’t
have to worry about keeping multiplexed copies of the controlfile synchronized—
Oracle will take care of that. Its maintenance is automatic—your only control is how
many copies to have, and where to put them.
If you get the number of copies, or their location, wrong at database creation time,
you can add or remove copies later, or move them around—but you should bear in
mind that any such operations will require downtime, so it is a good idea to get it right
at the beginning. There is no right or wrong when determining how many copies to
have. The minimum is one; the maximum possible is eight. All organizations should
have a DBA standards handbook, which will state something like “all production
databases will have three copies of the controlfile, on three separate devices,” three
being a number picked for illustration only, but a number that many organizations
are happy with. There is no rule that says two copies is too few, or seven copies is too
many; there are only corporate standards, and the DBA’s job is to ensure that the
databases conform to these.

Damage to any controlfile copy will cause the database instance to terminate
immediately. There is no way to avoid this: Oracle Corporation does not permit
operating a database with fewer than the number of controlfiles that have been requested.
The techniques for multiplexing or relocating the controlfile are covered in Chapter 14.
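The names and locations of the controlfile copies in use can be confirmed with a simple query:

select name from v$controlfile;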

The Online Redo Log Files
The redo log stores a chronologically ordered chain of every change vector applied to
the database. This will be the bare minimum of information required to reconstruct,
or redo, all work that has been done. If a datafile (or the whole database) is damaged
or destroyed, these change vectors can be applied to datafile backups to redo the
work, bringing them forward in time until the moment that the damage occurred.
The redo log consists of two file types: the online redo log files (which are required
for continuous database operation) and the archive log files (which are optional for
database operation, but mandatory for point-in-time recovery).
Every database has at least two online redo log files, but as with the controlfile,
a good DBA creates multiple copies of each online redo log file. The online redo log
consists of groups of online redo log files, each file being known as a member. An
Oracle database requires at least two groups of at least one member each to function.
You may create more than two groups for performance reasons, and more than one
member per group for security (an old joke: this isn’t just data security, it is job
security). The requirement for a minimum of two groups is so that one group can
accept the current changes, while the other group is being backed up (or archived,
to use the correct term).
EXAM TIP Every database must have at least two online redo log file groups
to function. Each group should have at least two members for safety.
One of the groups is the current group: changes are written to the current online
redo log file group by LGWR. As user sessions update data in the database buffer
cache, they also write out the minimal change vectors to the redo log buffer. LGWR
continually flushes this buffer to the files that make up the current online redo log file
group. Log files have a predetermined size, and eventually the files making up the
current group will fill. LGWR will then perform what is called a log switch. This
makes the second group current and starts writing to that. If your database is
configured appropriately, the ARCn process(es) will then archive (in effect, back up)
the log file members making up the first group. When the second group fills, LGWR
will switch back to the first group, making it current, and overwriting it; ARCn will
then archive the second group. Thus, the online redo log file groups (and therefore
the members making them up) are used in a circular fashion, and each log switch
will generate an archive redo log file.
As with the controlfile, if you have multiple members per group (and you should!)
you don’t have to worry about keeping them synchronized. LGWR will ensure that it
writes to all of them, in parallel, thus keeping them identical. If you lose one member
of a group, as long as you have a surviving member, the database will continue to
function.
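The group and member layout just described can be inspected through the dynamic performance views. A sketch, assuming a running instance and a suitably privileged session:

```sql
-- One row per group: member count, size, and status (CURRENT, ACTIVE, INACTIVE, UNUSED)
SELECT group#, members, bytes/1024/1024 AS size_mb, status FROM v$log;

-- One row per member file, showing which group each belongs to
SELECT group#, member FROM v$logfile ORDER BY group#;
```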

Chapter 1: Architectural Overview of Oracle Database 11g

The size and number of your log file groups are a matter of tuning. In general, you
will choose a size appropriate to the amount of activity you anticipate. The minimum
size is fifty megabytes, but some very active databases will need to raise this to several
gigabytes if they are not to fill every few minutes. A very busy database can generate
megabytes of redo a second, whereas a largely static database may generate only a few
megabytes an hour. The number of members per group will be dependent on what level
of fault tolerance is deemed appropriate, and is a matter to be documented in corporate
standards. However, you don’t have to worry about this at database creation time. You
can move your online redo log files around, add or drop them, and create ones of
different sizes as you please at any time later on. Such operations are performed
“online” and don’t require downtime—they are therefore transparent to the end users.

The Datafiles
The third required file type making up a database is the datafile. At a minimum, you
must have two datafiles, to be created at database creation time. With previous releases
of Oracle, you could create a database with only one datafile—10g and 11g require
two, at least one each for the SYSTEM tablespace (that stores the data dictionary) and
the SYSAUX tablespace (that stores data that is auxiliary to the data dictionary). You
will, however, have many more than that when your database goes live, and will
usually create a few more to begin with.
Datafiles are the repository for data. Their size and numbers are effectively
unlimited. A small database, of only a few gigabytes, might have just half a dozen
datafiles of only a few hundred megabytes each. A larger database could have
thousands of datafiles, whose size is limited only by the capabilities of the host
operating system and hardware.
The datafiles are the physical structures visible to the system administrators.
Logically, they are the repository for the segments containing user data that the
programmers see, and also for the segments that make up the data dictionary. A
segment is a storage structure for data; typical segments are tables and indexes.
Datafiles can be renamed, resized, moved, added, or dropped at any time in the
lifetime of the database, but remember that some operations on some datafiles may
require downtime.
At the operating system level, a datafile consists of a number of operating system
blocks. Internally, datafiles are formatted into Oracle blocks. These blocks are
consecutively numbered within each datafile. The block size is fixed when the datafile is
created, and in most circumstances it will be the same throughout the entire database.
The block size is a matter for tuning and can range (with limits depending on the
platform) from 2KB up to 64KB. There is no relationship between the Oracle block
size and the operating system block size.
TIP Many DBAs like to match the operating system block size to the Oracle
block size. For performance reasons, the operating system blocks should
never be larger than the Oracle blocks, but there is no reason not to have them
smaller. For instance, a 1KB operating system block size and an 8KB Oracle
block size is perfectly acceptable.
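Block sizes can be confirmed from the data dictionary. A sketch (SHOW PARAMETER is a SQL*Plus command; values will reflect your own database):

```sql
-- The database-wide default block size
SHOW PARAMETER db_block_size

-- Each tablespace's block size (usually the same as the default)
SELECT tablespace_name, block_size FROM dba_tablespaces;
```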

OCA/OCP Oracle Database 11g All-in-One Exam Guide

Within a block, there is a header section and a data area, and possibly some free
space. The header section contains information such as the row directory, which lists
the location within the data area of the rows in the block (if the block is being used
for a table segment) and also row locking information if there is a transaction
working on the rows in the block. The data area contains the data itself, such as rows
if it is part of a table segment, or index keys if the block is part of an index segment.
When a user session needs to work on data for any purpose, the server process
supporting the session locates the relevant block on disk and copies it into a free
buffer in the database buffer cache. If the data in the block is then changed (the buffer
is dirtied) by executing a DML command against it, eventually DBWn will write the
block back to the datafile on disk.
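On a test system, this flow can be observed through V$BH, which exposes the state of each buffer in the database buffer cache. A sketch, assuming a session with access to the dynamic performance views (counts will vary from moment to moment):

```sql
-- How many buffers currently hold modified (dirty) blocks versus clean ones
SELECT dirty, count(*) FROM v$bh GROUP BY dirty;
```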
EXAM TIP Server processes read from the datafiles; DBWn writes to
datafiles.
Datafiles should be backed up regularly. Unlike the controlfile and the online
redo log files, they cannot be protected by multiplexing (though they can, of course,
be protected by operating system and hardware facilities, such as RAID). If a datafile
is damaged, it can be restored from backup and then recovered (to recover a datafile
means to bring it up to date) by applying all the redo generated since the backup was
made. The necessary redo is extracted from the change vectors in the online and
archive redo log files. The routines for datafile backup, restore, and recovery are
described in Chapters 15–18.

Other Database Files
These files exist externally to the database. They are, for practical purposes,
necessary—but they are not strictly speaking part of the database.
• The instance parameter file When an Oracle instance is started, the SGA
structures initialize in memory and the background processes start according
to settings in the parameter file. This is the only file that needs to exist in order
to start an instance. There are several hundred parameters, but only one is
required: the DB_NAME parameter. All others have defaults. So the parameter
file can be quite small, but it must exist. It is sometimes referred to as a pfile
or spfile, and its creation is described in Chapter 3.
• The password file Users establish sessions by presenting a username and a
password. The Oracle server authenticates these against user definitions stored
in the data dictionary. The data dictionary is a set of tables in the database; it
is therefore inaccessible if the database is not open. There are occasions when
you need to be authenticated before the data dictionary is available: when
you need to start the database, or indeed create it. An external password file is
one means of doing this. It contains a small number (typically less than half
a dozen) of user names and passwords that exist outside the data dictionary,
and which can therefore be used to connect to an instance before the data
dictionary is available. Creating the password file is described in Chapter 3.

• Archive redo log files When an online redo log file fills, the ARCn process
copies it to an archive redo log file. Once this is done, the archive log is no
longer part of the database in that it is not required for continued operation
of the database. It is, however, essential if it is ever necessary to recover a
datafile backup, and Oracle does provide facilities for managing the archive
redo log files.
• Alert log and trace files The alert log is a continuous stream of messages
regarding certain critical operations affecting the instance and the database.
Not everything is logged: only events that are considered to be really important,
such as startup and shutdown; changes to the physical structures of the
database; changes to the parameters that control the instance. Trace files are
generated by background processes when they detect error conditions, and
sometimes to report specific events.

The Logical Database Structures
The physical structures that make up a database are visible as operating system files
to your system administrators. Your users see logical structures such as tables. Oracle
uses the term segment to describe any structure that contains data. A typical segment is
a table, containing rows of data, but there are more than a dozen possible segment
types in an Oracle database. Of particular interest (for examination purposes) are
table segments, index segments, and undo segments, all of which are investigated in
detail later on. For now, you need only know that tables contain rows of information;
that indexes are a mechanism for giving fast access to any particular row; and that undo
segments are data structures used for storing the information that might be needed to
reverse, or roll back, any transactions that you do not wish to make permanent.
Oracle abstracts the logical from the physical storage by means of the tablespace. A
tablespace is logically a collection of one or more segments, and physically a collection
of one or more datafiles. Put in terms of relational analysis, there is a many-to-many
relationship between segments and datafiles: one table may be cut across many
datafiles, one datafile may contain bits of many tables. By inserting the tablespace entity
between the segments and the files, Oracle resolves this many-to-many relationship.
A number of segments must be created at database creation time: these are the
segments that make up the data dictionary. These segments are stored in two tablespaces,
called SYSTEM and SYSAUX. The SYSAUX tablespace was new with release 10g: in
previous releases, the entire data dictionary went into SYSTEM. The database creation
process must create at least these two tablespaces, with at least one datafile each, to store
the data dictionary.
EXAM TIP The SYSAUX tablespace must be created at database creation
time in Oracle 10g and later. If you do not specify it, one will be created by
default.
A segment consists of a number of blocks. Datafiles are formatted into blocks, and
these blocks are assigned to segments as the segments grow. Because managing space
one block at a time would be a time-consuming process, blocks are grouped into
extents. An extent is a contiguous series of blocks that are consecutively numbered
within a datafile, and segments will grow by an extent at a time. These extents need
not be adjacent to each other, or even in the same datafile; they can come from any
datafile that is part of the tablespace within which the segment resides.
Figure 1-8 shows the Oracle data storage hierarchy, with the separation of logical
from physical storage.
The figure shows the relationships between the storage structures. Logically, a
tablespace can contain many segments, each consisting of many extents. An extent is
a set of Oracle blocks. Physically, a datafile consists of many operating system blocks.
The two sides of the model are connected by the relationships showing that one
tablespace can consist of multiple datafiles, and at the lowest level that one Oracle
block consists of one or more operating system blocks.

The Data Dictionary
The data dictionary contains metadata that describes the database, both physically
and logically, and its contents. User definitions, security information, integrity
constraints, and (with release 10g and later) performance monitoring information are
all stored in the data dictionary. It is stored as a set of segments in the SYSTEM and
SYSAUX tablespaces.
In many ways, the segments that make up the data dictionary are segments like any
other: just tables and indexes. The critical difference is that the data dictionary tables are
generated at database creation time, and you are not allowed to access them directly.
There is nothing to stop an inquisitive DBA from investigating the data dictionary
directly, but if you do any updates to it, you may cause irreparable damage to your
database, and certainly Oracle Corporation will not support you. Creating a data
dictionary is part of the database creation process. It is maintained subsequently by data
definition language commands. When you issue the CREATE TABLE command, you are
in fact inserting rows into data dictionary tables, as you are with commands such as
CREATE USER or GRANT.
To query the dictionary, Oracle provides a set of views. Most of these views come
in three forms, prefixed DBA_, ALL_, or USER_. Any view prefixed USER_ will describe
objects owned by the user querying the view. So no two distinct users will see the
same contents while querying a view prefixed with USER_. If user JOHN queries
USER_TABLES, he will see information about his tables; if you query USER_TABLES,
you will see information about your tables. Any view prefixed ALL_ will display rows
describing objects to which you have access. So ALL_TABLES shows rows describing
your own tables, plus rows describing tables belonging to other users that you have
permission to see. Any view prefixed DBA_ has rows for every object in the database,
so DBA_TABLES has one row for every table in the database, no matter who created
it. These views are created as part of the database creation process, along with a large
number of PL/SQL packages that are provided by Oracle to assist database administrators
in managing the database and programmers in developing applications. PL/SQL code
is also stored in the data dictionary.

Figure 1-8 The Oracle logical and physical storage hierarchy: tablespaces, segments,
extents, and Oracle blocks on the logical side; datafiles and operating system blocks
on the physical side.

EXAM TIP Which view will show you ALL the tables in the database? DBA_
TABLES, not ALL_TABLES.

The relationship between tablespaces and datafiles is maintained in the database
controlfile. This lists all the datafiles, stating which tablespace they are a part of.
Without the controlfile, there is no way that an instance can locate the datafiles and
then identify those that make up the SYSTEM tablespace. Only when the SYSTEM
tablespace has been opened is it possible for the instance to access the data dictionary,
at which point it becomes possible to open the database.
SQL code always refers to objects defined in the data dictionary. To execute a simple
query against a table, the Oracle server must first query the data dictionary to find out
if the table exists, and the columns that make it up. Then it must find out where,
physically, the table is. This requires reading the extent map of the segment. The extent
map lists all the extents that make up the table, with the detail of which datafile each
extent is in, what block of the datafile the extent starts at, and how many blocks it
continues for.
Exercise 1-5: Investigate the Storage Structures in Your
Database In this exercise you will create a table segment, and then work out
where it is physically. Either SQL Developer or SQL*Plus may be used.
1. Connect to the database as user SYSTEM.
2. Create a table without nominating a tablespace—it will be created in your
default tablespace, with one extent:
create table tab24 (c1 varchar2(10));
3. Identify the tablespace in which the table resides, the size of the extent, the
file number the extent is in, and which block of the file the extent starts at:
select tablespace_name, extent_id, bytes, file_id, block_id
from dba_extents where owner='SYSTEM' and segment_name='TAB24';

4. Identify the file by name: substitute the file_id from the previous query when
prompted:
select name from v$datafile where file#=&file_id;

5. Work out precisely where in the file the extent is, in terms of how many bytes
into the file it begins. This requires finding out the tablespace’s block size.
Enter the block_id and tablespace_name returned by the query in Step 3 when
prompted.
select block_size * &block_id from dba_tablespaces
where tablespace_name='&tablespace_name';

The illustration that follows shows these steps, executed from SQL*Plus:

The illustration shows that the table exists in one extent that is 64KB large. This
extent is in the file /home/db11g/app/db11g/oradata/orcl/system01.dbf
and begins about 700MB into the file.
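The three families of dictionary views described earlier can be compared directly. A sketch (the row counts depend on your privileges and on what exists in your database, so no particular figures should be expected):

```sql
SELECT count(*) FROM user_tables;  -- tables you own
SELECT count(*) FROM all_tables;   -- tables you have permission to see
SELECT count(*) FROM dba_tables;   -- every table in the database (requires DBA-level privilege)
```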

Two-Minute Drill

Single-Instance Architecture
• An Oracle server is an instance connected to a database.
• An instance is a block of shared memory and a set of background processes.
• A database is a set of files on disk.
• A user session is a user process connected to a server process.

Instance Memory Structures
• The instance shared memory is the system global area (the SGA).
• A session’s private memory is its program global area (the PGA).
• The SGA consists of a number of substructures, some of which are required
(the database buffer cache, the log buffer, and the shared pool) and some of
which are optional (the large pool, the Java pool, and the Streams pool).
• The SGA structures can be dynamically resized and automatically managed,
with the exception of the log buffer.

Instance Process Structures
• Session server processes are launched on demand when users connect.
• Background processes are launched at instance startup and persist until
shutdown.
• Server processes read from the database; background processes write to the
database.
• Some background processes will always be present (in particular SMON,
PMON, DBWn, LGWR, CKPT, and MMON); others will run depending on
what options have been enabled.

Database Storage Structures
• There are three required file types in a database: the controlfile, the online
redo log files, and the datafiles.
• The controlfile stores integrity information and pointers to the rest of the
database.
• The online redo logs store recent change vectors applied to the database.
• The datafiles store the data.
• External files include the parameter file, the password file, archive redo logs,
and the log and trace files.
• Logical data storage (segments) is abstracted from physical data storage
(datafiles) by tablespaces.
• A tablespace can consist of multiple datafiles.
• Segments consist of multiple extents, which consist of multiple Oracle blocks,
which consist of one or more operating system blocks.
• A segment can have extents in several datafiles.

Self Test
1. Which statements regarding instance memory and session memory are
correct? (Choose two answers.)
A. SGA memory is private memory segments; PGA memory is shared
memory segments.
B. Sessions can write to the PGA, not the SGA.
C. The SGA is written to by all sessions; a PGA is written by one session.
D. The PGA is allocated at instance startup.
E. The SGA is allocated at instance startup.
2. How do sessions communicate with the database? (Choose the best answer.)
A. Server processes use Oracle Net to connect to the instance.
B. Background processes use Oracle Net to connect to the database.
C. User processes read from the database and write to the instance.
D. Server processes execute SQL received from user processes.
3. What memory structures are a required part of the SGA? (Choose three answers.)
A. The database buffer cache
B. The Java pool
C. The large pool
D. The log buffer
E. The program global area
F. The shared pool
G. The Streams pool
4. Which SGA memory structure(s) cannot be resized dynamically after instance
startup? (Choose one or more correct answers.)
A. The database buffer cache
B. The Java pool
C. The large pool
D. The log buffer
E. The shared pool

F. The Streams pool
G. All SGA structures can be resized dynamically after instance startup
5. Which SGA memory structure(s) cannot be resized automatically after
instance startup? (Choose one or more correct answers.)
A. The database buffer cache
B. The Java pool
C. The large pool
D. The log buffer
E. The shared pool
F. The Streams pool
G. All SGA structures can be resized automatically after instance startup
6. When a session changes data, where does the change get written? (Choose the
best answer.)
A. To the data block in the cache, and the redo log buffer
B. To the data block on disk, and the current online redo log file
C. The session writes to the database buffer cache, and the log writer writes to
the current online redo log file
D. Nothing is written until the change is committed
7. Which of these background processes is optional? (Choose the best answer.)
A. ARCn, the archive process
B. CKPT, the checkpoint process
C. DBWn, the database writer
D. LGWR, the log writer
E. MMON, the manageability monitor
8. What happens when a user issues a COMMIT? (Choose the best answer.)
A. The CKPT process signals a checkpoint.
B. The DBWn process writes the transaction’s changed buffers to the datafiles.
C. The LGWR flushes the log buffer to the online redo log.
D. The ARCn process writes the change vectors to the archive redo log.
9. An Oracle instance can have only one of some processes, but several of others.
Which of these processes can occur several times? (Choose three answers.)
A. The archive process
B. The checkpoint process
C. The database writer process
D. The log writer process
E. The session server process
10. How can one segment be spread across many datafiles? (Choose the best
answer.)
A. By allocating an extent with blocks in multiple datafiles
B. By spreading the segment across multiple tablespaces
C. By assigning multiple datafiles to a tablespace
D. By using an Oracle block size that is larger than the operating system block
size
11. Which statement is correct regarding the online redo log? (Choose the best
answer.)
A. There must be at least one log file group, with at least one member.
B. There must be at least one log file group, with at least two members.
C. There must be at least two log file groups, with at least one member each.
D. There must be at least two log file groups, with at least two members each.
12. Where is the current redo byte address, also known as the incremental
checkpoint position, recorded? (Choose the best answer.)
A. In the controlfile
B. In the current online log file group
C. In the header of each datafile
D. In the system global area

Self Test Answers
1. þ C and E. The SGA is shared memory, updated by all sessions; PGAs are
private to each session. The SGA is allocated at startup time (but it can be
modified later).
ý A, B, and D. A is wrong because it reverses the situation: it is the SGA that
exists in shared memory, not the PGA. B is wrong because sessions write to
both their own PGA and to the SGA. D is wrong because (unlike the SGA) the
PGA is only allocated on demand.
2. þ D. This is the client-server split: user processes generate SQL; server
processes execute SQL.
ý A, B, and C. A and B are wrong because they get the use of Oracle Net
wrong. Oracle Net is the protocol between a user process and a server process.
C is wrong because it describes what server processes do, not what user
processes do.
3. þ A, D, and F. Every instance must have a database buffer cache, a log
buffer, and a shared pool.

ý B, C, E, and G. B, C, and G are wrong because the Java pool, the large
pool, and the Streams pool are only needed for certain options. E is wrong
because the PGA is not part of the SGA at all.
4. þ D. The log buffer is fixed in size at startup time.
ý A, B, C, E, F, and G. A, B, C, E, and F are wrong because these are the
SGA’s resizable components. G is wrong because the log buffer is static.
5. þ D. The log buffer cannot be resized manually, never mind automatically.
ý A, B, C, E, F, and G. A, B, C, E, and F are wrong because these SGA
components can all be automatically managed. G is wrong because the log
buffer is static.
6. þ A. The session updates the copy of the block in memory and writes out
the change vector to the log buffer.
ý B, C, and D. B is wrong, because while this will happen, it does not
happen when the change is made. C is wrong because it confuses the session
making changes in memory with LGWR propagating changes to disk. D is
wrong because all changes to data occur in memory as they are made—the
COMMIT is not relevant.
7. þ A. Archiving is not compulsory (though it is usually a good idea).
ý B, C, D, and E. CKPT, DBWn, LGWR, and MMON are all necessary
processes.
8. þ C. On COMMIT, the log writer flushes the log buffer to disk. No other
background processes need do anything.
ý A, B, and D. A is wrong because checkpoints only occur on request, or
on orderly shutdown. B is wrong because the algorithm DBWn uses to select
buffers to write to the datafiles is not related to COMMIT processing, but to
how busy the buffer is. D is wrong because ARCn only copies filled online
redo logs; it doesn’t copy change vectors in real time.
9. þ A, C, and E. A and C are correct because the DBA can choose to configure
multiple archive and database writer processes. E is correct because one server
process will be launched for every concurrent session.
ý B and D. These are wrong because an instance can have only one log
writer process and only one checkpoint process.
10. þ C. If a tablespace has several datafiles, segments can have extents in all
of them.
ý A, B, and D. A is wrong because one extent consists of consecutive
blocks in one datafile. B is wrong because one segment can only exist in one
tablespace (though one tablespace can contain many segments). D is wrong
because while this can certainly be done, one block can only exist in one
datafile.
11. þ C. Two groups of one member is the minimum required for the database
to function.
ý A, B, and D. A and B are wrong because at least two groups are always
required. D is wrong because while it is certainly advisable to multiplex the
members, it is not a mandatory requirement.
12. þ A. The checkpoint process writes the RBA to the controlfile.
ý B, C, and D. The online logs, the datafiles, and the SGA have no knowledge
of where the current RBA is.

CHAPTER 2
Installing and Creating
a Database

Exam Objectives
In this chapter you will learn to
• 052.2.1 Identify the Tools for Administering an Oracle Database
• 052.2.2 Plan an Oracle Database Installation
• 052.2.3 Install the Oracle Software by Using Oracle Universal Installer (OUI)
• 052.3.1 Create a Database by Using the Database Configuration Assistant (DBCA)

Perhaps the simplest yet most important strategic task in the life of an Oracle database
occurs at the planning and installation phase. Although the decisions you make at
this point are not cast in stone, they will often be complex to undo. For example,
choosing a database name, the locations of the installation binaries, and those of
other important files might seem trivial, but once you have committed to these
settings, they are usually permanent. It is therefore well worth the effort to consider
the key factors that influence planning, installing, and creating a database.
This chapter begins by introducing the essential bread-and-butter tools used by
Oracle DBAs and proceeds to discuss planning a database installation. Once the plan
is made, installing the Oracle software is described and the chapter culminates with
you creating your very own database.

Identify the Tools for Administering
an Oracle Database
Oracle Corporation provides a number of tools for managing the Oracle environment.
First there is the Oracle Universal Installer (OUI) used (as its name suggests) to install
any Oracle software. Second is the Database Configuration Assistant (DBCA), the tool
for creating a database. A related tool used during upgrades is the Database Upgrade
Assistant (DBUA), but a discussion of DBUA is beyond the scope of the exams. These
can be launched from the OUI or run separately. Third, the OUI will install a number of
other tools for managing a database and related components, notably SQL*Plus. Depending
on the installation type chosen, it may also install SQL Developer.
Oracle Enterprise Manager (OEM) Database Control is also installed by the OUI
and will be used extensively in this book.

The Oracle Universal Installer
Historically, managing Oracle software could be a painful task. This was because the DBA
was largely responsible for ensuring that incompatible products were kept separate. It was
not uncommon to install one product, a second, and a third satisfactorily—then installation
of a fourth would break the other three. The problem of incompatibilities lies in the use
of the base libraries. The base libraries provide facilities that are common to all Oracle
products. For example, all Oracle products use the Oracle Net communications protocol;
it is impossible to install a product without it. If two products are built on the same
version of the base libraries, then (theoretically) they can coexist in the same Oracle Home. An
Oracle Home is the location of an Oracle product installation: a set of files in a directory
structure. Before the Oracle Universal Installer, each product had its own self-contained
installation routine, which was sometimes not too clever at identifying incompatibilities
with already installed products.
The OUI is written in Java, using JDK/JRE 1.5. This means that it is the same on
all platforms. The OUI can be installed as a self-contained product in its own Oracle
Home, but this is not usually necessary, as it is shipped with every Oracle product and
can be launched from the product installation media; it will install itself into the

Chapter 2: Installing and Creating a Database

57

TIP Always use the latest version of the OUI that you have available. There
can be issues with updating the OUI inventory if you try to revert to earlier
versions after using a later version.

The OUI Inventory
Central to the OUI is the inventory. This is a set of files that should ideally exist outside
any Oracle Home. The inventory stores details of all the Oracle products installed on
the machine, including the exact version, the location, and in some cases details of
patches that have been applied. Every run of the OUI will check the inventory for
incompatibilities before permitting an install into an existing Oracle Home to
proceed, and will then update the inventory with details of all products installed or
upgraded. The location of the Unix inventory can be chosen by the DBA the first time
the OUI (any version) is run on the machine. On Windows, the location is always
created in
%SystemRoot%\Program files\Oracle\Inventory

All platforms have a hard-coded, platform-specific location where the OUI will
search for an existing inventory pointer. On Linux this is a file:
/etc/oraInst.loc

On Solaris it is also a file:
/var/opt/oracle/oraInst.loc

On Windows it is a key in the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\inst_loc

When the OUI starts, it will look for this file (or registry key). If it does not exist,
OUI assumes that there has never been any Oracle software installed on the machine,
and it will create the file (or registry key) and write to it the location of the new
inventory that is to be created. All subsequent runs of the OUI, no matter what
version, will then be able to find the inventory.
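The pointer file itself is trivial. A typical Linux /etc/oraInst.loc holds just two
entries; the inventory path and owning group shown here are illustrative:

```
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
```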
This mechanism for creating an inventory pointer does raise an issue with
operating system privileges: on Linux or Unix, the user running the installer for the
first time will need permission to write to the appropriate directory. Usually only the
root user can write to /etc or /var. As it is not acceptable for security reasons to run
the OUI as the root user, OUI will generate a script (the orainstRoot.sh script) to
be run by the root user that will create the oraInst.loc file. On Windows, the user
running the OUI will need privileges to create the registry key.


OCA/OCP Oracle Database 11g All-in-One Exam Guide

TIP To relocate the inventory, first copy it (the whole directory system to
which the inventory pointer is pointing) to the new location, and then edit
the pointer file (or registry key). Sometimes, you may want to create a new
inventory but keep the old one. On Linux, simply delete the oraInst.loc
file, run the OUI, and choose a location for the new inventory. From then on,
edit oraInst.loc to switch between the two inventories.
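The relocation steps in the TIP can be sketched as a script. This demonstration runs
against a mock inventory in a temporary directory; on a real system the pointer file is
/etc/oraInst.loc, the copy would be done as root, and all paths shown are illustrative:

```shell
#!/bin/sh
# Relocating an OUI inventory: copy the tree, then repoint the pointer file.
# Mocked up in a temp directory so the steps can be shown end to end.
set -e
WORK=$(mktemp -d)

# Stand-ins for the real inventory directory and oraInst.loc pointer file.
mkdir -p "$WORK/u01/app/oraInventory"
printf 'inventory_loc=%s/u01/app/oraInventory\ninst_group=oinstall\n' "$WORK" \
    > "$WORK/oraInst.loc"

# 1. Copy the whole directory system the pointer references to the new location.
cp -rp "$WORK/u01/app/oraInventory" "$WORK/u02-oraInventory"

# 2. Edit the pointer file to reference the new location.
sed -i "s|^inventory_loc=.*|inventory_loc=$WORK/u02-oraInventory|" "$WORK/oraInst.loc"

cat "$WORK/oraInst.loc"
```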

The Prerequisite Tests
The OUI checks certain requirements on the server machine before it will run. These
are platform specific and are provided in this file on the installation media:
• /install/oraparam.ini (Unix)
• \install\oraparam.ini (Windows)
The requirements are not too demanding, doing little more than checking that the
graphics device on which the installer is displaying can show at least 256 colors.
The oraparam.ini file also specifies the location of the file products.xml,
which is the file with details of all the products that can be installed from this media.
Each product will have its own requirements, and these may be demanding (or
irritating, if you know they actually don’t matter). The product requirements are listed
in a set of XML files. Typical of these is
• /stage/prereq/db/refhost.xml (Unix)
• \stage\prereq\db\refhost.xml (Windows)
The Windows file is usually very simple, specifying little more than a calculation for
necessary swap space, and the operating system release.

It is worth noting the swap space calculation, which is based on the amount of main
memory detected. For instance, if OUI detects physical memory of 512MB–2048MB, it
will demand a swap file of 1.5 times the amount of physical memory. OUI is not
intelligent enough to realize that Windows can resize its swap file, so that even if the
present size is far less than this, it could expand to far more. Also note that the Windows
Vista base version (Windows version 6.0) is listed, but not with any service packs.
The Unix prerequisites are more demanding, in that as well as a calculation for
required swap space they specify a whole list of packages and kernel settings, with
several sections for the various supported Unix versions.
Obtaining the required packages can be quite challenging for some Unix
distributions. Also, some of the kernel settings (such as the ip_local_port_range)
may conflict with local system administration policies. If you cannot get your system
into a state where it will pass the prerequisite tests, you have three options. First, you
can edit the oraparam.ini file or the refhost.xml file to change the value or to
remove the test entirely. This will “fix” the problem permanently. Second, you can run
the OUI with a switch that tells it to ignore the prerequisite tests. Third, you can run
the OUI and during the run tell it to ignore any failures. This last option can only
work when running OUI interactively, not when doing a silent install.
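For the record, the second option is a switch on the installer command line; the flag
below is the one documented for recent OUI releases (verify it against your release's
documentation before relying on it):

```shell
./runInstaller -ignoreSysPrereqs
```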
If at all possible, do not do any of these! In practice, often the problem is not that
the products will not work. For example, on Linux, some of the kernel settings and
packages are not really needed for an entry-level installation. The problem, however,
lies with the supportability of your installation. If you ever raise an SR (an SR is a
Service Request, passed to Oracle Support Services through MetaLink) and your
system does not conform to the prerequisites, the support analysts may well refuse to
help you. So if you have to break one of the rules to get an installation through, fix it
as soon as possible afterward.

Running the OUI
Oracle products are shipped on CDs or DVDs, or can be downloaded from Oracle
Corporation’s web site. The installation can be done directly from the CD or DVD,
but it is usually better to copy the CD or DVD to disk first (this is called staging), and
install from there. This does save time, since you aren’t prompted to insert different
media during the installation. The downloaded versions are usually ZIP files, or for
Linux and Unix compressed TAR or CPIO files. Use whatever operating system utility
is appropriate to expand them.
To launch the OUI, on Windows run the setup.exe file in the root directory; on
Linux and Unix, run the runInstaller shell script.

Database Creation and Upgrade Tools
The Database Configuration Assistant (DBCA) is a graphical tool used for creating
and modifying a database. Creating a database with DBCA is straightforward. The
wizard-driven approach guides you through the database creation options, allowing
you to determine parameter values and file location options. DBCA then generates the
appropriate scripts to create a database with the options you have chosen. DBCA
ensures there are no syntax errors and proceeds to run these scripts. Everything that
DBCA does can also be done manually using a command-line utility. DBCA is
commonly launched by OUI. When you opt for this, OUI instantiates the Oracle
Home and then goes on to run DBCA.
As with database creation, database upgrade can be done manually or through a
graphical tool. The graphical tool is the Database Upgrade Assistant (DBUA). It, too,
can be called by OUI, if OUI detects an existing database Oracle Home of an earlier
version. The DBUA will ensure that no steps are missed, but many DBAs prefer to do
upgrades manually. They believe that it gives them more control, and in some cases a
manual upgrade can be quicker.
Both DBCA and DBUA are written in Java and therefore require a graphics
terminal to display.


Tools for Issuing Ad Hoc SQL: SQL*Plus
and SQL Developer
There are numerous tools that can be used to connect to an Oracle database. Two
of the most basic are SQL*Plus and SQL Developer. These are provided by Oracle
Corporation and are perfectly adequate for much of the work that a database
administrator needs to do. The choice between them is partly a matter of personal
preference, partly to do with the environment, and partly to do with functionality.
SQL Developer undoubtedly offers far more functionality than SQL*Plus, but it is more
demanding in that it needs a graphical terminal, whereas SQL*Plus can be used on
character-mode devices.

SQL*Plus
SQL*Plus is available on all platforms to which the database has been ported, and it
is installed into both Oracle database and Oracle client Oracle Homes. On Linux, the
executable file is sqlplus. The location of this file will be installation specific but
will typically be something like
/u01/app/oracle/product/db_1/bin/sqlplus

Your Linux account should be set up appropriately to run SQL*Plus. There are
some environment variables that will need to be set. These are
• ORACLE_HOME
• PATH
• LD_LIBRARY_PATH
The PATH must include the bin directory in the Oracle Home. The LD_LIBRARY_
PATH should include the lib directory in the Oracle Home, but in practice you may
get away without setting this. Figure 2-1 shows a Linux terminal window and some
tests to see if the environment is correct.
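The environment just described is typically set in the account's profile. The sketch
below uses an illustrative OFA-style Oracle Home; the final sqlplus launch is shown
as a comment, since it needs a running database:

```shell
# Profile entries for a Linux account that runs SQL*Plus.
# The Oracle Home path is an example; substitute your own installation.
export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH:-}

# The checks from Figure 2-1:
echo $ORACLE_HOME
which sqlplus || echo "sqlplus not on the PATH yet"

# Then launch with a username, password, and connect identifier, e.g.:
# sqlplus system/oracle@orcl
```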
In Figure 2-1, first the echo command checks whether the three variables have
been set up correctly: there is an ORACLE_HOME, and the bin and lib directories
in it have been set as the first elements of the PATH and LD_LIBRARY_PATH variables.
Then which confirms that the SQL*Plus executable file really is available, in the
PATH. Finally, SQL*Plus is launched with a username, a password, and a connect
identifier passed to it on the command line.
Following the logon, the next lines of text display the version of SQL*Plus being
used, which is 11.1.0.6.0, the version of the database to which the connection has
been made (which happens to be the same as the version of the SQL*Plus tool), and
which options have been installed within the database. The last line is the prompt to
the user, SQL>, at which point they can enter any SQL*Plus or SQL command.


Figure 2-1
Checking the Linux
session setup

Historically, there were always two versions of SQL*Plus for Microsoft Windows:
the character version and the graphical version. The character version is the executable
file sqlplus.exe, and the graphical version was sqlplusw.exe; with the current
release the graphical version no longer exists, but many DBAs still prefer to use it, and
the versions shipped with earlier releases are perfectly good tools for working with an
11g database. There are no problems with mixing client versions: an 11g SQL*Plus
client can connect to a 9i database, and a 9i SQL*Plus client can connect to an 11g
database; changes in Oracle Net may make it impossible to go back further than 9i.
Following a default installation of either the Oracle database or just the Oracle client
on Windows, SQL*Plus will be available as a shortcut on the Windows Start menu.
The tests of the environment and the need to set the variables if they are not
correct, previously described for a Linux installation, are not usually necessary on a
Windows installation. This is because the variables are set in the Windows registry by
the Oracle Universal Installer when the software is installed. If SQL*Plus does not
launch successfully, check the registry variables. Figure 2-2 shows the relevant section
of the registry, viewed with the Windows regedit.exe registry editor utility. Within
the registry editor, navigate to the key
HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_OraDb11g_home1

The final element of this navigation path will have a different name if there have
been several 11g installations on the machine.

SQL Developer
SQL Developer is a tool for connecting to an Oracle database (or, in fact, some non-Oracle databases too) and issuing ad hoc SQL commands. It can also manage PL/SQL
objects. Unlike SQL*Plus, it is a graphical tool with wizards for commonly needed
actions. SQL Developer is written in Java, and requires a Java Runtime Environment
(JRE) to run. It is available on all platforms that support the appropriate version of
the JRE. SQL Developer does not need to be installed with the Oracle Universal


Figure 2-2 The Oracle registry variable

Installer. It is not installed in an Oracle Home but is completely self-contained. The
latest version can be downloaded from Oracle Corporation’s web site.
To install SQL Developer, unzip the ZIP file. That's all. It does require at least JRE
release 1.5 to be available. If a JRE is not available on the machine being used, there
are downloadable versions of SQL Developer for Windows that include it. (These
versions include a Java Development Kit, or JDK, which includes the JRE.) For platforms
other than Windows, JRE 1.5 must be preinstalled. Download it from Sun Microsystems'
web site, and install it according to the platform-specific directions. To check that the
JRE is available and its version, run the following command from an operating system
prompt:
java -version

This should return something like
java version 1.5.0_13
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05)
Java HotSpot(TM) Client VM (build 1.5.0_13-b05, mixed mode, sharing)

If the version number returned is not what you expect, using which java may help
identify the problem: the search path could be locating an incorrect version.
Once SQL Developer has been unzipped, change your current directory to the
directory in which SQL Developer was unzipped, and launch it. On Windows, the
executable file is sqldeveloper.exe. On Linux, it is the sqldeveloper.sh shell
script. Remember to check that the DISPLAY environment variable has been set to a
suitable value (such as 127.0.0.1:0.0, if SQL Developer is being run on the system
console) before running the shell script.

Any problems with installing the JRE and launching SQL Developer should be
referred to your system administrator.
TIP Database 11g does ship with a release of SQL Developer, and OUI
will unzip it into a directory in the Oracle Home, but this will not be the
up-to-date version. As of the time of writing, the version shipped with the
production release of the 11g database is version 1.1, but the current version
is 1.5.
Figure 2-3 shows the SQL Developer User Interface after connecting to a database
and issuing a simple query.
The general layout of the SQL Developer window comprises a left pane for
navigation around objects, and a right pane to display and enter information.
In the figure, the left-hand pane shows that a connection has been made to a database.
The connection is called orcl_sys. This name is just a label chosen when the connection
was defined, but most developers will use some sort of naming convention—in this case,
the name chosen is the database identifier, which is orcl, and the name of the user the
connection was made as, which was sys. The branches beneath list all the possible object
types that can be managed. Expanding the branches would list the objects themselves. The
right-hand pane has an upper part prompting the user to enter a SQL statement, and a
lower part that will display the result of the statement. The layout of the panes and the
tabs visible on them are highly customizable.

Figure 2-3 The SQL Developer user interface

The menu buttons across the top menu bar give access to standard facilities:

• File A normal Windows-like file menu, from where one can save work and
exit from the tool.
• Edit A normal Windows-like edit menu, from where one can undo, redo,
copy, paste, find, and so on.
• View The options for customizing the SQL Developer user interface.

• Navigate Facilities for moving between panes, and also for moving around
code that is being edited.
• Run Forces execution of the SQL statements, SQL script, or PL/SQL block
that is being worked on.
• Debug Rather than running a whole block of code, step through it line by
line with breakpoints.
• Source Options for use when writing SQL and PL/SQL code, such as
keyword completion and automatic indenting.
• Migration Tools for converting applications designed for third-party databases
(Microsoft Access and SQL Server, and MySQL) to the Oracle environment.
• Tools Links to external programs, including SQL*Plus.
• Help It's pretty good.

SQL Developer can be a very useful tool, and it is very customizable. Experiment
with it, read the Help, and set up the user interface the way that works best for you.
Exercise 2-1: Install SQL Developer on Windows In this exercise, you will
install SQL Developer on a Windows machine.
1. Download the current version of SQL Developer. The URL is
http://www.oracle.com/technology/software/products/sql/index.html

Click the radio button to accept the license agreement, and then select the file
that includes the JDK (if you do not already have this), or the file without the JDK
if it is already available on the machine.
The file will be called something like sqldeveloper-1.2.1.3213.zip,
depending on the version.
2. Move the file to an empty directory, and expand it. You will need WinZip or
a similar utility installed to do this. The next illustration shows the contents


of the directory into which the file was unzipped, viewed from a command
window.

Note the presence of the readme.html file. This contains the product release
notes—open it in a browser, and read them.
3. Confirm success of your installation by running the sqldeveloper.exe
executable file, either from the command prompt or by double-clicking it in
Windows Explorer.

Oracle Enterprise Manager
The version of Oracle Enterprise Manager relevant to the OCP examination is Database
Control. This is a tool for managing one database (which can be a RAC database),
whereas Grid Control can manage many databases (and more). Database Control is
installed into the Oracle Home. It consists of a Java process that monitors a port for
incoming connection requests. If there are several database instances running off the
same Oracle Home, each instance will be accessible through Database Control on a
different port.
Database Control connects to the database on behalf of the user. It has built-in
monitoring capability and will display real-time information regarding alert conditions,
activity, and resource usage. It also gives access to many wizards that can make
database management and tuning tasks feasible for novice DBAs, and quick to carry
out for experienced DBAs.
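Each instance's Database Control is reached in a browser on its own port. A typical
URL looks like the following; the hostname is invented, and 1158 is the customary
default port for the first 11g instance (the actual assignment is recorded in
portlist.ini under the Oracle Home):

```
https://server1.example.com:1158/em
```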
Starting and stopping the Database Control process is described in Chapter 3;
using it for management tasks is demonstrated in most subsequent chapters.
TIP Oracle Enterprise Manager can be a very useful tool, but never use it
without understanding what it is doing. Many DBAs like to work from the
SQL*Plus or SQL Developer command line to understand exactly how
to do something, and then use Enterprise Manager to make doing it easy.
It is also a nice tool for checking syntax for a command you’ve forgotten.

Other Administration Tools
There are a number of other utilities that will be used in the course of this book. In
many cases, there are both graphical and command-line interfaces. All of these are
installed into the Oracle Home.

Oracle Net Manager, Oracle Net Configuration Assistant
These are two Java graphical tools for configuring the Oracle networking environment.
There is considerable overlap in their functionality, but each does have some capability
lacking in the other. Most network administration tasks can also be done through
Database Control, and all can be done by editing configuration files by hand.
Historically, manual editing of the Oracle Net configuration files could be an
extremely dodgy business: many DBAs believed that the files were very sensitive to
trifling variations in format such as use of white spaces, abbreviations, and case. For
this reason alone, the graphical tools have always been popular. Recent releases of
Oracle Net appear to be less sensitive to such issues, but the graphical tools are still
useful for preventing silly syntax errors.

Data Loading and Unloading Utilities
The classical utilities for transferring data between Oracle databases are the Export and
Import tools. Export runs queries against a database to extract object definitions and data,
and writes them out to an operating system file as a set of DDL and DML commands.
Import reads the file and executes the DDL and DML statements to create the objects and
enter the data into them. These utilities were very useful for transferring data between
databases, because the transfer could go across operating systems and Oracle versions, but
because they work through regular user sessions (they are client-server tools), they were
not always suitable for large-scale operations. Export files can only be read by Import.
The replacement for Export and Import is Data Pump, introduced with release 10g.
Functionally, Data Pump is very similar: it extracts data from one database, writes it out
to a file, and inserts it into another database (possibly a different version, on a different
platform). But the implementation is completely different. Data Pump uses background
processes, not server sessions, to read and write data. This makes it much faster.
Launching, controlling, and monitoring Data Pump jobs is done through client-server
sessions, but the job itself all happens within the instance. Export and Import are still
supported, but Data Pump is the preferred utility. Data Pump–generated files can only
be read by Data Pump: there is no compatibility with Export and Import.
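As an illustration of the client interface, a schema transfer with the Data Pump
command-line tools might look like the following; the credentials, directory object,
and schema names are invented for the example:

```shell
# Export the HR schema to a dump file. dp_dir is an Oracle directory object
# that must already exist and point at a server-side path.
expdp system/oracle DIRECTORY=dp_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log SCHEMAS=hr

# Import it elsewhere, remapping the objects into a different schema.
impdp system/oracle DIRECTORY=dp_dir DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test
```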
SQL*Loader is a tool for loading large amounts of data into an Oracle database
from operating system files. These files can be laid out in a number of formats. There
are restrictions on the formats SQL*Loader can use, but it is a pretty versatile tool and
can be configured to parse many file layouts. Typical usage is the regular upload of data
into an Oracle database from a third-party feeder system: the third-party database will
write the data out in an agreed format, and SQL*Loader will then load it.
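A minimal SQL*Loader control file for an agreed comma-delimited format might look
like this; the file, table, and column names are invented for the example:

```
LOAD DATA
INFILE 'feed.dat'
APPEND INTO TABLE sales_feed
FIELDS TERMINATED BY ','
(sale_id, sale_date DATE "YYYY-MM-DD", amount)
```

It would be run with something like sqlldr system/oracle control=feed.ctl.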
EXAM TIP Data Pump can read only files generated by Data Pump, but
SQL*Loader can read files generated by any third-party product, so long
as the file is formatted in a way that can be parsed.


Data Pump and SQL*Loader are described in Chapter 23. Both utilities have
command-line interfaces and a graphical interface through Database Control.
TIP Export and Import will be useful for a long time to come. Data Pump is
available only for releases 10g and 11g, so whenever it is necessary to transfer
data to or from 9i and earlier databases, the older utilities will still be needed.
It is well worth getting familiar with them.

Backup Utilities
It is possible to back up an Oracle database using operating system utilities. Operating
system backups (known as user-managed backups) are fully supported, and there are
circumstances when they may be the best option. But the preferred tool is RMAN, the
Recovery Manager. RMAN backups are known as server-managed backups. RMAN is
introduced and used for simple backup and restore operations in Chapters 15–17.
RMAN server-managed backups have capabilities that user-managed backups
cannot provide. These include incremental backups, where only the changed blocks of
a datafile are backed up; block-level restore and recovery, where if the damage to a file
is only to a small part of the file, just that part can be repaired; the application of an
incremental backup to full backup, to roll it forward; and validating the datafiles to
detect corruptions before end users hit them.
TIP The degree of knowledge of backup and recovery techniques tested by
the OCP examinations may not be adequate for a DBA to be considered fully
competent. Remember that the OCP curriculum is only an introduction to
database administration. Backup is a critical task and will require further study.
The Oracle Secure Backup facility lets the DBA manage backup of the entire
environment: Oracle Application Servers, remote clients, and operating system files,
as well as the database. It is developed by Oracle in conjunction with operating
system and hardware vendors.

Plan an Oracle Database Installation
Before running OUI, it is necessary to confirm adequate hardware and operating
system resources, to make a decision about where to install the software, and to
consider setting some environment variables.

Choice of Operating System
Some people become almost religiously attached to their favorite operating system.
Try to avoid this. All operating systems have good and bad points: none are suitable
for all applications. In general, Oracle Corporation supports all the mainstream
platforms, including
• Linux on Intel and AMD

• Microsoft Windows on Intel and AMD
• Solaris on SPARC
• AIX on POWER
• HPUX on PA-RISC
These platforms are probably the most common, but there are many others. Some
operating systems are available in both 32-bit and 64-bit versions to support different
popular machine architectures. Usually, Oracle ports the database to both. When
selecting an operating system, the choice should be informed by many factors, including
• Cost
• Ease of use
• Choice of hardware
• Available skills
• Scalability
• Fault tolerance
• Performance
There are other factors, and not only technical ones. Corporate standards will be
particularly important.
Linux deserves a special mention. Oracle Corporation has made a huge
commitment to Linux, and Linux is used as the development platform for many
products (including database release 11g). Linux comes in several distributions. The
most popular for Oracle servers are Red Hat and SUSE, but do not ignore the Oracle
distribution: Enterprise Linux. This is very well packaged and fully supported by
Oracle Corporation. This means you can have one support line for the entire server
technology stack.

Hardware and Operating System Resources
Determining the necessary hardware resources for an Oracle database server requires
knowledge of the anticipated data volumes and transaction workload. There are sizing
guides available on MetaLink. The minimum hardware requirements for a usable
system are
• 1GB RAM
• 1.5GB swap space
• 400MB in the TEMP location
• 1.5GB–3.5GB for the Oracle Home
• 1.5GB for the demonstration seed database
• 2.4GB for the flash recovery area
• A single 1GHz CPU


The wide range in space for the Oracle Home is because of platform variations.
Around 2.5GB is typical for the Windows NTFS file system, 3.5GB for the Linux ext3 file
system. The flash recovery area is optional. Even if defined, there is no check made as to
whether the space is actually available. Machines of a lower specification than that just
given can be used for learning or development but would not be suitable for anything
else. The TEMP location is a directory specified by the TEMP environment variable.
The server operating system must be checked for compliance with the Oracle
certified platforms, bearing in mind these issues:
• That some operating systems come in 32-bit and 64-bit versions
• Correct version and patch level
• Required packages
• Kernel parameters
These prerequisite factors will be checked by the OUI.
Exercise 2-2: Confirm Available Hardware Resources In this exercise,
you will check what resources are available, first for Windows and second for Linux.
Windows:
1. Right-click My Computer, and bring up the Properties dialog box. Note the
amount of RAM. This should be at least 512MB, preferably 1GB.
2. Choose the Advanced tab, and then in the Performance section click the
SETTINGS button.
3. In the Performance Options dialog box select the Advanced tab. Note the
virtual memory setting. This should be at least one and a half times the
memory reported in Step 1.
4. Open a command window, and find the location of your temporary data
directory with this command:
C:\> echo %TEMP%

This will return something like
C:\Temp

Check that there is at least 400MB free space on the file system returned (in
this example, it is drive C:).
5. Identify a file system with 5GB free space for the Oracle Home and a database.
This must be a local disk, not on a file server. If you want to stage the
installation media (you probably do), that will need another 1.5GB, which
can be on a file server.
Linux:
1. From an operating system prompt, run free to show main memory and
swap space, which should ideally both be at least 1GB. These are the values
in the total column. In the illustration that follows, they are both about 2GB.

2. Run df -h to show the free space in each mounted file system. Confirm that
there is a file system with 5GB free for the Oracle Home and the database.
Confirm that there is 400MB free in /tmp if it exists as a separate file system;
if there is no specific file system for /tmp (as is the case in the illustration),
you can assume that it is in the root file system. In the illustration, there is
23GB free in the root file system.
3. Use rpm to check that all required packages are installed, at the correct (or
later) version. In the illustration, the sysstat package is being checked.
4. Use sysctl to check that all the required kernel settings have been made—
you may need to have root privilege to do this. In the illustration, the IP port
range is being checked.
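The four Linux checks can be gathered into one script. The thresholds follow the
minimum requirements listed earlier, and the package (sysstat) and kernel parameter
are the examples the text uses; lines for tools that are absent on a given machine
are skipped rather than failing:

```shell
#!/bin/sh
# Exercise 2-2 (Linux) as one script: memory, swap, /tmp space, a package,
# and a kernel parameter.
mem_mb=$(awk '/^MemTotal:/ {print int($2/1024)}' /proc/meminfo)
swap_mb=$(awk '/^SwapTotal:/ {print int($2/1024)}' /proc/meminfo)
echo "RAM: ${mem_mb}MB (want >= 1024)   Swap: ${swap_mb}MB (want >= 1536)"

df -h /tmp | tail -1                  # need at least 400MB free here

rpm -q sysstat 2>/dev/null || echo "sysstat check skipped (rpm not available)"
sysctl net.ipv4.ip_local_port_range 2>/dev/null || echo "sysctl check skipped"
```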

Optimal Flexible Architecture
The Oracle Home will need a file system into which it can be installed. Oracle
Corporation has designed OFA, the Optimal Flexible Architecture, as a file system
directory structure that should make maintaining multiple versions of multiple Oracle
products straightforward. The heart of OFA is two environment variables:
ORACLE_BASE and ORACLE_HOME. The ORACLE_BASE directory is one directory on the
server, beneath which all the Oracle software (all products, all versions) should be
installed. Each version of each product will then have its own ORACLE_HOME,
beneath the ORACLE_BASE. This structure should ensure that many databases can be
created and upgraded without ever ending up with files in inappropriate locations.
The Linux and Unix OFA standard for ORACLE_BASE is that it should be a
directory named according to the template /pm/h/u, where p is a string constant such
as u, m is a numeric constant such as 01, h is a standard directory name such as app,
and u is the operating system account that will own all the Oracle software, such as
oracle.
The Windows OFA standard for ORACLE_BASE is \oracle\app off the root of
any suitable drive letter.

The OFA standard for the database ORACLE_HOME is $ORACLE_BASE/product/v/
db_n, where product is the constant product, v is the release number of the product
such as 11.1.0, and db_n is a name derived by the installer based on which product
it is, such as db for database, and an incrementing number for each installation of
that product, such as 1.
Typical Linux values for ORACLE_BASE and ORACLE_HOME are
/u01/app/oracle
/u01/app/oracle/product/11.1.0/db_1

and typical Windows values are
D:\oracle\app
D:\oracle\app\product\11.1.0\db_1

The OFA location for the database itself is ORACLE_BASE/q/d, where q is the string
oradata and d is the name of the database. A Linux example for a database called
orcl is
/u01/app/oracle/oradata/orcl
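The OFA layout described above can be sketched by building the directory tree. Here
a temporary directory stands in for the real mount point (on a server this would be
/u01, created by root and owned by the oracle user):

```shell
#!/bin/sh
# Build an OFA-style tree for a database called orcl.
set -e
BASE_ROOT=$(mktemp -d)               # stand-in for the real mount point

ORACLE_BASE=$BASE_ROOT/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1

mkdir -p "$ORACLE_HOME"              # the software
mkdir -p "$ORACLE_BASE/oradata/orcl" # the database files

find "$BASE_ROOT" -type d | sort
```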

Within the database directory, the controlfile copies, online redo logfiles, and
datafiles should be named as follows:
File Type       Name                   Variable                                      Examples
Controlfile     controlnn.ctl          nn is a unique number                         control01.ctl, control02.ctl
Redo logfiles   redonn.log             nn is the online redo logfile group number    redo01.log, redo02.log
Datafiles       tablespacenamenn.dbf   the datafile's tablespace name and a number   system01.dbf, system02.dbf

TIP OFA does not specify the naming convention for multiplexed online
redo logfiles. Many DBAs suffix the OFA name with a letter to differentiate
members in the same group: redo01a.log, redo01b.log.

Environment Variables
One significant difference between Windows and Unix operating systems is in the
way in which environment variables are set. Within the Unix family, there are further
variations depending on the shell being used. On Windows operating systems, there
is the registry: Unix has no equivalent of this.
The Oracle database makes use of several environment variables, some of which
can be set before even running the OUI. The OUI will prompt for them, using the
preset values as defaults. On Linux, the one variable that must be set before the
installer can run is DISPLAY.

Chapter 2: Installing and Creating a Database

Variables in Windows
Variables can be set at various levels with various degrees of persistence on a Windows
system, ranging from permanent, system-wide variables set in the Windows registry to
variables set interactively within a command shell. As a general rule, variables set at a
higher level (such as within the registry) can be overruled at a lower level (such as
within a shell).
The highest level for variables is in the registry. The OUI creates a key in the registry,

HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE

and defines variables for each installed Oracle product beneath this. Figure 2-2 earlier
shows the variables set for the ORACLE key, and then those set one level down, in the
key KEY_OraDb11g_home1.
At the ORACLE level, the variable inst_loc defines the location of the OUI
inventory, described previously. Beneath this level there are keys for each installed
product. In the example shown, there are two products installed: JInitiator (which is
Oracle’s client-side JVM for running the Forms viewing applet—two versions have
been installed on the system) and Database 11g. In the key KEY_OraDb11g_home1
there are a number of variables, two of the more significant being the ORACLE_BASE
and the ORACLE_HOME. Others specify the locations of various components and
the options Windows should use for automatic startup and shutdown of a database
instance called ORCL.
TIP There is no easy way to query the value of a Windows registry variable,
other than by looking at the registry with a tool such as the regedit.exe
registry editor. For this reason, many DBAs like to set variables at the
session level, from where they can be easily retrieved and used. Figure 2-4
shows an example of doing this.

Figure 2-4 Setting and using Windows environment variables

The commands for setting up the environment in the manner desired would
usually be specified in a batch file that could be invoked from the command line
or as a login script.

Variables in Linux
The syntax for setting and reading environment variables varies from one shell to
another. The examples that follow are for the bash shell, because that is possibly the
most widely used Linux shell.
Linux environment variables are always session specific. They must all be set up
for each session—there is no equivalent of the Windows registry setting up variables
with a scope that can include all sessions. To simulate setting what might be thought
of as “global” variables applying to all sessions by all users, set them in the /etc/
profile file, which is executed at each logon.
Figure 2-5 shows examples of setting and using bash shell environment variables.
Note that in Figure 2-5 two more variables are being set on Linux than in Figure 2-4
on Windows. The LD_LIBRARY_PATH variable should include all dynamically linked
libraries that may be needed, and the DISPLAY must be set to point to the terminal on
which the user is working.
EXAM WATCH If the DISPLAY variable is not set appropriately, OUI will not
be able to open any windows and will throw an error.
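A minimal sketch of a guard for this, assuming a bash session on the server; check_display is a hypothetical helper, not part of the OUI.

```shell
# Hypothetical guard: refuse to launch the OUI when DISPLAY is unset.
check_display() {
    if [ -z "${DISPLAY:-}" ]; then
        echo "DISPLAY is not set; OUI cannot open any windows" >&2
        return 1
    fi
}

# Typical use (installer path is an assumption):
# check_display && ./runInstaller
```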

Install the Oracle Software by Using
the Oracle Universal Installer (OUI)
To run the OUI for the first time, log on to the server machine as an operating system
user with permission to read the installation media (or the directory to which it has
been staged) and to write to the directory chosen for the ORACLE_BASE. Then launch
the OUI by running
setup.exe (Windows)
runInstaller.sh (Linux)
Figure 2-5 Setting and using environment variables in the bash shell

To bypass the prerequisite checks (not advised, but may be useful), add a switch:

runInstaller -ignoreSysPrereqs

It is possible to do an unmanaged installation known as a silent install. This will
be necessary if there is no graphics device, and is very convenient if you are performing
many identical installs on identical machines. Also, it becomes possible to embed an
Oracle installation within the routine for deploying a packaged application. A silent
install requires a response file, which includes answers to all the prompts that would
usually be manually given. The syntax for running the OUI in this way is
runInstaller -silent -responsefile responsefilename

The response file can be created manually (there are examples in the /response
directory on the installation media), or it can be recorded by OUI during an
interactive install:
runInstaller -record -destinationFile responsefilename

Before doing a silent install, the inventory pointer file (/etc/oraInst.loc on
Linux) must have been created, or OUI will not be able to locate (or create if
necessary) the inventory.
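The record-then-replay workflow can be sketched as below, under stated assumptions: the staging directory and response-file path are hypothetical, and the installer commands are shown commented out rather than executed.

```shell
# Hypothetical paths; substitute your own staging area and file name.
STAGE=/home/db11g/db11g_dvd
RSP=/home/db11g/db11g.rsp

# On the first machine, record the answers given interactively:
# $STAGE/runInstaller -record -destinationFile $RSP

# On subsequent machines, replay them with no GUI:
# $STAGE/runInstaller -silent -responsefile $RSP
echo "$STAGE/runInstaller -silent -responsefile $RSP"
```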
Exercise 2-3: Install the Oracle Home   In this exercise, install an Oracle
Home on Linux using the OUI.

1. Log on to Linux as a user who is a member of the dba group. In the following
example, the operating system user is db11g. Confirm the username and
group membership with the id command, as in this illustration:

2. Switch to the root user with su and create an OFA-compliant directory for the
Oracle Base with the mkdir command. In the example, this is /u02/app/
db11g. Change the ownership and access modes of the directory such that
the db11g user has full control of it with the chown and chmod commands,
as in the preceding illustration, and exit back to the Oracle user.

3. If you are not working on the console machine, set your DISPLAY variable to
point to an X Window server on the machine on which you are working. In
the illustration, this is 10.0.0.12:0.0.
4. Launch the OUI by running the runInstaller shell script from the root of
the installation media. In the example, the installation media has been copied
into the directory /home/db11g/db11g_dvd.
5. The first OUI window will appear, as in the illustration that follows:
A. Select the Basic Installation radio button.
B. Specify the Oracle Base as the directory created in Step 2. The Oracle
Home will default to an OFA-compliant name beneath it.
C. Select the Enterprise Edition installation type.
D. Select dba as the Unix DBA group.
E. De-select the option to create a database.
F. Click NEXT.

6. If this is the first Oracle install on the machine, the next window will prompt
for the location of the OUI inventory. Be sure to specify a directory to which
the db11g user has write permission.
7. The OUI will then perform its prerequisite checks. If they pass, click NEXT to
continue. If any fail, take note and fix them if possible. Then use the RETRY
button to rerun the test. If the check really cannot be fixed, you can click NEXT
to proceed anyway at your own risk.
8. The next window will be a summary of what the OUI is going to do. Click
NEXT, and it will do it. This should take twenty minutes or so (highly variable,
depending on the machine).

9. Toward the end of the install, the window shown in the illustration that
follows will appear. This prompts you to run two scripts as the root user: the
orainstRoot.sh script that will write the /etc/oraInst.loc file, and
the root.sh script that adjusts permissions on files in the new Oracle home.
If this is not the first time the OUI has run on the machine, there will not be
a prompt for orainstRoot.sh. Run the script(s) as root from an operating
system prompt (accept defaults for any prompts) and then click OK.
10. The installer will return a message stating that “The installation of Oracle
Database 11g was successful.” Congratulations! Click EXIT.

Create a Database by Using the Database Configuration Assistant
This one OCP examination objective is in fact a large task, comprising several steps. It is
not large in terms of the practicalities (creating a database can be quick and simple—a
single two-word command will do it, and it may take less than ten minutes), but there
are many prerequisite concepts you should understand:
• The instance, the database, and the data dictionary
• Using the DBCA to create a database
• The instance parameter file
• The CREATE DATABASE command
• Post-creation scripts
• The DBCA’s other functions

The Instance, the Database, and the Data Dictionary
An Oracle server consists of an instance and a database; the two are separate, but
connected. The instance comprises memory structures and processes, stored in your
machine’s RAM and executing on its CPU(s); its existence is transient; it can be started
and stopped. The database comprises files on disk; once created, it persists until it is
deleted. Creating an instance is nothing more than building the memory structures
and starting the processes. Creating a database is done by the instance as a once-off
operation, and the instance can then open and close it many times subsequently. The
database is inaccessible without the instance.
Within the database there is a set of tables and other segments called the data
dictionary. The data dictionary describes all the logical and physical structures in the
database, including all the segments that store user data.
The process of database creation establishes the bare minimum of physical
structures needed to store the data dictionary, and then creates the data dictionary
within them.
An instance is defined by an instance parameter file. The parameter file contains
directives that define how the instance should be initialized in memory: the size of the
memory structures, and the behavior of the background processes. After building the
instance, it is said to be in no mount mode. In no mount mode, the instance exists but
has not connected to a database. Indeed, the database may not even exist at this point.
All parameters, either specified by the parameter file or implied, have default values,
except for one: the parameter DB_NAME. The DB_NAME parameter names the database
to which the instance will connect. This name is also embedded in the controlfile. The
CONTROL_FILES parameter points the instance to the location of the controlfile. This
parameter defines the connection between the instance and the database. When the
instance reads the controlfile (which it will find by reading the CONTROL_FILES
parameter), if there is a mismatch in database names, the database will not mount. In
mount mode, the instance has successfully connected to the controlfile. If the controlfile
is damaged or nonexistent, it will be impossible to mount the database. The controlfile is
small, but vital.
Within the controlfile, there are pointers to the other files (the online redo logfiles
and the datafiles) that make up the rest of the database. Having mounted the database,
the instance can open the database by locating and opening these other files. An open
database is a database where the instance has opened all the available online redo
logfiles and datafiles. Also within the controlfile, there is a mapping of datafiles to
tablespaces. This lets the instance identify the datafile(s) that make(s) up the SYSTEM
tablespace within which it will find the data dictionary. The data dictionary lets the
instance resolve references to objects referred to in SQL code to the segments in which
they reside, and work out where, physically, the objects are.
The creation of a database server must therefore involve these steps:
• Create the instance.
• Create the database.
• Create the data dictionary.

In practice, the steps are divided slightly differently:
• Create the instance.
• Create the database and the data dictionary objects.
• Create the data dictionary views.
The data dictionary as initially created with the database is fully functional but
unusable. It has the capability for defining and managing user data but cannot be
used by normal human beings because its structure is too abstruse. Before users (or
DBAs) can actually use the database, a set of views must be created on top of the data
dictionary that will render it understandable by humans.
The data dictionary itself is created by running a set of SQL scripts that exist in the
ORACLE_HOME/rdbms/admin directory. These are called by the CREATE DATABASE
command. The first is sql.bsq, which then calls several other scripts. These scripts
issue a series of commands that create all the tables and other objects that make up
the data dictionary.
The views and other objects that make the database usable are generated by
additional scripts in the ORACLE_HOME/rdbms/admin directory, prefixed with “cat”.
Examples of these are catalog.sql and catproc.sql, which should always be
run immediately after database creation. There are many other optional “cat” scripts
that will enable certain features—some of these can be run at creation time; others
might be run subsequently to install these features at a later date.

Using the DBCA to Create a Database
These are the steps to follow to create a database:
1. Create a parameter file and (optionally) a password file.
2. Use the parameter file to build an instance in memory.
3. Issue the CREATE DATABASE command. This will generate, as a minimum,
a controlfile; two online redo logfiles; two datafiles for the SYSTEM and
SYSAUX tablespaces; and a data dictionary.
4. Run SQL scripts to generate the data dictionary views and the supplied
PL/SQL packages.
5. Run SQL scripts to generate the objects used by Enterprise Manager Database
Control, and any other database options chosen to be enabled.
On Windows systems, there is an additional step because Oracle runs as a Windows
service. Oracle provides a utility, oradim.exe, to assist you in creating this service.
These steps can be executed interactively from the SQL*Plus prompt or through a
GUI tool, the Database Configuration Assistant (DBCA). Alternatively, you can automate
the process by using scripts or start the DBCA with a response file.
Whatever platform you are running on, the easiest way to create a database is
through the DBCA. You may well have run this as part of the installation: OUI can
launch the DBCA, which prompts you and walks you through the whole process. It
creates a parameter file and a password file and then generates scripts that will start
the instance; create the database; and generate the data dictionary, the data dictionary
views, and Enterprise Manager Database Control. Alternatively, you can create the
parameter file and password file by hand, and then do the rest from a SQL*Plus
session. Many DBAs combine the two techniques: use the DBCA to generate the files
and scripts, and then look at them and perhaps edit them before running them from
SQL*Plus.
The DBCA is written in Java—it is therefore the same on all platforms. On Unix,
you run the DBCA on the machine where you wish to create the database, but you
can launch and control it from any machine that has an X server to display the DBCA
windows. This is standard X Window System—you set an environment variable
DISPLAY to tell the program where to send the windows it opens. For example,
export DISPLAY=10.10.10.65:0.0

will redirect all X windows to the machine identified by IP address 10.10.10.65, no
matter which machine you are actually running the DBCA on.
To launch the DBCA on Windows, take the shortcut on the Start menu. The
navigation path will be
1. Start
2. Programs
3. Oracle – OraDB11g_home3
4. Configuration and Migration Tools
5. Database Configuration Assistant
Note that the third part of the path will vary, depending on the name given to the
Oracle Home at install time.
To launch the DBCA on Linux, first set the environment variables that should
always be set for any Linux DBA session: ORACLE_BASE, ORACLE_HOME, PATH,
and LD_LIBRARY_PATH. This is an example of a script that will do this:
export ORACLE_BASE=/u02/app/db11g
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

Note that the Base and Home will vary according to choices made at install time.
To launch the DBCA, run the dbca shell script, located in the $ORACLE_HOME/bin
directory.
TIP Be sure to have the $ORACLE_HOME/bin directory at the start of
your search path, in case there are any Linux executables with the same name
as Oracle executables. A well-known case in point is rman, which is both an
Oracle tool and a SUSE Linux utility.
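A quick sketch of a check for this, assuming the ORACLE_HOME from Exercise 2-3; the case pattern simply tests whether $ORACLE_HOME/bin is the first entry in the search path.

```shell
# ORACLE_HOME value is an assumption from the chapter's examples.
ORACLE_HOME=/u02/app/db11g/product/11.1.0/db_1
PATH=$ORACLE_HOME/bin:$PATH

# Warn if anything precedes the Oracle binaries in the search path.
case $PATH in
    "$ORACLE_HOME/bin":*) echo "Oracle binaries take precedence" ;;
    *)                    echo "WARNING: $ORACLE_HOME/bin is not first in PATH" >&2 ;;
esac
```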

Remember that (with one exception) every choice made at database creation time
can be changed later, but that some changes are awkward and may involve downtime.
It is not therefore vital to get everything right—but the more right it can be, the better.
If the database to be created is going to use Enterprise Manager Database Control,
there is an additional step that should be carried out before launching the DBCA:
configuring a database listener. This requirement is because Database Control always
connects to its database through a listener, and the DBCA checks whether one is
available. The configuration is a simple task, described in detail in Chapter 4. For
now, do this with the Net Configuration Assistant, accepting defaults all the way.
To launch the Net Configuration Assistant on Windows, take the shortcut on the
Start menu. The navigation path will be
1. Start
2. Programs
3. Oracle – OraDB11g_home3
4. Configuration and Migration Tools
5. Net Configuration Assistant
To launch the assistant on Linux, run the netca shell script, located in the
$ORACLE_HOME/bin directory.
Exercise 2-4: Use the DBCA to Create a Database   In this exercise
you will create a database listener (if one does not exist already) and then create a
database to be called ocp11g using the DBCA, on either Windows or Linux. There is
no significant difference between platforms. The illustrations that follow happen to
be from Windows.
1. Launch the Net Configuration Assistant. The radio button for Listener
Configuration will be selected.
2. Click NEXT three times. If there is a message stating that a listener already exists,
you can exit the tool immediately by clicking CANCEL and FINISH, and proceed to
Step 3. Otherwise, click NEXT another four times to define the default listener,
and then FINISH to exit the tool.
3. Launch the Database Configuration Assistant.
4. On the DBCA Welcome dialog box, click NEXT.
5. The next dialog box has radio buttons for
• Create a Database
• Configure Database Options
• Delete a Database
• Manage Templates
• Configure Automatic Storage
The second and third options will be grayed out, unless the DBCA detects an
existing database running off this Oracle Home. Select the Create A Database
radio button, and click NEXT.
6. The Database Templates dialog box has radio buttons for selecting a template
on which to base the new database. Select the Custom Database radio button,
as this will present all possible options. Click NEXT.
7. In the Database Identification dialog box, enter a global database name, and
a System Identifier (a SID), which will be used as the instance name. These
will default to the same thing, which is often what is wanted. For this exercise,
enter ocp11g for both names. Click NEXT.
8. The Management Options dialog box has a check box for configuring the
database with Enterprise Manager. Select this. Then there are radio buttons for
either Grid Control or Database Control. The Grid Control radio button will
be grayed out if the DBCA does not detect a Grid Control agent running on
the machine. Select Database Control. There are check boxes for Enable Email
Notifications and Enable Daily Backup; do not select these. Click NEXT. It is at
this point that the DBCA will give an error if there is no listener available.
9. The Database Credentials dialog box prompts for passwords for four users in
the database: SYS (who owns the data dictionary), SYSTEM (used for most
DBA work), DBSNMP (used for external monitoring), and SYSMAN (used by
Enterprise Manager). Select the radio button for Use The Same Password For
All Accounts. Enter oracle as the password, twice, and click NEXT.
10. In the Security Settings dialog box, accept the default, which is 11g security,
and click NEXT.
11. The Storage Options dialog box offers a choice between file system, ASM, or
raw devices. Select File System, and click NEXT.
12. The Database File Locations dialog box prompts for a root directory for the
database. Select Use Database File Locations From Template. Click the FILE
LOCATION VARIABLES button to see where the database will be created. It will be
the OFA location ORACLE_BASE/oradata/DB_NAME. Click NEXT.
13. In the Recovery Configuration dialog box, accept the default configuration
for the flash recovery area (which will be 2GB in ORACLE_BASE/flash_
recovery_area) and do not enable archiving. Click NEXT.
14. In the Database Content dialog box, deselect all options except Enterprise
Manager Repository. The others are not needed for this database and will
increase the creation time. Some options will be grayed out; this will be
because they have not been installed into the Oracle Home. Click the STANDARD
DATABASE COMPONENTS button, and deselect these as well. Don’t worry about a
warning that the XML DB is used by other components. Click NEXT.

15. The Initialization Parameters dialog box has four tabs. Leave the default
values, but examine all the tabs. The Memory tab shows the memory that
will be allocated to the instance, based on a percentage of the main memory
detected. The Sizing tab shows the database block size, defaulting to 8KB. This
is the one thing that can never be changed after creation. The Character Sets
tab shows the character set to be used within the database, which will have
a default value based on the operating system. This can be very awkward to
change afterward. The Connection Mode tab determines how user sessions
will be managed. Click NEXT.
16. The Database Storage dialog box shows, via a navigation tree on the left, the
files that will be created. Navigate around this, and see the names and sizes of
the files. These are usually nowhere near adequate for a production system but
will be fine for now. Click NEXT.
17. In the Creation Options dialog box, select the check boxes for Create
Database and Generate Database Creation Scripts. Note the path for the
scripts; it will be ORACLE_BASE/admin/ocp11g/scripts. Click FINISH.
18. The Confirmation dialog box shows what the DBCA is about to do. Click OK.
19. The DBCA will generate the creation scripts (which should only take a few
minutes). Click OK, and the DBCA will create the database. The illustration
that follows shows the progress dialog box. Note the location of the DBCA
logs—ORACLE_BASE/cfgtoollogs/dbca/ocp11g—it may be necessary
to look at the logs if anything fails. The creation will typically take fifteen to
forty minutes, depending on the machine.

20. When the DBCA completes, it will present the dialog box shown in the
illustration that follows. Take note of all the information given, in particular
the URL given for database control:
https://jwacer.bplc.co.za:1158/em

The Scripts and Other Files Created by the DBCA
While the DBCA is creating the database, inspect the scripts generated. They will be
in the directory ORACLE_BASE/admin/DB_NAME/scripts. In the example that
follows, which is from a Windows installation, the ORACLE_BASE is d:\oracle\app
and the database name (the global name, without the domain suffix) is ocp11g, so the
scripts are therefore in d:\oracle\app\admin\ocp11g\scripts. Navigate to
the appropriate directory, and study the files therein.

The Instance Parameter File
The first file to consider is the instance parameter file, named init.ora. This is a
print of a typical init.ora file, as generated by the DBCA:
######################################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
######################################################################
###########################################
# Cache and I/O
###########################################
db_block_size=8192
###########################################
# Cursors and Library Cache
###########################################
open_cursors=300
###########################################
# Database Identification
###########################################
db_domain=""
db_name=ocp11g

###########################################
# File Configuration
###########################################
control_files=("D:\oracle\app\oradata\ocp11g\control01.ctl",
"D:\oracle\app\oradata\ocp11g\control02.ctl",
"D:\oracle\app\oradata\ocp11g\control03.ctl")
db_recovery_file_dest=D:\oracle\app\flash_recovery_area
db_recovery_file_dest_size=2147483648
###########################################
# Job Queues
###########################################
job_queue_processes=10
###########################################
# Miscellaneous
###########################################
compatible=11.1.0.0.0
diagnostic_dest=D:\oracle\app
###########################################
# NLS
###########################################
nls_language="ENGLISH"
nls_territory="UNITED KINGDOM"
###########################################
# Processes and Sessions
###########################################
processes=150
###########################################
# SGA Memory
###########################################
sga_target=318767104
###########################################
# Security and Auditing
###########################################
audit_file_dest=D:\oracle\app\admin\ocp11g\adump
audit_trail=db
remote_login_passwordfile=EXCLUSIVE
###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=ocp11gXDB)"
###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=105906176
###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_tablespace=UNDOTBS1

Any line beginning with a # symbol is a comment and can be ignored. There are
about 300 parameters, but the file generated by the DBCA sets only a few. Most of
these are covered in detail in later chapters. Two parameters to emphasize at this point
are DB_BLOCK_SIZE and CONTROL_FILES. DB_BLOCK_SIZE determines the size of
the buffers in the database buffer cache. When the instance is instructed to create a
database, this size will also be used to format the datafiles that make up the SYSTEM
and SYSAUX tablespaces. It can never be changed after database creation. CONTROL_
FILES is the pointer that allows the instance to find all the multiplexed copies of the
database controlfile. At this stage, the controlfile does not exist; this parameter will
tell the instance where to create it. Some of the other parameters are self-explanatory
and can be easily related back to the options taken when going through the steps of
the exercise, but eventually you must refer to the Oracle Documentation Library (the
volume you need is titled “Reference”) and read up on all of them. All! Those
necessary for examination purposes will be described at the appropriate point.
EXAM TIP What is the only instance parameter for which there is no default?
It is DB_NAME. A parameter file must exist with at least this one parameter,
or you cannot start an instance. The DB_NAME can be up to eight characters
long, letters and digits only, beginning with a letter.
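The rule in the EXAM TIP can be checked mechanically. The validation function below is a sketch of the naming rule as stated, not an Oracle utility.

```shell
# Returns success when the candidate matches the DB_NAME rule in the
# EXAM TIP: one to eight characters, letters and digits only,
# beginning with a letter.
valid_db_name() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9]{0,7}$'
}

valid_db_name ocp11g     && echo "ocp11g: valid"
valid_db_name 11gocp     || echo "11gocp: rejected (starts with a digit)"
valid_db_name databases1 || echo "databases1: rejected (longer than eight characters)"
```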

The Database Creation Shell Script
This is the file the DBCA executes to launch the database creation process. It is a batch
file on Windows, and a shell script on Linux. A Windows example:
mkdir D:\oracle\app
mkdir D:\oracle\app\admin\ocp11g\adump
mkdir D:\oracle\app\admin\ocp11g\dpdump
mkdir D:\oracle\app\admin\ocp11g\pfile
mkdir D:\oracle\app\cfgtoollogs\dbca\ocp11g
mkdir D:\oracle\app\flash_recovery_area
mkdir D:\oracle\app\oradata\ocp11g
mkdir D:\oracle\app\product\11.1.0\db_3\database
set ORACLE_SID=ocp11g
set PATH=%ORACLE_HOME%\bin;%PATH%
D:\oracle\app\product\11.1.0\db_3\bin\oradim.exe -new -sid OCP11G
-startmode manual -spfile
D:\oracle\app\product\11.1.0\db_3\bin\oradim.exe -edit -sid OCP11G
-startmode auto -srvcstart system
D:\oracle\app\product\11.1.0\db_3\bin\sqlplus /nolog
@D:\oracle\app\admin\db11g\scripts\ocp11g.sql

First, the script creates a few directories in the Oracle Base. Then it sets the
ORACLE_SID environment variable (more of this later) and prepends the ORACLE_
HOME/bin directory to the search path.
The two commands that use oradim.exe will not appear on a Linux system. On
Windows, an Oracle instance runs as a Windows service. This service must be created.
The oradim.exe utility is run twice. The first time will define a new service in the
Windows registry, with the system identifier OCP11G, and put the service on manual
start. The -spfile switch refers to the type of initialization parameter file to be used.
The second use of oradim.exe edits the service, to set it to start automatically
whenever Windows starts. Figure 2-6 shows the resulting service defined in the
registry. To see this, use the regedit.exe registry editor (or some similar tool)
to navigate to the key
HKEY_LOCAL_MACHINE/SYSTEM/currentControlSet/Services/OracleServiceOCP11G

Each database instance that can run on a Windows machine will be a service,
named after the name of the instance (in this case, OCP11G) that was provided in
Exercise 2-4, Step 7.
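As a sketch, the service name is derived from the instance name by simple concatenation; the prefix shown matches the registry key above.

```shell
# Derive the Windows service name from the instance SID (illustrative).
ORACLE_SID=OCP11G
SERVICE_NAME="OracleService${ORACLE_SID}"
echo "$SERVICE_NAME"
```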


Figure 2-6 The Windows service defining an Oracle instance

After the service creation, the script launches SQL*Plus and runs the SQL script
ocp11g.sql which will control the creation of the database:
set verify off
PROMPT specify a password for sys as parameter 1;
DEFINE sysPassword = &1
PROMPT specify a password for system as parameter 2;
DEFINE systemPassword = &2
PROMPT specify a password for sysman as parameter 3;
DEFINE sysmanPassword = &3
PROMPT specify a password for dbsnmp as parameter 4;
DEFINE dbsnmpPassword = &4
host D:\oracle\app\product\11.1.0\db_3\bin\orapwd.exe
file=D:\oracle\app\product\11.1.0\db_3\database\PWDocp11g.ora
password=&&sysPassword force=y
@D:\oracle\app\admin\ocp11g\scripts\CreateDB.sql
@D:\oracle\app\admin\ocp11g\scripts\CreateDBFiles.sql
@D:\oracle\app\admin\ocp11g\scripts\CreateDBCatalog.sql
@D:\oracle\app\admin\ocp11g\scripts\emRepository.sql
@D:\oracle\app\admin\ocp11g\scripts\postDBCreation.sql

At the top of the script, there are prompts for passwords for four critical accounts.
These will be provided by the password entered in Exercise 2-4, Step 9.
Then, using host to spawn an operating system shell, the script runs the orapwd.exe
utility (just called orapwd on Linux). This will create an external password file
for the database. The name of the file must be
%ORACLE_HOME%\database\PWD<dbname>.ora

on Windows, or
$ORACLE_HOME/dbs/orapw<dbname>

on Linux, where <dbname> is the name of the database. This is the name provided
for the global database name in Exercise 2-4, Step 7, but without any domain suffix.
Usually, this is the same as the instance name—but they are not the same thing.
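For comparison, a hedged sketch of the equivalent invocation on Linux, assuming the ORACLE_HOME from Exercise 2-3 and the ocp11g database name; the command is echoed here rather than executed.

```shell
# Build the password file path and the orapwd command (sketch only).
ORACLE_HOME=/u02/app/db11g/product/11.1.0/db_1
DB_NAME=ocp11g
PWFILE=$ORACLE_HOME/dbs/orapw$DB_NAME

echo "orapwd file=$PWFILE password=oracle force=y"
# On a real system:
# $ORACLE_HOME/bin/orapwd file=$PWFILE password=oracle force=y
```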
The script then calls CreateDB.sql, which will actually create the database.

The CREATE DATABASE Command
This is an example of the CreateDB.sql script:
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool D:\oracle\app\admin\ocp11g\scripts\CreateDB.log
startup nomount pfile="D:\oracle\app\admin\ocp11g\scripts\init.ora";
CREATE DATABASE "ocp11g"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
DATAFILE 'D:\oracle\app\oradata\ocp11g\system01.dbf'
SIZE 300M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE 'D:\oracle\app\oradata\ocp11g\sysaux01.dbf'
SIZE 120M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE
'D:\oracle\app\oradata\ocp11g\temp01.dbf' SIZE 20M REUSE
AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE
'D:\oracle\app\oradata\ocp11g\undotbs01.dbf' SIZE 200M REUSE
AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('D:\oracle\app\oradata\ocp11g\redo01.log') SIZE 51200K,
GROUP 2 ('D:\oracle\app\oradata\ocp11g\redo02.log') SIZE 51200K,
GROUP 3 ('D:\oracle\app\oradata\ocp11g\redo03.log') SIZE 51200K
USER SYS IDENTIFIED BY "&&sysPassword"
USER SYSTEM IDENTIFIED BY "&&systemPassword";
spool off

The script connects to the instance, using the syntax for password file authentication
(this is fully described in Chapter 3). Let’s consider the script line by line.
The echo and spool commands cause SQL*Plus to write out a log of everything
that happens next.
The STARTUP NOMOUNT command builds the instance in memory, using the
static parameter file we saw earlier. The significance of NOMOUNT will be dealt
with in Chapter 3; for now, suffice it to say that it is necessary, as there is no
database to mount and open. After this command completes, there will be an
instance running with an SGA and the background processes. The SGA will have
been sized according to the parameters in the nominated init.ora file.
The CREATE DATABASE command, which continues to the semicolon at the end
of the file, is followed by the database name (which is ocp11g). The first section of
the command sets some overall limits for the database. These can all be changed
subsequently, but if they are clearly inappropriate, it is a good idea to change them
now, before creation.

TIP With the current release, some of these limits (such as the number of
datafiles) are only soft limits, and therefore of little significance.

Datafile specifications are provided for the SYSTEM, SYSAUX, and UNDO
tablespaces, and a tempfile specification is provided for a TEMPORARY tablespace.
The database character set, used for storing data dictionary data and table columns
of type VARCHAR2, CHAR, and CLOB, is specified, followed by the national character
set (which is used for columns of type NVARCHAR2, NCHAR, and NCLOB). It is
possible to change the character set after creation with SQL*Plus. Choice and use of
character sets, and other aspects of globalization, are covered in detail in Chapter 26.

TIP Until version 9i of the database, there was no supported means for
changing the database character set after creation: it was therefore vital to
get this right. With 9i and later, it is possible to change it afterward, but this
is not an operation to embark on lightly. Get it right now!

The logfile clause specifies three log file groups, each consisting of one member.
This is an example of the DBCA defaults perhaps not doing a perfect job. It would be
better practice to multiplex the redo log: to create at least two members for each
group. Not a problem: this can be fixed later (in Chapter 14). The online redo log
will always require substantial tuning; the defaults are applicable to virtually no
production systems.
Finally, SYS and SYSTEM passwords are initialized, and spooling to the log is
switched off.
This one file, with its CREATE DATABASE command, is enough to create a
database. After its successful execution, you will have an instance running in memory,
and a database consisting of a controlfile (and copies, as specified by the CONTROL_FILES
initialization parameter) and the datafiles and redo logs specified in the CREATE
DATABASE command. A data dictionary will have been generated in the SYSTEM
tablespace. But although the database has been created, it is unusable; the remaining
scripts called by ocp11g.sql make it usable. The CREATE DATABASE command has many
options, all of which have defaults. For example, if you do not specify a datafile for
the SYSTEM or SYSAUX tablespace, one will be created anyway. If you do not specify a
character set, there is a default, which will depend on the operating system configuration
(it may not be a very helpful default; commonly it is US7ASCII, which is inadequate
for many applications). There are also defaults for the online redo logfiles. There are
no defaults for the TEMP and UNDO tablespaces; if these are not specified, the
database will be created without them. Not a problem: they can be added later.

TIP The CREATE DATABASE command can be extremely long and
complicated, but there are defaults for everything. You can create a
database from a SQL*Plus prompt with two words: CREATE DATABASE.
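Taken literally, such a minimal creation from SQL*Plus might look like the sketch below. This is illustrative rather than production practice: it assumes a parameter file specifying at least DB_NAME exists in the default location, and that the operating system directories for the default files are writable. The catalog scripts that make the database usable are covered in the Post-Creation Scripts section.

```sql
-- Minimal, defaults-for-everything database creation (illustrative sketch).
-- Assumes a parameter file with at least DB_NAME in the default location.
connect / as sysdba
startup nomount
CREATE DATABASE;
-- The database now exists but is unusable until the data dictionary views
-- and supplied PL/SQL packages are created:
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
```

(The ? character is SQL*Plus shorthand for the Oracle Home directory.)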

Post-Creation Scripts
The other SQL scripts called by ocp11g.sql to complete the database creation will
depend on the options chosen when going through the DBCA. In this example, as all
options except for Enterprise Manager Database Control were deselected, there are
only four:
• CreateDBFiles.sql This is of minor significance. It creates a small
tablespace, USERS, to be used as the default location for any objects created
by users.
• CreateDBCatalog.sql This is vital. It runs a set of scripts in the
$ORACLE_HOME/rdbms/admin directory that construct views on the data
dictionary and create many PL/SQL packages. It is these views and packages
that make it possible to manage an Oracle database.
• emRepository.sql This runs the script to create the objects needed by
Enterprise Manager Database Control. It is run because this was selected in
Exercise 2-4, Step 8.
• postDBCreation.sql This generates a server parameter file from the
init.ora file (more of this in Chapter 3), unlocks the DBSNMP and
SYSMAN accounts used by Enterprise Manager, and runs the Enterprise
Manager Configuration Assistant (which is emca.bat on Windows, emca
on Linux) to configure Database Control for the new database.
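The kind of work postDBCreation.sql performs can be sketched as follows. The exact contents are generated by the DBCA and vary by release and selected options, so treat the statements and the path below as illustrative:

```sql
-- Illustrative sketch of postDBCreation.sql-style work (the actual
-- DBCA-generated script varies by release and options).
connect "SYS"/"&&sysPassword" as SYSDBA

-- Unlock the accounts used by Enterprise Manager Database Control
alter user SYSMAN identified by "&&sysmanPassword" account unlock;
alter user DBSNMP identified by "&&dbsnmpPassword" account unlock;

-- Convert the static pfile used during creation into a server parameter
-- file, then restart so that the instance runs off the new spfile
create spfile from pfile='D:\oracle\app\admin\ocp11g\scripts\init.ora';
shutdown immediate;
startup;
```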

The DBCA’s Other Functions
The opening screen of the DBCA gives you five options:
• Create a database
• Configure database options
• Delete a database
• Manage templates
• Configure automatic storage management
“Configure Database Options” helps you change the configuration of a database
you have already created. In the preceding exercise, you deselected all the options: this
was to make the creation as quick and simple as possible.
TIP By deselecting all the options, particularly those for “standard database
components,” creation time is reduced dramatically.
If you decide subsequently to install some optional features, such as Java or OLAP,
running the DBCA again is the simplest way to do it. An alternative method is to run
the scripts to install the options by hand, but these are not always fully documented
and it is possible to make mistakes—the DBCA is better.

The Delete A Database radio button will prompt you for which database you wish
to delete, and then give you one more chance to back out before it deletes all the files
that make up the database and (for a Windows system) invokes oradim.exe to
delete the instance's service from the Windows registry as well.

TIP Behind the scenes, Delete A Database invokes the SQL*Plus command
DROP DATABASE. There is some protection for this command: the database
cannot be open at the time; it must be in mount mode.

Manage Templates allows you to store database creation options for later use.
Remember that in the exercise, you chose to create a "Custom" database. A custom
database is not preconfigured—you chose it in order to see all the possibilities as you
worked your way through the DBCA. But apart from "Custom," there were options for
"Data Warehouse" and "General Purpose or Transaction Processing." If you choose
either of these, the DBCA suggests different defaults with which to create a database.
These defaults will be partly optimized for decision support systems (DSS, the data
warehouse option) or for online transaction processing systems (OLTP, the
transaction processing option). These templates do not create a database from the
beginning; they expand a set of compressed datafiles and modify these. The final
question when you created your database gave you the possibility of saving it as a
template—i.e., not to create it at all, but to save the definition for future use. The
DBCA will let you manage templates, either the supplied ones or new templates you
create yourself, by creating, copying, modifying, or deleting them. Templates can be
extremely useful if you are in a position where you are frequently creating and
re-creating databases that are very similar.
Finally, the Configure Automatic Storage Management option launches a wizard
that will create an ASM instance. An ASM instance does not open a database; it
manages a pool of disks, used for database storage. This is covered in Chapter 20.

Two-Minute Drill
Identify the Tools for Administering an Oracle Database
• Installation: the OUI
• Database creation and upgrade: DBCA, DBUA
• For issuing ad hoc SQL: SQL*Plus, SQL Developer
• Backup: RMAN, Oracle Secure Backup
• Network administration: Oracle Net Manager, Oracle Net Configuration
Assistant
• Data load and unload utilities: Data Pump, SQL*Loader
• Management: Oracle Enterprise Manager, Database Control, and Grid Control

Plan an Oracle Database Installation
• Hardware requirements
• Disk space
• Main memory
• Swap space
• Temporary space
• A graphics terminal
• Operating system requirements
• Certified version
• Necessary packages
• Kernel settings
• OFA: an appropriate directory for the Oracle Base

Install the Oracle Software by Using
the Oracle Universal Installer (OUI)
• Use a suitable operating system account.
• Set necessary environment variables (Linux, Unix).
• Provide access to the root account (Linux, Unix).
• Make either an interactive or silent install.

Create a Database by Using the Database
Configuration Assistant
• A database can be created with the DBCA or from the SQL*Plus command line.
• The DBCA can create a database from a saved template.
• The DBCA and SQL*Plus commands can delete a database.
• An instance must be created before the database can be created.
• Any options not selected at creation time can be added later.

Self Test
1. Which of these tools is not usually installed with the Oracle Universal
Installer? (Choose the best answer.)
A. The Oracle Universal Installer itself
B. SQL*Plus
C. SQL Developer
D. Oracle Enterprise Manager Grid Control

2. Which tools can be used to create a database? (Choose three correct answers.)
A. Database Configuration Assistant
B. Database Upgrade Assistant
C. SQL*Plus
D. Oracle Universal Installer
E. Oracle Enterprise Manager Database Control
3. Oracle provides the ability to back up the entire environment, not just the
Oracle Database. What tool can do this? (Choose the best answer.)
A. Recovery Manager
B. Oracle Secure Backup
C. User-managed backups, carried out with operating system commands
4. What statement best describes the relationship between the Oracle Base and
the Oracle Home? (Choose the best answer.)
A. The Oracle Base exists inside the Oracle Home.
B. The Oracle Base can contain Oracle Homes for different products.
C. One Oracle Base is required for each product, but versions of the product
can exist in their own Oracle Homes within their Oracle Base.
D. The Oracle Base is created when you run the orainstRoot.sh script,
and contains a pointer to the Oracle Home.
5. What does Optimal Flexible Architecture (OFA) describe? (Choose the best
answer.)
A. A directory structure
B. Distributed database systems
C. Multitier processing architecture
D. OFA encompasses all the above
6. What environment variable must be set on Linux before running the Oracle
Universal Installer? (Choose the best answer.)
A. ORACLE_HOME
B. ORACLE_BASE
C. ORACLE_SID
D. DISPLAY
7. If the OUI detects that a prerequisite has not been met, what can you do?
(Choose the best answer.)
A. You must cancel the installation, fix the problem, and launch OUI again.
B. A silent install will fail; an interactive install will continue.
C. Instruct the OUI to continue (at your own risk).
D. The options will depend on how far into the installation the OUI is when
the problem is detected.

8. What type of devices can the OUI install an Oracle Home onto? (Choose one
or more correct answers.)
A. Regular file systems
B. Clustered file systems
C. Raw devices
D. ASM disk groups
9. Which command-line switch can be used to prevent the OUI from stopping
when prerequisite tests fail? (Choose the best answer.)
A. -silent
B. -record
C. -responsefile
D. -ignoresysprereqs
10. When does an OUI inventory get created? (Choose the best answer.)
A. Every time a new Oracle Home is created
B. Every time a new Oracle Base is created
C. Before the first run of the OUI
D. During the first run of the OUI
11. To create a database, in what mode must the instance be? (Choose the best answer.)
A. Not started
B. Started in NOMOUNT mode
C. Started in MOUNT mode
D. Started in OPEN mode
12. The SYSAUX tablespace is mandatory. What will happen if you attempt to
issue a CREATE DATABASE command that does not specify a datafile for the
SYSAUX tablespace? (Choose the best answer.)
A. The command will fail.
B. The command will succeed, but the database will be inoperable until the
SYSAUX tablespace is created.
C. A default SYSAUX tablespace and datafile will be created.
D. The SYSAUX objects will be created in the SYSTEM tablespace.
13. Is it necessary to have a database listener created before creating a database?
(Choose the best answer.)
A. No.
B. Yes.
C. It depends on whether the database is created with the DBCA or
SQL*Plus.
D. It depends on whether the Database Control option is selected in the DBCA.


14. Several actions are necessary to create a database. Place these in the correct
order:
1. Create the data dictionary views.
2. Create the parameter file.
3. Create the password file.
4. Issue the CREATE DATABASE command.
5. Issue the STARTUP command.
(Choose the best answer.)
A. 2, 3, 5, 4, 1
B. 3, 5, 2, 4, 1
C. 5, 3, 4, 2, 1
D. 2, 3, 1, 5, 4
15. What instance parameter cannot be changed after database creation? (Choose
the best answer.)
A. All instance parameters can be changed after database creation.
B. All instance parameters can be changed after database creation, if it is
done while the instance is in MOUNT mode.
C. CONTROL_FILES.
D. DB_BLOCK_SIZE.
16. What files are created by the CREATE DATABASE command? (Choose one or
more correct answers.)
A. The controlfile
B. The dynamic parameter file
C. The online redo log files
D. The password file
E. The static parameter file
F. The SYSAUX tablespace datafile
G. The SYSTEM tablespace datafile
17. What will happen if you do not run the CATALOG.SQL and CATPROC.SQL
scripts after creating a database? (Choose the best answer.)
A. It will not be possible to open the database.
B. It will not be possible to create any user tables.
C. It will not be possible to use PL/SQL.
D. It will not be possible to query the data dictionary views.
E. It will not be possible to connect as any users other than SYS and SYSTEM.

18. What tools can be used to manage templates? (Choose one or more correct
answers.)
A. The Database Configuration Assistant
B. The Database Upgrade Assistant
C. SQL*Plus
D. Database Control
E. The Oracle Universal Installer
19. At what point can you choose or change the database character set? (Choose
two correct answers.)
A. At database creation time, if you are not using any template
B. At database creation time, if you are using a template that does not include
datafiles
C. At database creation time, whether or not you are using a template
D. After database creation, with the DBCA
E. After database creation, with SQL*Plus
20. If there are several databases created off the same Oracle Home, how will
Database Control be configured? (Choose the best answer.)
A. Database Control will give access to all the databases created from the one
Oracle Home through one URL.
B. Database Control will give access to each database through different ports.
C. Database Control need only be configured in one database and can then
be used to connect to all of them.
D. Database Control can only manage one database per Oracle Home.

Self Test Answers
1. þ C. SQL Developer is not installed with the OUI; it is delivered as a ZIP file
that just needs to be unzipped.
ý A, B, and D. All other products (even the OUI) are installed with the OUI.
2. þ A, C, and D. DBCA is meant for creating databases, but they can also be
created from SQL*Plus or by instructing the OUI to create a database after
installing the Oracle Home.
ý B and E. B is wrong because DBUA can only upgrade an existing database. E
is wrong because Database Control is available only after the database is created.
3. þ B. Oracle Secure Backup is the enterprise backup facility.
ý A and C. These are both wrong because they are limited to backing up
database files only.


4. þ B. The Oracle Base directory contains all the Oracle Homes, which can be
any versions of any products.
ý A, C, and D. A is wrong because it inverts the relationship. C is wrong
because there is no requirement for a separate base for each product. D is
wrong because it confuses the oraInst.loc file and the OUI with the OFA.
5. þ A. The rather grandly named Optimal Flexible Architecture is nothing
more than a naming convention for directory structures.
ý B, C, and D. These are wrong because they go way beyond OFA.
6. þ D. Without a DISPLAY set, the OUI will not be able to open any windows.
ý A, B, and C. These are wrong because while they can be set before
launching the OUI, the OUI will prompt for values for them.
7. þ C. Perhaps not advisable, but you can certainly do this.
ý A, B, and D. A is wrong because while it might be a good idea, it is not
something you have to do. B is wrong because the interactive installation will
halt. D is wrong because all prerequisites are checked at the same time.
8. þ A and B. The Oracle Home must exist on a file system, but it can be local
or clustered.
ý C and D. Raw devices and ASM devices can be used for databases, but not
for an Oracle Home.
9. þ D. The -ignoresysprereqs switch stops OUI from running the tests.
ý A, B, and C. A is wrong because this will suppress generation of windows,
not running tests. B is wrong because this is the switch to generate a response
file. C is wrong because this is the switch to read a response file.
10. þ D. If the OUI cannot find an inventory, it will create one.
ý A, B, and C. A and B are wrong because one inventory stores details of
all Oracle Base and Oracle Home directories. C is wrong because it is not
possible to create an inventory before running the OUI.
11. þ B. The CREATE DATABASE command can only be issued in NOMOUNT
mode.
ý A, C, and D. A is wrong, because if the instance is not started, the only
possible command is STARTUP. C and D are wrong because it is impossible to
mount a database if there is no controlfile, and it cannot be opened if there is
no redo log and SYSTEM tablespace.
12. þ C. There are defaults for everything, including the SYSAUX tablespace and
datafile definitions.
ý A, B, and D. A is wrong because the command will succeed. B and D are
wrong because these are not the way the defaults work.

13. þ D. The only time a listener is required is if the DBCA is used, and Database
Control is selected. The DBCA will not continue if it cannot detect a listener.
ý A, B, and C. A is wrong because there is a circumstance where a listener
is required; B is wrong because in all other circumstances a listener is not
required. C is wrong because it does not go far enough: the DBCA will not
require a listener if Database Control is not selected.
14. þ A. This is the correct sequence (though 2 and 3 could be done the other
way round).
ý B, C, and D. None of these are possible.
15. þ D. This is the one parameter that can never be changed after creation.
ý A, B, and C. A and B are wrong because DB_BLOCK_SIZE cannot be
changed no matter when you try to do it. C is wrong because the CONTROL_
FILES parameter can certainly be changed, though this will require a shutdown
and restart.
16. þ A, C, F, and G. All of these will always be created, by default if they are
not specified.
ý B, D, and E. B and D are wrong because these should exist before the
instance is started. E is wrong because the conversion of the static parameter file
to a dynamic parameter file only occurs, optionally, after the database is created.
17. þ D. The database will function, but without the data dictionary views and
PL/SQL packages created by these scripts it will be unusable.
ý A, B, C, and E. A is wrong because the database will open; in fact, it must be
open to run the scripts. B is wrong because tables and other objects can certainly
be created. C is wrong because PL/SQL will be available; it is the supplied
packages that will be missing. E is completely irrelevant to these scripts.
18. þ A. The DBCA is the only tool that can manage templates.
ý B, C, D, and E. These are all wrong because only the DBCA offers
template management.
19. þ C and E. C is right because the character set can be set at creation time,
no matter how the creation is done. E is right because it is possible to change
character sets after creation (though you don’t want to do this unless it is
really necessary).
ý A, B, and D. A and B are wrong because templates are not relevant. If the
template includes datafiles, the DBCA will change the character set behind the
scenes. D is wrong because the DBCA does not offer an option to do this.
20. þ B. Database Control can be used for each database and will be configured
with a different port for each one.
ý A, C, and D. A is wrong because this is what Grid Control can do. C is
wrong because Database Control must be installed in every database that will
use it. D is wrong because while a Database Control is only for one database,
every database can have its own.

CHAPTER 3
Instance Management

Exam Objectives
In this chapter you will learn to
• 052.4.1 Set Database Initialization Parameters
• 052.4.2 Describe the Stages of Database Startup and Shutdown
• 052.4.3 Use Alert Log and Trace Files
• 052.4.4 Use Data Dictionary and Dynamic Performance Views

You should now have a database installed on your learning environment and be ready
to investigate and demystify your Oracle instance. There are many benefits to learning
in a playpen environment, the most important of which is that as you experiment and
explore you will inevitably make a mistake, and the authors find that resolving such
mistakes provides the best opportunity for learning. You could always deinstall and
reinstall the software if you believe you have damaged it irreparably, but even such a
nonheroic solution still provides valuable OUI experience.
The database and instance are governed by a set of initialization parameters. There
are a vast number of them, of which only about 33 are really important to know. These
parameters determine settings like the amount of memory your instance will request
the operating system to allocate at instance startup time, the location of the controlfiles
and redo logfiles, and the database name. The default parameter values won’t suit most
production environments, but they are general enough to acceptably run your learning
environment. Many DBAs are slightly afraid of modifying these parameters, but there
is nothing scary here, just a bunch of settings that once configured hardly ever change.
If you change them during the course of a performance tuning exercise, or while trying
to multiplex your controlfiles, and the database behaves worse, it is a simple matter to
revert your changes. These initialization settings are stored in a parameter file without
which your instance will not start.
The stages of database startup and shutdown will be examined, and although
they are quite simple, these fundamental stages have important implications for
understanding how the mechanism for instance crash recovery operates and how
some of the instance background processes interact with the database.
The value provided by alert log and trace files cannot be overemphasized when
problems arise, and Oracle has contrived a convenient set of initialization parameters
used to quickly locate the relevant files. This is especially useful when high-powered
company executives are intently watching you resolve problems after your company’s
production database has just decided to go for a loop. The alert log file is probably
the most important file to a DBA, as it contains a living record of the critical events
that occur on your instance, recording events like startups, shutdowns, and serious
error conditions. The trace files are usually generated by background and server
processes and, just like the alert log file, provide a mixture of informational and
error messaging. Familiarity with these files is vital and will be discussed.
The chapter closes with a discussion of the database dictionary and the dynamic
performance views. These objects are interrogated by SQL queries and provide vital
information on the current state of your system. One of the authors once had a manager
who insisted that all DBA support staff memorize the data dictionary objects. And they
did. Thankfully, the manager left when Oracle 7 was the current version. The Oracle 11g
dictionary is significantly larger and can be intimidating, but fortunately, you do not
have to memorize the plethora of information available. Knowing the nature of the
information available is, however, important and very useful. The data available in the
dynamic performance views will not persist across instance shutdown and startup
cycles. These views report on the current database activity and help both the instance
and the DBA keep abreast of the happenings in the system. Using and befriending these
objects will greatly simplify your task of understanding what the database is really about.

Set Database Initialization Parameters
An instance is defined by the parameters used to build it in memory. Many, though
not all, of these parameters can be changed after startup. Some are fixed at startup
time and can only be changed by shutting down the instance and starting again.
The parameters used to build the instance initially come either from the parameter
file (which may be a static pfile or a dynamic spfile) or from defaults. Every parameter
has a default value, except for the DB_NAME parameter; this must always be specified.
In total there are close to three hundred parameters (the exact number will vary
between releases and platforms) that the DBA can legitimately set. There are in
fact about another fifteen hundred parameters, known as "hidden" parameters, that
the DBA is not supposed to set; these are not usually visible and should only be set
on the advice of Oracle Support.
The (approximately) three hundred parameters are divided into "basic" and
"advanced." The idea is that most database instances will run well with default values
for the advanced parameters. Only about thirty-three (the exact number may vary
between versions) are "basic." So setting parameters is not an enormous task. But it
is enormously important.

Static and Dynamic Parameters and
the Initialization Parameter File
To view the parameters and their current values, you may query the V$PARAMETER view:
select name,value from v$parameter order by name;

A query that may give slightly different results is
select name,value from v$spparameter order by name;

The difference is the view from which the parameter names and values are taken.
V$PARAMETER shows the parameter values currently in effect in the running instance.
V$SPPARAMETER shows the values stored in the spfile on disk. Usually these will be
the same, but not always. Some parameters can be changed while the instance is
running; others, known as static parameters, are fixed at instance startup time. A change
made to a changeable parameter will have an immediate effect on your running
instance and can optionally be written out to the spfile. If this is done, then the
change will be permanent: the next time the instance is stopped and started, the new
value will be read from the spfile. If the change is not saved to the spfile, then the
change will only persist until the instance is stopped. To change a static parameter,
the change must be written to the spfile, but it will only come into effect at the next
startup. If the output of the two preceding queries differs, this will typically be
because the DBA has done some tuning work but not yet made it permanent, or has
found it necessary to adjust a static parameter and has not yet restarted the instance.
The other columns in V$PARAMETER and V$SPPARAMETER are self-explanatory.
They show information such as whether the parameter can be changed (for a session
or for the whole instance), whether it has been changed, and whether it has been
specified at all or is on default.
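For example, a query such as the following (these column names are standard in V$PARAMETER) shows, for each basic parameter, whether it can be changed for a session or for the whole instance, and whether it is still on default:

```sql
-- ISSES_MODIFIABLE: TRUE if ALTER SESSION can change the parameter.
-- ISSYS_MODIFIABLE: IMMEDIATE, DEFERRED, or FALSE for ALTER SYSTEM changes.
-- ISDEFAULT: TRUE if the parameter has not been explicitly specified.
select name, isses_modifiable, issys_modifiable, isdefault
from v$parameter
where isbasic='TRUE'
order by name;
```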

The views can also be seen through Database Control. From the database home
page, take the Server tab and then the Initialization Parameters link. On the window
that follows, shown in Figure 3-1, there are two subtabs: Current shows the values
currently in effect in the running instance and may be obtained by querying the
V$PARAMETER view, while the SPFile tab shows the values recorded in the spfile
and may be obtained by querying the V$SPPARAMETER view.
The changeable parameters can be adjusted through the same window. The values
for the first four parameters shown (CLUSTER_DATABASE, COMPATIBLE, CONTROL_
FILES, and DB_BLOCK_SIZE) cannot be dynamically changed; they are static. But the
next parameter, DB_CREATE_FILE_DEST, can be dynamically changed. In the figure,
it has not been set—but it can be, by entering a value in the box in the column headed
“Value.” To change the static parameters, it is necessary to navigate to the SPFile tab,
and make the changes there.
To change a parameter from SQL*Plus, use the ALTER SYSTEM command. Figure
3-2 shows several examples.
The first query in Figure 3-2 shows that the values for the parameter DB_CREATE_
FILE_DEST are the same in the running instance in memory, and in the spfile on disk.
The next two commands adjust the parameter in both places to different values, by
using the SCOPE keyword. The results are seen in the second query. The final command
uses SCOPE=BOTH to change both the running and the stored value with one
command. The BOTH option is the default, if the SCOPE keyword is not specified.
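The commands shown in Figure 3-2 follow this pattern; the directory names here are only illustrative:

```sql
-- Compare the value in the running instance with the value in the spfile
select p.value as running, s.value as stored
from v$parameter p join v$spparameter s on p.name=s.name
where p.name='db_create_file_dest';

-- Change the running instance only (reverts at the next restart)
alter system set db_create_file_dest='/u01/oradata' scope=memory;

-- Change the spfile only (takes effect at the next startup)
alter system set db_create_file_dest='/u02/oradata' scope=spfile;

-- Change both at once; BOTH is the default if SCOPE is omitted
alter system set db_create_file_dest='/u03/oradata' scope=both;
```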

Figure 3-1 Initialization parameters, as seen through Database Control


Figure 3-2 Changing and querying parameters with SQL*Plus

EXAM TIP An attempt to change a static parameter will fail unless the
SCOPE is specified as SPFILE. The default SCOPE is BOTH the running
instance and the spfile. If the instance is started with a pfile, then
SCOPE=SPFILE will fail.
As was seen in Chapter 2, when a database instance is first created, it is built with
a pfile. This may be converted to an spfile using this command:
create spfile [='spfilename'] from pfile [='pfilename'];

If names are not given for spfilename or pfilename, then the default names based on
the ORACLE_HOME and the SID will be assumed. To reverse-engineer an spfile into a
pfile, the command is
create pfile [='pfilename'] from spfile [='spfilename'] ;

The CREATE PFILE and CREATE SPFILE commands can be run from SQL*Plus at
any time, even before the instance has been started.

The Basic Parameters
The instance parameters considered to be “basic” are those that should be considered
for every database. In some cases, the default values will be fine—but it is good
practice to always consider the values of the basic parameters in your database. The
basic parameters and their current values may be queried using
select name,value from v$parameter where isbasic='TRUE' order by name;

A query that may give slightly different results is
select s.name,s.value
from v$spparameter s join v$parameter p on s.name=p.name
where p.isbasic='TRUE' order by name;

Any differences are because some parameter changes may have been applied to
the instance but not the spfile (or vice versa). The necessity for the join is because
there is no column on V$SPPARAMETER to show whether a parameter is basic or
advanced. Table 3-1 summarizes the basic parameters.
cluster_database: Is the database a RAC or a single instance? That this is basic
indicates that RAC is considered a standard option.

compatible: The version that the instance will emulate. Normally this would be the
actual version, but it can look like older versions.

control_files: The name and location of the controlfile copies.

db_block_size: The default block size for formatting datafiles.

db_create_file_dest: The default location for datafiles.

db_create_online_log_dest_1: The default location for online redo logfiles.

db_create_online_log_dest_2: The default location for multiplexed copies of online
redo logfiles.

db_domain: The domain name that can be suffixed to the db_name to generate a
globally unique name.

db_name: The name of the database (the only parameter with no default).

db_recovery_file_dest: The location of the flash recovery area.

db_recovery_file_dest_size: The amount of data that may be written to the flash
recovery area.

db_unique_name: A unique identifier, necessary if two databases with the same
db_name are on the same machine.

instance_number: Used to distinguish two or more RAC instances opening the same
database. Another indication that RAC is considered standard.

job_queue_processes: The number of processes available to run scheduled jobs.

log_archive_dest_1: The destination for archiving redo logfiles.

log_archive_dest_2: The destination for multiplexed copies of archived redo logfiles.

log_archive_dest_state_1: An indicator for whether the destination is enabled or not.

log_archive_dest_state_2: An indicator for whether the destination is enabled or not.

nls_language: The language of the instance (provides many default formats).

nls_territory: The geographical location of the instance (which provides even more
default formats).

open_cursors: The number of SQL work areas that a session can have open at once.

pga_aggregate_target: The total amount of memory the instance can allocate to PGAs.

processes: The maximum number of processes (including session server processes)
allowed to connect to the instance.

remote_listener: The addresses of listeners on other machines with which the
instance should register; another parameter that is only relevant for a RAC.

remote_login_passwordfile: Whether or not to use an external password file, to
permit password file authentication.

rollback_segments: Almost deprecated; superseded by the UNDO parameters
that follow.

sessions: The maximum number of sessions allowed to connect to the instance.

sga_target: The size of the SGA, within which Oracle will manage the various SGA
memory structures.

shared_servers: The number of shared server processes to launch, for sessions that
are not established with dedicated server processes.

star_transformation_enabled: Whether to permit the optimizer to rewrite queries
that join the dimensions of a fact table.

undo_management: Whether undo data should be automatically managed in an
undo tablespace, or manually managed in rollback segments.

undo_tablespace: If using automatic undo management, where the undo data
should reside.

Table 3-1  The Basic Parameters

All of these basic parameters, as well as some of the advanced parameters, are
discussed in the appropriate chapters.

Changing Parameters
The static parameters can only be changed using an ALTER SYSTEM command with a
SCOPE=SPFILE clause. Remember this command updates the spfile. Static parameters
cannot, by definition, take immediate effect. An example of a static parameter is
LOG_BUFFER. If you want to resize the log buffer to 6MB, you may issue the
command:
alter system set log_buffer=6m;

It will fail with the message “ORA-02095: specified initialization parameter cannot be
modified.” It must be changed with the SCOPE=SPFILE clause. The command will
succeed, but the instance must be restarted for the new value to take effect.
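To put this concretely, the failing and the succeeding forms of the command are:

alter system set log_buffer=6m;               -- fails with ORA-02095
alter system set log_buffer=6m scope=spfile;  -- succeeds; applied at next startup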
TIP The default log buffer size is probably correct. If you raise it, you may find
that commit processing takes longer. If you make it smaller than its default value,
it will in fact be internally adjusted up to the default size.


Certain parameters affect the entire system, individual sessions, or both. An example
of a parameter that applies to the whole instance but can also be adjusted for individual
sessions is OPTIMIZER_MODE. This influences the way in which Oracle will execute
statements. A common choice is between the values ALL_ROWS and FIRST_ROWS.
ALL_ROWS instructs the optimizer to generate execution plans that will run the
statement to completion as quickly as possible, whereas FIRST_ROWS instructs it to
generate plans that will get something back to the user as soon as possible, even if
the complete execution of the statement ultimately takes longer to complete. So if
your database is usually used for long DSS-type queries but some users use it for
interactive work, you might issue the command
alter system set optimizer_mode=all_rows;

and let those individual users issue
alter session set optimizer_mode=first_rows;

There are a few parameters that can only be modified at the session level. Principal
among these is NLS_DATE_FORMAT. This parameter, which controls the display of
date and time values, can be specified in the parameter file but cannot be changed
with ALTER SYSTEM. So it is static, as far as the instance is concerned. But it can be
adjusted at the session level:
alter session set nls_date_format='dd-mm-yy hh24:mi:ss';

This will change the current session’s date/time display to the European norm without
affecting any other sessions.
Exercise 3-1: Query and Set Initialization Parameters

In this exercise, use either SQL*Plus or SQL Developer to manage initialization
parameters.
1. Connect to the database (which must be open!) as user SYS, with the
SYSDBA privilege. Use either operating system authentication or password file
authentication.
2. Display all the basic parameters, checking whether they have all been set or
are still on default:
select name,value,isdefault from v$parameter where isbasic='TRUE'
order by name;

3. Any basic parameters that are on default should be investigated to see if the
default is appropriate. In fact, all the basic parameters should be considered.
Read up on all of them in the Oracle documentation. The volume you need is
titled Oracle Database Reference. Part 1, Chapter 1 has a paragraph describing
every initialization parameter.
4. Change the PROCESSES parameter to 200. This is a static parameter, which
means its value cannot be changed in memory with immediate effect. It must
be set in the static pfile, or, if you are using an spfile, it can be set as described
in the illustration by specifying "scope=spfile" and then restarting the database.


5. Rerun the query from Step 2. Note the new value for PROCESSES, and also for
SESSIONS. PROCESSES limits the number of operating system processes that
are allowed to connect to the instance, and SESSIONS limits the number of
sessions. These figures are related, because each session will require a process.
The default value for SESSIONS is derived from PROCESSES, so if SESSIONS
was on default, it will now have a new value.
6. Change the value for the NLS_LANGUAGE parameter for your session.
Choose whatever mainstream language you want (Oracle supports many
languages: 67 at the time of writing), but the language must be specified in
English (e.g., use “German,” not “Deutsch”):
alter session set nls_language=German;

7. Confirm that the change has worked by querying the system date:
select to_char(sysdate,'day') from dual;

You may want to change your session language back to what it was before
(such as English) with another ALTER SESSION command. If you don’t, be
prepared for error messages to be in the language your session is now using.
8. Change the OPTIMIZER_MODE parameter, but restrict the scope to the
running instance only; do not update the parameter file. This exercise enables
the deprecated rule-based optimizer, which might be needed while testing
some old code.
alter system set optimizer_mode=rule scope=memory;

9. Confirm that the change has been effected, but not written to the parameter file:
select value from v$parameter where name='optimizer_mode'
union
select value from v$spparameter where name='optimizer_mode';

10. Return the OPTIMIZER_MODE to its standard value, in the running instance:
alter system set optimizer_mode=all_rows scope=memory;


Describe the Stages of Database Startup
and Shutdown
Oracle Corporation’s recommended sequence for starting a database is to start Database
Control, then the database listener, and then the database. Starting the database is itself
a staged process. There is no necessity to follow this sequence, and in more complex
environments such as clustered systems or those managed by Enterprise Manager Grid
Control there may well be additional processes too. But this sequence will suffice for a
simple single-instance environment.

Starting and Connecting to Database Control
Database Control is a tool for managing one database (though this database can be
clustered). If there are several database instances running off the same Oracle Home,
each instance will have its own Database Control instance. The tool is written in Perl
and Java, and accessed from a browser. There is no need to have a Java Runtime
Environment or a Perl interpreter installed on the system; both are provided in the
Oracle Home and installed by the OUI. All communications with Database Control
are over HTTPS, the secure sockets variant of HTTP, and there should therefore be no
problems with using Database Control from a browser anywhere in the world—the
communications will be secure, and any firewall proxy servers will have no problem
routing them. The only configuration needed on the firewall will be making it aware
of the port on which Database Control is listening for connection requests.
The configuration of Database Control is done at database creation time. This
configuration includes two vital bits of information: the hostname of the computer
on which Database Control is running, and the TCP port on which it will be listening.
If it is ever necessary to change either of these, Database Control will need to be
reconfigured.
To start Database Control, use the emctl utility located in the ORACLE_HOME/bin
directory. The three commands to start or stop Database Control and to check its
status are
emctl start dbconsole
emctl stop dbconsole
emctl status dbconsole

For these commands to work, three environment variables must be set: PATH,
ORACLE_HOME, and ORACLE_SID. PATH is needed to allow the operating system to
find the emctl utility. The ORACLE_HOME and ORACLE_SID are needed so that
emctl can find the Database Control configuration files. These are in three places: the
directory ORACLE_HOME/sysman/config has general configuration directives that
will apply to all Database Control instances running from the Oracle Home (one per
database). The ORACLE_HOME/hostname_sid/sysman/config and a similarly
named directory beneath ORACLE_HOME/oc4j/j2ee contain details for the Database
Control that manages one particular database (hostname is the hostname of the machine,
and sid is the value of the ORACLE_SID variable).
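Putting this together, a Linux session to check and start Database Control might look like the following (the Oracle Home path and SID are illustrative):

export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
export ORACLE_SID=ocp11g
export PATH=$ORACLE_HOME/bin:$PATH
emctl status dbconsole
emctl start dbconsole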
Figure 3-3 shows the startup of Database Control, after a couple of problems.


Figure 3-3 Database Control startup, on a Windows system

In Figure 3-3, the first attempt to query the status of Database Control fails
because the ORACLE_SID environment variable is not set. Without this, the emctl
utility can’t find the necessary configuration files. This is further demonstrated by
setting the ORACLE_SID to a nonexistent instance name; the emctl status
dbconsole command uses this environment variable to construct a directory path
that does not exist. After setting the ORACLE_SID correctly, to ocp11g, the emctl
executable is located and its status can be queried. The nature of this query is nothing
more than accessing a URL; this URL can also be accessed from any browser as a
simple test. As Database Control is not running, the example in the figure continues
with starting it, and then again queries the status—this time successfully. Because this
example is on a Windows system, the startup involves starting a Windows service,
called OracleDBConsoleocp11g.
To connect to Database Control using your web browser, navigate to the URL
https://hostname:port/em

where hostname is the name of the machine on which Database Control is running,
and port is the TCP port on which it is listening for incoming connection requests. If
the host has several names or several network interface cards, any will do. You can
even use a loopback address, such as 127.0.0.1, because the Database Control process
does listen on all addresses. To identify the port, you can use emctl. As shown in
Figure 3-3, the output of emctl status dbconsole shows the port on which
Database Control should be running. Alternatively, you can look in the file ORACLE_
HOME/install/portlist.ini, which lists all the ports configured by the OUI
and DBCA.
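The portlist.ini file is plain text, one "description = port" entry per line. As an aside, a minimal Python sketch of pulling a port out of such a file might look like this (the entry names shown are examples, not guaranteed to match your installation):

```python
def parse_portlist(text):
    """Parse 'Description = port' lines, as found in a portlist.ini-style file."""
    ports = {}
    for line in text.splitlines():
        name, sep, value = line.partition("=")
        # Keep only lines of the form "name = number"; skip everything else.
        if sep and value.strip().isdigit():
            ports[name.strip()] = int(value.strip())
    return ports

# Example input; real entry names and ports vary by installation.
sample = """Enterprise Manager Console HTTP Port (ocp11g) = 1158
Enterprise Manager Agent Port (ocp11g) = 3938"""

print(parse_portlist(sample)["Enterprise Manager Console HTTP Port (ocp11g)"])  # prints 1158
```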

As Database Control (the current version, not the one released with 10g) requires
the use of HTTPS for security reasons, when you connect from your browser with the
URL just given, you may (depending on your local security settings) receive a message
regarding the digital certificate that Database Control is returning to your browser.
This certificate was generated by Oracle when the Oracle Home was installed and the
database created.
Your browser performs three checks on the validity of the certificate. The first check
is that the certificate is issued by a certificate issuing authority that your browser is
prepared to trust. If you view the details of the certificate, you will see that the certificate
was issued by the computer on which the Oracle installation was made. Presumably
this is a trustworthy source, so that is not a problem. The second check is for the validity
dates of the certificate. The third check is whether the host requested in the URL is the
same as the host to which the certificate was issued. These will usually be the same, but if
the machine has several hostname aliases or network interface cards, they may not be.
TIP The mechanism for managing certificates and HTTPS will vary
depending on your browser and how it is configured. For Database
Control, the certificate really doesn’t matter; you do not need secure
sockets for authentication, only for encryption.
Once past any SSL certificate issue (which may not arise, depending on local security
configuration), you will see the Database Control logon window, if the database listener
is running. If the listener is not running, you will see the screen in Figure 3-4, which is
presented when Database Control cannot detect the listener or the database instance.

Starting the Database Listener
The database listener is a process that monitors a port for database connection requests.
These requests (and all subsequent traffic once a session is established) use Oracle
Net, Oracle’s proprietary communications protocol. Oracle Net is a layered protocol
running over whatever underlying network protocol is in use, usually TCP/IP. Managing
the listener is fully described in Chapter 4, but it is necessary to know how to start it now.
There are three ways to start the database listener:
• With the lsnrctl utility
• With Database Control
• As a Windows service (Windows only, of course)
The lsnrctl utility is located in the ORACLE_HOME/bin directory. The key
commands are
lsnrctl start [listener]
lsnrctl status [listener]

where listener is the name of the listener. This will have defaulted to LISTENER, which is
correct in most cases. You will know if you have created a listener with another name.
Figure 3-5 shows the output of the lsnrctl status command when the listener is
running.


Figure 3-4 Database Control, failing to detect any other Oracle processes

Figure 3-5 An example of the status of a running database listener

Note the third line of the output in the figure shows the host address and port on
which the listener is listening, and the fifth line from the bottom states that the listener
will accept connections for a service “ocp11g”, which is offered by an instance called
“ocp11g”. These are the critical bits of information needed to connect to the database.
Following a successful database creation with DBCA, it can be assumed that they are
correct. If the listener is not running, the output of lsnrctl status will make this
very clear. Use lsnrctl start to start it, or click the START LISTENER button in
Database Control, shown in Figure 3-4.

Starting SQL*Plus
As discussed in previous chapters, this couldn’t be simpler. SQL*Plus is just an
elementary client-server program used for issuing SQL commands to a database. A
variation you need to be aware of is the NOLOG switch. By default, the SQL*Plus
program immediately prompts you for an Oracle username, password, and database
connect string. This is fine for regular end users, but useless for database administrators
because it requires that the database already be open. To launch SQL*Plus without
a login prompt, use the /NOLOG switch:
sqlplus /nolog

This will give you a SQL prompt, from which you can connect with a variety of
syntaxes, detailed in the next section. Many DBAs working on Windows will want to
modify the Start menu shortcut to include the NOLOG switch.

Database Startup and Shutdown
If one is being precise (always a good idea, if you want to pass the OCP examinations),
one does not start or stop a database: an instance may be started and stopped; a
database is mounted and opened, and then dismounted and closed. This can be done
from either SQL*Plus, using the STARTUP and SHUTDOWN commands, or through
Database Control. On a Windows system, it may also be done by controlling the
Windows service within which the instance runs. The alert log will give details of
all such operations, however they were initiated. Startup and shutdown are critical
operations. As such, they are always recorded and can only be carried out by highly
privileged users.

Connecting with an Appropriate Privilege
Ordinary users cannot start up or shut down a database. This is because an ordinary user
is authenticated against the data dictionary. It is logically impossible for an ordinary user
to start up an instance and open (or create) a database, since the data dictionary cannot
be read until the database is open. You must therefore connect with some form of
external authentication: you must be authenticated either by the operating system, as
being a member of the group that owns the Oracle software, or by giving a username/
password combination that exists in an external password file. You tell Oracle that you
wish to use external authentication by using appropriate syntax in the CONNECT
command that you submit in your user process.
If you are using SQL*Plus, the syntax of the CONNECT command tells Oracle what
type of authentication you wish to use: the default of data dictionary authentication,
password file authentication, or operating system authentication. These are the
possibilities after connecting using the /NOLOG switch as described previously:

connect user/pwd[@connect_alias]
connect user/pwd[@connect_alias] as sysdba
connect user/pwd[@connect_alias] as sysoper
connect / as sysdba
connect / as sysoper

In these examples, user is the username and pwd is the password. The connect_alias
is a network identifier, fully described in Chapter 4. The first example is normal, data
dictionary authentication. Oracle will validate the username/password combination
against values stored in the data dictionary. The database must be open, or the connect
will fail. Anyone connecting with this syntax cannot, no matter who they are, issue
startup or shutdown commands. The second two examples instruct Oracle to go to
the external password file to validate the username/password combination. The last
two examples use operating system authentication; Oracle will go to the host operating
system and check whether the operating system user running SQL*Plus is a member
of the operating system group that owns the Oracle software, and if the user passes
this test, they will be logged on as SYSDBA or SYSOPER without any need to provide
a username and password. A user connecting with any of the bottom four syntaxes
will be able to issue startup and shutdown commands and will be able to connect no
matter what state the database is in—it may not even have been created yet. Note that
the first three examples can include a network identifier string; this is necessary if the
connection is to be made across a network. Naturally, this is not an option for operating
system authentication, because operating system authentication relies on the user
being logged on to the machine hosting the Oracle server: they must either be working
on it directly or have logged in to it with telnet, secure shell, or some similar utility.

TIP From an operating system prompt, you can save a bit of time and typing
by combining the launch of SQL*Plus and the CONNECT into one command.
Here are two examples:
sqlplus / as sysdba
sqlplus sys/oracle@orcl as sysdba

Database Control will, by default, attempt to connect through a listener, but it can
also use operating system authentication. If the situation is that depicted in Figure 3-4,
clicking the STARTUP button will require operating system logon credentials to be entered
in order to proceed. If the listener is running, Database Control will present the login
window shown in Figure 3-6. The Connect As list of values lets you choose whether to
make a normal connection or a SYSDBA connection.

Figure 3-6  The Database Control login window, when a listener has been detected

SYSOPER and SYSDBA
These are privileges with special capabilities. They can only be enabled when users are
connecting with an external authentication method: either operating system or password
file. SYSOPER has the ability to issue these commands:
STARTUP
SHUTDOWN
ALTER DATABASE [MOUNT | OPEN | CLOSE | DISMOUNT]
ALTER [DATABASE | TABLESPACE] [BEGIN | END] BACKUP
RECOVER

The SYSDBA privilege includes all of these, but in addition has the ability to
create a database, to perform incomplete recovery, and to create other SYSOPER
and SYSDBA users.
EXAM TIP SYSDBA and SYSOPER are not users; they are privileges that can
be granted to users. By default, only user SYS has these privileges until they
are deliberately granted to other users.
You may be wondering what Oracle user you are actually logging on as when you
use operating system authentication. To find out, from a SQL*Plus prompt connect
using the operating system authentication syntax already shown, and then issue the
show user command (which can be abbreviated to sho user—never underestimate
the importance of saving keystrokes) as shown in the examples in Figure 3-7.
Use of the SYSDBA privilege logs you on to the instance as user SYS, the most
powerful user in the database and the owner of the data dictionary. Use of the SYSOPER
privilege connects you as user PUBLIC. PUBLIC is not a user in any normal sense—it is
a notional user with administration privileges, but (by default) it has no privileges
that let it see or manipulate data. You should connect with either of these privileges only
when you need to carry out procedures that no normal user can perform.
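In text form, the sort of demonstration shown in Figure 3-7 amounts to this (from a SQL*Plus prompt on the database server):

connect / as sysdba
show user        -- reports: USER is "SYS"
connect / as sysoper
show user        -- reports: USER is "PUBLIC"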


Figure 3-7 Use of operating system and password file authentication

Startup: NOMOUNT, MOUNT, and OPEN
Remember that the instance and the database are separate entities that exist independently
of each other. When an instance is stopped, no memory structures or background
processes exist and the instance ceases to exist, but the database (consisting of files)
endures. Indeed, in a RAC environment other instances on other nodes could exist
and connect to the database.
The startup process is therefore staged: first you build the instance in memory,
second you enable a connection to the database by mounting it, and third you open
the database for use. At any moment, a database will be in one of four states:
• SHUTDOWN
• NOMOUNT
• MOUNT
• OPEN
When the database is SHUTDOWN, all files are closed and the instance does not
exist. In NOMOUNT mode, the instance has been built in memory (the SGA has been
created and the background processes started, according to whatever is specified in its
parameter file), but no connection has been made to a database. It is indeed possible
that the database does not yet exist. In MOUNT mode, the instance locates and reads
the database control file. In OPEN mode, all database files are located and opened
and the database is made available for use by end users. The startup process is staged:
whenever you issue a startup command, it will go through these stages. It is possible
to stop the startup partway. For example, if your control file is damaged, or a multiplexed
copy is missing, you will not be able to mount the database, but by stopping in
NOMOUNT mode you may be able to repair the damage. Similarly, if there are
problems with any datafiles or redo logfiles, you may be able to repair them in
MOUNT mode before transitioning the database to OPEN mode.
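The staged progression can be pictured as a simple forward-only sequence. This Python sketch (an illustration of the logic, not Oracle code) models which stages a startup command passes through:

```python
# The four states, in the order an instance moves through them at startup.
STATES = ["SHUTDOWN", "NOMOUNT", "MOUNT", "OPEN"]

def startup_stages(current, target):
    """Return the stages passed through when going from current up to target."""
    i, j = STATES.index(current), STATES.index(target)
    if j < i:
        # Startup only moves forward; going back requires a shutdown.
        raise ValueError("startup only moves forward through the states")
    return STATES[i + 1 : j + 1]

print(startup_stages("SHUTDOWN", "OPEN"))   # ['NOMOUNT', 'MOUNT', 'OPEN']
print(startup_stages("SHUTDOWN", "MOUNT"))  # ['NOMOUNT', 'MOUNT']
```

A full STARTUP passes through every stage; STARTUP NOMOUNT or STARTUP MOUNT simply stops the progression partway, which is what allows repairs before the database is opened.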

At any stage, how does the instance find the files it needs, and exactly what
happens? Start with NOMOUNT. When you issue a startup command, Oracle will
attempt to locate a parameter file using a systematic ordered search as depicted in
Figure 3-8.
There are three default filenames. On Unix they are
$ORACLE_HOME/dbs/spfileSID.ora
$ORACLE_HOME/dbs/spfile.ora
$ORACLE_HOME/dbs/initSID.ora

and on Windows,
%ORACLE_HOME%\database\SPFILESID.ORA
%ORACLE_HOME%\database\SPFILE.ORA
%ORACLE_HOME%\database\INITSID.ORA

Figure 3-8  Sequential search for an instance parameter file during STARTUP


In all cases, SID refers to the name of the instance that the parameter file will start.
The preceding order is important! Oracle will work its way down the list, using the
first file it finds and ignoring the rest. If none of them exist, the instance will not start.
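The search order can be modeled in a few lines of Python (a sketch of the logic only; the real search is of course performed by the oracle executable at startup):

```python
import os

def find_parameter_file(oracle_home, sid):
    """Return the first parameter file found in Oracle's search order, else None."""
    candidates = [
        os.path.join(oracle_home, "dbs", "spfile%s.ora" % sid),  # spfileSID.ora
        os.path.join(oracle_home, "dbs", "spfile.ora"),
        os.path.join(oracle_home, "dbs", "init%s.ora" % sid),    # initSID.ora
    ]
    for path in candidates:
        if os.path.exists(path):
            return path  # first match wins; any files later in the list are ignored
    return None  # no parameter file found: the instance cannot start
```

Note that an spfileSID.ora, if present, always shadows an initSID.ora, which is why editing the pfile of an spfile-managed instance has no effect.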
The only files used in NOMOUNT mode are the parameter file and the alert log.
The parameters in the parameter file are used to build the SGA in memory and to start
the background processes. Entries will be written out to the alert log describing this
process. Where is the alert log? In the location given by the BACKGROUND_DUMP_
DEST parameter, which can be found in the parameter file or by running
sho parameter background_dump_dest

from a SQL*Plus prompt once connected as a privileged user. If the alert log already
exists, it will be appended to. Otherwise, it will be created. If any problems occur
during this stage, trace files may also be generated in the same location.
EXAM TIP An “init” file is known as a “static” parameter file or a pfile,
because it is only read once, at instance startup. An “spfile” is known as a
dynamic parameter file, because Oracle continuously reads and updates
it while the instance is running. A parameter file of one sort or the other
is essential, because there is one parameter without a default value: the
DB_NAME parameter.
Once the instance is successfully started in NOMOUNT mode, it may be transitioned
to MOUNT mode by reading the controlfile. It locates the controlfile by using the
CONTROL_FILES parameter, which it knows from having read the parameter file used
when starting in NOMOUNT mode. If the controlfile (or any multiplexed copy of it)
is damaged or missing, the database will not mount and you will have to take appropriate
action before proceeding further. All copies of the controlfile must be available and
identical if the mount is to be successful.
As part of the mount, the names and locations of all the datafiles and online redo logs
are read from the controlfile, but Oracle does not yet attempt to find them. This happens
during the transition to OPEN mode. If any files are missing or damaged, the database
will remain in MOUNT mode and cannot be opened until you take appropriate action.
Furthermore, even if all the files are present, they must be synchronized before the
database opens. If the last shutdown was orderly, with all database buffers in the database
buffer cache being flushed to disk by DBWn, then everything will be synchronized: Oracle
will know that all committed transactions are safely stored in the datafiles, and that
no uncommitted transactions are hanging about waiting to be rolled back. However, if the
last shutdown was disorderly (such as from a loss of power, or the server being accidentally
rebooted), then Oracle must repair the damage and the database is considered to be in
an inconsistent state. The mechanism for this is described in Chapter 14. The process that
mounts and opens the database (and carries out repairs, if the previous shutdown was
disorderly) is the SMON process. Only once the database has been successfully opened
will Oracle permit user sessions to be established. The startup process just described is
graphically summarized in Figure 3-9.

TIP spfileSID.ora is undoubtedly the most convenient file to use as your
parameter file. Normally, you will only use spfile.ora in a RAC environment,
where one file may be used to start several instances. You will generally only
use an initSID.ora file if for some reason you need to make manual edits using
a text editor; spfiles are binary files and cannot be edited by hand.

Figure 3-9 High-level steps followed during an instance startup


TIP If someone were in the middle of a long-running uncommitted statement
(for example, loading tables for a data warehouse) when you had to shut down
the database, the rollback phase, and therefore the time it takes the database
to close and shut down cleanly, could be very long.

Shutdown: NORMAL, TRANSACTIONAL, IMMEDIATE, and ABORT
There are options that may be used on the shutdown command, all of which require
either a SYSDBA or a SYSOPER connection:
shutdown [normal | transactional | immediate | abort]

Normal: This is the default. No new user connections will be permitted, but all
current connections are allowed to continue. Only once all users have (voluntarily!)
logged off, will the database actually shut down.
TIP Typically, a normal shutdown is useless: there is always someone logged
on, even if it is only the Database Control process.
Transactional: No new user connections are permitted; existing sessions that are
not actively performing a transaction will be terminated; sessions currently involved
in a transaction are allowed to complete the transaction and will then be terminated.
Once all sessions are terminated, the database will shut down.
Immediate: No new sessions are permitted, and all currently connected sessions are
terminated. Any active transactions are rolled back, and the database will then shut
down.
Abort: As far as Oracle is concerned, this is the equivalent of a power failure. The
instance terminates immediately. Nothing is written to disk, no file handles are
closed, and there is no attempt to terminate transactions that may be in progress in
any orderly fashion.
TIP A shutdown abort will not damage the database, but some operations
(such as backups) are not advisable after an abort.
The “normal,” “immediate,” and “transactional” shutdown modes are usually
referred to as “clean,” “consistent,” or “orderly” shutdowns. After all sessions are

PART I

Shutdown should be the reverse of startup. During an orderly shutdown, the
database is first closed, then dismounted, and finally the instance is stopped. During
the close phase, all sessions are terminated: active transactions are rolled back by
PMON, completed transactions are flushed to disk by DBWn, and the datafiles and
redo logfiles are closed. During the dismount, the controlfile is closed. Then the
instance is stopped by deallocating the SGA memory and terminating the background
processes.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

120
terminated, PMON will roll back any incomplete transactions. Then a checkpoint is
issued (remember the CKPT process from Chapter 1), which forces the DBWn process
to write all updated data from the database buffer cache down to the datafiles. LGWR
also flushes any change vectors still in memory to the logfiles. Then the file headers
are updated, and the file handles closed. This means that the database is in a “consistent”
state: all committed transactions are in the datafiles, there are no uncommitted
transactions hanging about that need to be rolled back, and all datafiles and logfiles
are synchronized.
The “abort” mode, sometimes referred to as a “disorderly” shutdown, leaves the
database in an “inconsistent” state: it is quite possible that committed transactions
have been lost, because they existed only in memory and DBWn had not yet written
them to the datafiles. Equally, there may be uncommitted transactions in the datafiles
that have not yet been rolled back. This is a definition of a corrupted database: it may
be missing committed transactions, or storing uncommitted transactions. These
corruptions must be repaired by instance recovery (described in Chapter 14). It is
exactly as though the database server had been switched off, or perhaps rebooted,
while the database was running.
TIP There is a startup command startup force that can save time. It is
two commands in one: a shutdown abort followed by a startup.
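As a sketch, these two sequences should leave the instance in the same state (both require a SYSDBA or SYSOPER connection):

```sql
-- Two commands: a disorderly shutdown, then a startup
-- (instance recovery happens automatically as the database opens)
shutdown abort
startup

-- One command that does both:
startup force
```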
An orderly shutdown is a staged process, and it is possible to control the stages
using SQL*Plus:
alter database close;
alter database dismount;

These commands are exactly the reverse of the startup sequence. In practice,
however, there is little value to them; a shutdown is generally all any DBA will ever
use. The staged shutdown commands are not even available through Database Control.
Exercise 3-2: Conduct a Startup and a Shutdown Use SQL*Plus to start
an instance and open a database, then Database Control to shut it down. If the database
is already open, do this in the other order. Note that if you are working on Windows,
the Windows service for the database must be running. It will have a name of the
form OracleServiceSID, where SID is the name of the instance.
1. Log on to the computer as a member of the operating system group that owns
the ORACLE_HOME, and set the environment variables appropriately for
ORACLE_HOME and PATH and ORACLE_SID, as described in Chapter 2.
2. Check the status of the database listener, and start it if necessary. From an
operating system prompt:
lsnrctl status
lsnrctl start

3. Check the status of the Database Control console, and start it if necessary.
From an operating system prompt:
emctl status dbconsole
emctl start dbconsole

4. Launch SQL*Plus, using the /nolog switch to prevent an immediate logon
prompt:
sqlplus /nolog

5. Connect as SYS with operating system authentication:
connect / as sysdba

6. Start the instance only. Then query the V$INSTANCE view and examine its
STATUS column. Note that the status of the instance is "STARTED".
startup nomount;
select status from v$instance;

7. Mount the database and query the instance status. The database has now been
"MOUNTED" by the instance.
alter database mount;
select status from v$instance;

8. Open the database:
alter database open;

9. Confirm that the database is open by querying V$INSTANCE. The database
should now be "OPEN".
select status from v$instance;

10. From a browser, connect to the Database Control console. The hostname and
port will have been shown in the output of the emctl status dbconsole
command in Step 3. The URL will be of the format: https://hostname:port/em.
11. Log on as SYS with the password selected at database creation, and choose
SYSDBA from the Connect As drop-down box.
12. On the database home page, click the SHUTDOWN button.
13. The next window prompts for host credentials, which will be your operating
system username and password, and database credentials, which will be the
SYS username and password. If you want to save these to prevent having to
enter them repeatedly, check the box Save As Preferred Credential. Click OK.

Use the Alert Log and Trace Files
The alert log is a continuous record of critical operations applied to the instance and
the database. Its location is determined by the instance parameter BACKGROUND_
DUMP_DEST, and its name is alert_SID.log, where SID is the name of the instance.
The critical operations recorded in the alert log include
• All startup and shutdown commands, including intermediate commands such
as ALTER DATABASE MOUNT
• All errors internal to the instance (for example, any ORA-600 errors)
• Any detected datafile block corruptions
• Any record locking deadlocks that may have occurred
• All operations that affect the physical structure of the database, such as
creating or renaming datafiles and online redo logfiles
• All ALTER SYSTEM commands that adjust the values of initialization parameters
• All log switches and log archives
The alert log entry for a startup shows all the nondefault initialization parameters.
This information, together with the subsequent record of changes to the instance
made with ALTER SYSTEM commands and to the database physical structures made
with ALTER DATABASE commands, means that it is always possible to reconstruct the
history of changes to the database and the instance. This can be invaluable when
trying to backtrack in order to find the source of a problem.
TIP For many DBAs, the first thing they do when they are asked to look at
a database for the first time is locate the alert log and scan through it, just to
get an idea of what has been going on.
Trace files are generated by the various background processes, usually when they
encounter an error. These files are located in the BACKGROUND_DUMP_DEST
directory, along with the alert log. If a background process has failed because of an
error, the trace file generated will be invaluable in diagnosing the problem.
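From a SQL*Plus session, these locations can be confirmed with queries such as the following; the V$DIAG_INFO alternative assumes an 11g instance, where the Automatic Diagnostic Repository externalizes the same paths:

```sql
-- The parameter that places the alert log and background trace files
select value from v$parameter where name = 'background_dump_dest';

-- 11g alternative: the ADR locations ('Diag Trace', 'Diag Alert', and so on)
select name, value from v$diag_info;
```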
Exercise 3-3: Use the Alert Log In this exercise, locate the alert log and find
the entries for the parameter changes made in Exercise 3-1 and the startups and
shutdowns in Exercise 3-2.
1. Connect to your database with either SQL*Plus or SQL Developer, and find
the value of the BACKGROUND_DUMP_DEST parameter:
select value from v$parameter where name='background_dump_dest';

Note that this value can also be found with Database Control.
2. Using whatever operating system tool you please (such as Windows Explorer,
or whatever file system browser your Linux session is using), navigate to the
directory identified in Step 1.
3. Open the alert log. It will be a file called alert_SID.log, where SID is the
name of the instance. Use any editor you please (but note that on Windows,
Notepad is not a good choice because of the way carriage returns are handled;
WordPad is much better).
4. Go to the bottom of the file. You will see the ALTER SYSTEM commands of
Exercise 3-1 and the results of the startup and shutdowns.

Use Data Dictionary and
Dynamic Performance Views
An Oracle database is defined by its data dictionary. The data dictionary is not very
comprehensible. For this reason, Oracle provides a set of views onto the data dictionary
that are much easier to understand. These views provide the DBA with a tool for
understanding what is happening in the database. The instance also has a set of tables
(which are in fact C data structures) that are not easily understandable. These are
externalized as the dynamic performance views that are key to understanding what
is happening within the instance.

The Data Dictionary Views
The data dictionary contains metadata: that is, data about data. It describes the
database, both physically and logically, and its contents. User definitions, security
information, integrity constraints, and (from release 10g onward) performance
monitoring information are all part of the data dictionary. It is stored as a set of
segments in the SYSTEM and SYSAUX tablespaces.
In many ways, the segments that make up the data dictionary are like other regular
table and index segments. The critical difference is that the data dictionary tables are
generated at database creation time, and you are not allowed to access them directly.
There is nothing to stop an inquisitive DBA from investigating the data dictionary
directly, but if you do any updates to it you may cause irreparable damage to your
database, and certainly Oracle Corporation will not support you. Creating a data
dictionary is part of the database creation process. It is maintained subsequently by
Data Definition Language (DDL) commands. When you issue the CREATE TABLE
command, you are not only creating a data segment to store your data in its rows;
your DDL command also has the side effect of inserting rows into many data dictionary
tables that keep track of segment-related information, including tablespace, extent,
column, and ownership properties.
To query the dictionary, Oracle provides a set of views that come in three forms,
prefixed with DBA_, ALL_, or USER_. Most of the views come in all three forms. Any
view prefixed USER_ will be populated with rows describing objects owned by the
user querying the view, so no two users will see the same contents. When user JOHN
queries USER_TABLES, he will see information about only his tables; if you query
USER_TABLES, you will see information about only your tables. Any view prefixed
ALL_ will be populated with rows describing objects to which you have access. So
ALL_TABLES will contain rows describing your own tables, plus rows describing
tables belonging to anyone else that you have been given permission to see. Any view
prefixed DBA_ will have rows for every object in the database, so DBA_TABLES will
have one row for every table in the database, no matter who created it. Figure 3-10
describes the underlying concept represented by the three forms of dictionary views.
The USER_ views sit in the middle of the concentric squares and describe only an
individual user's objects. The ALL_ views display all the contents of the USER_ views,
and in addition describe objects that belong to other schemas but to which your user
has been granted access. The DBA_ views describe all objects in the database.
Needless to say, a user must have DBA privileges to access the DBA_ views.

Figure 3-10 The overlapping structure of the three forms of the dictionary views

These views are created as part of the database creation process, along with a large
number of PL/SQL packages that are provided by Oracle to assist database administrators
in managing the database and programmers in developing applications.
TIP Which view will show you ALL the tables in the database? DBA_TABLES,
not ALL_TABLES.
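The difference between the three forms is easy to demonstrate. Assuming you are connected as a user with the privileges to see the DBA_ views, a sketch such as this will typically return three increasing counts:

```sql
select count(*) from user_tables; -- only your own tables
select count(*) from all_tables;  -- your tables, plus those you may access
select count(*) from dba_tables;  -- every table in the database
```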
There are hundreds of data dictionary views. Some of those commonly used by
DBAs are
• DBA_OBJECTS A row for every object in the database
• DBA_DATA_FILES A row describing every datafile
• DBA_USERS A row describing each user
• DBA_TABLES A row describing each table
• DBA_ALERT_HISTORY Rows describing past alert conditions

There are many more than these, some of which will be used in later chapters.
Along with the views, there are public synonyms onto the views. A query such as this,
select object_name,owner, object_type from dba_objects
where object_name='DBA_OBJECTS';

shows that there is, in fact, a view called DBA_OBJECTS owned by SYS, and a public
synonym with the same name.

The Dynamic Performance Views
There are more than three hundred dynamic performance views. You will often hear
them referred to as the “Vee dollar” views, because their names are prefixed with “V$”.
In fact, the “Vee dollar” views are not views at all—they are synonyms to views that
are prefixed with “V_$”, as shown in Figure 3-11.
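This arrangement can be confirmed from the dictionary itself; a sketch, run as a suitably privileged user:

```sql
-- V$SQL should appear as a PUBLIC synonym pointing at the SYS-owned view V_$SQL
select owner, synonym_name, table_owner, table_name
from dba_synonyms
where synonym_name = 'V$SQL';
```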


The figure shows V$SQL, which has one row for every SQL statement currently
stored in the shared pool, with information such as how often the statement has been
executed.
The dynamic performance views give access to a phenomenal amount of
information about the instance, and (to a certain extent) about the database. The
majority of the views are populated with information from the instance, while the
remaining views are populated from the controlfile. All of them provide real-time
information. Dynamic performance views that are populated from the instance, such
as V$INSTANCE or V$SYSSTAT, are available at all times, even when the instance is in
NOMOUNT mode. Dynamic performance views that are populated from the
controlfile, such as V$DATABASE or V$DATAFILE, cannot be queried unless the
database has been mounted, which is when the controlfile is read. By contrast, the
data dictionary views (prefixed DBA, ALL, or USER) can only be queried after the
database—including the data dictionary—has been opened.
EXAM TIP Dynamic performance views are populated from the instance or
the controlfile; DBA_, ALL_, and USER_ views are populated from the data
dictionary. This difference determines which views can be queried at the
various startup stages.
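A sketch of this behavior from a SYSDBA session:

```sql
startup nomount
select status from v$instance;   -- works: populated from the instance
select name from v$database;     -- fails: the controlfile has not been read
alter database mount;
select name from v$database;     -- now works: populated from the controlfile
select count(*) from dba_users;  -- still fails: the dictionary is not yet open
alter database open;
select count(*) from dba_users;  -- works: the data dictionary is available
```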
The dynamic performance views are created at startup, updated continuously
during the lifetime of the instance, and dropped at shutdown. This means they will
accumulate values since startup time; if your database has been open for six months
nonstop, they will have data built up over that period. After a shutdown/startup, they
will be initialized. While the totals may be interesting, they do not directly tell you
anything about what happened during certain defined periods, when there may have
been performance issues. For this reason, it is generally true that the dynamic
performance views give you statistics, not metrics. The conversion of these statistics
into metrics is a skillful and sometimes time-consuming task, made much easier by
the self-tuning and monitoring capabilities of the database.
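For example, V$SYSSTAT holds only running totals; turning a total into a metric means sampling it twice and taking the difference over the interval, as in this manual sketch:

```sql
-- First sample of a cumulative statistic
select value from v$sysstat where name = 'user commits';
-- ...wait for the interval of interest, then sample again...
select value from v$sysstat where name = 'user commits';
-- (second value - first value) = commits during the interval
```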

Figure 3-11 A V_$ view and its V$ synonym

TIP There is some overlap between V$ views and data dictionary views.
For instance, V$TABLESPACE has a row for every tablespace, as does
DBA_TABLESPACES. Note that as a general rule, V$ views are singular and
data dictionary views are plural. But there are exceptions.
Exercise 3-4: Query Data Dictionary and Dynamic Performance
Views In this exercise, investigate the physical structures of the database by
querying views.
1. Connect to the database with SQL*Plus or SQL Developer.
2. Use dynamic performance views to determine what datafiles and tablespaces
make up the database as well as the size of the datafiles:
select t.name,d.name,d.bytes from v$tablespace t join
v$datafile d on t.ts#=d.ts# order by t.name;

3. Obtain the same information from data dictionary views:
select tablespace_name,file_name,bytes from dba_data_files
order by tablespace_name;

4. Determine the location of all the controlfile copies. Use two techniques:
select * from v$controlfile;
select value from v$parameter where name='control_files';

5. Determine the location of the online redo logfile members, and their size. As the
size is an attribute of the group, not the members, you will have to join two views:
select m.group#,m.member,g.bytes from v$log g join v$logfile m
on m.group#=g.group# order by m.group#,m.member;

Two-Minute Drill
Describe the Stages of Database Startup and Shutdown
• The stages are NOMOUNT, MOUNT, and OPEN.
• NOMOUNT mode requires a parameter file.
• MOUNT mode requires the controlfile.
• OPEN mode requires the datafiles and online redo logfiles.

Set Database Initialization Parameters
• Static parameters cannot be changed without a shutdown/startup.
• Other parameters can be changed dynamically, for the instance or a session.
• Parameters can be seen in the dynamic performance views V$PARAMETER
and V$SPPARAMETER.

Use the Alert Log and Trace Files
• The alert log contains a continuous stream of messages regarding critical
operations.
• Trace files are generated by background processes, usually when they
encounter errors.

Use Data Dictionary and Dynamic Performance Views
• The dynamic performance views are populated from the instance and the
controlfile.
• The data dictionary views are populated from the data dictionary.
• Dynamic performance views accumulate values through the lifetime of the
instance, and are reinitialized at startup.
• Data dictionary views show information that persists across shutdown and
startup.
• Both the data dictionary views and the dynamic performance views are
published through synonyms.

Self Test
1. You issue the URL https://127.0.0.1:5500/em and receive an error. What could
be the problem? (Choose three answers.)
A. You have not started the database listener.
B. You have not started the dbconsole.
C. The dbconsole is running on a different port.
D. You are not logged on to the database server node.
E. You have not started the Grid Control agent.
F. You have not started the database.
2. Which files must be synchronized for a database to open? (Choose the best
answer.)
A. Datafiles, online redo logfiles, and controlfile
B. Parameter file and password file
C. All the multiplexed controlfile copies
D. None—SMON will synchronize all files by instance recovery after opening
the database

3. During the transition from NOMOUNT to MOUNT mode, which files are
required? (Choose the best answer.)
A. Parameter file
B. Controlfile
C. Online redo logfiles
D. Datafiles
E. All of the above
4. You shut down your instance with SHUTDOWN IMMEDIATE. What will
happen on the next startup? (Choose the best answer.)
A. SMON will perform automatic instance recovery.
B. You must perform manual instance recovery.
C. PMON will roll back uncommitted transactions.
D. The database will open without recovery.
5. You have created two databases on your computer and want to use Database
Control to manage them. Which of the following statements are correct?
(Choose two answers.)
A. You cannot use Database Control, because it can only manage one
database per computer.
B. You must use Grid Control, as you have multiple databases on the
computer.
C. You can use Database Control, if you contact it on different ports for each
database.
D. You must set the ORACLE_SID variable appropriately before starting each
Database Control console.
6. You issue the command SHUTDOWN, and it seems to hang. What could be
the reason? (Choose the best answer.)
A. You are not connected as SYSDBA or SYSOPER.
B. There are other sessions logged on.
C. You have not connected with operating system or password file
authentication.
D. There are active transactions in the database; when they complete, the
SHUTDOWN will proceed.
7. What action should you take after terminating the instance with SHUTDOWN
ABORT? (Choose the best answer.)
A. Back up the database immediately.

B. Open the database, and perform database recovery.
C. Open the database, and perform instance recovery.
D. None—recovery will be automatic.
8. What will be the setting of the OPTIMIZER_MODE parameter for your session
after the next startup if you issue these commands? (Choose the best answer.)
alter system set optimizer_mode=all_rows scope=spfile;
alter system set optimizer_mode=rule;
alter session set optimizer_mode=first_rows;

A. all_rows
B. rule
C. first_rows
9. The LOG_BUFFER parameter is a static parameter. How can you change it?
(Choose the best answer.)
A. You cannot change it, because it is static.
B. You can change it only for individual sessions; it will return to the
previous value for all subsequent sessions.
C. You can change it within the instance, but it will return to the static value
at the next startup.
D. You can change it in the parameter file, but the new value will only come
into effect at the next startup.
10. Which of these actions will not be recorded in the alert log? (Choose two
answers.)
A. ALTER DATABASE commands
B. ALTER SESSION commands
C. ALTER SYSTEM commands
D. Archiving an online redo logfile
E. Creating a tablespace
F. Creating a user
11. Which parameter controls the location of background process trace files?
(Choose the best answer.)
A. BACKGROUND_DUMP_DEST
B. BACKGROUND_TRACE_DEST
C. DB_CREATE_FILE_DEST
D. No parameter—the location is platform specific and cannot be changed

12. Which of these views can be queried successfully in nomount mode? (Choose
all correct answers.)
A. DBA_DATA_FILES
B. DBA_TABLESPACES
C. V$DATABASE
D. V$DATAFILE
E. V$INSTANCE
F. V$SESSION
13. Which view will list all tables in the database? (Choose the best answer.)
A. ALL_TABLES
B. DBA_TABLES
C. USER_TABLES, when connected as SYS
D. V$FIXED_TABLE

Self Test Answers
1. þ B, C, and D. There will always be an error if the database console process
has not been started or it is on a different port, and since the URL used a
loopback address, there will be an error if the browser is not running on the
same machine as the console.
ý A, E, and F. A and F are wrong because these are not a problem; the
listener and the database can both be started if the console is accessible. E is
wrong because the Grid Control agent is not necessary for Database Control.
2. þ A. These are the files that make up a database, and must all be synchronized
before it can be opened.
ý B, C, and D. B is wrong because these files are not, strictly speaking, part
of the database at all. C is wrong because an error with the controlfile will
mean the database cannot even be mounted, never mind opened. D is wrong
because SMON can only fix problems in datafiles, not anything else.
3. þ B. Mounting the database entails the opening of all copies of the
controlfile.
ý A, C, D, and E. A is wrong because the parameter file is only needed for
NOMOUNT. C and D are wrong because these file types are only needed
for open mode, which also rules out E.
4. þ D. An immediate shutdown is clean, so no recovery will be required.
ý A, B, and C. These are wrong because no recovery or rollback will be
required; all the work will have been done as part of the shutdown.

5. þ C and D. Database Control will be fine but must be started for each
database and contacted on different ports for each database.
ý A and B. A is wrong because you can use Database Control, but you will
need separate instances for each database. B is wrong because while Grid
Control may be a better tool, it is by no means essential.
6. þ B. The default shutdown mode is SHUTDOWN NORMAL, which will
hang until all sessions have voluntarily disconnected.
ý A, C, and D. A and C are wrong because these would cause an error,
not a hang. D is wrong because it describes SHUTDOWN TRANSACTIONAL,
not SHUTDOWN NORMAL.
7. þ D. There is no required action; recovery will be automatic.
ý A, B, and C. A is wrong because this is one thing you should not do
after an ABORT. B is wrong because database recovery is not necessary,
only instance recovery. C, instance recovery, is wrong because it will occur
automatically in mount mode at the next startup.
8. þ B. The default scope of ALTER SYSTEM is both memory and spfile.
ý A and C. A is wrong because this setting will have been replaced by the
setting in the second command. C is wrong because the session-level setting
will have been lost during the restart of the instance.
9. þ D. This is the technique for changing a static parameter.
ý A, B, and C. A is wrong because static parameters can be changed—but
only with a shutdown. B and C are wrong because static parameters cannot
be changed for a running session or instance.
10. þ B and F. Neither of these affects the structure of the database or the
instance; they are not important enough to generate an alert log entry.
ý A, C, D, and E. All of these are changes to physical or memory structures,
and all such changes are recorded in the alert log.
11. þ A. This is the parameter used to determine the location of background
trace files.
ý B, C, and D. B is wrong because there is no such parameter. C is wrong
because this is the default location for datafiles, not trace files. D is wrong
because while there is a platform-specific default, it can be overridden with
a parameter.
12. þ E and F. These views are populated from the instance and will therefore
be available at all times.
ý A, B, C, and D. A and B are data dictionary views, which can only be seen
in open mode. C and D are dynamic performance views populated from the
controlfile, and therefore only available in mount mode or open mode.
13. þ B. The DBA views list every appropriate object in the database.
ý A, C, and D. A is wrong because this will list only the tables the current
user has permissions on. C is wrong because it will list only the tables
owned by SYS. D is wrong because this is the view that lists all the dynamic
performance views, not all tables.

CHAPTER 4
Oracle Networking

Exam Objectives
In this chapter you will learn to
• 052.5.1 Configure and Manage the Oracle Network
• 052.5.2 Use the Oracle Shared Server Architecture

Networking is an integral part of the client-server database architecture that is
fundamental to all modern relational databases. The Oracle database had the
potential for client-server computing from the beginning (version 1, released in 1978,
made a separation between the Oracle code and the user code), but it was only with
version 4 in 1984 that Oracle introduced interoperability between PC and server. True
client-server support came with version 5, in 1986. This chapter introduces the Oracle
Net services. Oracle Net was previously known as SQL*Net, and you will still hear
many DBAs refer to it as such.
The default Oracle Net configuration is dedicated server. In a dedicated server
environment, each user process is connected to its own server process. An alternative
is shared server, where a number of user processes make use of a pool of server
processes that are shared by all the sessions. Generally speaking, DBAs have been
reluctant to use shared server, but there are indications that Oracle Corporation would
like more sites to move to it, and certainly knowledge of the shared server architecture
is vital for the OCP examination.

Configure and Manage the Oracle Network
Oracle Net is the enabling technology for Oracle’s client-server architecture. It is the
mechanism for establishing sessions against a database instance. There are several
tools that can be used for setting up and administering Oracle Net, though it can be
done with nothing more than a text editor. Whatever tool is used, the end result is
a set of files that control a process (the database listener, which launches server
processes in response to connection requests) and that define the means by which
a user process will locate the listener.
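These files are plain text. As a preview of what this chapter builds toward, a minimal listener.ora, and a tnsnames.ora entry by which a user process could locate that listener, might look like the sketch below; the host name, port, and service name are illustrative, not values you must use:

```
# listener.ora -- defines the listener on the database server
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01.example.com)(PORT = 1521)))

# tnsnames.ora -- tells a user process where to find the listener
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))
```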

Oracle Net and the Client-Server Paradigm
There are many layers between the user and the database. In the Oracle environment,
no user ever has direct access to the database—nor does the process that the user is
running. Client-server architecture guarantees that all access to data is controlled by
the server.
A user interacts with a user process: this is the software that is run on their local terminal.
For example, it could be Microsoft Access plus an ODBC driver on a Windows PC; it could be
something written in C and linked with the Oracle Call Interface (or OCI) libraries; it could
even be your old friend SQL*Plus. Whatever it is, the purpose of the user process is to prompt
the user to enter information that the process can use to generate SQL statements. In the case
of SQL*Plus, the process merely waits for you to type something in—a more sophisticated
user process will present a proper data entry screen, will validate your input, and then when
you click the Submit button will construct the statement and send it off to the server process.
The server process runs on the database server machine and executes the SQL it
receives from the user process. This is your basic client-server split: a user process
generating SQL, that a server process executes. The execution of a SQL statement goes
through four stages: parse, bind, execute, and fetch.

Figure 4-1 The database is protected from users by several layers of segregation.

In the parse phase your server
process works out what the statement actually means, and how best to execute it.
Parsing involves interaction with the shared pool of the instance: shared pool memory
structures are used to convert the SQL into something that is actually executable. In the
bind phase, any variables are expanded to literal values. Then the execute phase will
require more use of the instance’s SGA, and possibly of the database. During the
execution of a statement, data in the database buffer cache will be read or updated and
changes written to the redo log buffer, but if the relevant blocks are not in the database
buffer cache, your server process will read them from the datafiles. This is the only
point in the execution of a statement where the database itself is involved. And finally,
the fetch phase of the execution cycle is where the server process sends the result set
generated by the statement’s execution back to the user process, which should then
format it for display.
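To make the bind phase concrete, here is a hypothetical SQL*Plus fragment (the employees table and the :dept variable are illustrative, not from the text): the statement text containing the placeholder is parsed once, and each execution binds a new value before the execute and fetch phases run.

```sql
-- Declare a bind variable in SQL*Plus and give it a value (hypothetical example)
VARIABLE dept NUMBER
EXECUTE :dept := 10

-- The statement is parsed here; :dept is bound at execute time
SELECT last_name FROM employees WHERE department_id = :dept;

-- Change the value and rerun the buffered statement with "/": the earlier
-- parse can be reused, and only bind, execute, and fetch are repeated
EXECUTE :dept := 20
/
```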
Oracle Net provides the mechanism for launching a server process to execute code
on behalf of a user process. This is referred to as establishing a session. Thereafter,
Oracle Net is responsible for maintaining the session: transmitting SQL from the user
process to the server process, and fetching results from the server process back to the
user process.
Figure 4-1 shows the various components of a session. A user interacts with a user
process; a user process interacts with a server process, via Oracle Net; a server process
interacts with the instance; and the instance, via its background processes, interacts
with the database. The client-server split is between the user process generating SQL
and the server process executing it. This split will usually be physical as well as logical:
there will commonly be a local area network between the machines hosting the user
processes and the machine hosting the server processes. But it is quite possible for this
link to be over a wide area network, or conversely to run the user processes on the
server machine. Oracle Net is responsible for establishing a session, and then for the
ongoing communication between the user process and the server process.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

A Word on Oracle Net and Communication Protocols
Oracle Net is a layered protocol: it runs on top of whatever communications protocol is
supported by your operating system. Historically, SQL*Net could work with all the popular
protocols (with the exception of NetBIOS/NetBEUI, which cannot be routed and so has
limited functionality and cannot be used for large database systems), but in release 11g
Oracle’s network support is limited to TCP, TCP with secure sockets, Windows Named
Pipes (or NMP), and the newer Sockets Direct Protocol (or SDP) over Infiniband
high-speed networks. This reduction in protocol support is in line with industry standards.
All operating systems also have an Inter-Process Communication (or IPC) protocol
proprietary to the operating system—this is also available to Oracle Net for local
connections where the user process is on the same machine as the server.
This layering of Oracle Net on top of whatever is provided by your operating
system gives Oracle platform independence. You, as DBA, do not need to know
anything about the underlying network; you configure Oracle Net to use whatever
protocol has been configured by your network administrators. You need not concern
yourself with what is happening at a lower networking layer. TCP is, for better or
worse, undoubtedly the most popular protocol worldwide, so that is the one used in the
examples that follow. The use of industry standard protocols means that there need be
no dependency between the server-side and the client-side platforms. There is no reason
why, for example, a client on Windows cannot talk to a database on Unix. As long as
the platform can offer a TCP layer 4 interface, then Oracle Net can use it.
With regard to conformance with the Open Systems Interconnection (or OSI)
seven-layer model, with which all IT vendors are supposed to comply, Oracle Net maps
on to layers 5, 6, and 7: the session, presentation, and application layers. The protocol
adapters installed with the standard Oracle installation provide the crossover to layer 4,
the transport layer, provided by your operating system. Thus Oracle Net is responsible
for establishing sessions between the end systems once TCP (or whatever else you are
using) has established a layer 4 connection. The presentation layer functions are
handled by the Oracle Net Two Task Common (or TTC) layer. TTC is responsible for
any conversions necessary when data is transferred between the user process and the
server process, such as character set changes. Then the application layer functions are
the user and server processes themselves.

Establishing a Session
When a user, through their user process, wishes to establish a session against an
instance, they may issue a command like
CONNECT STORE/ADMIN123@ORCL11G

Of course, if they are using a graphical user interface, they won’t type in that
command but will instead enter the details into a logon screen—one way or
another, that is the command the user process will generate. It is now time to
go into what actually happens when that command is processed. First, break down
the command into its components. There is a database user name (“STORE”), followed
by a database password (“ADMIN123”), and the two are separated by a “/” as a
delimiter. Then there is an “@” symbol, followed by a connect string, “ORCL11G”.
The “@” symbol is an indication to the user process that a network connection is
required. If the “@” and the connect string are omitted, then the user process will
assume that the instance you wish to connect to is running on the local machine,
and that the always-available IPC protocol can be used. If the “@” and a connect
string are included, then the user process will assume that you are requesting a
network connection to an instance on a remote machine—though in fact, you could
be bouncing off the network card and back to the machine that you are logged on to.

Connecting to a Local Instance
Even when you connect to an instance running on your local machine, you still use
Oracle Net. All Oracle sessions use a network protocol to implement the separation
of user code from server code, but for a local connection the protocol is IPC: this is the
protocol provided by your operating system that will allow processes to communicate
within the host machine. This is the only type of connection that does not require a
database listener; indeed, local connections do not require any configuration at all. The
only information needed is to tell your user process which instance you want to connect
to. Remember that there could be several instances running on your local computer. You
give the process this information through an environment variable. Figure 4-2 shows
examples of this on Linux, and Figure 4-3 shows how to connect to a local database on
Windows.

Figure 4-2  Local database connections—Linux

Remember that the only difference between platforms is the syntax for setting
environment variables, as demonstrated in Figures 4-2 and 4-3.
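The figures themselves are not reproduced here, but the technique they illustrate can be sketched as follows (the instance name ocp11g is an assumption; substitute your own SID):

```shell
# Linux (bash): name the local instance in the ORACLE_SID environment variable.
# A user process started from this shell will attach to that instance over IPC.
ORACLE_SID=ocp11g
export ORACLE_SID
echo "$ORACLE_SID"
```

On Windows the equivalent is set ORACLE_SID=ocp11g at a command prompt; in both cases a logon such as sqlplus system/oracle (with no “@” connect string) will then use the IPC protocol.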

Name Resolution
When connecting using Oracle Net, the first stage is to work out exactly what it is you
want to connect to. This is the process of name resolution. If your connect statement
includes the connect string “@orcl11g”, Oracle Net has to work out what is meant by
“orcl11g”. This means that the string has to be resolved into certain pieces of information:
the protocol you want to use (assume that this is TCP), the IP address on which the
database listener is running, the port that the listener is monitoring for incoming
connection requests, and the name of the instance (which need not be the same as
the connect string) to which you wish to connect. There are variations: rather than an
IP address, the connect string can include a hostname, which then gets further resolved
to an IP address by a DNS server. Rather than specifying an instance by name, the
connect string can include the name of a service, which (in a RAC environment) could
be made up of a number of instances. In a single-instance environment, services can
still be used—perhaps to assist with tracking the workload imposed on the database
by different groups of users. You can configure a number of ways of resolving connect
strings to address and instance names, but one way or another the name resolution
process gives your user process enough information to go across the network to a
database listener and request a connection to a particular instance.

Figure 4-3  Local database connections—Windows

Launching a Server Process
The database listener, running on the server machine, uses one or more protocols
to monitor one or more ports on one or more network interface cards for incoming
connection requests. You can further complicate matters by running multiple listeners
on one machine, and any one listener can accept connection requests for a number of
instances. When it receives a connect request, the listener must first validate whether
the instance requested is actually available. Assuming that it is, the listener will launch
a new server process to service the user process. Thus if you have a thousand users
logging on concurrently to your instance, you will be launching a thousand server
processes. This is known as the dedicated server architecture. Later in this chapter
you’ll see the shared server alternative, where user processes are handled by
dispatcher processes and a pool of server processes is shared by multiple user
processes.
In the TCP environment, each dedicated server process launched by a listener will
acquire a unique TCP port number. This will be assigned at process startup time by
your operating system’s port mapping algorithm. The port number gets passed back
to the user process by the listener (or on some operating systems the socket already
opened to the listener is transferred to the new port number), and the user process
can then communicate directly with its server process. The listener has now completed
its work and waits for the next connect request.
EXAM TIP If the database listener is not running, no new server processes
can be launched—but this will not affect any existing sessions that have
already been established.

Creating a Listener
A listener is defined in a file: the listener.ora file, whose default location is the
ORACLE_HOME/network/admin directory. As a minimum, the listener.ora file
must include a section for one listener, which states its name and the protocol and
listening address it will use. You can configure several listeners in the one file, but they
must all have different names and addresses.
TIP You can run a listener completely on defaults, without a listener.ora
file at all. It will listen on whatever address resolves to the machine’s
hostname, on port 1521. Always configure the listener.ora file, to make
your Oracle Net environment self-documenting.
As with other files used to configure Oracle Net, the listener.ora file can be
very fussy about seemingly trivial points of syntax, such as case sensitivity, white spaces,
and abbreviations. For this reason, many DBAs do not like to edit it by hand (though
there is no reason not to). Oracle provides three graphical tools to manage Oracle
Net: Enterprise Manager (Database Control or Grid Control), the Net Manager, and
the Net Configuration Assistant. The latter two tools are both written in Java. There is
considerable overlap between the functionality of these tools, though there are a few
things that can only be done in one or another.
This is an example of a listener.ora file:
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = jwlnx1)(PORT = 1521))
  )
LIST2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = jwlnx1.bplc.co.za)(PORT = 1522))
    )
  )

The first section of this file defines a listener called LISTENER, monitoring the
local hostname on the default port, 1521. The second section defines another listener
called LIST2. This listener monitors port 1522 on both the hostname address and a
loopback address.
To create the listener, you need do nothing more than create an entry in the
listener.ora file, and start it. Under Windows the listener will run as a Windows
service, but there is no need to create the service explicitly; it will be created implicitly
the first time the listener is started. From then, if you wish, it can be started and
stopped like any other Windows service.
Figure 4-4 shows the Net Manager’s view of the listener LIST2, and Figure 4-5
shows it through the Net Configuration Assistant.
Note that the Net Manager lets you configure multiple listening addresses for a
listener (Figure 4-4 shows the loopback address), whereas the Net Configuration
Assistant does not: it can only see the one address of the hostname; there is no
prompt for creating or viewing any other.

Database Registration
A listener is necessary to spawn server processes against an instance. In order to do
this, it needs to know what instances are available on the computer on which it is
running. A listener finds out about instances by the process of “registration.”
EXAM TIP The listener and the instance must be running on the same
computer, unless you are using RAC. In a RAC environment, any listener on
any computer in the cluster can connect you to any instance on any computer.


Figure 4-4 A listener definition as created or viewed with the Net Manager

Figure 4-5 A listener definition as created or viewed with the Net Configuration Assistant

There are two methods for registering an instance with a listener: static and
dynamic registration. For static registration, you hard-code a list of instances in the
listener.ora file. Dynamic registration means that the instance itself, at startup
time, locates a listener and registers with it.

Static Registration
As a general rule, dynamic registration is a better option, but there are circumstances
when you will resort to static registration. Dynamic registration was introduced with
release 8i, but if you have older databases that your listener must connect users to,
you will have to register them statically. Also some applications may require static
registration: typically management tools. To register an instance statically, add an
appropriate entry to the listener.ora file:
LIST2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1522))
    )
  )
SID_LIST_LIST2 =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/oracle/app/product/11.1.0/db_1)
      (SID_NAME = ocp11g)
    )
  )

This entry will configure the listener called LIST2 to accept connection requests for
an instance called ocp11g. It says nothing about whether the instance is running or
even exists at all. The directive ORACLE_HOME is only required if the database listener
is not running from the same Oracle Home as the instance. If this is the case, then this
directive will let the listener find the executable file that it must run to launch a server
process. Usually, this is only necessary if configuring a listener to make connections to
instances of a different version, which have to be running off a different home.

Dynamic Instance Registration
This is the preferred method by which an instance will register with a listener. The
initialization parameter local_listener tells the instance the network address that
it should contact to find a listener with which to register. At instance startup time, the
PMON process will use this parameter to locate a listener, and inform it of the instance’s
name and the names of the service(s) that the instance is offering. The instance name is
defined by the instance_name parameter, and the service_names parameter will
have defaulted to this suffixed by the db_domain parameter, which will default to null.
It is possible to create and start additional services at any time, either by changing the
value of the service_names parameter (which can be a comma-delimited list, if the
instance is to offer several services) or programmatically using the DBMS_SERVICE
package.
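For instance, a sketch of the programmatic route (the service name reporting is hypothetical; DBMS_SERVICE is the package named above):

```sql
-- Create an additional service and start it; once started, PMON will
-- register it with the local listener (ALTER SYSTEM REGISTER forces this).
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'reporting',
                              network_name => 'reporting');
  DBMS_SERVICE.START_SERVICE(service_name => 'reporting');
END;
/
```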
Any change to the services must be registered with the local listener. If this is not
done, the listener won’t know what services are being offered, and will therefore not
be able to set up sessions to them. The PMON process will register automatically once
a minute, but at any time subsequent to instance startup you can force a re-registration
by executing the command
SQL> alter system register;

TIP You will need to register your instance with the listener with alter
system register if you have restarted the listener, or if you started the
database instance before starting the listener. Alternatively, you can simply
wait a minute for PMON to register automatically.

Dynamic registration is a better option than static registration because it ensures
that only running instances and available services are registered with the listener,
and also that there are no errors in the instance and service names. It is all too easy
to make mistakes here, particularly if you are editing the listener.ora file by
hand. Also, when the instance shuts down, it will deregister from the listener
automatically.
From release 9i onward, dynamic registration requires no configuration at all if
your listener is running on the default port, 1521. All instances will automatically
look for a listener on the local host on that port, and register themselves if they find
one. However, if your listener is not running on the default port on the address
identified by the hostname, you must specify where the listener is by setting the
parameter local_listener and re-registering, for example:
SQL> alter system set local_listener=list2;
SQL> alter system register;

In this example, the local_listener has been specified by name. This name
needs to be resolved into an address in order for the instance to find the listener and
register itself, as described in the following section. An alternative technique is to
hard-code the listener’s address in the parameter:
SQL> alter system set
  local_listener='(address=(protocol=tcp)(host=127.0.0.1)(port=1522))';

This syntax is perfectly acceptable, but the use of a name that can be resolved is
better practice, as it places a layer of abstraction between the logical name and the
physical address. The abstraction means that if the listening address ever has to be
changed, one need only change it in the name resolution service, rather than having
to change it in every instance that uses it.

Techniques for Name Resolution
At the beginning of this chapter you saw that to establish a session against an instance,
your user process must issue a connect string. That string resolves to the address of a
listener and the name of an instance or service. In the discussion of dynamic instance
registration, you saw again the use of a logical name for a listener, which needs to be
resolved into a network address in order for an instance to find a listener with which
to register. Oracle provides four methods of name resolution: easy connect, local
naming, directory naming, and external naming. It is probably true to say that the
majority of Oracle sites use local naming, but there is no question that directory
naming is the best method for a large and complex installation.

Easy Connect
The Easy Connect name resolution method was introduced with release 10g. It is very
easy to use—it requires no configuration at all. But it is limited to one protocol: TCP.
The other name resolution methods can use any of the other supported protocols,
such as TCP with secure sockets, or Named Pipes. Another limitation is that Easy
Connect cannot be used with any of Oracle Net’s more advanced capabilities, such as
load balancing or connect-time failover across different network routes. It is fair to say
that Easy Connect is a method you as the DBA will find very handy to use, but that it
is not a method of much use for your end users. Easy Connect is enabled by default.
You invoke it with connect string syntax such as
SQL> connect store/admin123@jwlnx1.bplc.co.za:1522/ocp11g

In this example, SQL*Plus will use TCP to go to port 1522 on the IP address to which
the hostname resolves. Then if there is a listener running on that port and address, it
will ask the listener to spawn a server process against an instance that is part of the
service called ocp11g. Easy Connect can be made even easier:
SQL> connect store/admin123@jwlnx1.bplc.co.za

This syntax will also work, but only if the listener running on this hostname is using
port 1521, and the service name registered with the listener is jwlnx1.bplc.co.za, the
same as the computer name.

Local Naming
With local naming the user supplies an alias, known as an Oracle Net service alias, for
the connect string, and the alias is resolved by a local file into the full network address
(protocol, address, port, and service or instance name). This local file is the infamous
tnsnames.ora file, which has caused DBAs much grief over the years. Consider this
example of a tnsnames.ora file:
ocp11g =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = jwlnx1.bplc.co.za)(PORT = 1522))
    )
    (CONNECT_DATA =
      (service_name = ocp11g)
    )
  )
test =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = serv2.bplc.co.za)(PORT = 1521))
    )
    (CONNECT_DATA =
      (sid = testdb)
    )
  )

This tnsnames.ora file has two Oracle Net service aliases defined within it:
ocp11g and test. These aliases are what your users will provide in their connect
statements. The first entry, ocp11g, simply says that when the connect string
“@ocp11g” is issued, your user process should use the TCP protocol to go to the
machine jwlnx1.bplc.co.za, contact it on port 1522, and ask the listener monitoring
that port to establish a session against the instance with the service name ocp11g.
The second entry, test, directs users to a listener on a different machine and asks for
a session against the instance called testdb.
TIP There need be no relationship between the alias, the service name, and
the instance name, but for the sake of your sanity you will usually keep them
the same.
Local naming supports all protocols and all the advanced features of Oracle Net,
but maintaining tnsnames.ora files on all your client machines can be an extremely
time-consuming task. The tnsnames.ora file is also notoriously sensitive to apparently
trivial variations in layout. Using the GUI tools will help avoid such problems.

Directory Naming and External Naming
Directory naming points the user toward an LDAP directory server to resolve aliases.
LDAP (the Lightweight Directory Access Protocol) is a widely used standard that Oracle
Corporation (and other mainstream software vendors) is encouraging organizations
to adopt. To use directory naming, you must first install and configure a directory
server somewhere on your network. Oracle provides an LDAP server (the Oracle
Internet Directory) as part of the Oracle Application Server, but you do not have
to use that—if you already have a Microsoft Active Directory, that will be perfectly
adequate. IBM and Novell also sell directory servers conforming to the LDAP
standard.
Like local naming, directory naming supports all Oracle Net features—but unlike
local naming, it uses a central repository, the directory server, for all your name
resolution details. This is much easier to maintain than many tnsnames.ora files
distributed across your whole user community.
External naming is conceptually similar to directory naming, but it uses third-party
naming services such as Sun’s Network Information Services (NIS+) or the Cell
Directory Services (CDS) that are part of the Distributed Computing Environment (DCE).
The use of directories and external naming services is beyond the scope of the
OCP syllabus.

The Listener Control Utility
You can start and stop listeners through Database Control, but there is also a
command-line utility, lsnrctl (it is lsnrctl.exe on Windows). The lsnrctl
commands can be run directly from an operating system prompt, or through a simple
user interface. For all the commands, you must specify the name of the listener, if it is
not the default name of LISTENER. Figures 4-6 and 4-7 show how to check the status
of a listener and to stop and start it, issuing the commands either from the operating
system prompt or from within the user interface.
Note that the status command always tells you the address on which the
listener accepts connection requests, the name and location of the listener.ora
file that defines the listener, and the name and location of the log file for the listener.
Also, in the examples shown in the figures, the listener LIST2 “supports no services.”
This is because there are no services statically registered in the listener.ora file for
that listener, and no instances have dynamically registered either. Figure 4-8 uses the
services command to show the state of the listener after an instance has registered
dynamically.
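In outline, the commands shown in the figures are as follows (LIST2 being the non-default listener name from the earlier listener.ora example):

```shell
lsnrctl status list2     # report listening address, listener.ora path, log file
lsnrctl services list2   # fuller detail on the registered services
lsnrctl stop list2       # stop the listener (existing sessions are unaffected)
lsnrctl start list2      # start it again
lsnrctl                  # or enter the user interface, then type: status list2
```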

Figure 4-6  Using lsnrctl commands from the operating system prompt to check the status and then start the listener LIST2


Figure 4-7 Using the lsnrctl user interface to check the status and then stop the listener LIST2
Figure 4-8  The services command shows the services for which the listener will accept connections.

In Figure 4-8, the output of the services command tells you that the listener
called LISTENER supports three services, all available on the instance orcl11g:
• Service orcl11g.jwlnx1.bplc.co.za is the regular database service. The listener
can launch dedicated server sessions against it (it hasn’t launched any yet).
• Service orcl11gXDB.jwlnx1.bplc.co.za is the XML database protocol server. This
lets users connect to the database with protocols other than Oracle Net, such as
FTP and HTTP.
• Service orcl11g_XPT.jwlnx1.bplc.co.za has to do with Data Guard.
By default, an 11g database instance will register the XDB and XPT services, but
they cannot be used without considerable further configuration. The fact that the
services are shown to be “status ready” indicates that they were automatically
registered by the PMON process: the listener knows they are ready because PMON
said they were. If the services had been statically registered, they would be marked as
“status unknown,” indicating that while they are listed in the listener.ora file,
they may not in fact be working.
To see all the lsnrctl commands, use the HELP command:
C:\>lsnrctl help
LSNRCTL for 32-bit Windows: Version 11.1.0.4.0 - Beta on 26-NOV-2007 17:47:16
Copyright (c) 1991, 2006, Oracle. All rights reserved.
The following operations are available
An asterisk (*) denotes a modifier or extended command:
start
stop
status
services
version
reload
save_config
trace
change_password
quit
exit
set*
show*

In summary, these commands are

• START  Start a listener.
• STOP  Stop a listener.
• STATUS  See the status of a listener.
• SERVICES  See the services a listener is offering (fuller information than STATUS).
• VERSION  Show the version of a listener.
• RELOAD  Force a listener to reread its entry in listener.ora.
• SAVE_CONFIG  Write any changes made online to the listener.ora file.
• TRACE  Enable tracing of a listener’s activity.
• CHANGE_PASSWORD  Set a password for a listener’s administration.
• QUIT  Exit from the tool without saving changes to the listener.ora file.
• EXIT  Exit from the tool and save changes to the listener.ora file.
• SET  Set various options, such as tracing and timeouts.
• SHOW  Show options that have been set for a listener.

Note that all these commands should be qualified with the name of the listener to
which the command should be applied. If a name is not supplied, the command will
be executed against the listener called LISTENER.

Configuring Service Aliases
Having decided what name resolution method to use, your next task is to configure
the clients to use it. You can do this through Database Control, but since Database
Control is a server-side process, you can use it only to configure clients running on
the database server. An alternative is to use the Net Manager. This is a stand-alone
Java utility, shipped with all the Oracle client-side products.
To launch the Net Manager, run netmgr from a Unix prompt, or on Windows
you will find it on the Start menu.
The Net Manager navigation tree has three branches. The Profile branch is used to
set options that may apply to both the client and server sides of Oracle Net and can
be used to influence the behavior of all Oracle Net connections. This is where, for
example, you can configure detailed tracing of Oracle Net sessions. The Service Naming
branch is used to configure client-side name resolution, and the Listeners branch is
used to configure database listeners.
When you select the Profile branch as shown in Figure 4-9, you are in fact
configuring a file called sqlnet.ora. This file exists by default in your
ORACLE_HOME/network/admin directory. It is optional, as there are defaults for
every sqlnet.ora directive, but you will usually configure it, if only to select the
name resolution method.

Figure 4-9  Net Manager’s Profile editor

In the Profile branch, you will see all the available naming methods, with three
(TNSNAMES, EZCONNECT, and HOSTNAME) selected by default: these are Local
Naming, Easy Connect, and Host Naming. The external methods are NIS and CDS.
LDAP is Directory Naming. Host Naming is similar to Easy Connect and is retained
for backward compatibility.
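Behind the checkboxes, the Profile selections are written to sqlnet.ora as a directive along these lines (a sketch; the methods are tried in the order listed):

```
NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT, HOSTNAME)
```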
Then you need to configure the individual Oracle Net service aliases. This is
done in the Service Naming branch, which in fact creates or edits the Local Naming
tnsnames.ora file that resides by default in your ORACLE_HOME/network/admin
directory. If you are fortunate enough to be using Directory Naming, you do not need
to do this; choosing LDAP in the Profile as your naming method is enough.
A typical entry in the tnsnames.ora file would be
OCP11G =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = jwacer.bplc.co.za)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ocp11g)
    )
  )

If a user enters the connect string “ocp11g”, this entry will resolve the name to a
listener running on the address jwacer.bplc.co.za monitoring port 1521, and ask the
listener for a session against an instance offering the service ocp11g. To connect with
this, use
sqlplus system/oracle@ocp11g

The equivalent with Easy Connect would be
sqlplus system/oracle@jwacer.bplc.co.za:1521/ocp11g

To test a connect string, use the TNSPING utility. This will accept a connect string,
locate the Oracle Net files, resolve the string, and send a message to the listener. If the
listener is running and does know about the service requested, the test will return
successfully. For example,
C:\> tnsping ocp11g
TNS Ping Utility for 32-bit Windows: Version 11.1.0.4.0 - Beta on 27-NOV-2007 11:49:55
Copyright (c) 1997, 2006, Oracle. All rights reserved.
Used parameter files:
D:\oracle\app\product\11.1.0\db_3\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)
(HOST = 127.0.0.1)(PORT = 2521))) (CONNECT_DATA = (SERVICE_NAME = ocp11g)))
OK (40 msec)

Note that the output of TNSPING shows the sqlnet.ora file used, the name
resolution method used, and then the details of the address contacted. The tool does not
go further than the listener; it will not check whether the instance is actually working.

Filenames and the TNS_ADMIN Environment Variable
There are three critical files involved in configuring Oracle Net:
• The listener.ora file is a server-side file that defines database listeners.
It includes the protocols, addresses, and ports on which they will listen for
incoming connection requests, and (optionally) a hard-coded list of instances
against which they will launch sessions.
• The tnsnames.ora file is a client-side file used for name resolution. It is
used by user processes to locate database listeners. It may also be used by the
instance itself, to locate a listener with which to register.
• The sqlnet.ora file is optional and may exist (possibly with different
settings) on the server side, the client side, or both. It contains settings that
apply to all connections and listeners, such as security rules and encryption.
The three Oracle Net files by default exist in the directory ORACLE_HOME/network/
admin. It is possible to relocate them with an environment variable: TNS_ADMIN.
An important use of this is on systems that have several Oracle Homes. This is a very
common situation. A typical Oracle server machine will have at least three homes:
one for the Enterprise Manager Grid Control Agent, one for launching database
instances, and one for launching Automatic Storage Management (ASM) instances.
(ASM is covered in the second OCP examination.) Client machines may well have
several Oracle Homes as well, perhaps one each for the 10g and 11g clients. Setting the
TNS_ADMIN variable to point to one set of files in one of the Oracle home directories
(or indeed in a different directory altogether) means that instead of having to maintain
multiple sets of files, you need maintain only one set. To set the variable, on Windows
you can use the SET command to set it for one session,
set TNS_ADMIN=c:\oracle\net

though it will usually be better to set it in the registry, as a string value key in the
Oracle Home branch. On Linux and Unix, the syntax will vary depending on the
shell, but something like this will usually do:
TNS_ADMIN=/u01/oracle/net; export TNS_ADMIN

This command could be placed in each user’s .profile file, or in /etc/profile,
where every user will pick it up.
Figure 4-10 traces the flow of logic utilized to resolve a typical client connection
request.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

152

Figure 4-10 Typical resolution logic for client connection request
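
The decision sequence that Figure 4-10 illustrates can be sketched in Python. This is a simplified model of the logic described in this section; the function and data structures are illustrative, not Oracle's internals:

```python
# Simplified model of client-side connect string resolution: sqlnet.ora's
# NAMES.DIRECTORY_PATH lists the resolution methods to try in order.
def resolve(connect_string, env, sqlnet, tnsnames):
    if connect_string is None:
        connect_string = env.get("ORACLE_SID")  # fallback default
        if connect_string is None:
            raise LookupError("no connect string and no ORACLE_SID")
    for method in sqlnet.get("NAMES.DIRECTORY_PATH", ["TNSNAMES", "EZCONNECT"]):
        if method == "TNSNAMES" and connect_string in tnsnames:
            return tnsnames[connect_string]          # alias -> address
        if method == "EZCONNECT" and ":" in connect_string:
            host, _, rest = connect_string.partition(":")
            port, _, service = rest.partition("/")
            return {"host": host, "port": int(port), "service": service}
    raise LookupError("TNS-03505: Failed to resolve name")

addr = resolve("ocp11g", {}, {"NAMES.DIRECTORY_PATH": ["TNSNAMES", "EZCONNECT"]},
               {"ocp11g": {"host": "127.0.0.1", "port": 2521, "service": "ocp11g"}})
print(addr)
```

The error raised when every method fails corresponds to the TNS-03505 error you will see from TNSPING in Exercise 4-1.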

A user typically initiates a connection to the database server by providing a
username, a password, and a connect string. If a connect string is absent, the Oracle
Net client layer tries to use the ORACLE_SID environment variable or registry variable
as a default connect string value. If this is not set, an error usually results. If a connect
string is available, the Oracle Net client then tries to figure out what mechanism to
utilize to resolve the connect string, and it does this by trying to locate the relevant
sqlnet.ora file, either in the directory specified by the TNS_ADMIN variable or
in the ORACLE_HOME/network/admin directory. If neither the TNS_ADMIN nor
ORACLE_HOME variable is set, an error is returned.
Typically, sqlnet.ora contains a NAMES.DIRECTORY_PATH directive, which
lists, in order of preference, the different connection name resolution mechanisms,
such as TNSNAMES, LDAP, and EZCONNECT. If TNSNAMES is listed as the first preferred
mechanism, Oracle Net then tries to locate the infamous tnsnames.ora file, either
in the directory specified by the TNS_ADMIN variable or in the ORACLE_HOME/
network/admin directory. The tnsnames.ora file is then used to obtain the
network address for the connect string, typically yielding a hostname:port:sid
or hostname:port:servicename triad.
The Oracle Net client is finally in a position to bind the user process that initiated
the connection to the database server. If the connect string contained the "@"
symbol, then the listener on the hostname is contacted on the relevant port, for access
to the specified instance or service. If the listener is functioning correctly, the user
process tries to negotiate a server connection, or else an error is returned. If the
connect string does not contain the "@" symbol, a local IPC connection is
attempted. If the instance or service is available on the same server as the client
user process, then the connection may be successfully made.

Database Links
So far, Oracle Net has been discussed in the context of users connecting to database
instances. Oracle Net can also be used for communications between databases: a user
session against one database can execute SQL statements against another database.
This is done through a database link. There are several options for creating database
links (all to do with security), but a simple example is
create database link prodstore
connect to store identified by admin123 using 'prod';

This defines a database link from the current database to a remote database
identified by the connect string PROD. The link exists in and can only be used by
the current user's schema. When a statement such as
select * from orders@prodstore;

is issued, the user's session will launch a session against the remote database, log on
to it transparently as user STORE, and run the query there. The results will be sent
back to the local database and then returned to the user.
Any SQL statements can be executed through a link, provided that the schema
to which the link connects has appropriate permissions. For example, consider this
scenario:
There is a production database, identified by the connect string PROD, which
contains a schema STORE, with two tables: ORDERS and CUSTOMERS. There is a link
to this database as just defined. There is also a development database, identified by the
connect string DEV, which also contains the schema STORE. You are connected to a
third database called TEST. You need to update the development schema with the
production data.
First, define a database link to the development database:
create database link devstore
connect to store identified by devpasswd using 'dev';

Then update the development schema to match the production schema:
truncate table orders@devstore;
truncate table customers@devstore;
insert into orders@devstore select * from orders@prodstore;
insert into customers@devstore select * from customers@prodstore;
commit;

To check whether any rows have been inserted in the production system since the
last refresh of development and, if so, insert them into development, you could run
this statement:
insert into orders@devstore
(select * from orders@prodstore
 minus
 select * from orders@devstore);

If it were necessary to change the name of a customer, you could do it in both
databases concurrently with
update customers@prodstore set customer_name='Coda' where customer_id=10;
update customers@devstore set customer_name='Coda' where customer_id=10;
commit;

When necessary, Oracle will always implement a two-phase commit to ensure that
a distributed transaction (which is a transaction that affects rows in more than one
database) is treated as an atomic transaction: the changes must succeed in all databases
or be rolled back in all databases. Read consistency is also maintained across the
whole environment.
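
The incremental refresh pattern shown above (insert the MINUS of the two tables) can be modeled with set arithmetic. Here is a sketch using Python sets in place of the two remote ORDERS tables; the sample rows are invented for illustration:

```python
# Model of the incremental refresh: production rows not yet in development
# are computed as a set difference (the SQL MINUS) and then inserted.
prod_orders = {(1, "widget"), (2, "gadget"), (3, "sprocket")}
dev_orders = {(1, "widget"), (2, "gadget")}

missing = prod_orders - dev_orders   # select * from orders@prodstore
                                     #   minus select * from orders@devstore
dev_orders |= missing                # insert into orders@devstore (...)

print(sorted(dev_orders))
```

The set difference captures exactly the rows inserted into production since the last refresh, so reapplying the operation when nothing has changed inserts nothing.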
Exercise 4-1: Configure Oracle Net In this exercise, you will set up a
complete Oracle Net environment, using graphical and command-line tools.
Differences between Windows and Linux will be pointed out.
1. Create a directory to be used for the Oracle Net configuration files, and set the
TNS_ADMIN variable to point to this. It doesn’t matter where the directory is,
as long as the Oracle user has permission to create, read, and write it.
On Linux:
mkdir /u01/oracle/net
export TNS_ADMIN=/u01/oracle/net


On Windows:
mkdir d:\oracle\net

Create and set the key TNS_ADMIN as a string value in the registry in the
Oracle Home branch. This will typically be
HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_OraDb11g_home1

Ensure that all work from now on is done from a session where the variable has
been set.
2. Check that the variable is being read by using the TNSPING command from
an operating system prompt:
tnsping orcl

This will return an error "TNS-03505: Failed to resolve name" because there
are no files in the TNS_ADMIN directory. On Windows, you may need to
launch a new command prompt to pick up the new TNS_ADMIN value from
the registry.
3. Start the Net Manager. On Linux, run netmgr from an operating system
prompt; on Windows, launch it from the Start menu. The top line of the Net
Manager window will show the location of the Oracle Net files. If this is not
the new directory, then the TNS_ADMIN variable has not been set correctly.
4. Create a new listener: expand the Local branch of the navigation tree,
highlight Listeners, and click the + icon.
5. Enter a listener name, NEWLIST, and click OK.
6. Click Add Address.
7. For Address 1, choose TCP/IP as the protocol and enter 127.0.0.1 as the host,
2521 as the port. The illustration that follows shows the result.

8. Create a new service name: highlight Service Naming in the navigation tree,
and click the + icon.
9. Enter NEW as the net service name, and click Next.
10. Select TCP/IP as the protocol, and click Next.
11. Enter 127.0.0.1 as the host name and 2521 as the port and click Next.
12. Enter SERV1 as the service name, and click Next.
13. Click Finish. If you try the test, it will fail at this time. The illustration that
follows shows the result.

14. Save the configuration by clicking File and Save Network Configuration. This
will create the listener.ora and tnsnames.ora files in the TNS_ADMIN
directory.
15. Use an editor to check the two files. They will look like this:
LISTENER.ORA:
NEWLIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 2521))
)

TNSNAMES.ORA:
NEW =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 2521))
)
(CONNECT_DATA =
(SERVICE_NAME = SERV1)
)
)

16. From an operating system prompt, start the listener with lsnrctl start
newlist.
17. From an operating system prompt, test the connect string with tnsping new.
18. Connect to your database using operating system authentication, bypassing
any listener, with sqlplus / as sysdba.
19. Set the service_names and local_listener parameters for the running
instance (memory only, not the parameter file) and register the new service
name with the new listener:
alter system set service_names=serv1 scope=memory;
alter system set local_listener=new scope=memory;
alter system register;

20. From an operating system prompt, confirm that the new service has registered
with the new listener with lsnrctl services newlist.
21. Confirm that the new network environment is functional by logging on:
sqlplus system/oracle@new

Use the Oracle Shared Server Architecture
The standard dedicated server architecture requires that the database listener
spawn a dedicated server process for each concurrent connection to the instance. These
server processes will persist until the session is terminated. On Unix-type platforms,
the server processes are real operating system processes; on Windows, they are threads
within the one ORACLE.EXE process. This architecture does not scale easily to support
a large number of user processes on some platforms. An alternative is the shared server
architecture, known as the multithreaded server (or MTS) in earlier releases.

The Limitations of Dedicated Server Architecture
As more users log on to your instance, more server processes get launched. This is not
a problem as far as Oracle is concerned. The database listener can launch as many
processes as required, though there may be limits on the speed with which it can
launch them. If you have a large number of concurrent connection requests, your
listener will have to queue them up. You can avoid this by running multiple listeners
on different ports, and load-balancing between them. Once the sessions are
established, there is no limit to the number that PMON can manage. But your
operating system may well have limits on the number of processes that it can
support, limits to do with context switches and with memory.
A computer can only do one thing at once, unless it is an SMP machine, in which
case each CPU can only do one thing at once. The operating system simulates concurrent
processing by using an algorithm to share CPU cycles across all the currently executing
processes. This algorithm, often referred to as a time slicing or time sharing algorithm,
takes care of allocating a few CPU cycles to each process in turn. The act of taking
one process off CPU in order to put another process on CPU is called a context switch.
Context switches are very expensive: the operating system has to do a lot of work to
restore the state of each process as it is brought on to CPU and then save its state when
it is switched off the CPU. As more users connect to the instance, the operating system
has to context-switch between more and more server processes. Depending on your
operating system, this can cause a severe degradation in performance. A decent mainframe
operating system can context-switch between tens of thousands of processes without
problems, but newer (and simpler) operating systems such as Unix and Windows may
not be good at running thousands, or even just hundreds, of concurrent processes.
Performance degrades dramatically, because a large proportion of the computer’s
processing capacity is taken up with managing the context switches, leaving a relatively
small amount of processing capacity available for actually doing work.
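
The effect can be illustrated with a toy model, sketched here in Python. It is purely illustrative (no real scheduler works this simply): if the scheduler must cycle through more processes within a fixed response window, each time slice shrinks while the per-switch cost stays fixed, so the overhead fraction climbs:

```python
# Toy time-slicing model: to visit every process within a fixed response
# window, the slice per process shrinks as process count grows, while the
# context-switch cost stays fixed, so overhead dominates. Numbers invented.
def overhead_fraction(num_processes, response_window_ms=100.0,
                      switch_cost_ms=0.1):
    slice_ms = response_window_ms / num_processes
    return switch_cost_ms / (slice_ms + switch_cost_ms)

for n in (10, 1000, 100000):
    print(n, round(overhead_fraction(n), 4))
```

With the assumed numbers, ten processes waste about one percent of CPU on switching, while a thousand waste half of it, which matches the qualitative argument above.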
There may also be memory problems that occur as more sessions are established.
The actual server processes themselves are not an issue, because all modern operating
systems use shared memory when the same process is loaded more than once. So
launching a thousand server processes should take no more memory than launching
one. The problem comes with the program global area, or PGA. The PGA is a block of
memory associated with each server process, to maintain the state of the session and
as a work area for operations such as sorting rows. Clearly, the PGAs cannot be in
shared memory: they contain data unique to each session. In many operating systems,
as memory thresholds are reached, swap space or paging areas on disk come into
use: memory pages are swapped out to disk to make room for the memory requirements
of other processes. When the memory pages that have been swapped out to disk are
required, they are swapped back into memory and something else is swapped out to
disk. Excessive swapping can be catastrophic for the performance of your system. Due
to the PGA requirements of each session, your system may begin to swap as more
users log on.
So in the dedicated server environment, performance may degrade if your operating
system has problems managing a large number of concurrent processes, and the
problem will be exacerbated if your server machine has insufficient memory. Note
that it doesn’t really matter whether the sessions are actually doing anything or not.
Even if the sessions are idle, the operating system must still bring them on and off
CPU, and possibly page the appropriate PGA into main memory from swap files,
according to its time slicing algorithm. There comes a point when, no matter what
you do in the way of hardware upgrades, performance begins to degrade because of
operating system inefficiencies in managing context switches and paging. These are
not Oracle’s problems, but to overcome them Oracle offers the option of the shared
server architecture. This allows a large number of user processes to be serviced by a
relatively small number of shared server processes, thus reducing dramatically the
number of processes that the server’s operating system has to manage. As a fringe
benefit, memory usage may also reduce.
Always remember that the need for a shared server is very much platform and
installation specific. Some operating systems will hardly ever need it. For example, a
mainframe computer can time-share between many thousands of processes with no
problems—it is usually simpler operating systems like Windows or Unix that are more
likely to have problems.

The Shared Server Architecture
One point to emphasize immediately is that shared server is implemented purely on
the server side. The user process and the application software have no way of telling
that anything has changed. The user process issues a connect string that must resolve
to the address of a listener and the name of a service (or of an instance). In return, it
will receive the address of a server-side process that it will think is a dedicated server.
It will then proceed to send SQL statements and receive back result sets; as far as the
user process is concerned, absolutely nothing has changed. But the server side is very
different.
Shared server is implemented by additional processes that are a part of the instance.
They are background processes, launched at instance startup time. There are two new
process types, dispatchers and shared servers. There are also some extra queue memory
structures within the SGA, and the database listener modifies its behavior for shared
server. When an instance that is configured for shared server starts up, in addition to the
usual background processes one or more dispatcher processes also start. The dispatchers,
like any other TCP process, run on a unique TCP port allocated by your operating
system's port mapper: they contact the listener and register with it, using the
local_listener parameter to locate the listener. One or more shared server processes also
start. These are conceptually similar to a normal dedicated server process, but they are
not tied to one session. They receive SQL statements, parse and execute them, and
generate a result set—but they do not receive the SQL statements directly from a user
process; they read them from a queue that is populated with statements from any
number of user processes. Similarly, the shared servers don't fetch result sets back to
a user process directly—instead, they put the result sets onto a response queue.
The next question is, how do the user-generated statements get onto the queue
that is read by the server processes, and how do results get fetched to the users? This
is where the dispatchers come in. When a user process contacts a listener, rather than
launching a server process and connecting it to the user process, the listener passes
back the address of a dispatcher. If there is only one dispatcher, the listener will
connect it to all the user processes. If there are multiple dispatchers, the listener will
load-balance incoming connection requests across them, but the end result is that
many user processes will be connected to each dispatcher. Each user process will be
under the impression that it is talking to a dedicated server process, but it isn't: it is
sharing a dispatcher with many other user processes. At the network level, many user
processes will have connections multiplexed through the one port used by the dispatcher.

EXAM TIP A session's connection to a dispatcher persists for the duration of
the session, unlike the connection to the listener, which is transient.

When a user process issues a SQL statement, it is sent to the dispatcher. The
dispatcher puts all the statements it receives onto a queue. This queue is called the
common queue, because all dispatchers share it. No matter which dispatcher a user
process is connected to, all statements end up on the common queue.
All the shared server processes monitor the common queue. When a statement
arrives on the common queue, the first available shared server picks it up. From then
execution proceeds through the usual parse-bind-execute cycle, but when it comes to
the fetch phase, it is impossible for the shared server to fetch the result set back to the
user process: there is no connection between the user process and the shared server.
So instead, the shared server puts the result set onto a response queue that is specific
to the dispatcher that received the job in the first place. Each dispatcher monitors its
own response queue, and whenever any results are put on it, the dispatcher will pick
them up and fetch them back to the user process that originally issued the statement.
Figure 4-11 depicts three user processes making use of shared server mode.
User processes 1 and 2 try to connect to an instance or service and are handed
over to Dispatcher 1 by the listener, while user process 3 interacts with the instance
via Dispatcher 2.
A. User process 1 submits a statement for execution.
B. Dispatcher 1 places the statement onto the common queue.
C. A shared server process picks up the statement from the common request queue,
parses it, executes it, and generates a result set.
D. The shared server places the result set in Dispatcher 1's response queue.
E. Dispatcher 1 fetches the result set from its response queue.
F. Dispatcher 1 returns the results to User process 1.
G–L. These steps are identical to steps A–F but apply to User process 2. Note that
Dispatcher 1 services both these user processes.
M. User process 3 submits a statement for execution.
N. Dispatcher 2 places the statement onto the common queue.
O. A shared server process picks up the statement from the common request queue,
parses it, executes it, and generates a result set.
P. The shared server places the result set in Dispatcher 2's response queue. Note that
this shared server process could be the very same process that performed preceding
steps C, I, O, D, and J.
Q. Dispatcher 2 fetches the result set from its response queue.
R. Dispatcher 2 returns the results to User process 3.

EXAM TIP There is a common input queue shared by all dispatchers, but each
dispatcher has its own response queue.
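
The queue mechanics in steps A through R can be modeled with ordinary thread-safe queues. Below is a minimal sketch, assuming nothing Oracle-specific: one common request queue shared by all shared servers, and a per-dispatcher response queue:

```python
import queue
import threading

# One common request queue shared by all dispatchers; each dispatcher owns
# its own response queue, mirroring the shared server architecture.
common_queue = queue.Queue()
response_queues = {"D000": queue.Queue(), "D001": queue.Queue()}

def shared_server():
    while True:
        dispatcher, statement = common_queue.get()  # first free server wins
        result = f"rows for [{statement}]"          # parse/execute phase
        response_queues[dispatcher].put(result)     # to the right dispatcher
        common_queue.task_done()

threading.Thread(target=shared_server, daemon=True).start()

# A dispatcher enqueues a statement on behalf of a user process...
common_queue.put(("D000", "select * from orders"))
common_queue.join()
# ...then picks the result off its own response queue for that user process.
print(response_queues["D000"].get())
```

Tagging each request with the originating dispatcher is what lets any available server execute the statement while the results still find their way back to the right user process.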

Figure 4-11 Shared server mode

A result of the mechanism of dispatchers and queues is that any statement from
any user process could be executed by any available shared server. This raises the
question of how the state of the session can be maintained. It would be quite possible
for a user process to issue, for example, a SELECT FOR UPDATE, a DELETE, and a
COMMIT. In a normal dedicated server connection, this isn’t a problem because the
PGA (which is tied to the one server process that is managing the session) stores
information about what the session was doing, and therefore the dedicated server will
know what to COMMIT and what locks to release. The PGA for a dedicated server
session will store the session’s session data, its cursor state, its sort space, and its stack
space. But in the shared server environment, each statement might be picked off the
common queue by a different shared server process, which will have no idea what the
state of the transaction is. To get around this problem, a shared server session stores
most of the session data in the SGA, rather than in a PGA. Then whenever a shared
server picks a job off the common queue, it will go to the SGA and connect to the
appropriate block of memory to find out the state of the session. The memory used
in the SGA for each shared server session is known as the user global area (the UGA)
and includes all of what would have been in a PGA with the exception of the session’s
stack space. This is where the memory saving will come from. Oracle can manage
memory in the shared pool much more effectively than it can in many separate PGAs.
The part of the SGA used for storing UGAs is the large pool. This can be configured
manually with the large_pool_size parameter, or it can be automatically managed.
EXAM TIP In shared server, what PGA memory structure does not go into
the SGA? The stack space.

Configuring Shared Server
Being a server-side capability, no additional client configuration is needed beyond the
regular client-side Oracle Net (the tnsnames.ora and sqlnet.ora files) as detailed
previously. On the server side, shared server has nothing to do with the database—only
the instance. The listener will be automatically configured for shared server through
dynamic instance registration. It follows that shared server is configured through instance
initialization parameters. There are a number of relevant parameters, but two are all
that are usually necessary: dispatchers and shared_servers.
The first parameter to consider is shared_servers. This controls the number
of shared servers that will be launched at instance startup time. Shared server uses a
queuing mechanism, but the ideal is that there should be no queuing: there should
always be a server process ready and waiting for every job that is put on the common
queue by the dispatchers. Therefore, shared_servers should be set to the maximum
number of concurrent requests that you expect. But if there is a sudden burst of activity,
you don’t have to worry too much, because Oracle will dynamically launch additional
shared servers, up to the value specified by max_shared_servers. By default,
shared_servers is one if dispatchers is set. If the parameter
max_shared_servers is not set, then it defaults to one eighth of the processes parameter.
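
These defaults are easy to sanity-check arithmetically. The helper below (the function name is invented for illustration) simply applies the rules just stated:

```python
# Apply the default rules stated in the text: shared_servers defaults to 1
# when dispatchers is set, and max_shared_servers (when unset) defaults to
# one eighth of the processes parameter. Helper name is hypothetical.
def shared_server_defaults(processes, dispatchers_set=True,
                           max_shared_servers=None):
    shared_servers = 1 if dispatchers_set else 0
    if max_shared_servers is None:
        max_shared_servers = processes // 8
    return shared_servers, max_shared_servers

print(shared_server_defaults(150))
```

So with processes set to 150 and no explicit settings, Oracle could grow the pool to 18 shared servers on demand.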
The dispatchers parameter controls how many dispatcher processes to launch
at instance startup time, and how they will behave. This is the only required parameter.
There are many options for this parameter, but usually two will suffice: how many to
start, and what protocol they should listen on. Among the more advanced options are
ones that allow you to control the port and network card on which the dispatcher will
listen, and the address of the listener(s) with which it will register, but usually you can
let your operating system’s port mapper assign a port, and use the local_listener
parameter to control which listener they will register with. The max_dispatchers
parameter sets an upper limit to the number of dispatchers you can start, but unlike
with shared servers, Oracle will not start extra dispatchers on demand. You can, however,
manually launch additional dispatchers at any time up to this limit.
For example, to enable the shared server architecture, adjust the two critical
parameters as follows:
SQL> alter system set dispatchers='(dispatchers=2)(protocol=tcp)';
SQL> alter system set shared_servers=20;

Tuning the shared server is vital. There should always be enough shared servers
to dequeue requests from the common queue as they arrive, and enough dispatchers to
service incoming requests as they arrive and return results as they are enqueued to the
response queues. Memory usage by shared server sessions in the SGA must be monitored.
After converting from dedicated server to shared server, the SGA will need to be
substantially larger.

When to Use the Shared Server
You will not find a great deal of hard advice in the Oracle documentation on when to
use shared server, or how many dispatchers and shared servers you’ll need. The main
point to hang on to is that shared server is a facility you use because you are forced to,
not something you use automatically. It increases scalability, but it could potentially

reduce performance. It is quite possible that any one statement will take longer to
execute in a shared server environment than if it were executing on a dedicated server,
because it has to go via queues. It may also take more CPU resources because of the
enqueuing and dequeuing activity. But overall, the scalability of your system will
increase dramatically. Even if each request is marginally slower, you will be able to
carry out many more requests per second through the instance.

TIP It is often said that you should think about using shared server when
your number of concurrent connections is in the low hundreds. If you have
fewer than a hundred concurrent connections, you almost certainly don't need
it. But if you have more than a thousand, you probably do. The critical factor is
whether your operating system performance is beginning to degrade.

Consider an OLTP environment, such as one where the application supports
hundreds of telephone operators in a call center. Each operator may spend one or two
minutes per call, collecting the caller details and entering them into the user process
(their application session). Then when the Submit button is clicked, the user process
constructs an insert statement and sends it off to the server process. The server process
might go through the whole parse/bind/execute/fetch cycle for the statement in just a
few hundredths of a second. Clearly, no matter how fast the clerks work, their server
processes are idle 99.9 percent of the time. But the operating system still has to switch
all those processes on and off CPU, according to its time sharing algorithm. By contrast,
consider a data warehouse environment. Here, users submit queries that may run for
a long time. The batch uploads of data will be equally long running. Whenever one of
these large jobs is submitted, the server process for that session could be working flat
out for hours on just one statement.
It should be apparent that shared server is ideal for managing many sessions doing
short transactions, where the bulk of the work is on the client side of the client-server
divide. In these circumstances, one shared server will be able to service dozens of
sessions. But for batch processing work, dedicated servers are much better. If you
submit a large batch job through a shared server session, it will work—but it will
tie up one of your small pool of shared server processes for the duration of the job,
leaving all your other users to compete for the remaining shared servers. The amount
of network traffic involved in batch uploads from a user process and in fetching large
result sets back to a user process will also cause contention for dispatchers.
A second class of operations that are better done through a dedicated server is
database administration work. Index creation, table maintenance operations, and
backup and recovery work through the Recovery Manager will perform much better
through a dedicated server. And it is logically impossible to issue startup or shutdown
commands through a shared server: the shared servers are part of the instance and
thus not available at the time you issue a startup command. So the administrator
should always have a dedicated server connection.
TIP If the default mode has been changed to shared server, batch and
administration user processes can ensure that they are serviced by dedicated
server processes by adding the entry (SERVER=DEDICATED) to the relevant
connect string in their client-side tnsnames.ora.
Exercise 4-2: Set Up a Shared Server Environment In this exercise, which
continues from Step 21 of Exercise 4-1, you will configure the shared server and prove
that it is working.
1. Set the dispatchers and shared_servers parameters and register with
the listener as follows:
alter system set dispatchers='(protocol=tcp)(dispatchers=2)'
scope=memory;
alter system set shared_servers=4 scope=memory;
alter system register;

2. Confirm that the dispatchers and shared servers have started by querying the
view V$PROCESS. Look for processes named S000, S001, S002, S003, D000,
and D001:
select program from v$process order by program;

3. From an operating system prompt, confirm that the dispatchers have
registered with the listener:
lsnrctl services newlist

4. Connect through the listener, and confirm that the connection is through the
shared server mechanism:
connect system/oracle@new;
select d.name,s.name from v$dispatcher d,v$shared_server s, v$circuit c
where d.paddr=c.dispatcher and s.paddr=c.server;

This query will show the dispatcher to which your session is connected, and
the shared server process that is executing your query.
5. Tidy up the environment by returning to the original configuration:
alter system set local_listener='' scope=memory;
alter system set service_names='' scope=memory;
alter system set dispatchers='' scope=memory;
alter system set shared_servers=0 scope=memory;
alter system register;

Stop the listener from an operating system prompt with lsnrctl stop
newlist.
Unset the TNS_ADMIN variable: on Linux, export TNS_ADMIN='' or on
Windows, remove the TNS_ADMIN registry key.

Chapter 4: Oracle Networking

Two-Minute Drill

Configure and Manage the Oracle Network
• The server-side files are the listener.ora and (optionally)
sqlnet.ora files.
• The client-side files are the tnsnames.ora and (optionally)
sqlnet.ora files.
• The Oracle Net files live by default in ORACLE_HOME/network/admin, or in
whatever directory the TNS_ADMIN variable points to.
• Name resolution can be local (with a tnsnames.ora file) or central (with
an LDAP directory).
• Easy Connect does not need any name resolution.
• One listener can listen for many databases.
• Many listeners can connect to one database.
• Instance registration with listeners can be static (by coding details in the
listener.ora file) or dynamic (by the PMON process updating
the listener).
• Each user process has a persistent connection to its dedicated server process.

Use the Oracle Shared Server Architecture
• User processes connect to dispatchers; these connections are persistent.
• All dispatchers place requests on a common queue.
• Shared server processes dequeue requests from the common queue.
• Each dispatcher has its own response queue.
• Shared server processes place results onto the appropriate dispatcher’s
response queue.
• The dispatchers fetch results back to the appropriate user process.
• Shared server is configured with (as a minimum) two instance parameters:
dispatchers and shared_servers.

Self Test
1. Which protocols can Oracle Net 11g use? (Choose all correct answers.)
A. TCP
B. UDP
C. SPX/IPX
D. SDP
E. TCP with secure sockets
F. Named Pipes
G. LU6.2
H. NetBIOS/NetBEUI
2. Where is the division between the client and the server in the Oracle
environment? (Choose the best answer.)
A. Between the instance and the database
B. Between the user and the user process
C. Between the server process and the instance
D. Between the user process and the server process
E. The client-server split varies depending on the stage of the
execution cycle
3. Which of the following statements about listeners is correct? (Choose
the best answer.)
A. A listener can connect you to one instance only.
B. A listener can connect you to one service only.
C. Multiple listeners can share one network interface card.
D. An instance will only accept connections from the listener specified
on the local_listener parameter.
4. You have decided to use Local Naming. Which files must you create on the
client machine? (Choose the best answer.)
A. tnsnames.ora and sqlnet.ora
B. listener.ora only
C. tnsnames.ora only
D. listener.ora and sqlnet.ora
E. None—you can rely on defaults if you are using TCP and your listener is
running on port 1521

5. If you stop your listener, what will happen to sessions that connected through
it? (Choose the best answer.)
A. They will continue if you have configured failover.
B. They will not be affected in any way.
C. They will hang until you restart the listener.
D. You cannot stop a listener if it is in use.
E. The sessions will error out.
6. Study this tnsnames.ora file:
test =
  (description =
    (address_list =
      (address = (protocol = tcp)(host = serv2)(port = 1521))
    )
    (connect_data =
      (service_name = prod)
    )
  )
prod =
  (description =
    (address_list =
      (address = (protocol = tcp)(host = serv1)(port = 1521))
    )
    (connect_data =
      (service_name = prod)
    )
  )
dev =
  (description =
    (address_list =
      (address = (protocol = tcp)(host = serv2)(port = 1521))
    )
    (connect_data =
      (service_name = dev)
    )
  )

Which of the following statements is correct about the connect strings test,
prod, and dev? (Choose all correct answers.)
A. All three are valid.
B. All three can succeed only if the instances are set up for dynamic instance
registration.
C. The test connection will fail, because the connect string doesn’t match the
service name.
D. There will be a port conflict on serv2, because prod and dev try to use the
same port.
7. Consider this line from a listener.ora file:
L1=(description=(address=(protocol=tcp)(host=serv1)(port=1521)))

What will happen if you issue this connect string? (Choose the best answer.)
connect scott/tiger@L1

A. You will be connected to the instance L1.
B. You will only be connected to an instance if dynamic instance registration
is working.
C. The connection attempt will fail.
D. If you are logged on to the server machine, IPC will connect you to the
local instance.
E. The connection will fail if the listener is not started.
8. Which of these memory structures is not stored in the SGA for a shared server
session? (Choose the best answer.)
A. Cursor state
B. Sort space
C. Stack space
9. Match the object to the function:

Object               Function
a. Common queue      A. Connects users to dispatchers
b. Dispatcher        B. Stores jobs waiting for execution
c. Large pool        C. Executes SQL statements
d. Listener          D. Stores results waiting to be fetched
e. Response queue    E. Receives statements from user processes
f. Shared server     F. Stores UGAs accessed by all servers

10. Which of the following is true about dispatchers? (Choose all correct answers.)
A. Dispatchers don’t handle the work of users’ requests; they only interface
between user processes and queues.
B. Dispatchers share a common response queue.
C. Dispatchers load-balance connections between themselves.
D. Listeners load-balance connections across dispatchers.
E. You can terminate a dispatcher, and established sessions will continue.

11. Which of the following statements about shared servers are true? (Choose the
best answer.)
A. All statements in a multistatement transaction will be executed by the
same server.
B. If one statement updates multiple rows, the work may be shared across
several servers.
C. The number of shared servers is fixed by the SHARED_SERVERS parameter.
D. Oracle will spawn additional shared servers on demand.

Self Test Answers
1. ✓ A, D, E, and F. TCP, SDP, TCPS, and NMP are the supported protocols
with the current release.
✗ B, C, G, and H. B and H are wrong because UDP and NetBIOS/NetBEUI
have never been supported. C and G are wrong because SPX and LU6.2 are no
longer supported.
2. ✓ D. The client-server split is between user process and server process.
✗ A, B, C, and E. These all misrepresent the client-server architecture.
3. ✓ C. Many listeners can share one address, if they use different ports.
✗ A, B, and D. A is wrong because one listener can launch sessions against
many instances. B is wrong because a listener can connect you to any registered
service. D is wrong because the local_listener parameter controls
which listener the instance will register with dynamically; it will also accept
connections from any listener that has it statically registered.
4. ✓ C. This is the only required client-side file for local naming.
✗ A, B, D, and E. A is wrong because sqlnet.ora is not essential. B and
D are wrong because they refer to server-side files. E is wrong because some
configuration is always necessary for local naming (though not for Easy
Connect).
5. ✓ B. The listener establishes connections but is not needed for their
maintenance.
✗ A, C, D, and E. These are all incorrect because they assume that the
listener is necessary for the continuance of an established session.
6. ✓ A and B. All three are valid but will only work if the services are registered
with the listeners.
✗ C and D. C is wrong because there need be no connection between the
alias used in a connect string and the service name. D is wrong because many
services can be accessible through a single listening port.
7. ✓ C. The CONNECT_DATA that specifies a SID or service is missing.
✗ A, B, D, and E. A is wrong because L1 is the connect string, not an
instance or service name. B is wrong because dynamic registration is not
enough to compensate for a missing CONNECT_DATA clause. D is wrong
because the use of IPC to bypass the listener is not relevant. E is wrong
because (while certainly true) it is not the main problem.
8. ✓ C. Stack space is not part of the UGA and therefore does not go into
the SGA.
✗ A and B. These are UGA components and therefore do go into the SGA.
9. ✓ a – B, b – E, c – F, d – A, e – D, f – C
These are the correct mappings of objects to functions.
10. ✓ A and D. Dispatchers maintain the connection to user processes, place
requests on the common queue, and retrieve result sets from response queues.
✗ B, C, and E. B is wrong because each dispatcher has its own response
queue. C is wrong because it is the listener that load-balances, not the
dispatchers. E is wrong because the connections to a dispatcher are persistent:
if it dies, they will be broken.
11. ✓ D. To prevent queueing on the common queue, Oracle will launch
additional shared servers—but only up to the max_shared_servers value.
✗ A, B, and C. A is wrong because each statement may be picked up by a
different server. B is wrong because any one statement can be executed by only
one server. C is wrong because this parameter controls the number of servers
initially launched, which may change later.

CHAPTER 5
Oracle Storage

Exam Objectives
In this chapter you will learn to
• 052.6.1 Work with Tablespaces and Datafiles
• 052.6.2 Create and Manage Tablespaces
• 052.6.3 Handle Space Management in Tablespaces

The preceding two chapters dealt with the instance and the sessions against it: processes
and memory structures. This chapter begins the investigation of the database itself. All
data processing occurs in memory, in the instance, but data storage occurs in the
database on disk. The database consists of three file types: the controlfile, the online
redo log files, and the datafiles. Data is stored in the datafiles.
Users never see a physical datafile. All they see are logical segments. System
administrators never see a logical segment. All they see are physical datafiles. The
Oracle database provides an abstraction of logical storage from physical. This is one
of the requirements of the relational database paradigm. As a DBA, you must be aware
of the relationship between the logical and the physical storage. Monitoring and
administering these structures, a task often described as space management, used to
be a huge part of a DBA’s workload. The facilities provided in recent releases of the
database can automate space management to a certain extent, and they can certainly
let the DBA set up storage in ways that will reduce the maintenance workload
considerably.

Overview of Tablespaces and Datafiles
Data is stored logically in segments (typically tables) and physically in datafiles. The
tablespace entity abstracts the two: one tablespace can contain many segments and be
made up of many datafiles. There is no direct relationship between a segment and a
datafile. The datafiles can exist as files in a file system or (from release 10g onward)
on Automatic Storage Management (ASM) devices.
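To make this one-to-many abstraction concrete, here is a sketch of creating a tablespace that spans two datafiles. The tablespace name, file paths, and sizes are all invented for illustration:

```sql
-- Illustrative only: one tablespace built from two datafiles.
-- Segments created in this tablespace may have extents in either file.
create tablespace example_ts
  datafile '/u01/oradata/orcl/example_ts01.dbf' size 100m,
           '/u02/oradata/orcl/example_ts02.dbf' size 100m;
```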

The Oracle Data Storage Model
The separation of logical from physical storage is a necessary part of the relational
database paradigm. The relational paradigm states that programmers should address
only logical structures and let the database manage the mapping to physical structures.
This means that physical storage can be reorganized, or the whole database moved
to completely different hardware and operating system, and the application will not
be aware of any change.
Figure 5-1 shows the Oracle storage model sketched as an entity-relationship
diagram, with the logical structures to the left and the physical structures to the right.
There is one relationship drawn in as a dotted line: a many-to-many relationship
between segments and datafiles. This relationship is dotted, because it shouldn’t be
there. As good relational engineers, DBAs do not permit many-to-many relationships.
Resolving this relationship into a normalized structure is what the storage model is
all about. The following discussion takes each of the entities in Figure 5-1 one by one.
The tablespace entity resolves the many-to-many relationship between segments
and datafiles. One tablespace can contain many segments and be made up of many
datafiles. This means that any one segment may be spread across multiple datafiles,
and any one datafile may contain all of or parts of many segments. This solves many
storage challenges.

Figure 5-1  The Oracle storage model (logical view: tablespace, segment, extent,
Oracle block; physical view: datafile, operating system block)

Some older database management systems used a one-to-one
relationship between segments and files: every table or index would be stored as a
separate file. This raised two dreadful problems for large systems. First, an application
might well have thousands of tables and even more indexes; managing many
thousands of files was an appalling task for the system administrators. Second, the
maximum size of a table is limited by the maximum size of a file. Even if modern
operating systems do not have any practical limits, there may well be limitations
imposed by the underlying hardware environment. Use of tablespaces bypasses both
these problems. Tablespaces are identified by unique names in the database.
The segment entity represents any database object that stores data and therefore
requires space in a tablespace. Your typical segment is a table, but there are other
segment types, notably index segments (described in Chapter 7) and undo segments
(described in Chapter 8). Any segment can exist in only one tablespace, but the
tablespace can spread it across all the files making up the tablespace. This means that
the tables’ sizes are not subject to any limitations imposed by the environment on
maximum file size. As many segments can share a single tablespace, it becomes
possible to have far more segments than there are datafiles. Segments are schema
objects, identified by the segment name qualified with the owning schema name.
Note that programmatic schema objects (such as PL/SQL procedures, views, or
sequences) are not segments: they do not store data, and they exist within the data
dictionary.
The Oracle block is the basic unit of I/O for the database. Datafiles are formatted
into Oracle blocks, which are consecutively numbered. The size of the Oracle blocks
is fixed for a tablespace (generally speaking, it is the same for all tablespaces in the
database); the default (with release 11g) is 8KB. A row might be only a couple
hundred bytes, and so there could be many rows stored in one block, but when


a session wants a row, the whole block will be read from disk into the database buffer
cache. Similarly, if just one column of one row has been changed in the database
buffer cache, the DBWn will (eventually) write the whole block back into the datafile
from which it came, overwriting the previous version. The size of an Oracle block can
range from 2KB to 16KB on Linux or Windows, and to 32KB on some other operating
systems. The block size is controlled by the parameter DB_BLOCK_SIZE. This can
never be changed after database creation, because it is used to format the datafile(s)
that make up the SYSTEM tablespace. If it becomes apparent later on that the block
size is inappropriate, the only course of action is to create a new database and transfer
everything into it. A block is uniquely identified by its number within a datafile.
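You can check the block size of an existing database with a query along these lines (the output depends on your database; 8KB, i.e. 8192, is the 11g default):

```sql
-- DB_BLOCK_SIZE is fixed at database creation and cannot be changed
select value from v$parameter where name = 'db_block_size';
```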
Managing space one block at a time would be a crippling task, so blocks are
grouped into extents. An extent is a set of consecutively numbered Oracle blocks
within one datafile. Every segment will consist of one or more extents, consecutively
numbered. These extents may be in any and all of the datafiles that make up the
tablespace. An extent can be identified from either the dimension of the segment
(extents are consecutively numbered per segment, starting from zero) or the
dimension of the datafile (every extent is in one file, starting at a certain Oracle
block number).
A datafile is physically made up of a number of operating system blocks. How
datafiles and the operating system blocks are structured is entirely dependent on the
operating system’s file system. Some file systems have well-known limitations and
are therefore not widely used for modern systems (for example, the old MS-DOS FAT
file system could handle files up to only 4GB, and only 512 of them per directory).
Most databases will be installed on file systems with no practical limits, such as NTFS
on Windows or ext3 on Linux. The alternatives to file systems for datafile storage are
raw devices or Automatic Storage Management (ASM). Raw devices are now very
rarely used for datafile storage because of manageability issues. ASM is detailed in
Chapter 20.
An operating system block is the basic unit of I/O for your file system. A process might
want to read only one byte from disk, but the I/O system will have to read an operating
system block. The operating system block size is configurable for some file systems
(for example, when formatting an NTFS file system you can choose from 512B to
64KB), but typically system administrators leave it on default (512B for NTFS, 1KB for
ext3). This is why the relationship between Oracle blocks and operating system blocks
is usually one-to-many, as shown in Figure 5-1. There is no reason not to match the
operating system block size to the Oracle block size if your file system lets you do
this. The configuration that should always be avoided would be where the operating
system blocks are bigger than the Oracle blocks.

Segments, Extents, Blocks, and Rows
Data is stored in segments. The data dictionary view DBA_SEGMENTS describes every
segment in the database. This query shows the segment types in a simple database:

SQL> select segment_type,count(1) from dba_segments group by segment_type
  2  order by segment_type;

SEGMENT_TYPE         COUNT(1)
------------------ ----------
CLUSTER                    10
INDEX                    3185
INDEX PARTITION           324
LOB PARTITION               7
LOBINDEX                  760
LOBSEGMENT                760
NESTED TABLE               29
ROLLBACK                    1
TABLE                    2193
TABLE PARTITION           164
TYPE2 UNDO                 10

11 rows selected.
SQL>

In brief, and in the order they are most likely to concern a DBA, these segment
types are
• TABLE These are heap-structured tables that contain rows of data. Even
though a typical segment is a table segment, never forget that the table is not
the same as the segment, and that there are more complex table organizations
that use other segment types.
• INDEX Indexes are sorted lists of key values, each with a pointer, the
ROWID, to the physical location of the row. The ROWID specifies which
Oracle block of which datafile the row is in, and the row number within
the block.
• TYPE2 UNDO These are the undo segments (no one refers to them as “type2
undo” segments) that store the pre-change versions of data that are necessary
for providing transactional integrity: rollback, read consistency, and isolation.
• ROLLBACK Rollback segments should not be used in normal running from
release 9i onward. Release 9i introduced automatic undo management, which
is based on undo segments. There will always be one rollback segment that
protects the transactions used to create a database (this is necessary because
at that point no undo segments exist), but it shouldn’t be used subsequently.
• TABLE PARTITION A table can be divided into many partitions. If this is
done, the partitions will be individual segments, and the partitioned table
itself will not be a segment at all: it will exist only as the sum total of its
partitions. Each table partition of a heap table is itself structured as a heap
table, in its own segment. These segments can be in different tablespaces,
meaning that it becomes possible to spread one table across multiple
tablespaces.
• INDEX PARTITION An index will by default be in one segment, but indexes
can also be partitioned. If you are partitioning your tables, you will usually
partition the indexes on those tables as well.
• LOBSEGMENT, LOBINDEX, LOB PARTITION If a column is defined as a
large object data type, then only a pointer is stored in the table itself: a pointer
to an entry in a separate segment where the column data actually resides.
LOBs can have indexes built on them for rapid access to data within the
objects, and LOBs can also be partitioned.
• CLUSTER A cluster is a segment that can contain several tables. In contrast
with partitioning, which lets you spread one table across many segments,
clustering lets you denormalize many tables into one segment.
• NESTED TABLE If a column of a table is defined as a user-defined object
type that itself has columns, then the column can be stored in its own
segment, as a nested table.
Every segment is comprised of one or more extents. When a segment is created,
Oracle will allocate an initial extent to it in whatever tablespace is specified.
Eventually, as data is entered, that extent will fill. Oracle will then allocate a second
extent, in the same tablespace but not necessarily in the same datafile. If you know
that a segment is going to need more space, you can manually allocate an extent.
Figure 5-2 shows how to identify precisely the location of a segment.
Figure 5-2  Determining the physical location of a segment’s extents

In the figure, the first command creates the table HR.NEWTAB, relying completely
on defaults for the storage. Then a query against DBA_EXTENTS shows that the
segment consists of just one extent, extent number zero. This extent is in file number
4 and is 8 blocks long. The first of the 8 blocks is block number 1401. The size of the
extent is 64KB, which shows that the block size is 8KB. The next command forces
Oracle to allocate another extent to the segment, even though the first extent is not
full. The next query shows that this new extent, number 1, is also in file number 4
and starts immediately after extent zero. Note that it is not clear from this example
whether or not the tablespace consists of multiple datafiles, because the algorithm
Oracle uses to work out where to assign the next extent does not simply use datafiles
in turn. If the tablespace does consist of multiple datafiles, you can override Oracle’s
choice with this syntax:

ALTER TABLE tablename ALLOCATE EXTENT STORAGE (DATAFILE 'filename');

TIP Preallocating space by manually adding extents can deliver a performance
benefit but is a huge amount of work. You will usually do it for only a few tables
or indexes that have an exceptionally high growth rate, or perhaps before bulk
loading operations.

The last query in Figure 5-2 interrogates the view DBA_DATA_FILES to determine
the name of the file in which the extents were allocated, and the name of the tablespace
to which the datafile belongs. To identify the table’s tablespace, one could also query
the DBA_SEGMENTS view.

TIP You can query DBA_TABLES to find out in which tablespace a table
resides, but this will only work for nonpartitioned tables—not for partitioned
tables, where each partition is its own segment and can be in a different
tablespace. Partitioning lets one table (stored as multiple segments)
span tablespaces.

An extent consists of a set of consecutively numbered blocks. Each block has a
header area and a data area. The header is of variable size and grows downward from
the top of the block. Among other things, it contains a row directory (that lists where
in the block each row begins) and row locking information. The data area fills from
the bottom up. Between the two there may (or may not) be an area of free space.
Events that will cause a block’s header to grow include inserting and locking rows.
The data area will initially be empty and will fill as rows are inserted (or index keys
are inserted, in the case of a block of an index segment). The free space does get
fragmented as rows are inserted, deleted, and updated (which may cause a row’s size
to change), but that is of no significance because all this happens in memory, after
the block has been copied into a buffer in the database buffer cache. The free space is
coalesced into a contiguous area when necessary, and always before the DBWn writes
the block back to its datafile.
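The sort of DBA_EXTENTS query discussed in connection with Figure 5-2 can be approximated as follows (the owner and segment name are illustrative; substitute your own):

```sql
-- Locate every extent of a given segment: which file, which starting
-- block, and how many blocks and bytes each extent occupies
select extent_id, file_id, block_id, blocks, bytes
from dba_extents
where owner = 'HR' and segment_name = 'NEWTAB'
order by extent_id;
```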

File Storage Technologies
Datafiles can exist on four device types: local file systems, clustered file systems, raw
devices, and ASM disk groups:
• Files on a local file system These are the simplest datafiles; they exist
as normal operating system files in a directory structure on disks directly
accessible to the computer running the instance. On a PC running Windows
or Linux, these could be internal IDE or SATA drives. On more sophisticated
hardware, they would usually be SCSI disks, or external drives.
• Files on a clustered file system A clustered file system is usually created
on external disks, mounted concurrently on more than one computer. The
clustered file system mediates access to the disks from processes running on
all the computers in the cluster. Using clustered file systems is one way of
implementing RAC: the database must reside on disks accessible to all the
instances that are going to open it. Clustered file systems can be bought from
operating system vendors, or Oracle Corporation’s OCFS (Oracle Clustered
File System) is an excellent alternative. OCFS was first written for Linux and
Windows (for which there were no proper clustered file systems) and bundled
with database release 9i; with 10g it was ported to all the other mainstream
operating systems.
• Files on raw devices It is possible to create datafiles on disks with no file
system at all. This is still supported but is really only a historical anomaly. In
the bad old days before clustered file systems (or ASM) existed, raw devices
were the only way to implement a Parallel Server database. Parallel Server itself
was replaced with RAC in database release 9i.
• Files on ASM devices ASM is Automatic Storage Management, a facility
introduced with database release 10g. This is an alternative to file system–
based datafile storage and is covered in detail in Chapter 20.
TIP Some people claim that raw devices give the best performance. With
contemporary disk and file system technology, this is almost certainly not true.
And even if it were true, they are so awkward to manage that no sane DBA
wants to use them.
ASM is tested in detail in the second OCP examination, but an understanding
of what it can do is expected for the first examination. ASM is a logical volume
manager provided by Oracle and bundled with the database. The general idea is that
you take a bunch of raw disks, give them to Oracle, and let Oracle get on with it. Your
system administrators need not worry about creating file systems at all.
A logical volume manager provided by the operating system, or perhaps by a third
party such as Veritas, will take a set of physical volumes and present them to the operating
system as logical volumes. The physical volumes could be complete disks, or they could
be partitions of disks. The logical volumes will look to application software like disks,

but the underlying storage of any one logical volume might not be one physical volume
but several. It is on these logical volumes that the file systems are then created.
A logical volume can be much larger than any of the physical volumes of which it
is composed. Furthermore, the logical volume can be created with characteristics that
exploit the performance and safety potential of using multiple physical volumes.
These characteristics are striping and mirroring of data. Striping data across multiple
physical volumes gives huge performance gains. In principle, if a file is distributed
across two disks, it should be possible to read it in half the time it would take if it
were all on one disk. The performance will improve geometrically, in proportion to
the number of disks assigned to the logical volume. Mirroring provides safety. If a
logical volume consists of two or more physical volumes, then every operating system
block written to one volume can be written simultaneously to the other volume. If
one copy is damaged, the logical volume manager will read the other. If there are
more than two physical volumes, a higher degree of mirroring becomes possible,
providing fault tolerance in the event of multiple disk failures.
Some operating systems (such as AIX) include a logical volume manager as
standard; with other operating systems it is an optional (and usually chargeable)
extra. Historically, some of the simpler operating systems (such as Windows and
Linux) did not have much support for logical volume managers at all. If a logical
volume manager is available, it may require considerable time and skill to set up
optimally.
ASM is a logical volume manager designed for Oracle database files. The definition
of “database file” is broad. Apart from the true database files (controlfile, online redo
log files, and datafiles), ASM can also store backup files, archived redo log files, and
Data Pump files (all these files will be detailed in later chapters). It cannot be used for
the Oracle Home, or for the alert log and trace files.

EXAM TIP ASM can store only database files, not the binaries. The Oracle
Home must always be on a conventional file system.

Exercise 5-1: Investigate the Database’s Data Storage Structures In
this exercise, you will run queries to document a database’s physical structure. The
commands could be run interactively from SQL*Plus or Database Control, but it
would make sense to save them as a script that (with suitable refinements for display
format and for site-specific customizations) can be run against any database as part
of the regular reports on space usage.
1. Connect to the database as user SYSTEM.
2. Determine the name and size of the controlfile(s):
select name,block_size*file_size_blks bytes from v$controlfile;

3. Determine the name and size of the online redo log file members:
select member,bytes from v$log join v$logfile using (group#);

4. Determine the name and size of the datafiles and the tempfiles:
select name,bytes from v$datafile
union all
select name,bytes from v$tempfile;
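As an illustrative extension of such a script (not one of the exercise steps), the queries can be combined to report the total space occupied by the database files:

```sql
-- Total bytes in controlfiles, redo log members, datafiles, and tempfiles
select sum(bytes) total_bytes from (
  select block_size*file_size_blks bytes from v$controlfile
  union all
  select bytes from v$log join v$logfile using (group#)
  union all
  select bytes from v$datafile
  union all
  select bytes from v$tempfile
);
```

Note that joining V$LOG to V$LOGFILE deliberately counts the group size once per member, since each member is a separate file of that size on disk.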

Create and Manage Tablespaces
Tablespaces are repositories for schema data, including the data dictionary (which
is the SYS schema). All databases must have a SYSTEM tablespace and a SYSAUX
tablespace, and (for practical purposes) a temporary tablespace and an undo tablespace.
These four will usually have been created when the database was created. Subsequently,
the DBA may create many more tablespaces for user data, and possibly additional
tablespaces for undo and temporary data.
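You can list the tablespaces that already exist, and what each may contain, with a simple dictionary query (the exact set of rows returned depends on how your database was created):

```sql
-- SYSTEM, SYSAUX, and (usually) undo and temporary tablespaces
-- will appear here, along with any user-created tablespaces
select tablespace_name, contents from dba_tablespaces;
```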

Tablespace Creation
To create a tablespace with Enterprise Manager Database Control, from the database
home page take the Server tab and then the Tablespaces link in the Storage section.
Figure 5-3 shows the result for the default database.

Figure 5-3  The tablespaces in the default ORCL database

Chapter 5: Oracle Storage

181

There are six tablespaces shown in the figure. For each tablespace, identified by
name, the window shows

• Allocated size This is the current size of the datafile(s) assigned to the
tablespace. It is based on the current size, not the maximum size to which
it may be allowed to expand.
• Space used This is the space occupied by segments in the tablespace that
cannot be reclaimed.
• Allocated space used (%) A graphical representation of the preceding two
figures.
• Allocated free space The space currently available within the tablespace.
• Status A green tick indicates that the tablespace is online, and therefore
that the objects within it should be accessible. An offline tablespace would
be indicated with a red cross.
• Datafiles The number of datafiles (or tempfiles for temporary tablespaces,
if one is being precise) that make up the tablespace.
• Type The type of objects that can be stored in the tablespace. A permanent
tablespace stores regular schema objects, such as tables and indexes. A
temporary tablespace stores only system-managed temporary segments,
and an undo tablespace stores only system-managed undo segments.
• Extent management The technique used for allocating extents to segments.
LOCAL is the default and should always be used.
• Segment management The technique used for locating blocks into which
data insertions may be made. AUTO is the default and is recommended for
all user data tablespaces.

This information could also be gleaned by querying the data dictionary views
DBA_TABLESPACES, DBA_DATA_FILES, DBA_SEGMENTS, and DBA_FREE_SPACE, as
in this example:

select t.tablespace_name name, d.allocated, u.used, f.free,
       t.status, d.cnt, contents, t.extent_management extman,
       t.segment_space_management segman
from dba_tablespaces t,
     (select sum(bytes) allocated, count(file_id) cnt from dba_data_files
      where tablespace_name='EXAMPLE') d,
     (select sum(bytes) free from dba_free_space
      where tablespace_name='EXAMPLE') f,
     (select sum(bytes) used from dba_segments
      where tablespace_name='EXAMPLE') u
where t.tablespace_name='EXAMPLE';

NAME     ALLOCATED      USED      FREE STATUS CNT CONTENTS  EXTMAN SEGMAN
------- ---------- --------- --------- ------ --- --------- ------ ------
EXAMPLE  104857600  81395712  23396352 ONLINE   1 PERMANENT LOCAL  AUTO

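As a quick sanity check of the figures in this output, note that used plus free space falls slightly short of the allocated size; the difference is space segments can never use, such as the datafile header. This is illustrative arithmetic, not part of the book's example:

```python
# Byte counts reported for the EXAMPLE tablespace in the query output.
allocated = 104857600   # exactly 100MB allocated to the datafile
used      = 81395712    # bytes occupied by segments
free      = 23396352    # bytes reported free

mb = 1024 * 1024
print(allocated // mb)            # 100
# Used plus free falls short of allocated; the remainder is overhead
# such as the datafile header, never usable by segments.
print(allocated - (used + free))  # 65536, i.e., 64KB
```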
Click the CREATE button to create a tablespace. The Create Tablespace window
prompts for a tablespace name, and the values for Extent Management, Type, and
Status. In most circumstances, the defaults will be correct: Local, Permanent, and
Read Write. Then the ADD button lets you specify one or more datafiles for the new
tablespace. Each file must have a name and a size, and can optionally be set to
autoextend up to a maximum file size. The autoextend facility will let Oracle increase
the size of the datafile as necessary, which may avoid out-of-space errors.
Figures 5-4 and 5-5 show the Database Control interfaces for creating a tablespace
NEWTS with one datafile.

Figure 5-4

The Create Tablespace window


Figure 5-5 The Add Datafile window

Clicking the SHOW SQL button would display this command (the line numbers have
been added manually):

1  CREATE SMALLFILE TABLESPACE "NEWTS"
2  DATAFILE 'D:\APP\ORACLE\ORADATA\ORCL11G\newts01.dbf'
3  SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE 200M
4  LOGGING
5  EXTENT MANAGEMENT LOCAL
6  SEGMENT SPACE MANAGEMENT AUTO
7  DEFAULT NOCOMPRESS;

Consider this command line by line:
Line 1

The tablespace is a SMALLFILE tablespace. This means that it can consist of many
datafiles. The alternative is BIGFILE, in which case it would be impossible to add a
second datafile later (though the first file could be resized). The Use Bigfile Tablespace
check box in Figure 5-4 controls this.

Line 2

The datafile name and location.

Line 3

The datafile will be created as 100MB but when full can automatically extend in 10MB
increments to a maximum of 200MB. By default, automatic extension is not enabled.
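The autoextension arithmetic of line 3 can be sketched as a simple simulation (illustrative Python, not Oracle code):

```python
# Simulate AUTOEXTEND ON NEXT 10M MAXSIZE 200M for a file created SIZE 100M.
size_mb, next_mb, max_mb = 100, 10, 200

extensions = 0
while size_mb < max_mb:
    size_mb = min(size_mb + next_mb, max_mb)  # grow by NEXT, never past MAXSIZE
    extensions += 1

print(size_mb)       # 200
print(extensions)    # 10 extensions of 10MB each
```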

Line 4

All operations on segments in the tablespace will generate redo; this is the default. It is
possible to disable redo generation for a very few operations (such as index creation).

Line 5

The tablespace will use bitmaps for allocating extents; this is the default.

Line 6

Segments in the tablespace will use bitmaps for tracking block usage; this is the default.

Line 7

Segments in the tablespace will not be compressed; this is the default.

Taking the Storage tab shown in Figure 5-4 gives access to options for extent
management and compression, as shown in Figure 5-6.

Figure 5-6 Further options for tablespace creation

When using local extent management (as all tablespaces should), it is possible
to enforce a rule that all extents in the tablespace should be the same size. This is
discussed in the following section. If compression is enabled, it can be applied
to data only when it is bulk-loaded, or as a part of all DML operations. If logging is
disabled, this provides a default for the very few operations where redo generation
can be disabled, such as index creation. Whatever setting is chosen, all DML operations
will always generate redo.

TIP All tablespaces should be locally managed. The older mechanism, known
as dictionary managed, was far less efficient and is only supported (and only
just) for backward compatibility. It has been possible to create locally managed
tablespaces, and to convert dictionary-managed tablespaces to locally managed,
since release 8i.

A typical tablespace creation statement as executed from the SQL*Plus command
line is shown in Figure 5-7, with a query confirming the result.
The tablespace STORETABS consists of two datafiles, neither of which will
autoextend. The only deviation from the defaults has been to specify a uniform extent
size of 5MB. The first query in the figure shows that the tablespace is not a bigfile
tablespace; if it were, it would not have been possible to define two datafiles.

Figure 5-7 Tablespace creation and verification with SQL*Plus

The second query in the figure investigates the TEMP tablespace, used by the
database for storing temporary objects. It is important to note that temporary
tablespaces use tempfiles, not datafiles. Tempfiles are listed in the views V$TEMPFILE
and DBA_TEMP_FILES, whereas datafiles are listed in V$DATAFILE and DBA_DATA_FILES.
Also note that the V$ views and the DBA views give different information. As the
queries show, you can query V$TABLESPACE to find if a tablespace is a bigfile
tablespace, and V$TEMPFILE (or V$DATAFILE) to find how big a file was at creation
time. This information is not shown in the DBA views. However, the DBA views give
the detail of extent management and segment space management. The different
information available in the views is because some information is stored only in the
controlfile (and therefore visible only in V$ views) and some is stored only in the data
dictionary (and therefore visible only in DBA views). Other information is duplicated.

Altering Tablespaces
The changes made to tablespaces after creation are commonly
• Renaming
• Taking online and offline
• Flagging as read-write or read only
• Resizing
• Changing alert thresholds

Rename a Tablespace and Its Datafiles
The syntax is
ALTER TABLESPACE tablespaceoldname RENAME TO tablespacenewname;

This is very simple but can cause problems later. Many sites rely on naming
conventions to relate tablespaces to their datafiles. All the examples in this chapter do
just that: they embed the name of the tablespace in the name of the datafiles. Oracle
doesn’t care: internally, it maintains the relationships by using the tablespace number
and the datafile (or tempfile) number. These are visible as the columns V$TABLESPACE
.TS# and V$DATAFILE.FILE#. If your site does rely on naming conventions, then it will
be necessary to rename the files as well. A tablespace can be renamed while it is in use,
but to rename its datafiles, they must be offline. This is because each file must be
renamed at the operating system level, as well as within the Oracle environment, and
this can’t be done if the file is open: all the file handles would become invalid.
Figure 5-8 demonstrates an example of the entire process, using the tablespace
created in Figure 5-7.
In the figure, the first command renames the tablespace. That’s the easy part.
Then the tablespace is taken offline (as described in the following section), and two
operating system commands rename the datafiles in the file system. Two ALTER
DATABASE commands change the filenames as recorded within the controlfile, so
that Oracle will be able to find them. Finally the tablespace is brought back online.
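The operating-system half of the rename (the host commands in Figure 5-8) can be mimicked with stand-in files; the directory and filenames here are hypothetical, and the real step uses host commands such as mv:

```python
# Simulate the OS-level rename of two closed (offline) datafiles using
# stand-in empty files in a scratch directory.
import os
import pathlib
import tempfile

d = pathlib.Path(tempfile.mkdtemp())
for old in ('storetabs_01.dbf', 'storetabs_02.dbf'):
    (d / old).touch()                     # stand-in for the real datafile
    new = old.replace('storetabs', 'storedata')
    os.rename(d / old, d / new)           # one rename per datafile

print(sorted(p.name for p in d.iterdir()))
```

After the OS-level renames, the ALTER DATABASE commands record the new names in the controlfile so the tablespace can be brought back online.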


Figure 5-8 Renaming a tablespace and its datafiles

Taking a Tablespace Online or Offline
An online tablespace or datafile is available for use; an offline tablespace or datafile
exists as a definition in the data dictionary and the controlfile but cannot be used. It
is possible for a tablespace to be online but one or more of its datafiles to be offline.

This is a situation that can produce interesting results and should generally be avoided.
The syntax for taking a tablespace offline is
ALTER TABLESPACE tablespacename OFFLINE [NORMAL | IMMEDIATE | TEMPORARY];

A NORMAL offline (which is the default) will force a checkpoint for all the
tablespace’s datafiles. Every dirty buffer in the database buffer cache that contains a
block from the tablespace will be written to its datafile, and then the tablespace and
the datafiles are taken offline.
At the other extreme is IMMEDIATE. This offlines the tablespace and the datafiles
immediately, without flushing any dirty buffers. Following this, the datafiles will be
corrupted (they may be missing committed changes) and will have to be recovered by
applying change vectors from the redo log before the tablespace can be brought back
online. Clearly, this is a drastic operation. It would normally be done only if a file has
become damaged so that the checkpoint cannot be completed. The process of
recovery is detailed in Chapter 16.
A TEMPORARY offline will checkpoint all the files that can be checkpointed, and
then take them and the tablespace offline in an orderly fashion. Any damaged file(s)
will be offlined immediately. If just one of the tablespace's datafiles has been damaged,
this will limit the number of files that will need to be recovered.

Mark a Tablespace as Read Only
To see the effect of making a tablespace read only, study Figure 5-9.
The syntax is completely self-explanatory:
ALTER TABLESPACE tablespacename [READ ONLY | READ WRITE];

Once a tablespace has been made read only, none of the objects within it can be
changed with DML statements, as demonstrated in the figure. But they can be dropped.
This is a little disconcerting but makes perfect sense when you think it through.
Dropping a table doesn’t actually affect the table. It is a transaction against the data
dictionary, that deletes the rows that describe the table and its columns; the data
dictionary is in the SYSTEM tablespace, and that is not read only. Creating a table in
a read-only tablespace also fails, since although it is a DDL statement, actual physical
space for the initial extent of the table is required from the tablespace.
TIP Making a tablespace read only can have advantages for backup and
restore operations. Oracle will be aware that the tablespace contents cannot
change, and that it may not therefore be necessary to back it up repeatedly.

Figure 5-9 Operations on a read-only tablespace

Resize a Tablespace
A tablespace can be resized either by adding datafiles to it or by adjusting the size of
the existing datafiles. The datafiles can be resized upward automatically as necessary if
the AUTOEXTEND syntax was used at file creation time. Otherwise, you have to do it
manually with an ALTER DATABASE command:
ALTER DATABASE DATAFILE filename RESIZE n[M|G|T];

The M, G, or T refer to the units of size for the file: megabytes, gigabytes, or
terabytes. For example,
alter database datafile '/oradata/users02.dbf' resize 10m;

From the syntax, you do not know if the file is being made larger or smaller. An
upward resize can only succeed if there is enough space in the file system; a resize
downward can only succeed if the space in the file that would be released is not
already in use by extents allocated to a segment.

To add another datafile of size 50MB to a tablespace,
alter tablespace storedata
add datafile 'C:\ORACLE\ORADATA\ORCL11G\STOREDATA_03.DBF' size 50m;

Clauses for automatic extension can be included, or to enable automatic extension
later use a command such as this:
alter database datafile 'C:\ORACLE\ORADATA\ORCL11G\STOREDATA_03.DBF'
autoextend on next 50m maxsize 2g;

This will allow the file to grow in 50MB increments up to a limit of 2GB.
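To put numbers on that (simple arithmetic, not Oracle output): a 50MB file with NEXT 50M and MAXSIZE 2G can extend 40 times, the final extension being a partial one capped at the maximum:

```python
# Growth of a 50MB file with AUTOEXTEND ON NEXT 50M MAXSIZE 2G.
size_mb, next_mb, max_mb = 50, 50, 2 * 1024

extensions = 0
while size_mb < max_mb:
    size_mb = min(size_mb + next_mb, max_mb)  # final step capped at 48MB
    extensions += 1

print(size_mb)       # 2048
print(extensions)    # 40
```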

Change Alert Thresholds
The use of the server-generated alert system will be described in Chapter 24. For now,
it is only necessary to know that the MMON process of the instance monitors, in near
real time, how full every tablespace is. If a tablespace fills up beyond a certain point,
MMON will raise an alert. The default alert levels are to raise a warning alert when a
tablespace is over 85 percent full, and a critical alert when it is over 97 percent full.
The alerts can be seen in several ways, but the easiest is to look at the database home
page of Database Control, where they are displayed in the Alerts section.
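A minimal sketch of the threshold logic follows: the 85 and 97 percent defaults come from the text, but the function itself is illustrative rather than MMON's actual implementation:

```python
# Classify tablespace fullness against the default alert thresholds.
def tablespace_alert(used_pct, warning=85, critical=97):
    """Warning when over 85 percent full, critical when over 97 percent."""
    if used_pct > critical:
        return 'CRITICAL'
    if used_pct > warning:
        return 'WARNING'
    return 'OK'

print(tablespace_alert(80))   # OK
print(tablespace_alert(90))   # WARNING
print(tablespace_alert(98))   # CRITICAL
```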
To view or change the alert levels, select the tablespace and click the EDIT button,
visible in Figure 5-3, then in the Edit Tablespace window take the Thresholds tab.
Figure 5-10 shows this for the EXAMPLE tablespace.

Figure 5-10

The alert thresholds for the EXAMPLE tablespace

In the figure, the "Available Space" in the tablespace is reported as 32GB. This
is clearly incorrect, because the Allocated Space, as displayed in Figure 5-3, is only
100MB. The answer lies in datafile autoextension. If AUTOEXTEND is enabled for a
datafile and no MAXSIZE is specified, then the file can grow to a platform-dependent
limit, in this case 32GB. Of course, this says nothing about whether the file system
has room for a file that size. The alert system uses the maximum possible size of the
tablespace as the basis for its calculations, which is meaningless if the tablespace's
datafiles were created with the syntax AUTOEXTEND ON MAXSIZE UNLIMITED, or
if a MAXSIZE was not specified.
It should be apparent that when using automatic extension, it is good practice
to set a maximum limit. This can be done from the command line with an ALTER
DATABASE command, or through Database Control.

Dropping Tablespaces
To drop a tablespace, use the DROP TABLESPACE command. The syntax is

DROP TABLESPACE tablespacename
[INCLUDING CONTENTS [AND DATAFILES] ] ;

If the INCLUDING CONTENTS keywords are not specified, the drop will fail
if there are any objects in the tablespace. Using these keywords instructs Oracle to
drop the objects first, and then to drop the tablespace. Even this will fail in some
circumstances, such as if the tablespace contains a table that is the parent in a foreign
key relationship with a table in another tablespace.
If the AND DATAFILES keywords are not specified, the tablespace and its contents
will be dropped but the datafiles will continue to exist on disk. Oracle will know
nothing about them anymore, and they will have to be deleted with operating system
commands.

TIP On Windows systems, you may find the datafiles are still there after using
the INCLUDING CONTENTS AND DATAFILES clause. This is because of the
way Windows flags files as "locked." It may be necessary to stop the Windows
Oracle service (called something like OracleServiceORCL) before you can
delete the files manually.

Oracle-Managed Files (OMF)
Use of OMF is intended to remove the necessity for the DBA to have any knowledge
of the file systems. The creation of database files can be fully automated. To enable
OMF, set some or all of these instance parameters:

DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_1
DB_CREATE_ONLINE_LOG_DEST_2
DB_CREATE_ONLINE_LOG_DEST_3
DB_CREATE_ONLINE_LOG_DEST_4
DB_CREATE_ONLINE_LOG_DEST_5
DB_RECOVERY_FILE_DEST
The DB_CREATE_FILE_DEST parameter specifies a default location for all datafiles.
The DB_CREATE_ONLINE_LOG_DEST_n parameters specify a default location for
online redo log files. DB_RECOVERY_FILE_DEST sets up a default location for archive
redo log files and backup files. As well as setting default file locations, OMF will
generate filenames and (by default) set the file sizes. Setting these parameters can greatly
simplify file-related operations. Even after OMF has been enabled, it can be overridden
by specifying a datafile name in the CREATE TABLESPACE command.
Exercise 5-2: Create, Alter, and Drop Tablespaces In this exercise, you
will create tablespaces and change their characteristics. Then enable and use OMF. The
exercise can be done through Database Control, but if so, be sure to click the SHOW SQL
button at all stages to observe the SQL statements being generated.
1. Connect to the database as user SYSTEM.
2. Create a tablespace in a suitable directory—any directory on which the Oracle
owner has write permission will do:
create tablespace newtbs
datafile '/home/db11g/oradata/newtbs_01.dbf' size 10m
extent management local autoallocate
segment space management auto;

This command specifies the options that are the default. Nonetheless, it
may be considered good practice to do this, to make the statement self-documenting.
3. Create a table in the new tablespace, and determine the size of the
first extent:
create table newtab(c1 date) tablespace newtbs;
select extent_id,bytes from dba_extents
where owner='SYSTEM' and segment_name='NEWTAB';

4. Add extents manually, and observe the size of each new extent by repeatedly
executing this command,
alter table newtab allocate extent;

followed by the query from Step 3. Note the point at which the extent size
increases.
5. Take the tablespace offline, observe the effect, and bring it back online. This
is shown in the following illustration.


6. Make the tablespace read only, observe the effect, and make it read-write
again. This is shown in the next illustration.

7. Enable Oracle-Managed Files for datafile creation:
alter system set db_create_file_dest='/home/db11g/oradata';

8. Create a tablespace, using the minimum syntax now possible:
create tablespace omftbs;

9. Determine the characteristics of the OMF file:
select file_name,bytes,autoextensible,maxbytes,increment_by
from dba_data_files where tablespace_name='OMFTBS';

Note the file is initially 100MB, autoextensible, with no upper limit.
10. Adjust the OMF file to have more sensible characteristics. Use whatever
system-generated filename was returned by Step 9:
alter database datafile
'/home/db11g/oradata/ORCL11G/datafile/o1_mf_omftbs_3olpn462_.dbf'
resize 500m;
alter database datafile
'/home/db11g/oradata/ORCL11G/datafile/o1_mf_omftbs_3olpn462_.dbf'
autoextend on next 100m maxsize 2g;

11. Drop the tablespace, and use an operating system command to confirm that
the file has indeed gone:
drop tablespace omftbs including contents and datafiles;
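Step 4 of this exercise asks you to note the point at which the extent size increases. What is commonly observed with AUTOALLOCATE (the exact figures are version- and block-size-dependent, so treat this sketch as illustrative, not guaranteed) is 64KB for the first 16 extents, then a jump to 1MB:

```python
# Illustrative AUTOALLOCATE policy: 64KB extents for a small segment, then a
# jump to 1MB after 16 extents (still larger tiers follow as growth continues).
def next_extent_kb(extents_so_far):
    if extents_so_far < 16:
        return 64
    return 1024  # the 8MB and 64MB tiers come later; omitted from this sketch

sizes = [next_extent_kb(n) for n in range(18)]
print(sizes[:3])     # [64, 64, 64]
print(sizes[15:])    # [64, 1024, 1024]
```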

Space Management in Tablespaces
Space management occurs at several levels. First, space is assigned to a tablespace. This
is done by sizing the datafiles, as already described. Second, space within a tablespace
is assigned to segments. This is done by allocating extents. Third, space within a
segment is assigned to rows. This is done by maintaining bitmaps that track how
much space is free in each block.

Extent Management
The extent management method is set per tablespace and applies to all segments
in the tablespace. There are two techniques for managing extent usage: dictionary
management or local management. The difference is clear: local management should
always be used; dictionary management should never be used. Dictionary-managed
extent management is still supported, but only just. It is a holdover from previous
releases.
Dictionary extent management uses two tables in the data dictionary. SYS.UET$
has rows describing used extents, and SYS.FET$ has rows describing free extents. Every
time the database needs to allocate an extent to a segment, it must search FET$ to find
an appropriate bit of free space, and then carry out DML operations against FET$ and
UET$ to allocate it to the segment. This mechanism causes performance problems,
because all space management operations in the database (many of
which could be initiated concurrently) must serialize on the code that constructs the
transactions.

Local extent management was introduced with release 8i and became default with
release 9i. It uses bitmaps stored in each datafile. Each bit in the bitmap covers a range
of blocks, and when space is allocated, the appropriate bits are changed from zero to
one. This mechanism is far more efficient than the transaction-based mechanism of
dictionary management. The cost of assigning extents is amortized across bitmaps in
every datafile that can be updated concurrently, rather than being concentrated (and
serialized) on the two tables.
When creating a locally managed tablespace, an important option is uniform size.
If uniform is specified, then every extent ever allocated in the tablespace will be that
size. This can make the space management highly efficient, because the block ranges
covered by each bit can be larger: only one bit per extent. Consider this statement:

create tablespace large_tabs datafile 'large_tabs_01.dbf' size 10g
extent management local uniform size 160m;

Every extent allocated in this tablespace will be 160MB, so there will be about 64
of them. The bitmap needs only 64 bits, and 160MB of space can be allocated by
updating just one bit. This should be very efficient, provided that the segments in
the tablespace are large. If a segment were created that needed space for only a few
rows, it would still get an extent of 160MB. Small objects need their own tablespace:

create tablespace small_tabs datafile 'small_tabs_01.dbf' size 1g
extent management local uniform size 160k;

The alternative (and default) syntax would be

create tablespace any_tabs datafile 'any_tabs_01.dbf' size 10g
extent management local autoallocate;

When segments are created in this tablespace, Oracle will allocate a 64KB extent.
As a segment grows and requires more extents, Oracle will allocate extents of 64KB
up to 16 extents, from which point it will allocate progressively larger extents. Thus
fast-growing segments will tend to be given space in ever-increasing chunks.

TIP Oracle Corporation recommends AUTOALLOCATE, but if you know
how big segments are likely to be and can place them accordingly, UNIFORM
SIZE may well be the best option.

If a database has been upgraded from previous versions, it may include
dictionary-managed tablespaces. You can verify this with the query:

select tablespace_name, extent_management from dba_tablespaces;

Any dictionary-managed tablespaces should be converted to local management
with the provided PL/SQL program, which can be executed as follows:

execute dbms_space_admin.tablespace_migrate_to_local('tablespacename');
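To put numbers on the UNIFORM SIZE example above, and to sketch the one-bit-per-extent idea, consider this toy allocator (illustrative only; Oracle's real bitmaps are maintained inside the datafiles):

```python
# A 10GB datafile carved into uniform 160MB extents needs only a 64-bit bitmap.
file_mb, extent_mb = 10 * 1024, 160
n_extents = file_mb // extent_mb
print(n_extents)                # 64

# Toy locally managed bitmap: one bit per extent, 0 = free, 1 = used.
bitmap = [0] * n_extents

def allocate_extent():
    """Allocate one 160MB extent by flipping the first free bit to 1."""
    for i, bit in enumerate(bitmap):
        if bit == 0:
            bitmap[i] = 1
            return i
    raise RuntimeError('tablespace full')

first = allocate_extent()
print(first, sum(bitmap))       # 0 1
```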

TIP Converting tablespaces to local management is quick and easy, except
for the SYSTEM tablespace, where some extra steps are required. These are
well documented in the System Administrator’s guide available as part of the
product documentation.

Segment Space Management
The segment space management method is set per tablespace and applies to all
segments in the tablespace. There are two techniques for managing segment space
usage: manual or automatic. The difference is clear: automatic management should
always be used; manual management should never be used. Manual segment space
management is still supported but never recommended. Like dictionary-managed
extent management, it is a holdover from previous releases.
Automatic segment space management was introduced with release 9i and
has become the default with release 11g. Every segment created in an automatic
management tablespace has a set of bitmaps that describe how full each block is.
There are five bitmaps for each segment, and each block will appear on exactly one
bitmap. The bitmaps track the space used in bands or ranges: there is a bitmap for full
blocks; and there are bitmaps for blocks that are 75 percent to 100 percent used; 50
percent to 75 percent used; 25 percent to 50 percent used; and 0 percent to 25 percent
used. When searching for a block into which to insert a row, the session server process
will look at the size of the row to determine which bitmap to search. For instance, if
the block size is 4KB and the row to be inserted is 1500 bytes, an appropriate block
will be found by searching the 25 percent to 50 percent bitmap. Every block on this
bitmap is guaranteed to have at least 2KB of free space. As rows are inserted, deleted,
or change size through updates, the bitmaps get updated accordingly.
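The band lookup described in this paragraph can be sketched as a simple function (illustrative; the real bitmaps are maintained within each segment):

```python
# Choose which free-space bitmap to search, given row size and block size.
# Blocks are tracked in bands by how full they are: 0-25%, 25-50%, 50-75%,
# and 75-100% used (plus a band for full blocks, never searched for inserts).
def band_to_search(row_bytes, block_bytes):
    """Return the fullest percent-used band whose blocks are still
    guaranteed to have room for the row."""
    for lo, hi in [(75, 100), (50, 75), (25, 50), (0, 25)]:  # fullest first
        guaranteed_free = block_bytes * (100 - hi) // 100
        if row_bytes <= guaranteed_free:
            return (lo, hi)
    return None  # row is larger than any block: row chaining needed

# The book's example: 4KB blocks, a 1500-byte row.
print(band_to_search(1500, 4096))   # (25, 50): every block there has >= 2KB free
```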
The old manual space management method used a simple list, known as the free
list, which listed which blocks were available for inserts but without any information
on how full they were. This method could cause excessive activity, as blocks had to be
tested for space at insert time, and often resulted in a large proportion of wasted space.
To verify if any tablespaces are using manual management, you can run the query:
select tablespace_name,segment_space_management from dba_tablespaces;

It is not possible to convert a tablespace from manual to automatic segment space
management. The only solution is to create a new tablespace using automatic segment
space management, move the segments into it (at which point the bitmaps will be
generated), and drop the old tablespaces.
Exercise 5-3: Change Tablespace Characteristics In this exercise, you will
create a tablespace using the nondefault manual space management, to simulate the
need to convert to automatic segment space management after an upgrade.
1. Connect to your database as user SYSTEM.
2. Create a tablespace using manual segment space management. As OMF was
enabled in Exercise 5-2, there is no need for any datafile clause:
create tablespace manualsegs segment space management manual;

3. Confirm that the new tablespace is indeed using the manual technique:
select segment_space_management from dba_tablespaces
where tablespace_name='MANUALSEGS';

4. Create a table and an index in the tablespace:
create table mantab (c1 number) tablespace manualsegs;
create index mantabi on mantab(c1) tablespace manualsegs;

These segments will be created with freelists, not bitmaps.
5. Create a new tablespace that will (by default) use automatic segment space
management:
create tablespace autosegs;

6. Move the objects into the new tablespace:
alter table mantab move tablespace autosegs;
alter index mantabi rebuild online tablespace autosegs;

7. Confirm that the objects are in the correct tablespace:
select tablespace_name from dba_segments
where segment_name like 'MANTAB%';

8. Drop the original tablespace:
drop tablespace manualsegs including contents and datafiles;

9. Rename the new tablespace to the original name. This is often necessary,
because some application software checks tablespace names:
alter tablespace autosegs rename to manualsegs;

10. Tidy up by dropping the tablespace, first with this command:
drop tablespace manualsegs;

Note the error caused by the tablespace not being empty, and fix it:
drop tablespace manualsegs including contents and datafiles;

Two-Minute Drill
Overview of Tablespaces and Datafiles
• One tablespace can be physically represented by many datafiles.
• One tablespace can contain many segments.
• One segment comprises one or more extents.
• One extent is many consecutive blocks, in one datafile.
• One Oracle block should be one or more operating system blocks.
• The Oracle block is the granularity of database I/O.

Create and Manage Tablespaces
• A SMALLFILE tablespace can have many datafiles, but a BIGFILE tablespace
can have only one.
• Tablespaces default to local extent management, automatic segment space
management, but not to a uniform extent size.
• OMF datafiles are automatically named, initially 100MB, and can autoextend
without limit.
• A tablespace that contains segments cannot be dropped—unless an
INCLUDING CONTENTS clause is specified.
• Tablespaces can be online or offline, read-write or read only.
• Tablespaces can store one of three types of objects: permanent objects,
temporary objects, or undo segments.

Space Management in Tablespaces
• Local extent management tracks extent allocation with bitmaps in each datafile.
• The UNIFORM SIZE clause when creating a tablespace forces all extents to be
the same size.
• The AUTOALLOCATE clause lets Oracle determine the next extent size, which
is based on how many extents have already been allocated to the segment.
• Automatic segment space management tracks the free space in each block of
an extent using bitmaps.
• It is possible to convert a tablespace from dictionary extent management
to local extent management, but not from freelist segment management to
automatic management.

Self Test
1. This illustration shows the Oracle storage model, with four entities
having letters for names. Match four of the following entities to
the letters A, B, C, D:

[Illustration: the Oracle storage model as a hierarchy running from
Tablespaces at the top, through the entities labeled D, A, B, and C,
down to the operating system block]

DATAFILE
EXTENT
ORACLE BLOCK
ROW
SEGMENT
TABLE
2. Which statements are correct about extents? (Choose all correct answers.)
A. An extent is a consecutive grouping of Oracle blocks.
B. An extent is a random grouping of Oracle blocks.
C. An extent can be distributed across one or more datafiles.
D. An extent can contain blocks from one or more segments.
E. An extent can be assigned to only one segment.
3. Which of these are types of segment? (Choose all correct answers.)
A. Sequence
B. Stored procedure
C. Table
D. Table partition
E. View
4. If a tablespace is created with this syntax:
create tablespace tbs1 datafile 'tbs1.dbf' size 10m;
which of these characteristics will it have? (Choose all correct answers.)
A. The datafile will autoextend, but only to double its initial size.
B. The datafile will autoextend with MAXSIZE UNLIMITED.
C. The extent management will be local.
D. Segment space management will be with bitmaps.
E. The file will be created in the DB_CREATE_FILE_DEST directory.
5. How can a tablespace be made larger? (Choose all correct answers.)
A. Convert it from a SMALLFILE tablespace to a BIGFILE tablespace.
B. If it is a SMALLFILE tablespace, add files.
C. If it is a BIGFILE tablespace, add more files.
D. Resize the existing file(s).
6. Which of these commands can be executed against a table in a read-only
tablespace? (Choose the best answer.)
A. DELETE
B. DROP
C. INSERT
D. TRUNCATE
E. UPDATE

7. What operation cannot be applied to a tablespace after creation? (Choose the
best answer.)
A. Convert from dictionary extent management to local extent management.
B. Convert from manual segment space management to automatic segment
space management.
C. Change the name of the tablespace.
D. Reduce the size of the datafile(s) assigned to the tablespace.
E. All the above operations can be applied.
8. By default, what thresholds are set for space warnings on a tablespace?
(Choose the best answer.)
A. 85 percent and 97 percent.
B. This will depend on whether AUTOEXTEND has been enabled.
C. This will depend on whether it is a SMALLFILE or a BIGFILE tablespace.
D. By default, no warnings are enabled.
9. When the database is in mount mode, what views must be queried to
identify the datafiles and tablespaces that make up the database? (Choose
all correct answers.)
A. DBA_DATA_FILES
B. DBA_TABLESPACES
C. DBA_TEMP_FILES
D. V$DATABASE
E. V$DATAFILE
F. V$TABLESPACE
10. Which views could you query to find out about the temporary tablespaces and
the files that make them up? (Choose all correct answers.)
A. DBA_DATA_FILES
B. DBA_TABLESPACES
C. DBA_TEMP_TABLESPACES
D. DBA_TEMP_FILES
E. V$DATAFILE
F. V$TABLESPACE
G. V$TEMPTABLESPACE
H. V$TEMPFILE

Self Test Answers
1. ✓ A is SEGMENT; B is EXTENT; C is ORACLE BLOCK; D is DATAFILE.
✗ Neither ROW nor TABLE is included in the model.
2. ✓ A and E. One extent is several consecutive Oracle blocks, and one segment consists of one or more extents.
✗ B, C, and D. B, C, and D are all wrong because they misinterpret the Oracle storage model.
3. ✓ C and D. A table can be a type of segment, as is a table partition (in which case the partitioned table is not a segment).
✗ A, B, and E. A, B, and E are wrong because they exist only as objects defined within the data dictionary. The data dictionary itself is a set of segments.
4. ✓ C and D. With release 11g, local extent management and automatic segment space management are enabled by default.
✗ A, B, and E. A and B are both wrong because by default autoextension is disabled. E is wrong because providing a filename will override the OMF mechanism.
5. ✓ B and D. A small file tablespace can have many files, and all datafiles can be resized upward.
✗ A and C. A is wrong because you cannot convert between a SMALLFILE and a BIGFILE. C is wrong because a BIGFILE tablespace can have only one file.
6. ✓ B. Objects can be dropped from read-only tablespaces.
✗ A, C, D, and E. All of these commands will fail because they require writing to the table, unlike a DROP, which only writes to the data dictionary.
7. ✓ B. It is not possible to change the segment space management method after creation.
✗ A, C, D, and E. A and C are wrong because a tablespace can be converted to local extent management or renamed at any time. D is wrong because a datafile can be resized downward—though only if the space to be freed up has not already been used. E is wrong because you cannot change the segment space management method without re-creating the tablespace.
8. ✓ A. 85 percent and 97 percent are the database-wide defaults applied to all tablespaces.
✗ B, C, and D. B is wrong because AUTOEXTEND does not affect the warning mechanism (though it may make it pointless). C is wrong because the warning mechanism considers only the tablespace, not the files. D is wrong because by default the space warning is enabled.

9. ✓ E and F. Joining these views will give the necessary information.
✗ A, B, C, and D. A and B are wrong because these views will not be available in mount mode. C is wrong because it is not relevant to datafiles (and is also not available in mount mode). D is wrong because there is no datafile or tablespace information in V$DATABASE.
10. ✓ B, D, F, and H. V$TABLESPACE and DBA_TABLESPACES will list the temporary tablespaces, and V$TEMPFILE and DBA_TEMP_FILES will list their files.
✗ A, C, E, and G. A and E are wrong because V$DATAFILE and DBA_DATA_FILES do not include tempfiles. C and G are wrong because there are no views with these names.

CHAPTER 6
Oracle Security

Exam Objectives
In this chapter you will learn to
• 052.7.1 Create and Manage Database User Accounts
• 052.7.2 Grant and Revoke Privileges
• 052.7.3 Create and Manage Roles
• 052.7.4 Create and Manage Profiles
• 052.11.1 Implement Database Security and Principle of Least Privilege
• 052.11.2 Work with Standard Database Auditing

Security is an issue of vital concern at all sites. All organizations should have a security
manual documenting rules and procedures. If your organization does not have such
a manual, someone should be writing it—perhaps that someone should be you. In
security, there is no right or wrong; there is only conformance or nonconformance to
agreed procedures. If administrators follow the rules and advise on what those rules
should be, then any breach of security is not their fault. But unfortunately, history
shows that when something goes wrong in the security arena, there is a great desire
to blame individuals. It is vitally important that administration staff should be able to
point to a rule book that lays down the procedures they should follow, and to routines
and logs that demonstrate that they did indeed follow them. This devolves the
responsibility to the authors of the rule book, the security manual. If no such manual
exists, then any problems are likely to be dumped on the most convenient scapegoat.
This is often the database administrator. You have been warned.
The Oracle product set provides many facilities for enforcing security up to and
beyond the highest standards specified by any legislation. Many of the facilities (such
as data encryption) are beyond the scope of the first OCP examination, where the
treatment of security is limited to the use of privileges and auditing. This chapter
discusses the basic security model governing user accounts and their authentication.
The differences between a schema and a user (terms often used synonymously) are
explored along with the use of privileges to permit access to as few items as necessary
and the grouping of privileges into roles for ease of administration. Profiles, which
manage passwords and provide a limited degree of resource control, are covered before
delving into the powerful auditing features available.

Create and Manage Database User Accounts
When a user logs on to the database, they connect to a user account by specifying an
account name followed by some means of authentication. The user account defines
the initial access permissions and the attributes of the session. Associated with a user
account is a schema. The terms “user,” “user account,” and “schema” can often be used
interchangeably in the Oracle environment, but they are not the same thing. A user is
a person who connects to a user account by establishing a session against the instance
and logging on with the user account name. A schema is a set of objects owned by a
user account, and is described in Chapter 7. The way the account was created will set
up a range of attributes for the session, some of which can be changed later, while the
session is in progress. A number of accounts are created at database creation time, and
the DBA will usually create many more subsequently.
In some applications, each user has their own database user account. This means
that the database is fully aware of who is the real owner of each session. This security
model works well for small applications but is often impractical for larger systems
with many hundreds or thousands of users. For large systems, many users will connect
to the same account. This model relies on the application to map the real end user to
a database user account, and it can make session-level security and auditing more
complex. This chapter assumes that every user is known to the database: they each
have their own user account.

User Account Attributes

A user account has a number of attributes defined at account creation time. These will be applied to sessions that connect to the account, though some can be modified by the session or the DBA while the session is running. These attributes are
• Username
• Authentication method
• Default tablespace
• Tablespace quotas
• Temporary tablespace
• User profile
• Account status
All of these should be specified when creating the user, though only the username and
authentication method are mandatory; the others have defaults.

Username
The username must be unique in the database and must conform to certain rules. A
username must begin with a letter, must have no more than 30 characters, and can
consist of only letters, digits, and the characters dollar ($) and underscore (_). A user
name may not be a reserved word. The letters are case sensitive but will be automatically
converted to uppercase. All these rules (with the exception of the length) can be
broken if the username is specified within double quotes, as shown in Figure 6-1.

Figure 6-1 How to create users with nonstandard names

In the first example in the figure, a username JOHN is created. This was entered in
lowercase, but is converted to uppercase, as can be seen in the first query. The second
example uses double quotes to create the user with a name in lowercase. The third
and fourth examples use double quotes to bypass the rules on characters and reserved
words; both of these would fail without the double quotes. If a username includes
lowercase letters or illegal characters or is a reserved word, then double quotes must
always be used to connect to the account subsequently.
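The four cases described above follow this pattern (a sketch only — the usernames and passwords here are illustrative, not necessarily those used in the figure):

```sql
CREATE USER john IDENTIFIED BY oracle;          -- entered lowercase, stored as JOHN
CREATE USER "john" IDENTIFIED BY oracle;        -- double quotes preserve the lowercase name
CREATE USER "john smith" IDENTIFIED BY oracle;  -- double quotes permit an illegal character (the space)
CREATE USER "table" IDENTIFIED BY oracle;       -- double quotes permit a reserved word
```

Connections to the last three accounts must always quote the name, for example CONNECT "john smith"/oracle.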
TIP It is possible to use nonstandard usernames, but this may cause dreadful
confusion. Some applications rely on the case conversion; others always
use double quotes. It is good practice to always use uppercase and only the
standard characters.
A username can never be changed after creation. If it is necessary to change it,
the account must be dropped and another account created. This is a drastic action,
because all the objects in the user’s schema will be dropped along with the user.

Default Tablespace and Quotas
Every user account has a default tablespace. This is the tablespace where any schema
objects (such as tables or indexes) created by the user will reside. It is possible for a
user to create (own) objects in any tablespace on which they have been granted a
quota, but unless another tablespace is specified when creating the object, it will go
into the user’s default tablespace.
There is a database-wide default tablespace that will be applied to all user accounts
if a default tablespace is not specified when creating the user. The default can be set
when creating the database and changed later with:
ALTER DATABASE DEFAULT TABLESPACE tablespace_name ;

If a default tablespace is not specified when creating the database, it will be set to
the SYSTEM tablespace.
TIP After creating a database, do not leave the default tablespace as SYSTEM;
this is very bad practice as nonsystem users could potentially fill up this
tablespace, thus hampering the operation of the data dictionary and
consequently the entire database. Change it as soon as you can.
A quota is the amount of space in a tablespace that the schema objects of a user are
allowed to occupy. You can create objects and allocate extents to them until the quota
is reached. If you have no quota on a tablespace, you cannot create any objects at all.
Quotas can be changed at any time by an administrator user with sufficient privileges.
If a user’s quota is reduced to below the size of their existing objects (or even reduced
to zero), the objects will survive and will still be usable, but they will not be permitted
to get any bigger.
Figure 6-2 shows how to investigate and set quotas.

Figure 6-2 Managing user quotas

The first command queries DBA_USERS and determines the default and temporary
tablespaces for the user JOHN, created in Figure 6-1. DBA_USERS has one row for
every user account in the database. User JOHN has picked up the database defaults
for the default and temporary tablespaces, which are shown in the last query against
DATABASE_PROPERTIES.
The two ALTER USER commands in Figure 6-2 give user JOHN the capability to
use up to 10MB of space in the USERS tablespace, and an unlimited amount of space
in the EXAMPLE tablespace. The query against DBA_TS_QUOTAS confirms this; the
number “–1” represents an unlimited quota. At the time the query was run, JOHN
had not created any objects, so the figures for BYTES are zeros, indicating that he is
not currently using any space in either tablespace.
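The commands and query shown in the figure follow this pattern (user and tablespace names as in the example):

```sql
ALTER USER john QUOTA 10m ON users;
ALTER USER john QUOTA UNLIMITED ON example;

-- Confirm the quotas; MAX_BYTES of -1 means unlimited
SELECT tablespace_name, bytes, max_bytes
FROM   dba_ts_quotas
WHERE  username = 'JOHN';
```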
EXAM TIP Before you can create a table, you must have both permission to
execute CREATE TABLE and quota on a tablespace in which to create it.

TIP Most users will not need any quotas, because they will never create
objects. They will only have permissions against objects owned by other
schemas. The few object-owning schemas will probably have QUOTA
UNLIMITED on the tablespaces where their objects reside.

Temporary Tablespace
Permanent objects (such as tables) are stored in permanent tablespaces; temporary
objects are stored in temporary tablespaces. A session will need space in a temporary
tablespace if it needs space for certain operations that exceed the space available in
the session’s PGA. Remember that the PGA is the program global area, the private
memory allocated to the session. Operations that need temporary space (in memory
if possible, in a temporary tablespace if necessary) include sorting rows, joining tables,
building indexes, and using temporary tables. Every user account is assigned a
temporary tablespace, and all user sessions connecting to the account will share
this temporary tablespace.
The query against DBA_USERS in Figure 6-2 shows user JOHN’s temporary
tablespace, which is the database default temporary tablespace. This is shown by
the last query in Figure 6-2, against DATABASE_PROPERTIES.
Space management within a temporary tablespace is completely automatic.
Temporary objects are created and dropped as necessary by the database. A user does
not need to be granted a quota on their temporary tablespace. This is because the
objects in it are not actually owned by them; they are owned by the SYS user, who
has an unlimited quota on all tablespaces.
EXAM TIP Users do not need a quota on their temporary tablespace.

To change a user’s temporary tablespace (which will affect all future sessions that
connect to that account), use an ALTER USER command:
ALTER USER username TEMPORARY TABLESPACE tablespace_name;

TIP If many users are logging on to the same user account, they will share the
use of one temporary tablespace. This can be a performance bottleneck, which
may be avoided by using temporary tablespace groups.
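A minimal sketch of a temporary tablespace group — the tablespace, file, and group names here are illustrative, and the example assumes OMF is not in use:

```sql
CREATE TEMPORARY TABLESPACE temp1
  TEMPFILE 'temp1.dbf' SIZE 100m TABLESPACE GROUP tempgrp;
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE 'temp2.dbf' SIZE 100m TABLESPACE GROUP tempgrp;
ALTER USER john TEMPORARY TABLESPACE tempgrp;
```

Sessions connecting to the account can then be spread across the members of the group rather than contending for one tablespace.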

Profile
A user’s profile controls their password settings and provides a limited amount of
control over resource usage. Use of profiles is detailed in the later section “Create and
Manage Profiles.”
Profiles are a useful way of managing passwords and resources but can really only
apply in an environment where every application user has their own database user
account. For example, if many users connect to the same database user account, you
would not want the password to be invalidated by one of them, because that would
lock out everyone else. Similarly, resource usage will often need to be managed on a
per-session basis rather than for the account as a whole.

Account Status
Every user account has a certain status, as listed in the ACCOUNT_STATUS column of
DBA_USERS. There are nine possibilities:
• OPEN The account is available for use.
• LOCKED This indicates that the DBA deliberately locked the account. No
user can connect to a locked account.

• EXPIRED This indicates that the password lifetime has expired. Passwords can have a limited lifetime. No user can connect to an EXPIRED account until the password is reset.
• EXPIRED & LOCKED Not only has the account been locked, but its
password has also expired.
• EXPIRED (GRACE) This indicates that the grace period is in effect. A
password need not expire immediately when its lifetime ends; it may be
configured with a grace period during which users connecting to the account
have the opportunity to change the password.
• LOCKED (TIMED) This indicates that the account is locked because of failed
login attempts. An account can be configured to lock automatically for a
period after an incorrect password is presented a certain number of times.
• EXPIRED & LOCKED (TIMED)
• EXPIRED (GRACE) & LOCKED
• EXPIRED (GRACE) & LOCKED (TIMED)
To lock and unlock an account, use these commands:
ALTER USER username ACCOUNT LOCK ;
ALTER USER username ACCOUNT UNLOCK ;

To force a user to change their password, use this command:
ALTER USER username PASSWORD EXPIRE;

This will immediately start the grace period, forcing the user to make a password
change at their next login attempt (or one soon after). There is no such command as
“alter . . . unexpire.” The only way to make the account fully functional again is to
reset the password.
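The status, and the dates associated with locking and expiry, can be checked with a query such as:

```sql
SELECT username, account_status, lock_date, expiry_date
FROM   dba_users
WHERE  username = 'JOHN';
```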

Authentication Methods
A user account must have an authentication method: some means whereby the
database can determine if the user attempting to create a session connecting to the
account is allowed to do so. The simplest technique is by presenting a password that
will be matched against a password stored within the database, but there are
alternatives. The possibilities are
• Operating system authentication
• Password file authentication
• Password authentication
• External authentication
• Global authentication

The first two techniques are used only for administrators; the last requires an LDAP
directory server. The LDAP directory server may be the Oracle Internet Directory,
shipped as a part of the Oracle Application Server.

Operating System and Password File Authentication
To enable operating system and password file authentication (the two go together) for
an account, you must grant the user either the SYSDBA or the SYSOPER privilege:
GRANT [sysdba | sysoper ] TO username ;

Granting either (or both) of these privileges will copy the user’s password from
the data dictionary into the external password file, where it can be read by the instance
even if the database is not open. It also allows the instance to authenticate users by
checking whether the operating system user attempting the connection is a member
of the operating system group that owns the Oracle Home installation. Following
database creation, the only user with these privileges is SYS.
To use password file authentication, the user can connect with this syntax using
SQL*Plus:
CONNECT username / password [@db_alias] AS [ SYSOPER | SYSDBA ] ;

Note that password file authentication can be used for a connection to a remote
database over Oracle Net.
To use operating system authentication, the user must first be logged on to the
database server, authenticated as an operating system user with access to the Oracle
binaries, before connecting with this syntax using SQL*Plus:
CONNECT / AS [ SYSOPER | SYSDBA ] ;

The operating system password is not stored by Oracle, and therefore there are no
issues with changing passwords.
The equivalent of these syntaxes is also available when connecting with Database
Control, by selecting SYSDBA from the Connect As drop-down box on the Database
Control login window. To determine to whom the SYSDBA and SYSOPER privileges
have been granted, query the view V$PWFILE_USERS. Connection with operating
system or password file authentication is always possible, no matter what state the
instance and database are in, and is necessary to issue STARTUP or SHUTDOWN
commands.
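For example, immediately after database creation a query against this view would typically show only SYS:

```sql
SELECT username, sysdba, sysoper
FROM   v$pwfile_users;
```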
A third privilege that operates in the same manner as SYSDBA and SYSOPER is
SYSASM. This is a privilege that is only applicable to ASM instances and is detailed
in Chapter 20.
TIP All user sessions must be authenticated. There is no such thing as an
“anonymous” login, and some authentication method must be used.

Password Authentication

The syntax for a connection with password authentication using SQL*Plus is
CONNECT username / password [@db_alias] ;

Or with Database Control, select NORMAL from the Connect As drop-down box.
When connecting with password authentication, the instance will validate the
password given against that stored with the user account in the data dictionary. For
this to work, the database must be open; it is therefore logically impossible to issue
STARTUP or SHUTDOWN commands when connected with password authentication.
The user SYS is not permitted to connect with password authentication; only password
file, operating system, or LDAP authentication is possible for SYS.
Usernames are case sensitive but are automatically converted to uppercase unless
specified within double quotes. In previous releases of the database, passwords were
not case sensitive at all. With release 11g, passwords are case sensitive and there is no
automatic case conversion. It is not necessary to use double quotes; the password will
always be read exactly as entered.
When a connection is made across a network, release 11g will always encrypt it
using the AES algorithm before transmission. To use encryption for the ongoing traffic
between the user process and the server process requires the Advanced Security Option,
but password encryption is standard.
Any user can change their user account password at any time, or a highly privileged
user (such as SYSTEM) can change any user account password. The syntax (whether
you are changing your own password or another one) is
ALTER USER username IDENTIFIED BY password ;

External Authentication
If a user account is created with external authentication, Oracle will delegate the
authentication to an external service; it will not prompt for a password. If the Advanced
Security Option has been licensed, then the external service can be a Kerberos server, a
RADIUS server, or (in the Windows environment) the Windows native authentication
service. When a user attempts to connect to the user account, rather than authenticating
the user itself, the database instance will accept (or reject) the authentication according
to whether the external authentication service has authenticated the user. For example,
if using Kerberos, the database will check that the user does have a valid Kerberos token.
Without the Advanced Security Option, the only form of external authentication
that can be used is operating system authentication. This is a requirement for SYSDBA
and SYSOPER accounts (as already discussed) but can also be used for normal users.
The technique is to create an Oracle user account with the same name as the operating
system user account but prefixed with a string specified by the instance parameter
OS_AUTHENT_PREFIX. This parameter defaults to the string OPS$. To check its value, use
a query such as
select value from v$parameter where name='os_authent_prefix';

On Linux or Unix, external operating system authentication is very simple.
Assuming that the OS_AUTHENT_PREFIX is on default and that there is an operating
system user called jwatson, then create an oracle user and grant the CREATE SESSION
privilege as follows:
create user ops$jwatson identified externally;
grant create session to ops$jwatson;

A user logged on to Unix as jwatson will be able to issue the command:
sqlplus /

from an operating system prompt, and will be connected to the database user account
ops$jwatson.
Under Windows, when Oracle queries the operating system to identify the user,
Windows will usually (depending on details of Windows security configuration) return
the username prefixed with the Windows domain. Assuming that the Windows logon
ID is John Watson (including a space) and that the Windows domain is JWACER
(which happens to be the machine name) and that the OS_AUTHENT_PREFIX is on
default, the command will be
create user "OPS$JWACER\JOHN WATSON" identified externally;

Note that the username must be in uppercase, and because of the illegal characters
(a backslash and a space) must be enclosed in double quotes.
TIP Using external authentication can be very useful, but only if the users
actually log on to the machine hosting the database. Users will rarely do this,
so the technique is more likely to be of value for accounts used for running
maintenance or batch jobs.

Global Authentication
An emerging standard for identity management makes use of LDAP servers. An LDAP-compliant directory server, the Oracle Internet Directory, is distributed by Oracle
Corporation as part of Oracle Application Server. A global user is a user who is defined
within the LDAP directory, and global authentication is a means of delegating user
authentication to the directory.
There are two techniques for global authentication:
• The users can be defined in the directory, and also in the database. A user will be
connected to a user account with the same name as the user’s common name
in the directory.
• The users can be defined only in the directory. The database will be aware
of the users’ global names but connects all users to the same database user
account.
Neither of these techniques requires the user to present a password to the database.
The connection will happen without any prompts if the directory accounts and the
database user accounts are set up correctly.

Creating Accounts

The CREATE USER command has only two mandatory arguments: a username and a method of authentication. Optionally, it can accept a clause to specify a default tablespace and a temporary tablespace, one or more quota clauses, a named profile, and commands to lock the account and expire the password. A typical example (with line numbers added) would be
1 create user scott identified by tiger
2 default tablespace users temporary tablespace temp
3 quota 100m on users, quota unlimited on example
4 profile developer_profile
5 password expire
6 account unlock;

Only the first line is required; there are defaults for everything else. Taking the
command line by line:
1. Provide the username, and a password for password authentication.
2. Provide the default and temporary tablespaces.
3. Set up quotas on the default and another tablespace.
4. Nominate a profile for password and resource management.
5. Force the user to change his password immediately.
6. Make the account available for use (which would have been the default).
Every attribute of an account can be adjusted later with ALTER USER commands,
with the exception of the name. To change the password:
alter user scott identified by lion;

To change the default and temporary tablespaces:
alter user scott default tablespace store_data temporary tablespace temp;

To change quotas:
alter user scott quota unlimited on store_data, quota 0 on users;

To change the profile:
alter user scott profile prod_profile;

To force a password change:
alter user scott password expire;

To lock the account:
alter user scott account lock;

Having created a user account, it may be necessary to drop it:
drop user scott;

This command will only succeed if the user does not own any objects: if the schema
is empty. If you do not want to identify all the objects owned and drop them first,
they can be dropped with the user by specifying CASCADE:
drop user scott cascade;

To manage accounts with Database Control, from the database home page take
the Schema tab and then the Users link in the Security section. This will show all the
user accounts in the database. Figure 6-3 shows these, sorted in reverse order of creation.
To change the sort order, click the appropriate column header.
The first “user” in the figure is PUBLIC. This is a notional user to whom privileges
can be granted if you wish to grant them to every user. The CREATE button will present a
window that prompts for all the user account attributes. The DELETE button will drop
an account, with the CASCADE option if necessary—but it will give an “Are you sure?”
prompt before proceeding.
To adjust the attributes of an account, select it and click EDIT. This will take you to
the Edit User window, shown in Figure 6-4. This interface can be used to change all

Figure 6-3 Users shown by Database Control

aspects of the account except for tablespace quotas, which have their own tabs. It also
has tabs for granting and revoking privileges and roles.

Figure 6-4 The Edit User Database Control window
Exercise 6-1: Create Users In this exercise, you will create some users to be
used for the remaining exercises in this chapter. It is assumed that there is a permanent
tablespace called STOREDATA and a temporary tablespace called TEMP. If these don’t
exist, either create them or use any other suitable tablespaces.
1. Connect to your database with SQL*Plus as a highly privileged user, such as
SYSTEM or SYS.
2. Create three users:
create user sales identified by sales
default tablespace storedata password expire;
create user webapp identified by oracle
default tablespace storedata quota unlimited on storedata;
create user accounts identified by oracle;

3. Confirm that the users have been created with Database Control. From the
database home page, the navigation path is the Server tab and the Users link
in the Security section. They should look something like those shown in this
illustration:

4. From SQL*Plus, attempt to connect as user SALES:
connect sales/sales

5. When prompted, select a new password (such as “oracle”). But it won’t get you
anywhere, because user SALES does not have the CREATE SESSION privilege.
6. Refresh the Database Control window, and note that the status of the SALES
account is no longer EXPIRED but OPEN, because the password has been
changed.

Grant and Revoke Privileges
By default, no unauthorized user can do anything in an Oracle database. A user
cannot even connect without being granted a privilege. And once this has been done,
you still can’t do anything useful (or dangerous) without being given more privileges.
Privileges are assigned to user accounts with a GRANT command and withdrawn with
a REVOKE. Additional syntax can give a user the ability to grant any privileges they
have to other users. By default only the database administrators (SYS and SYSTEM)
have the right to grant any privileges. The user that grants one or more privileges to
another user is referred to as the grantor while the recipient is referred to as the grantee.
Privileges come in two groups: system privileges that (generally speaking) let users
perform actions that affect the data dictionary, and object privileges that let users perform
actions that affect data.

System Privileges
There are about two hundred system privileges. Most apply to actions that affect the data
dictionary, such as creating tables or users. Others affect the database or the instance,
such as creating tablespaces, adjusting instance parameter values, or establishing a
session. Some of the more commonly used privileges are

• CREATE SESSION This lets the user connect. Without this, you cannot even log on to the database.

• RESTRICTED SESSION If the database is started with STARTUP RESTRICT,
or adjusted with ALTER SYSTEM ENABLE RESTRICTED SESSION, only users
with this privilege will be able to connect.
• ALTER DATABASE Gives access to many commands necessary for modifying
physical structures.
• ALTER SYSTEM Gives control over instance parameters and memory structures.
• CREATE TABLESPACE With the ALTER TABLESPACE and DROP
TABLESPACE privileges, these will let a user manage tablespaces.
• CREATE TABLE Lets the grantee create tables in their own schema; includes
the ability to alter and drop them, to run SELECT and DML commands on
them, and to create, alter, or drop indexes on them.
• GRANT ANY OBJECT PRIVILEGE Lets the grantee grant object permissions
on objects they don’t own to others—but not to themselves.
• CREATE ANY TABLE The grantee can create tables that belong to other users.
• DROP ANY TABLE The grantee can drop tables belonging to any other users.

• INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE The grantee
can execute these DML commands against tables owned by all other users.
• SELECT ANY TABLE

The grantee can SELECT from any table in the database.

The syntax for granting system privileges is
GRANT privilege [, privilege...] TO username ;

After creating a user account, a command such as this will grant the system privileges
commonly assigned to users who will be involved in developing applications:
grant create session, alter session,
create table, create view, create synonym, create cluster,
create database link, create sequence,
create trigger, create type, create procedure, create operator
to username ;

These privileges will let you connect and configure your session, and then create
objects to store data and PL/SQL objects. These objects can only exist in your own
schema; you will have no privileges against any other schema. The object creation will
also be limited by the quota(s) you may (or may not) have been assigned on various
tablespaces.
A variation in the syntax lets the grantee pass their privilege on to a third party. For
example:
connect system/oracle;
grant create table to scott with admin option;
connect scott/tiger;
grant create table to jon;

PART I

• CREATE SESSION This lets the user connect. Without this, you cannot even
log on to the database.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

218
This gives SCOTT the ability to create tables in his own schema, and also to issue the
GRANT command himself. In this example, he lets user JON create tables too—but
JON will only be able to create them in the JON schema. Figure 6-5 shows the result
of the grant as depicted by Database Control; the same information could be garnered
by querying the view DBA_SYS_PRIVS.
If a system privilege is revoked from you, any actions you performed using that
privilege (such as creating tables) remain intact. Also, if you had been granted the
privilege with the ADMIN OPTION, any users to whom you passed on the privilege
will retain it, even after it was revoked from you. There is no record kept of the grantor
of a system privilege, so it is not possible for a REVOKE to cascade as illustrated in
Figure 6-6.
EXAM TIP Revocation of a system privilege will not cascade (unlike
revocation of an object privilege).
The ANY privileges give permissions against all relevant objects in the database. Thus,
grant select any table to scott;

will let SCOTT query every table in every schema in the database. It is often
considered bad practice to grant the ANY privileges to any user other than the
system administration staff.
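The non-cascading behavior of system privilege revocation can be sketched as follows. This is an illustration only, reusing the SCOTT and JON accounts from the earlier example and assuming JON holds no other grant of the privilege:

```sql
-- SYSTEM grants a system privilege with ADMIN OPTION; SCOTT passes it on.
grant create table to scott with admin option;
connect scott/tiger
grant create table to jon;

-- Revoking from SCOTT does not cascade: JON keeps CREATE TABLE,
-- because no record is kept of who granted a system privilege.
connect system/oracle
revoke create table from scott;
select grantee from dba_sys_privs where privilege = 'CREATE TABLE';
-- JON will still appear in the result; SCOTT will not.
```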

Figure 6-5  System privileges granted to a user

Figure 6-6  GRANT and REVOKE from SQL*Plus

TIP In fact, ANY is not as dangerous now as with earlier releases. It no longer
includes tables in the SYS schema, so the data dictionary is still protected. But
ANY should still be used with extreme caution, as it removes all protection
from user tables.

Object Privileges
Object privileges provide the ability to perform SELECT, INSERT, UPDATE, and
DELETE commands against tables and related objects, and to execute PL/SQL objects.
These privileges do not exist for objects in the users’ own schemas; if users have the
system privilege CREATE TABLE, they can perform SELECT and DML operations
against the tables they create with no further need for permissions.
EXAM TIP  The ANY privileges, which grant permissions against objects in
every user account in the database, are not object privileges—they are
system privileges.
The object privileges apply to different types of object:
Privilege    Granted on
SELECT       Tables, views, sequences, synonyms
INSERT       Tables, views, synonyms
UPDATE       Tables, views, synonyms
DELETE       Tables, views, synonyms
ALTER        Tables, sequences
EXECUTE      Procedures, functions, packages, synonyms

The syntax is
GRANT privilege ON [schema.]object TO username [WITH GRANT OPTION] ;

For example,
grant select on store.customers to scott;

Variations include the use of ALL, which will apply all the permissions relevant to the
type of object, and the nomination of particular columns of views or tables:
grant select on store.orders to scott;
grant update (order_status) on store.orders to scott;
grant all on store.regions to scott;

This code will let SCOTT query all columns of the ORDERS table in the STORE schema
but only write to one nominated column, ORDER_STATUS. Then SCOTT is given all
the object privileges (SELECT and DML) on STORE’s REGIONS table. Figure 6-7
shows the result of this, as viewed in Database Control.
TIP  Granting privileges at the column level is often said to be bad practice
because of the massive workload involved. If it is necessary to restrict people's
access to certain columns, creating a view that shows only those columns will
often be a better alternative.
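A sketch of the view-based alternative, reusing the STORE.ORDERS example (the ORDER_ID column and the view name are assumptions for illustration):

```sql
-- Expose only the permitted columns through a view, then grant on the view.
connect store/admin123
create view orders_summary as
    select order_id, order_status from orders;
grant select on orders_summary to scott;
```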

Figure 6-7  Object privilege management with Database Control

Using WITH GRANT OPTION (or, with Database Control, selecting the Grant
Option check box shown in Figure 6-7) lets a user pass their object privilege on to
a third party. Oracle retains a record of who granted object privileges to whom; this
allows a REVOKE of an object privilege to cascade to all those in the chain. Consider
this sequence of commands:

connect store/admin123;
grant select on customers to sales with grant option;
connect sales/sales;
grant select on store.customers to webapp with grant option;
connect webapp/oracle;
grant select on store.customers to scott;
connect store/admin123;
revoke select on customers from sales;

At the conclusion of these commands, neither SALES nor WEBAPP nor SCOTT has
the SELECT privilege against STORE.CUSTOMERS.

EXAM TIP  Revocation of an object privilege will cascade (unlike revocation of
a system privilege).

Exercise 6-2: Grant Direct Privileges  In this exercise, you will grant some
privileges to the users created in Exercise 6-1 and prove that they work.
1. Connect to your database as user SYSTEM with SQL*Plus.
2. Grant CREATE SESSION to user SALES:
grant create session to sales;
3. Open another SQL*Plus session, and connect as SALES. This time, the login
will succeed:
connect sales/oracle
4. As SALES, attempt to create a table:
create table t1 (c1 date);
This will fail with the message "ORA-01031: insufficient privileges."
5. In the SYSTEM session, grant SALES the CREATE TABLE privilege:
grant create table to sales;
6. In the SALES session, try again:
create table t1 (c1 date);
This will fail with the message "ORA-01950: no privileges on tablespace
STOREDATA."
7. In the SYSTEM session, give SALES a quota on the STOREDATA tablespace:
alter user sales quota 1m on storedata;
8. In the SALES session, try again. This time, the creation will succeed.
9. As SALES, grant object privileges on the new table:
grant all on t1 to webapp;
grant select on t1 to accounts;
10. Connect to Database Control as user SYSTEM.
11. Confirm that the object privileges have been granted. The navigation path
from the database home page is as follows: On the Schema tab click the Tables
link in the Database Objects section. Enter SALES as the Schema and T1 as
the Table and click GO. In the Actions drop-down box, select Object Privileges.
As shown in the illustration, ACCOUNTS has only SELECT, but WEBAPP has
everything else. Note that the window also shows by whom the privileges were
granted, and that none of them were granted WITH GRANT OPTION.

12. With Database Control, confirm which privileges have been granted to SALES. The
navigation path from the database home page is as follows: On the Server
tab click the Users link in the Security section. Select the radio button for
SALES, and click VIEW. You will see that he has two system privileges (CREATE
SESSION and CREATE TABLE) without the ADMIN OPTION, a 1MB quota on
STOREDATA, and nothing else.
13. Retrieve the same information shown in Steps 11 and 12 with SQL*Plus. As
SYSTEM, run these queries:
select grantee,privilege,grantor,grantable from dba_tab_privs
where owner='SALES' and table_name='T1';
select * from dba_sys_privs where grantee='SALES';

14. Revoke the privileges granted to WEBAPP and ACCOUNTS:
revoke all on sales.t1 from webapp;
revoke all on sales.t1 from accounts;

Confirm the revocations by rerunning the first query from Step 13.

Create and Manage Roles

Managing security with directly granted privileges works but has two problems. First,
it can be a huge workload: an application with thousands of tables and users could
need millions of grants. Second, if a privilege has been granted to a user, that user
has it in all circumstances: it is not possible to make a privilege active only in certain
circumstances. Both these problems are solved by using roles. A role is a bundle of
system and/or object privileges that can be granted and revoked as a unit, and, having
been granted, can be temporarily activated or deactivated within a session.

Creating and Granting Roles

Roles are not schema objects: they aren't owned by anyone and so cannot be prefixed
with a username. However, they do share the same namespace as users: it is not possible
to create a role with the same name as an already-existing user, or a user with the same
name as an already-existing role.
Create a role with the CREATE ROLE command:

CREATE ROLE rolename ;

Then grant privileges to the role with the usual syntax, including WITH ADMIN
OPTION or WITH GRANT OPTION if desired.
For example, assume that the HR schema is being used as a repository for data to
be used by three groups of staff. Managerial staff have full access, senior clerical staff
have limited access, and junior clerical staff have very restricted access. First create a
role that might be suitable for the junior clerks; all they can do is answer questions
by running queries:

create role hr_junior;
grant create session to hr_junior;
grant select on hr.regions to hr_junior;
grant select on hr.locations to hr_junior;
grant select on hr.countries to hr_junior;
grant select on hr.departments to hr_junior;
grant select on hr.job_history to hr_junior;
grant select on hr.jobs to hr_junior;
grant select on hr.employees to hr_junior;

Anyone granted this role will be able to log on to the database and run SELECT
statements against the HR tables. Then create a role for the senior clerks, who can also
write data to the EMPLOYEES and JOB_HISTORY tables:

create role hr_senior;
grant hr_junior to hr_senior with admin option;
grant insert, update, delete on hr.employees to hr_senior;
grant insert, update, delete on hr.job_history to hr_senior;

This role is first granted the HR_JUNIOR role (there is no problem granting one
role to another) with the syntax that will let the senior users assign the junior role to
others. Then it is granted DML privileges on just two tables. Then create the manager’s
role, which can update all the other tables:
create role hr_manager;
grant hr_senior to hr_manager with admin option;
grant all on hr.regions to hr_manager;
grant all on hr.locations to hr_manager;
grant all on hr.countries to hr_manager;
grant all on hr.departments to hr_manager;
grant all on hr.job_history to hr_manager;
grant all on hr.jobs to hr_manager;
grant all on hr.employees to hr_manager;

This third role is given the HR_SENIOR role with the ability to pass it on, and
then gets full control over the contents of all the tables. But note that the only system
privilege this role has is CREATE SESSION, acquired through HR_SENIOR, which
acquired it through HR_JUNIOR. Not even this role can create or drop tables; that
must be done by the HR user, or an administrator with CREATE ANY TABLE and
DROP ANY TABLE.
Note the syntax WITH ADMIN OPTION, which is the same as that for granting
system privileges. As with system privileges, revocation of a role will not cascade;
there is no record kept of who has granted a role to whom.
Finally, grant the roles to the relevant staff. If SCOTT is a manager, SUE is a senior
clerk, and JON and ROOP are junior clerks, the flow could be as in Figure 6-8.
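The flow depicted in Figure 6-8 amounts to grants along these lines (a sketch derived from the staff assignments just described):

```sql
grant hr_manager to scott;
grant hr_senior to sue;
grant hr_junior to jon;
grant hr_junior to roop;
```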

Figure 6-8  Granting roles with SQL*Plus

Predefined Roles

There are at least 50 predefined roles in an Oracle database (possibly many more,
depending on what options have been installed). Roles that every DBA should be
aware of are

• CONNECT  This only exists for backward compatibility. In previous releases it
had the system privileges necessary to create data-storing objects, such as tables.
Now it has only the CREATE SESSION privilege.
• RESOURCE  Also for backward compatibility, this role can create both data
objects (such as tables) and procedural objects (such as PL/SQL procedures). It
also includes the UNLIMITED TABLESPACE privilege.
• DBA  Has most of the system privileges, and several object privileges and
roles. Any user granted DBA can manage virtually all aspects of the database,
except for startup and shutdown.
• SELECT_CATALOG_ROLE  Has over 2000 object privileges against data
dictionary objects, but no system privileges or privileges against user data. Useful
for junior administration staff who must monitor and report on the database
but not be able to see user data.
• SCHEDULER_ADMIN  Has the system privileges necessary for managing the
Scheduler job scheduling service.

There is also a predefined role PUBLIC, which is always granted to every database
user account. It follows that if a privilege is granted to PUBLIC, it will be available to
all users. So following this command:

grant select on hr.regions to public;

all users will be able to query the HR.REGIONS table.

TIP  The PUBLIC role is treated differently from any other role. It does not,
for example, appear in the view DBA_ROLES. This is because the source code
for DBA_ROLES, which can be seen in the cdsec.sql script called by the
catalog.sql script, specifically excludes it.

Enabling Roles

By default, if a user has been granted a role, then the role will be enabled. This means
that the moment a session is established connecting to the user account, all the
privileges (and other roles) granted to the role will be active. This behavior can
be modified by making the role nondefault. Following the example given in the
preceding section, this query shows what roles have been granted to JON:

SQL> select * from dba_role_privs where grantee='JON';
GRANTEE     GRANTED_ROLE    ADM DEF
----------- --------------- --- ---
JON         HR_JUNIOR       NO  YES

JON has been granted HR_JUNIOR. He does not have administration on the role
(so he cannot pass it on to anyone else), but it is a default role—he will have this role
whenever he connects. This situation may well not be what you want. For example,
JON has to be able to see the HR tables (it's his job) but that doesn't mean that you
want him to be able to dial in from home, at midnight, and hack into the tables with
SQL*Plus. You want to arrange things such that he can only see the tables when he is
at a terminal in the Personnel office, running the HR application, in working hours.
To change the default behavior:
alter user jon default role none;

Now when JON logs on, he will not have any roles enabled. Unfortunately, this
means he can’t log on at all—because it is only HR_JUNIOR that gives him the
CREATE SESSION system privilege. Easily fixed:
SQL> grant connect to jon;
Grant succeeded.
SQL> alter user jon default role connect;
User altered.
SQL> select * from dba_role_privs where grantee='JON';
GRANTEE     GRANTED_ROLE    ADM DEF
----------- --------------- --- ---
JON         HR_JUNIOR       NO  NO
JON         CONNECT         NO  YES

Now when JON connects, only his CONNECT role is enabled—and the current
version of CONNECT is not dangerous at all. Within the application, software
commands can be embedded to enable the HR_JUNIOR role. The basic command to
enable a role within a session is
SET ROLE rolename ;

which can be issued by the user at any time. So no security yet. But if the role is
created with this syntax:
CREATE ROLE rolename IDENTIFIED USING procedure_name ;

then the role can only be enabled by running the PL/SQL procedure nominated by
procedure_name. This procedure can make any number of checks, such as checking that
the user is working on a particular TCP/IP subnet; or that they are running a particular
user process (probably not SQL*Plus); or that the time is in a certain range; and so
on. Embedding calls to the enabling procedures at appropriate points in an application
can switch roles on and off as required, while leaving them disabled at all times when
a connection is made with an ad hoc SQL tool such as SQL*Plus.
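As a sketch of this technique (the SECURITY schema, procedure name, and subnet check are illustrative assumptions; the procedure must be created with invoker's rights so that DBMS_SESSION.SET_ROLE acts on the calling user's session):

```sql
-- Create the role so it can be enabled only through the nominated procedure.
create role hr_junior identified using security.enable_hr_junior;

-- The enabling procedure: invoker's rights, with an environment check.
create or replace procedure security.enable_hr_junior
    authid current_user
as
begin
    -- Illustrative check: enable the role only for sessions
    -- originating from the office subnet.
    if sys_context('userenv','ip_address') like '192.168.1.%' then
        dbms_session.set_role('HR_JUNIOR');
    end if;
end;
/
```

An application would call this procedure at the appropriate point; a user running SET ROLE HR_JUNIOR directly from SQL*Plus would be refused.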
TIP  It can be very difficult to work out why you can see certain data. You may
have been granted the SELECT privilege on specific objects; you may have
been granted the ALL privilege; you may have SELECT ANY; SELECT may have
been granted to PUBLIC; or you may have a role to which SELECT has been
granted. You may have all of these, in which case they would all have to be
revoked to prevent you from seeing the data.

Exercise 6-3: Create and Grant Roles  In this exercise, you will create some
roles, grant them to the users, and demonstrate their effectiveness.
1. Connect to your database with SQL*Plus as user SYSTEM.
2. Create two roles as follows:
create role usr_role;
create role mgr_role;
3. Grant some privileges to the roles, and grant USR_ROLE to MGR_ROLE:
grant create session to usr_role;
grant select on sales.t1 to usr_role;
grant usr_role to mgr_role with admin option;
grant all on sales.t1 to mgr_role;
4. As user SYSTEM, grant the MGR_ROLE to WEBAPP:
grant mgr_role to webapp;
5. Connect to the database as user WEBAPP:
connect webapp/oracle;
6. Grant the USR_ROLE to ACCOUNTS, and insert a row into SALES.T1:
grant usr_role to accounts;
insert into sales.t1 values(sysdate);
commit;
7. Confirm that ACCOUNTS can connect and query SALES.T1 but do nothing
else. The INSERT statement that follows should fail with an "ORA-01031:
insufficient privileges" error.
connect accounts/oracle
select * from sales.t1;
insert into sales.t1 values(sysdate);
8. As user SYSTEM, adjust ACCOUNTS so that by default the user can log on but
do nothing else:
connect system/oracle
grant connect to accounts;
alter user accounts default role connect;
9. Demonstrate the enabling and disabling of roles. The first time ACCOUNTS
tries to query the SALES.T1 table, it will receive an "ORA-00942: table
or view does not exist" error. Once the USR_ROLE is activated, the same
query succeeds.
connect accounts/oracle
select * from sales.t1;
set role usr_role;
select * from sales.t1;
10. Use Database Control to inspect the roles. The navigation path from the
database home page is: On the Server tab click the Roles link in the Security
section. Click the links for the two new roles to see their privileges. This
illustration shows the MGR_ROLE:

11. To see to whom a role has been granted, in the Actions drop-down box
shown in the preceding illustration, select Show Grantees and click GO.
This illustration shows the result for USR_ROLE:

12. Obtain the same information retrieved in Steps 10 and 11 with these queries:

select * from dba_role_privs
where granted_role in ('USR_ROLE','MGR_ROLE');
select grantee,owner,table_name,privilege,grantable
from dba_tab_privs where grantee in ('USR_ROLE','MGR_ROLE')
union all
select grantee,to_char(null),to_char(null),privilege,admin_option
from dba_sys_privs where grantee in ('USR_ROLE','MGR_ROLE')
order by grantee;

Create and Manage Profiles

A profile has a dual function: to enforce a password policy and to restrict the resources
a session can consume. Password controls are always enforced; resource limits are
enforced only if the instance parameter RESOURCE_LIMIT is set to TRUE—by default, it
is FALSE. Profiles are applied automatically, but the default profile (applied by default to
all users, including SYS and SYSTEM) does very little.

EXAM TIP  Profile password limits are always enforced; profile resource limits
are enforced only if the instance parameter RESOURCE_LIMIT is TRUE.

Password Management

The limits that can be applied to passwords are

• FAILED_LOGIN_ATTEMPTS  Specifies the number of consecutive errors on a
password before the account is locked. If the correct password is given before
this limit is reached, the counter is reset to zero.
• PASSWORD_LOCK_TIME  The number of days to lock an account after
FAILED_LOGIN_ATTEMPTS is reached.
• PASSWORD_LIFE_TIME  The number of days before a password expires.
It may still be usable for a while after this time, depending on PASSWORD_
GRACE_TIME.
• PASSWORD_GRACE_TIME  The number of days, following the first successful
login after the password has expired, during which prompts to change the
password will be generated. The old password is still usable during this time.
• PASSWORD_REUSE_TIME  The number of days before a password can be
reused.
• PASSWORD_REUSE_MAX  The number of times a password can be reused.
• PASSWORD_VERIFY_FUNCTION  The name of a function to run whenever
a password is changed. The purpose of the function is assumed to be checking
the new password for a required degree of complexity, but it can do pretty
much anything you want.
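Several of these limits could be combined in a single profile, as in this sketch (the profile name and values are illustrative, not recommendations):

```sql
create profile strict_pwd limit
    failed_login_attempts 3
    password_lock_time    1
    password_life_time    30
    password_grace_time   3
    password_reuse_time   365
    password_reuse_max    5;

-- Assign it to a user; password limits are enforced immediately.
alter user scott profile strict_pwd;
```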
Resource Limits
The limits that can be applied to resource usage (also known as kernel limits) are
• SESSIONS_PER_USER The number of concurrent logins that can be made
to the same user account. Sessions attempting to log in with the same user
name when this limit is reached will be blocked.
• CPU_PER_SESSION The CPU time (in centiseconds) that a session’s server
process is allowed to use before the session is forcibly terminated.
• CPU_PER_CALL The CPU time (in centiseconds) that a session’s server
process is allowed to use to execute one SQL statement before the statement is
forcibly terminated.
• LOGICAL_READS_PER_SESSION The number of blocks that can be read
by a session (irrespective of whether they were in the database buffer cache or
read from disk) before the session is forcibly terminated.
• LOGICAL_READS_PER_CALL The number of blocks that can be read by
a single statement (irrespective of whether they were in the database buffer
cache or read from disk) before the statement is forcibly terminated.
• PRIVATE_SGA For sessions connected through the shared server architecture,
the number of kilobytes that the session is allowed to take in the SGA for
session data.
• CONNECT_TIME In minutes, the maximum duration of a session before the
session is forcibly terminated.
• IDLE_TIME In minutes, the maximum time a session can be idle before the
session is forcibly terminated.
• COMPOSITE_LIMIT A weighted sum of CPU_PER_SESSION, CONNECT_
TIME, LOGICAL_READS_PER_SESSION, and PRIVATE_SGA. This is an
advanced facility that requires configuration beyond the scope of the OCP
examination.
Resource limits will not be applied unless an instance parameter has been set:
alter system set resource_limit=true;

This defaults to FALSE.
When a session is terminated because a resource limit has been reached, if there
was a transaction in progress it will be rolled back. If a statement is terminated, the
work done by the statement will be rolled back, but any earlier statements will remain
intact and uncommitted.
TIP Profiles can be used to limit resource usage, but a much more
sophisticated tool is the Resource Manager discussed in Chapter 21.
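A hypothetical resource-limiting profile might be sketched as follows (the name and values are illustrative):

```sql
create profile limited_resources limit
    sessions_per_user      2
    connect_time           480       -- minutes
    idle_time              30        -- minutes
    logical_reads_per_call 1000000;  -- blocks per statement

alter user scott profile limited_resources;

-- Remember: resource limits are ignored until RESOURCE_LIMIT is TRUE.
alter system set resource_limit=true;
```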

Creating and Assigning Profiles

Profiles can be managed through Database Control or from SQL*Plus. To see which
profile is currently assigned to each user, run this query:

select username,profile from dba_users;

By default, all users (with the exception of two internal users, DBSNMP and
WKSYS) will be assigned the profile called DEFAULT. The view that displays
the profiles themselves is DBA_PROFILES:

select * from dba_profiles where profile='DEFAULT';

Or with Database Control, from the database home page take the Server tab, and
then click the Users link in the Security section to see which profile each user has.
Select a user and click EDIT to assign a different profile. To see how the profiles are set
up, click the Profiles link in the Security section.
The DEFAULT profile has no resource limits at all, but there are some password
limits:

Resource Name            Limit
FAILED_LOGIN_ATTEMPTS    10
PASSWORD_LOCK_TIME       1
PASSWORD_LIFE_TIME       180
PASSWORD_GRACE_TIME      7

These restrictions are not too strict: a password can be entered incorrectly ten
consecutive times before the account is locked for one day, and a password will expire
after about six months, with a one-week grace period for changing it after that.
The simplest way to enable more sophisticated password management is to run
the Oracle-supplied script. On Unix or Linux it is

$ORACLE_HOME/rdbms/admin/utlpwdmg.sql

On Windows it is

%ORACLE_HOME%\rdbms\admin\utlpwdmg.sql

On either platform, the script creates two functions called VERIFY_FUNCTION
and VERIFY_FUNCTION_11G, and runs this command:

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION verify_function_11G;
This command will adjust the profile called DEFAULT. Any users with the DEFAULT
profile (which is all users, by default) will immediately pick up the new values. Following
a standard database creation, the only change will be the specification of the PASSWORD_
VERIFY_FUNCTION. The function nominated, VERIFY_FUNCTION_11G, makes a set
of simple tests and will reject a password change if it does not pass all of them:
• The new password must be at least eight characters long.
• The new password cannot be the same as the username (spelled backward or
forward) or the name of the database, in upper- or lowercase.
• A few simple and commonly used passwords (such as “oracle”) will be rejected.
• The new password must have at least one letter and at least one digit.
• The password must differ in at least three characters from the preceding
password.
The script should be viewed as an example script (certainly the function is very
elementary) and should be edited to suit the needs of the organization. Most
organizations will need to go further than this and create a set of profiles to be
applied to different users.
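A custom verification function accepts the username, the new password, and the old password, and returns BOOLEAN. This minimal sketch (an illustration only, even simpler than VERIFY_FUNCTION_11G; the function name and rules are assumptions) rejects short passwords and passwords equal to the username:

```sql
create or replace function simple_verify_fn
    (username varchar2, password varchar2, old_password varchar2)
    return boolean
as
begin
    if length(password) < 10
       or upper(password) = upper(username) then
        raise_application_error(-20001, 'Password too simple');
    end if;
    return true;
end;
/

-- Nominate the function in a profile to activate it:
alter profile default limit password_verify_function simple_verify_fn;
```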
To create a profile with SQL*Plus, use the CREATE PROFILE command, setting
whatever limits are required. Any limits not specified will be picked up from the
current version of the DEFAULT profile. For example, it could be that the rules of the
organization state that no users should be able to log on more than once, except for
administration staff, who can log on as many concurrent sessions as they want and
must change their passwords every week with one-day grace, and the programmers,
who can log on twice. To do this, first adjust the DEFAULT profile:
alter profile default limit sessions_per_user 1;

Create a new profile for the DBAs, and assign it:
create profile dba_profile limit sessions_per_user unlimited
password_life_time 7 password_grace_time 1;
alter user system profile dba_profile;

Create a profile for the programmers, and assign it:
create profile programmers_profile limit sessions_per_user 2;
alter user jon profile programmers_profile;
alter user sue profile programmers_profile;

To let the resource limit take effect, adjust the instance parameter:
alter system set resource_limit=true;

Assuming that the instance is using an SPFILE, this change will be propagated to
the parameter file and will therefore be permanent.
A profile cannot be dropped if it has been assigned to users. They must be altered
to a different profile first. Once done, drop the profile with
DROP PROFILE profile_name ;

Alternatively, use this syntax:

DROP PROFILE profile_name CASCADE ;

which will automatically reassign all users with profile_name back to the DEFAULT
profile.

Exercise 6-4: Create and Use Profiles  In this exercise, create, assign, and test
a profile that will force some password control.
1. Connect to your database with SQL*Plus as user SYSTEM.
2. Create a profile that will lock accounts after two wrong passwords:
create profile two_wrong limit failed_login_attempts 2;
3. Assign this new profile to SALES:
alter user sales profile two_wrong;
4. Deliberately enter the wrong password for SALES a few times. You will get an
"ORA-28000: the account is locked" message after the third failed attempt.
connect sales/wrongpassword
5. As user SYSTEM, unlock the SALES account:
alter user sales account unlock;
6. Check that SALES can now connect:
connect sales/oracle

The next illustration shows the sequence of events.

7. Tidy up by dropping the profile, the roles, and the users. Note the use of
CASCADE when dropping the profile to remove it from SALES, and on the
DROP USER command to drop their table as well. Roles can be dropped
even if they are assigned to users. The privileges granted on the table will be
revoked as the table is dropped.
connect system/oracle
drop profile two_wrong cascade;
drop role usr_role;
drop role mgr_role;
drop user sales cascade;
drop user accounts;
drop user webapp;

Database Security and Principle of Least Privilege
The safest principle to follow when determining access to computer systems is that of
least privilege: no one should have access to anything beyond the absolute minimum
needed to perform their work, and anything not specifically allowed is forbidden. The
Oracle database conforms to this, in that by default no one can do anything at all, with
the exception of the two users SYS and SYSTEM. No other users can even connect—not
even those created by the standard database creation routines.
In addition to the use of password profiles, there are best practices that should be
followed to assist with implementing the least-privilege principle, particularly regarding
privileges granted to the PUBLIC account and certain instance parameters.

Public Privileges
The PUBLIC role is implicitly granted to every user. Any privileges granted to PUBLIC
have, in effect, been granted to everyone who can connect to the database; every
account you create will have access to these privileges. By default, PUBLIC has a large
number of privileges. In particular, this role has execute permission on a number of
PL/SQL utility packages, as shown in Figure 6-9.
You should always consider revoking the execution privileges on the UTL packages,
but remember that application software may assume that the privilege is there. Execution
privilege may be revoked as follows:
SQL> revoke execute on utl_file from public;

Some of the more dangerous packages listed in Figure 6-9 are
• UTL_FILE Allows users to read and write any file and directory that is accessible
to the operating system Oracle owner. This includes all the database files, and the
ORACLE_HOME directory. On Windows systems, this is particularly dangerous,
as many Windows databases run with Administrator privileges. The package is to
a certain extent controlled by the UTL_FILE_DIR instance parameter, discussed in
the next section.

Figure 6-9  Privileges granted to PUBLIC

• UTL_TCP Allows users to open TCP ports on the server machine for
connections to any accessible address on the network. The interface provided
in the package only allows connections to be initiated by the PL/SQL program;
it does not allow the PL/SQL program to accept connections initiated outside
the program. Nonetheless, it does allow malicious users to use your database
as the starting point for launching attacks on other systems, or for transmitting
data to unauthorized recipients.
• UTL_SMTP Written using UTL_TCP calls, this package lets users send mail
messages. It is restricted by the SMTP_OUT_SERVER instance parameter, which
specifies the address of the outgoing mail server, but even so you probably
do not want your database to be used for exchange of mail messages without
your knowledge.
• UTL_HTTP Allows users to send HTTP messages and receive responses—in
effect, converting your database into a web browser. This package also makes
use of UTL_TCP subprograms.
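If your applications are known not to depend on them, the same revocation shown earlier for UTL_FILE can be applied to these packages as well (verify against your application documentation before doing this on a production system):
SQL> revoke execute on utl_tcp from public;
SQL> revoke execute on utl_smtp from public;
SQL> revoke execute on utl_http from public;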
Always remember that, by default, these packages are available to absolutely
anyone who has a logon to your database, and furthermore that your database may
have a number of well-known accounts with well-known passwords.
EXAM TIP PUBLIC is a role that is granted to everyone—but when connecting
to the instance using the AS SYSOPER syntax, you will appear to be connected to
an account PUBLIC.

Security-Critical Instance Parameters
Some parameters are vital to consider for securing the database. The defaults are
usually fine, but in some circumstances (for which there should always be a good
business case), you may need to change them. All of the parameters described here
are static: you must restart the instance for a change to take effect. This is intended to
provide extra security, as it reduces the likelihood that they can be changed temporarily
to an inappropriate setting without the DBA being aware of it.

UTL_FILE_DIR
The UTL_FILE_DIR instance parameter defaults to NULL and is therefore not a
security problem. But if you need to set it, take care. This parameter gives PL/SQL
access to the file system of the server machine, through the UTL_FILE supplied
package. The package has procedures to open a file (either a new file or an existing
one) and read from and write to it. The only limitation is that the directories listed
must be accessible to the Oracle owner.
The difficulty with this parameter is that, being set at the instance level, it offers
no way to allow some users access to some directories and other users access to other
directories. All users with execute permission on the UTL_FILE package have access to
all the directories listed in the UTL_FILE_DIR parameter.
The parameter takes a comma-separated list of directories and is static. To set it,
follow the syntax in this example, which gives access to two directories, and restart the
instance:
SQL> alter system set utl_file_dir='/oracle/tmp','/oracle/interface' scope=spfile;

TIP The UTL_FILE_DIR parameter can include wildcards. Never set it to ‘*’,
because that will allow all users access to everything that the database owner
can see, including the ORACLE_HOME and all the database files.
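After restarting the instance, one way to confirm the value actually in effect is the SQL*Plus SHOW PARAMETER command:
SQL> show parameter utl_file_dir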

REMOTE_OS_AUTHENT and OS_AUTHENT_PREFIX
The REMOTE_OS_AUTHENT instance parameter defaults to FALSE. This controls
whether a user can connect to the database from a remote computer without the need
to supply a password. The reasons for wanting to do this have largely disappeared
with modern computer systems, but the capability is still there.
In the days before all users had intelligent terminals, such as PCs, it was customary for
users to log on directly to the database server machine and therefore to be authenticated
by the server’s operating system. They would then launch their user process on the
server machine and connect to the database. In order to avoid the necessity for users
to provide usernames and passwords twice (once for the operating system logon, and
again for the database logon), it was common to create the Oracle users with this
syntax:
SQL> create user jon identified externally;

This delegates responsibility for authentication to the server’s operating system.
Any person logged on to the server machine as operating system user “jon” will be
able to connect to the database without the need for any further authentication:
$ sqlplus /
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production

SQL> show user;
USER is "JON"
SQL>

This is secure, as long as your server’s operating system is secure. As networking
became more widespread, it became common to separate the user process workload
from the server process workload by having users log on to a different machine
dedicated to running user processes, which would connect to the server over Oracle
Net (or SQL*Net, as it was then known). Since the user no longer logs on to the
server’s operating system, external authentication can’t be used—unless you use the
REMOTE_OS_AUTHENT parameter. Setting this to TRUE means that user JON can
connect without a password from any machine where he is logged on as operating
system user “jon”. An example of the syntax is
sqlplus /@orcl11g

This will log the user on to the database identified in the connect string ORCL11G,
passing through his operating system username on his local machine as the database
username. This is only secure if you trust the operating systems of all machines
connected to the network. An obvious danger is PCs: it is common for users to have
administration rights on their PCs, and they can therefore create user accounts that
match any Oracle account name.
TIP It is generally considered bad practice to enable remote operating system
authentication.
The OS_AUTHENT_PREFIX instance parameter is related to external authentication,
either local or remote. It specifies a prefix that must be applied to the operating system
username before it can be mapped onto an Oracle username. The default is “OPS$”.
In the preceding example, it is assumed that this parameter has been cleared, with
SQL> alter system set os_authent_prefix='' scope=spfile;

Otherwise, the Oracle username would have had to be OPS$JON.
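With the default prefix left in place, the externally identified account from the earlier example would have to be created with the prefixed name:
SQL> create user ops$jon identified externally;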

O7_DICTIONARY_ACCESSIBILITY
The O7_DICTIONARY_ACCESSIBILITY instance parameter controls the effect of
granting object privileges with the ANY keyword. It defaults to FALSE. You can give
user JON permission to see any table in the database with
SQL> grant select any table to jon;

but do you want him to be able to see the data dictionary tables as well as user tables?
Probably not—some of them contain sensitive data, such as unencrypted passwords
or source code that should be protected.
O7_DICTIONARY_ACCESSIBILITY defaults to false, meaning that the ANY
privileges exclude objects owned by SYS, thus protecting the data dictionary; JON
can see all the user data, but not objects owned by SYS. If you change the parameter
to TRUE, then ANY really does mean ANY—and JON will be able to see the data
dictionary as well as all user data.
It is possible that some older application software may assume that the ANY
privileges include the data dictionary, as was always the case with release 7 of the
Oracle database (hence the name of the parameter). If so, you have no choice but to
change the parameter to TRUE until the software is patched up to current standards.
TIP Data dictionary accessibility is sometimes a problem for application
installation routines. You may have to set O7_DICTIONARY_ACCESSIBILITY
to TRUE while installing a product, and then put it back on default when the
installation is finished.
If you have users who really do need access to the data dictionary, rather than
setting O7_DICTIONARY_ACCESSIBILITY to true, consider granting them the SELECT
ANY DICTIONARY privilege. This will let them see the data dictionary and dynamic
performance views, but they will not be able to see any user data—unless you have
specifically granted them permission to do so. This might apply, for example, to the
staff of an external company you use for database administration support: they need
access to all the data dictionary information, but they have no need to view your
application data.
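For example, to give such a support account dictionary access without any access to application data (SUPPORT01 is a hypothetical account name, used here only for illustration):
SQL> grant select any dictionary to support01;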

REMOTE_LOGIN_PASSWORDFILE
The REMOTE_LOGIN_PASSWORDFILE instance parameter controls whether it
is possible to connect to the instance as a user with the SYSDBA or SYSOPER privilege
over the network. With this parameter on its default of NONE, the only way to get a
SYSDBA connection is to log on to the operating system of the server machine as a
member of the operating system group that owns the Oracle software. This is absolutely
secure—as long as your server operating system is secure, which it should be.
Setting this parameter to either EXCLUSIVE or SHARED gives users another way
in: even if they are not logged on to the server as a member of the Oracle owning
group, or even if they are coming in across the network, they can still connect as
SYSDBA if they know the appropriate password. The passwords are embedded, in
encrypted form, in an operating system file in the Oracle home directory: $ORACLE_
HOME/dbs on Unix, or %ORACLE_HOME%\database on Windows. A setting of
SHARED means that all instances running off the same Oracle home directory will
share a common password file. This will have just one password within it, for the SYS
user that is common to all the instances. EXCLUSIVE means that the instance will
look for a file whose name includes the instance name: PWDinstance_name.ora
on Windows, orapwinstance_name on Unix, where instance_name is the
instance name. This file will have instance-specific passwords.


TIP Some computer auditors do not understand operating system and
password file authentication. They may even state that you must create a
password file, to improve security. Just do as they say—it is easier than arguing.

Figure 6-10 Managing the password file with SQL*Plus


Create the password file by running the orapwd utility from an operating system
prompt. This will create the file and embed within it a password for the SYS user.
Subsequently, you can add other users’ passwords to the file, thus allowing them to
connect as SYSDBA or SYSOPER as well. Review the scripts in Chapter 2 for an example
of the syntax for creating a password file. To add another user to the file, grant them
either the SYSDBA or SYSOPER privilege, as in Figure 6-10. The V$PWFILE_USERS
view shows you which users have their passwords entered in the password file, and
whether they have the SYSOPER privilege, the SYSDBA privilege, or both.
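For example, to add JON to the password file and then confirm the result (this assumes the password file already exists and JON has been created):
SQL> grant sysdba to jon;
SQL> select * from v$pwfile_users;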
Note that when connecting as SYSDBA, even though you use a username and
password, you end up connected as user SYS; when connecting as SYSOPER, you
are in fact connected as the PUBLIC user.
Enabling a password file does not improve security; it weakens it, by giving users
another way of obtaining a privileged connection (in addition to local operating
system authentication, which is always available). It is, however, standard practice
to enable it, because without a password file it may be very difficult to manage the
database remotely.

Exercise 6-5: Remove Some Potentially Dangerous Privileges In this
exercise, you will generate a script that could be used (possibly after edits, depending
on local requirements) to remove some of the more dangerous privileges from PUBLIC.
Use SQL*Plus.
1. Connect to your database as user SYSTEM.
2. Adjust SQL*Plus to remove extraneous characters from its output:
set heading off
set pagesize 0
set feedback off

3. Start spooling output to a file in a suitable directory. Following are examples
for Unix and Windows:
spool $HOME/oracle/scripts/clear_public_privs.sql
spool c:\oracle\scripts\clear_public_privs.sql

4. Generate the SQL command file by running this statement:
select 'revoke execute on '||table_name||' from public;'
from dba_tab_privs where table_name like 'UTL_%';

5. Stop the spooling of output:
spool off

6. Open the generated file with the editor of your choice. Note that you need to
remove the first and last lines before running the script. Site variations would
determine which (if any) privileges could not actually be revoked.

Work with Standard Database Auditing
No matter how good your security policies are, there will be occasions when a policy
is not enough. You will have to accept that users have privileges that could be dangerous.
All you can do is monitor their use of those privileges, and track what they are actually
doing with them. The most extreme example of this is you—the database administrator.
Anyone with the SYSDBA privilege can do anything at all within the database. For
your employers to have confidence that you are not abusing this power (which cannot
be revoked, or you couldn’t do your job), it is necessary to audit all SYSDBA activity.
For regular users, you may also wish to track what they are doing. You may not be able to
prevent them from breaking company rules on access to data, but you can track the
fact that they did it.
Apart from SYSDBA auditing, Oracle provides three auditing techniques:
• Database auditing can track the use of certain privileges, the execution of
certain commands, access to certain tables, or logon attempts.
• Value-based auditing uses database triggers. Whenever a row is inserted,
updated, or deleted, a block of PL/SQL code will run that can (among other
things) record complete details of the event.

• Fine-grained auditing allows tracking access to tables according to which rows
(or which columns of the rows) were accessed. It is much more precise than
either database auditing or value-based auditing, and it can limit the number
of audit records generated to only those of interest.

TIP Auditing of any type increases the amount of work that the database
must do. In order to limit this workload, you should focus your auditing
closely and not track events of minimal significance.

Auditing SYSDBA Activity
If the instance parameter AUDIT_SYS_OPERATIONS is set to TRUE (the default is
FALSE), then every statement issued by a user connected AS SYSDBA or AS SYSOPER
is written out to the operating system’s audit trail. This contains a complete record of
all work done by the DBA. Clearly, the audit trail must be protected; if it were possible
for the DBA to delete the audit records, there would be no point in creating them.
This brings up the question of separation of duties. Your system needs to be configured
in such a way that the DBA has no access to the audit records that track their activity;
they should only be accessible to the computer’s system administrator. If the DBA is
also the system administrator, then the auditing is useless. For this reason, a decent
computer auditor will always state that the DBA must not have the Unix “root”
password (or the Windows “Administrator” password).
The destination of the SYS audit records is platform specific. On Windows, it is the
Windows Application log; on Unix, it is controlled by the AUDIT_FILE_DEST parameter.
This parameter should point to a directory on which the Oracle owner has write
permission (so that the audit records can be written by the instance) but that the Unix
ID used by the DBA does not, so that they cannot adjust the audit records by hand.

Database Auditing
Before setting up database auditing, the AUDIT_TRAIL instance parameter must be
set. This has six possible values:
• NONE (or FALSE) Database auditing is disabled, no matter what auditing
you attempt to configure.
• OS Audit records will be written to the operating system’s audit trail—the
Application Log on Windows, or the AUDIT_FILE_DEST directory on Unix.
• DB The audit records are written to a data dictionary table, SYS.AUD$. There
are views that let you see the contents of this table.
• DB_EXTENDED As DB, but including the SQL statements with bind
variables that generated the audit records.
• XML As OS, but formatted with XML tags.
• XML_EXTENDED As XML, but with the SQL statements and bind variables.

Having set the AUDIT_TRAIL parameter, you can use database auditing to capture
login attempts, use of system and object privileges, and execution of SQL commands.
Furthermore, you can specify whether to audit these events when they succeeded,
when they failed because of insufficient privileges, or both. Auditing commands that
did not succeed can be particularly valuable: any records produced will tell you that
users are attempting to break their access rights.
Database auditing is configured using the AUDIT command.
Use of privileges can be audited with, for example,
SQL> audit create any trigger;
SQL> audit select any table by session;

Your programmers will have been granted the CREATE ANY TRIGGER privilege
because they will be creating triggers on other schemas’ tables as part of their work,
but it is a dangerous privilege that could be used maliciously. So you certainly need to
know when they use it, in order that you can demand to see the code. Similarly, some
staff will need the SELECT ANY TABLE and UPDATE ANY TABLE privileges in order
to sort out problems with transactions that have gone wrong, but whenever they use
these privileges, a record must be kept so that they will be deterred from accessing
data unless they have a legitimate reason.
By default, auditing will generate one audit record for every session that violates
an audit condition, irrespective of the number of times it violates the condition. This
is equivalent to appending BY SESSION to the AUDIT command. Appending the
keywords BY ACCESS to the AUDIT command will generate one record for every
violation.
TIP The default BY SESSION clause will often not be what you want, but it
does reduce the volume of audit records produced to a more manageable
number.
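Auditing that is no longer required is switched off with the matching NOAUDIT command, for example:
SQL> noaudit select any table;
SQL> noaudit create any trigger;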
Auditing can also be oriented toward objects:
SQL> audit insert on ar.hz_parties whenever successful;
SQL> audit all on ar.ra_interface_lines_all;

The first of these commands will generate audit records if a session inserts a row
into the named table. The WHENEVER SUCCESSFUL keywords restrict audit records
to those where the operation succeeded; the alternative syntax is WHENEVER NOT
SUCCESSFUL. By default, all operations (successful or not) are audited. The second
example will audit every session that executes any SELECT, DML, or DDL statement
against the named table.
Database Control has a graphical interface to the auditing system. Figure 6-11
shows the interface after executing the two preceding commands. Note that the
window has tabs for displaying, adding, and removing auditing of privileges, objects,
and statements. In the figure, you can see the auditing of objects owned by user AR.
In the Configuration section of the window shown in Figure 6-11, there are links
for setting the audit parameters previously described.

Figure 6-11 Managing standard auditing with Database Control

Logons are audited with AUDIT SESSION. For example,
SQL> audit session whenever not successful;

This is equivalent to auditing the use of the CREATE SESSION privilege. Session
auditing records each connection to the database. The NOT SUCCESSFUL keywords
restrict the output to only failed attempts. This can be particularly useful: recording
failures may indicate if attempts are being made to break into the database.
If auditing is to the operating system (because the AUDIT_TRAIL instance
parameter is set to OS or XML), then view the files created in the operating system
audit trail to see the results of the audits with an appropriate editor. If auditing is
directed to the database (AUDIT_TRAIL=DB or DB_EXTENDED), then the audit
records are written to a table in the data dictionary: the SYS.AUD$ table. It is possible
to query this directly, but usually you will go through views. The critical view is the
DBA_AUDIT_TRAIL view. This will show all audit trail entries, no matter whether the
audited event was use of a privilege, execution of a statement, or access to an object.
Of necessity, the view is very generic, and not all columns (41 in all) will be populated
for each audit trail entry. Table 6-1 lists the more commonly used columns.

Column             Description
OS_USERNAME        Operating system name of the user performing the action
USERNAME           Oracle username of the user performing the action
USERHOST           The name of the machine running the user process
TIMESTAMP          When the audited event occurred
OWNER, OBJ_NAME    Schema and name of the object affected
ACTION_NAME        The action audited
PRIV_USED          System privilege used (if any)
SQL_TEXT           The statement executed

Table 6-1  Common Columns in the DBA_AUDIT_TRAIL View

The other audit views (DBA_AUDIT_OBJECT, DBA_AUDIT_STATEMENT, and
DBA_AUDIT_SESSION) each show a subset of the DBA_AUDIT_TRAIL view, only
displaying certain audit records and the columns relevant to them.
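For example, failed connection attempts could be listed from the session-oriented view (a nonzero RETURNCODE indicates a failure):
SQL> select os_username, username, timestamp, returncode
  2  from dba_audit_session where returncode <> 0;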

Value-Based Auditing with Triggers
The database auditing just described can catch the fact that a command was executed
against a table, but not necessarily the rows that were affected. For example, issuing
AUDIT INSERT ON HR.EMPLOYEES will cause an audit record to be generated
whenever a row is inserted into the named table, but the record will not include the
actual values of the row that was inserted. On occasion, you may want to capture
these. This can be done by using database triggers.
A database trigger is a block of PL/SQL code that will run automatically whenever
an INSERT, UPDATE, or DELETE is executed against a table. A trigger can do almost
anything—in particular, it can write out rows to other tables. These rows will be part
of the transaction that caused the trigger to execute, and they will be committed when
the rest of the transaction is committed. There is no way that a user can prevent the
trigger from firing: if you update a table with an update trigger defined, that trigger
will execute.
Consider this trigger creation statement:
SQL> CREATE OR REPLACE TRIGGER system.creditrating_audit
  2  AFTER UPDATE OF creditrating
  3  ON store.customers
  4  REFERENCING NEW AS NEW OLD AS OLD
  5  FOR EACH ROW
  6  BEGIN
  7  IF :old.creditrating != :new.creditrating THEN
  8  INSERT INTO system.creditrating_audit
  9  VALUES (sys_context('userenv','os_user'),
 10  sys_context('userenv','ip_address'),
 11  :new.customer_id ||' credit rating changed from
 12  '||:old.creditrating||
 13  ' to '||:new.creditrating);
 14  END IF;
 15  END;
 16  /

The first line names the trigger, which is in the SYSTEM schema. Lines 2 and 3 specify
the rule that determines when the trigger will execute: every time the CREDITRATING
column of a row in STORE's CUSTOMERS table is updated. There could be separate
triggers defined to manage inserts and deletes, or actions on other columns. Line 7
supplies a condition: if the CREDITRATING column were not actually changed, then
the trigger will exit without doing anything. But if the CREDITRATING column were
updated, then a row is inserted into another table designed for trapping audit events.
Lines 9 and 10 use the SYS_CONTEXT function to record the user's operating system user
name and the IP address of the terminal in use when the update is executed. Lines 11, 12,
and 13 record the customer number of the row updated, and the old and new values of
the CREDITRATING column. Database auditing as described in the preceding section
could have captured all this information, except for the actual values: which customer
was updated, and what the data change actually was.

TIP Auditing through triggers is a slower process than database auditing,
but it does give you more information and let you implement sophisticated
business rules.
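Note that the trigger relies on an audit table existing in advance. A minimal sketch of such a table, assuming the three columns match the three values inserted by the trigger, might be:
SQL> create table system.creditrating_audit
  2  (os_user varchar2(100), ip_address varchar2(50), details varchar2(4000));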

Fine-Grained Auditing (FGA)
Database auditing can record all statement accesses to a table, whether SELECT or for
DML. But it cannot distinguish between rows, even though it might well be that only
some rows contain sensitive information. Using database auditing, you may have to
sift through a vast number of audit records to find the few that have significance.
Fine-grained auditing, or FGA, can be configured to generate audit records only when
certain rows are accessed, or when certain columns of certain rows are accessed. It can
also run a block of PL/SQL code when the audit condition is breached.
FGA is configured with the package DBMS_FGA. To create an FGA audit policy,
use the ADD_POLICY procedure, which takes these arguments:
Argument           Description
OBJECT_SCHEMA      The name of the user who owns the object to be audited. This
                   defaults to the user who is creating the policy.
OBJECT_NAME        The name of the table to be audited.
POLICY_NAME        Every FGA policy created must be given a unique name.
AUDIT_CONDITION    An expression to determine which rows will generate an audit
                   record. If left NULL, access to any row is audited.
AUDIT_COLUMN       A list of columns to be audited. If left NULL, then access to any
                   column is audited.
HANDLER_SCHEMA     The username that owns the procedure to run when the audit
                   condition is met. The default is the user who is creating the policy.
HANDLER_MODULE     A PL/SQL procedure to execute when the audit condition is met.
ENABLE             By default, this is TRUE: the policy will be active and can be
                   disabled with the DISABLE_POLICY procedure. If FALSE, then the
                   ENABLE_POLICY procedure must be used to activate the policy.


STATEMENT_TYPES    One or more of SELECT, INSERT, UPDATE, or DELETE to define
                   which statement types should be audited. Default is SELECT only.
AUDIT_TRAIL        Controls whether to write out the actual SQL statement and its
                   bind variables to the FGA audit trail. The default is to do so.
AUDIT_COLUMN_OPTS  Determines whether to audit if a statement addresses any or
                   all of the columns listed in the AUDIT_COLUMN argument.
                   Options are DBMS_FGA.ANY_COLUMNS, the default, or
                   DBMS_FGA.ALL_COLUMNS.

The other DBMS_FGA procedures are to enable, disable, or drop FGA policies.
To see the results of fine-grained auditing, query the DBA_FGA_AUDIT_TRAIL view:
SQL> describe dba_fga_audit_trail;
 Name                          Null?    Type
 ----------------------------- -------- ---------------------------
 SESSION_ID                    NOT NULL NUMBER
 TIMESTAMP                              DATE
 DB_USER                                VARCHAR2(30)
 OS_USER                                VARCHAR2(255)
 USERHOST                               VARCHAR2(128)
 CLIENT_ID                              VARCHAR2(64)
 EXT_NAME                               VARCHAR2(4000)
 OBJECT_SCHEMA                          VARCHAR2(30)
 OBJECT_NAME                            VARCHAR2(128)
 POLICY_NAME                            VARCHAR2(30)
 SCN                                    NUMBER
 SQL_TEXT                               NVARCHAR2(2000)
 SQL_BIND                               NVARCHAR2(2000)
 COMMENT$TEXT                           VARCHAR2(4000)
 STATEMENT_TYPE                         VARCHAR2(7)
 EXTENDED_TIMESTAMP                     TIMESTAMP(6) WITH TIME ZONE
 PROXY_SESSIONID                        NUMBER
 GLOBAL_UID                             VARCHAR2(32)
 INSTANCE_NUMBER                        NUMBER
 OS_PROCESS                             VARCHAR2(16)
 TRANSACTIONID                          RAW(8)
 STATEMENTID                            NUMBER
 ENTRYID                                NUMBER

This procedure call will create a policy POL1 that will record all SELECT statements
that read the SALARY column of the HR.EMPLOYEES table, if at least one of the rows
retrieved is in department 80:
SQL> execute dbms_fga.add_policy(-
>    object_schema=>'HR',-
>    object_name=>'EMPLOYEES',-
>    policy_name=>'POL1',-
>    audit_condition=>'department_id=80',-
>    audit_column=>'SALARY');
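Such a policy can later be disabled or removed with the corresponding DBMS_FGA procedures, which take the same schema, table, and policy name arguments:
SQL> execute dbms_fga.disable_policy('HR','EMPLOYEES','POL1');
SQL> execute dbms_fga.drop_policy('HR','EMPLOYEES','POL1');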

In addition to the DBA_AUDIT_TRAIL view, which shows the results of standard
database auditing, and the DBA_FGA_AUDIT_TRAIL view, which shows the results of
fine-grained auditing, the DBA_COMMON_AUDIT_TRAIL view shows audit events
from both types of auditing.

EXAM TIP Which views show the audit trail? DBA_AUDIT_TRAIL is used
for standard database auditing; DBA_FGA_AUDIT_TRAIL is used for fine-grained
auditing; while DBA_COMMON_AUDIT_TRAIL is used for both. To
see the results of auditing with triggers, you must create your own views that
address your own tables.
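For example, fine-grained audit records generated by a policy named POL1 could be inspected with:
SQL> select db_user, timestamp, sql_text
  2  from dba_fga_audit_trail where policy_name='POL1';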
Exercise 6-6: Use Standard Database Auditing In this exercise you will
enable standard database auditing and see the results, using either Database Control
or SQL*Plus. If you use Database Control, be sure to click the SHOW SQL button
whenever possible to see the SQL statements being generated.
1. Connect to your database as user SYSTEM and create a user and a table to be
used for the exercise:
create user auditor identified by oracle;
create table system.audi as select * from all_users;
grant create session, select any table to auditor;
grant select on audi to auditor;

2. Enable auditing of AUDITOR’s use of SELECT ANY PRIVILEGE, and of all
accesses to the table AUDI. With SQL*Plus:
audit select any table by access;
audit all on system.audi by access;

With Database Control, this can be done from the Audit Settings window.
3. Connect to the database as user SYS. This is necessary, as this step involves
restarting the instance. Set the audit trail destination to DB_EXTENDED and enable
auditing of privileged users, and bounce the instance. Using SQL*Plus:
alter system set audit_trail='DB_EXTENDED' scope=spfile;
alter system set audit_sys_operations=true scope=spfile;
startup force;

Using Database Control, a possible navigation path from the database home
page is to take the Server tab, and then the Audit Settings link in the Security
section. Clicking the link labeled Audit Trail in the Configuration section will
take you to a window where you can modify the parameter settings in the
spfile. Alternatively, go directly to the Initialization Parameters window from
the Server tab by taking the Initialization Parameters link in the Database
Configuration section.
Set the two parameters in the spfile, and then from the database home page
shut down and restart the database.
4. While connected as SYS, all statements will be audited. Run this statement:
select count(*) from system.audi;


5. If using Linux or Unix, identify the location of the system audit trail by
querying the parameter AUDIT_FILE_DEST. This will be used for the auditing
of SYS operations, irrespective of the setting for AUDIT_TRAIL. With SQL*Plus:
select value from v$parameter where name='audit_file_dest';

Using an operating system utility, navigate to this directory and open the most
recently created file.
If using Microsoft Windows, open the Application Log in the Event Viewer.
Either way, you will see the SELECT statement that you executed as SYS, with
details of the operating system user and hostname.
6. Connect to the database as AUDITOR, and run these queries:
select count(*) from system.audi;
select count(*) from system.product_user_profile;

7. As user SYSTEM, run this query to see the audit events:
select sql_text,priv_used,action_name from dba_audit_trail
where username='AUDITOR';

Note that the lowest possible privilege is used: access to the AUDI table was
through the SELECT object privilege, not through the much more powerful
(SELECT ANY TABLE) system privilege that was needed to get to
PRODUCT_USER_PROFILE.
8. Tidy up:
drop user auditor;
drop table system.audi;

Two-Minute Drill
Create and Manage Database User Accounts
• Users connect to a user account, which is coupled with a schema.
• All users must be authenticated before they can connect.
• A user must have a quota on a tablespace before they create any objects.
• A user who owns objects cannot be dropped, unless the CASCADE keyword
is used.

Grant and Revoke Privileges
• By default, a user can do nothing, not even log on.
• Direct privileges are always enabled.
• A revocation of a system privilege does not cascade; a revocation of an object
privilege does.

Create and Manage Roles
• Roles are not schema objects.
• Roles can contain both system and object privileges, and other roles.
• A role can be enabled or disabled for a session.

Create and Manage Profiles
• Profiles can manage passwords and resource limits.
• Password limits are always enforced; resource limits are dependent on an
instance parameter.
• Every user is associated with a profile, which by default is the DEFAULT profile.

Database Security and Principle of Least Privilege
• Everything not specifically permitted should be forbidden.
• The database administrator and the system administrator should not be the
same person.
• Privileges granted to the PUBLIC role must be monitored.
• Security-critical instance parameters must be monitored and cannot be
changed without restarting the instance.

Work with Standard Database Auditing
• Database auditing can be oriented toward privileges, commands, or objects.
• Audit records can be directed toward a database table or an operating system file.
• Database audit records are stored in the SYS.AUD$ data dictionary table.
• Fine-grained auditing can be directed toward particular rows and columns.
• Auditing can also be implemented with database triggers.

Self Test
1. How can you permit users to connect without requiring them to authenticate
themselves? (Choose the best answer.)
A. Grant CREATE SESSION to PUBLIC.
B. Create a user such as this, without a password:
CREATE USER ANON IDENTIFIED BY '';

C. Create a profile that disables password authentication and assign it to the
users.
D. You cannot do this because all users must be authenticated.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

2. You create a user with this statement:
create user jon identified by oracle default tablespace example;

What more must be done before he can create a table in the EXAMPLE
tablespace? (Choose all correct answers.)
A. Nothing more is necessary.
B. Give him a quota on EXAMPLE.
C. Grant him the CREATE TABLE privilege.
D. Grant him the CREATE SESSION privilege.
E. Grant him the MANAGE TABLESPACE privilege.
3. If a user owns tables in a tablespace, what will be the effect of attempting to
reduce their quota on the tablespace to zero? (Choose the best answer.)
A. The tables will survive, but INSERTs will fail.
B. The tables will survive but cannot get bigger.
C. The attempt will fail unless the tables are dropped first.
D. The tables will be dropped automatically if the CASCADE keyword is used.
4. If you create a user without specifying a temporary tablespace, what temporary
tablespace will be assigned? (Choose the best answer.)
A. You must specify a temporary tablespace
B. SYSTEM
C. TEMP
D. The database default temporary tablespace
E. The user will not have a temporary tablespace
5. You issue these commands:
a. grant select on hr.regions to jon;
b. grant all on hr.regions to jon;
c. grant dba to jon;
d. grant select on hr.regions to public;

Which grants could be revoked to prevent JON from seeing the contents of
HR.REGIONS? (Choose all correct answers.)
A. a, b, c, and d
B. a, c, and d
C. b, c, and d
D. c and d
E. a, b, and c
6. Which of these statements about system privileges are correct? (Choose all
correct answers.)
A. Only the SYS and SYSTEM users can grant system privileges.
B. If a system privilege is revoked from you, it will also be revoked from all
users to whom you granted it.
C. If a system privilege is revoked from you, it will not be revoked from all
users to whom you granted it.
D. CREATE TABLE is a system privilege.
E. CREATE ANY TABLE is a system privilege.
7. Study this script (line numbers have been added):
1 create role hr_role identified by pass;
2 grant create table to hr_role;
3 grant select table to hr_role;
4 grant connect to hr_role;

Which line will cause an error? (Choose the best answer.)
A. Line 1, because only users, not roles, have passwords.
B. Line 2, because only users, not roles, can create and own tables.
C. Line 3, because SELECT TABLE is not a privilege.
D. Line 4, because a role cannot have a system privilege in addition to table
privileges.
8. Which of these statements is incorrect regarding roles? (Choose the best
answer.)
A. You can grant object privileges and system privileges and roles to a role.
B. A role cannot have the same name as a table.
C. A role cannot have the same name as a user.
D. Roles can be enabled or disabled within a session.
9. You have created a profile with LIMIT SESSIONS_PER_USER 1 and granted
it to a user, but you find that they are still able to log on several times
concurrently. Why could this be? (Choose the best answer.)
A. The user has been granted CREATE SESSION more than once.
B. The user has been granted the DBA role.
C. The RESOURCE_LIMIT parameter has not been set.
D. The RESOURCE_MANAGER_PLAN parameter has not been set.
10. Which of these can be controlled by a password profile? (Choose all correct
answers.)
A. Two or more users choosing the same password
B. Preventing the reuse of a password by the same user
C. Forcing a user to change password
D. Enabling or disabling password file authentication
11. Under what circumstances should you set the REMOTE_LOGIN_PASSWORDFILE
instance parameter to EXCLUSIVE? (Choose two correct answers.)
A. You need a SYSDBA connection when you are logged on to a machine
other than the server.
B. You want to disable operating system authentication.
C. You want to add users to the password file.
D. You want to prevent other users from being added to the password file.
12. If you execute this command as user SYSTEM, it will fail. Why? (Choose the
best answer.)
alter system set audit_sys_operations=false;

A. The parameter can only be changed by the SYS user.
B. The parameter can only be adjusted in NOMOUNT or MOUNT mode, and
SYSTEM can only connect when the database is OPEN.
C. The principle of “separation of duties” means that only the system
administrator, not the database administrator, can change this parameter.
D. The parameter is a static parameter.
13. What conditions must hold before a database session can create a file stored
by the operating system of the server? (Choose three correct answers.)
A. The session must be connected to a database account with execute
permission on the package UTL_FILE.
B. The session must be connected to a database account with execute
permission on the package DBMS_OUTPUT.
C. The parameter UTL_FILE_DIR must have been set.
D. The parameter DB_WRITER_PROCESSES must be set to greater than zero.
E. The parameter DB_CREATE_FILE_DEST must be set.
F. The operating system account under which the Oracle instance is running
must have write permission on the directory that will store the file.
14. If you want a block of PL/SQL code to run whenever certain data is accessed
with a SELECT statement, what auditing technique could you use? (Choose
the best answer.)
A. Database auditing
B. Fine-grained auditing
C. Database triggers
D. You cannot do this
15. What is necessary to audit actions done by a user connected with the SYSDBA
privilege? (Choose the best answer.)
A. Set the AUDIT_SYS_OPERATIONS instance parameter to TRUE.
B. Use database auditing to audit use of the SYSDBA privilege.
C. Set the REMOTE_LOGIN_PASSWORDFILE instance parameter to NONE,
so that SYSDBA connections can only be made with operating system
authentication. Then set the AUDIT_TRAIL parameter to OS, and make
sure that the DBA does not have access to it.
D. This is not possible: any user with SYSDBA privilege can always bypass the
auditing mechanisms.
16. Where can you see the results of standard database auditing? (Choose all
correct answers.)
A. In the DBA_AUDIT_TRAIL view, if the AUDIT_TRAIL parameter is set to DB
B. In the DBA_COMMON_AUDIT_TRAIL view, if the AUDIT_TRAIL
parameter is set to DB
C. In the operating system audit trail, if the AUDIT_TRAIL parameter is set to OS
D. In the operating system audit trail, if the AUDIT_TRAIL parameter is set
to XML
17. You issue this statement:
audit select on hr.emp by access;

but when you issue the command:
select * from hr.emp where employee_id=0;

no audit record is generated. Why might this be? (Choose the best answer.)
A. You are connected as SYS, and the parameter AUDIT_SYS_OPERATIONS is
set to FALSE.
B. The AUDIT_TRAIL parameter is set to NONE.
C. The statement did not access any rows; there is no row with EMPLOYEE_
ID equal to zero.
D. The instance must be restarted before any change to auditing comes into
effect.

Self Test Answers
1. ✓ D. All users must be authenticated.
✗ A, B, C. A is wrong because while this will give all users permission to
connect, they will still have to authenticate. B is wrong because a NULL is
not acceptable as a password. C is wrong because a profile can only manage
passwords, not disable them.
2. ✓ B, C, and D. All these actions are necessary.
✗ A and E. A is wrong because without privileges and quota, JON cannot
connect and create a table. E is wrong because this privilege lets you manage a
tablespace, not create objects in it.
3. ✓ B. It will not be possible to allocate further extents to the tables.
✗ A, C, and D. A is wrong because inserts will succeed as long as there is
space in the extents already allocated. C is wrong because there is no need to
drop the tables. D is wrong because CASCADE cannot be applied to a quota
command.
4. ✓ D. There is always a database-wide default, which (by default) is SYSTEM.
In many cases, it will have been set to TEMP.
✗ A, B, C, and E. A is wrong because there is a default. B is wrong because
the default may not be SYSTEM (though it is by default). C is wrong because
while TEMP is a frequently used default, it may not be. E is wrong because all
user accounts must have a temporary tablespace.
5. ✓ A, B, and C. Any of these will prevent the access.
✗ D and E. D is wrong because the grants in (a) and (b) will remain in
effect. Note that ALL is implemented as a set of grants (or revokes) of each
privilege, so it is not necessary to grant or revoke SELECT as well as ALL. E is
wrong because the grant to PUBLIC in (d) will remain in effect.
6. ✓ C, D, and E. C is correct because the revocation of a system privilege does
not cascade. D and E are correct because any action that updates the data
dictionary is a system privilege.
✗ A and B. A is wrong because system privileges can be granted by any
user who has been granted the privilege WITH ADMIN OPTION. B is wrong
because the revocation of a system privilege does not cascade.
7. ✓ C. There is no such privilege as SELECT TABLE; it is granted implicitly
with CREATE TABLE.
✗ A, B, and D. A is wrong because roles can be password protected. B is
wrong because even though tables must be owned by users, permission to
create them can be granted to a role. D is wrong because a role can have any
combination of object and system privileges.
8. ✓ B. Roles are not schema objects, and so can have the same names as tables.
✗ A, C, and D. A is wrong because roles can have any combination of
system, object, and role privileges. C is wrong because roles cannot have the
same names as users. D is wrong because roles can be enabled and disabled at
any time.
9. ✓ C. The RESOURCE_LIMIT parameter will default to FALSE, and without
this resource limits are not enforced.
✗ A, B, and D. A is wrong because this privilege controls whether users can
connect to the account at all, not how many times. B is wrong because no
role can exempt a user from profile limits. D is wrong because this parameter
controls which Resource Manager plan is active, which is not relevant to
whether resource limits are enforced.
10. ✓ B and C. These are both password limits.
✗ A and D. A is wrong because this cannot be prevented by any means. D is
wrong because profiles only apply to password authentication; password file
authentication is managed separately.
11. ✓ A and C. Password file authentication is necessary if SYSDBA connections
need to be made across a network, and if you want to grant SYSDBA or
SYSOPER to any other database users.
✗ B and D. B is wrong because operating system authentication can never
be disabled. D is wrong because EXCLUSIVE doesn't exclude users; it means
one password file per instance.
12. ✓ D. No matter who you are connected as, the parameter is static and will
therefore require a SCOPE=SPFILE clause when changing it.
✗ A, B, and C. A is wrong because SYSTEM can adjust the parameter (as can
anyone to whom the ALTER SYSTEM privilege has been granted). B is wrong
because the parameter can be changed in any mode, if the SCOPE is SPFILE.
C is wrong because the system administrator cannot change parameters: only
a database administrator can do this.
13. ✓ A, C, and F. The necessary conditions are that the session must be able to
execute the UTL_FILE procedures, and that the UTL_FILE_DIR parameter must
point to a directory on which the Oracle user has the necessary permissions.
✗ B, D, and E. B is wrong because DBMS_OUTPUT is used to write to the
user process, not to the operating system. D is wrong because DB_WRITER_
PROCESSES controls the number of database writers. E is wrong because
DB_CREATE_FILE_DEST sets a default location for datafiles.
14. ✓ B. A fine-grained auditing policy can nominate a PL/SQL function to run
whenever the audit condition is violated.
✗ A, C, and D. A is wrong because database auditing can do no more than
record events. C is wrong because database triggers can only be defined for
DML and not for SELECT statements. D is wrong because FGA can indeed
do this.
15. ✓ A. Setting this parameter is all that is necessary, though on Unix and
Linux you may want to adjust AUDIT_FILE_DEST as well.
✗ B, C, and D. B is wrong because this is a privilege whose use cannot be
audited, because it can apply before the database is open. C is wrong because
the method of gaining SYSDBA access is not relevant to whether it is audited.
D is wrong because SYS cannot bypass this audit technique.
16. ✓ A, B, C, and D. These are all correct.
✗ None.
17. ✓ B. If AUDIT_TRAIL is set to NONE, there will be no standard database
auditing.
✗ A, C, and D. A is wrong because auditing the SYS user is in addition to
standard database auditing. C is wrong because standard database auditing
will record access to the object, regardless of whether any rows were retrieved.
D is wrong because audits of parameter changes require an instance restart,
not audits of commands.

PART II
SQL

■ Chapter 7   DDL and Schema Objects
■ Chapter 8   DML and Concurrency
■ Chapter 9   Retrieving, Restricting, and Sorting Data Using SQL
■ Chapter 10  Single-Row and Conversion Functions
■ Chapter 11  Group Functions
■ Chapter 12  SQL Joins
■ Chapter 13  Subqueries and Set Operators


CHAPTER 7
DDL and Schema Objects

Exam Objectives
In this chapter you will learn to
• 051.10.1 Categorize the Main Database Objects
• 051.10.2 Review the Table Structure
• 051.10.3 List the Data Types That Are Available for Columns
• 051.10.4 Create a Simple Table
• 051.10.5 Explain How Constraints Are Created at the Time of Table Creation
• 051.10.6 Describe How Schema Objects Work
• 052.8.1 Create and Modify Tables
• 052.8.2 Manage Constraints
• 052.8.3 Create Indexes
• 052.8.4 Create and Use Temporary Tables
• 051.11.1 Create Simple and Complex Views
• 051.11.2 Retrieve Data from Views
• 051.11.3 Create, Maintain, and Use Sequences
• 051.11.4 Create and Maintain Indexes
• 051.11.5 Create Private and Public Synonyms

In terms of the sheer number of exam objectives covered in this chapter, it looks
horrific. Do not worry: there is some duplication in the objectives, and many of the
objectives are revisited in other chapters as well.
Understanding the primitive data types and the standard heap-organized table
structure is the first topic. Then the chapter moves on to defining the object types that
are dependent on tables (indexes, constraints, and views), and then sequences and
synonyms. Objects of all these types will be used throughout the remainder of this
book, sometimes with more detail provided.

Categorize the Main Database Objects
There are various object types that can exist within a database, many more with the
current release than with earlier versions. All objects have a name and a type, and
each object is owned by a schema. Various common object types and the rules to
which they must conform will be discussed.

Object Types
This query lists (in a neatly formatted output) the count by object type for the
objects that happen to exist in this particular database:
SQL> select object_type,count(object_type) from dba_objects
group by object_type order by object_type;

OBJECT_TYPE         COUNT(OBJECT_TYPE)  OBJECT_TYPE         COUNT(OBJECT_TYPE)
CLUSTER                             10  PACKAGE                           1240
CONSUMER GROUP                      12  PACKAGE BODY                      1178
CONTEXT                              6  PROCEDURE                          118
DIMENSION                            5  PROGRAM                             17
DIRECTORY                            9  QUEUE                               37
EDITION                              1  RESOURCE PLAN                        7
EVALUATION CONTEXT                  13  RULE                                 1
FUNCTION                           286  RULE SET                            21
INDEX                             3023  SCHEDULE                             2
INDEX PARTITION                    342  SEQUENCE                           204
INDEXTYPE                           12  SYNONYM                          26493
JAVA CLASS                       22018  TABLE                             2464
JAVA DATA                          322  TABLE PARTITION                    199
JAVA RESOURCE                      820  TRIGGER                            413
JOB                                 11  TYPE                              2630
JOB CLASS                           11  TYPE BODY                          231
LIBRARY                            177  UNDEFINED                            6
LOB                                769  VIEW                              4669
LOB PARTITION                        7  WINDOW                               9
MATERIALIZED VIEW                    3  WINDOW GROUP                         4
OPERATOR                            60  XML SCHEMA                          93
42 rows selected.

This query addresses the view DBA_OBJECTS, which has one row for every object
in the database. The numbers are low, because the database is a very small one used
only for teaching. A database used for a business application might have hundreds of
thousands of objects. You may not be able to see the view DBA_OBJECTS, depending
on what permissions your account has. Alternate views are USER_OBJECTS, which
will show all the objects owned by you, and ALL_OBJECTS, which will show all the
objects to which you have been granted access (including your own). All users have
access to these views.
The objects of greatest interest to a SQL programmer are those that contain, or
give access to, data. These include: tables, views, synonyms, indexes, and sequences.
Tables basically store data in rows segmented by columns. A view is a stored
SELECT statement that can be referenced as though it were a table. It is nothing more
than a query, but rather than running the statement itself, the user issues a SELECT
statement against the view instead. In effect, the user is selecting from the result of
another selection. A synonym is an alias for a table (or a view). Users can execute SQL
statements against the synonym, and the database will map them into statements
against the object to which the synonym points. Indexes are a means of improving
access times to rows in tables. If a query requires only one row, then rather than
scanning the entire table to find the row, an index can provide a pointer to the row's
exact location. Of course, the index itself must be searched, but this is often faster
than scanning the table. A sequence is a construct that generates unique numbers.
There are many cases where unique numbers are needed. Sequences issue numbers in
order, on demand: it is absolutely impossible for the same number to be issued twice.
The remaining object types are less commonly relevant to a SQL programmer.
Their use falls more within the realm of PL/SQL programmers and database
administrators.

Naming Schema Objects
A schema object is owned by a user and must conform to certain rules:
• The name may be between 1 and 30 characters long (with the exception of
database link names, which may be up to 128 characters long).
• Reserved words (such as SELECT) cannot be used as object names.
• All names must begin with a letter of the alphabet.
• Object names can only include letters, numbers, the underscore (_), the dollar
sign ($), or the hash symbol (#).
• Lowercase letters will be automatically converted to uppercase.
By enclosing the name within double quotes, all these rules (with the exception
of the length) can be broken, but to get to the object subsequently, it must always be
specified with double quotes, as in the examples in Figure 7-1. Note that the same
restrictions also apply to column names.
EXAM TIP Object names must be no more than 30 characters. The
characters can be letters, digits, underscore, dollar, or hash.

Figure 7-1  Using double quotes to use nonstandard names

Although tools such as SQL*Plus and SQL Developer will automatically convert
lowercase letters to uppercase unless the name is enclosed within double quotes,
remember that object names are always case sensitive. In this example, the two tables
are completely different:
SQL> create table lower(c1 date);
Table created.
SQL> create table "lower"(col1 varchar2(2));
Table created.
SQL> select table_name from dba_tables where lower(table_name) = 'lower';
TABLE_NAME
------------------------------
lower
LOWER

TIP While it is possible to use lowercase names and nonstandard characters
(even spaces), it is considered bad practice because of the confusion it can cause.

Object Namespaces
It is often said that the unique identifier for an object is the object name, prefixed
with the schema name. While this is generally true, for a full understanding of naming
it is necessary to introduce the concept of a namespace. A namespace defines a group
of object types, within which all names must be uniquely identified—by schema and
name. Objects in different namespaces can share the same name.
These object types all share the same namespace:
Tables

Views

Sequences

Private synonyms

Stand-alone procedures

Stand-alone stored functions

Packages

Materialized views

User-defined types

Thus it is impossible to create a view with the same name as a table—at least, it is
impossible if they are in the same schema. And once created, SQL statements can address
a view as though it were a table. The fact that tables, views, and private synonyms share
the same namespace means that you can set up several layers of abstraction between
what the users see and the actual tables, which can be invaluable for both security and
for simplifying application development.
These object types each have their own namespace:
Indexes    Constraints    Clusters    Database triggers
Private database links    Dimensions

Thus it is possible (though perhaps not a very good idea) for an index to have the
same name as a table, even within the same schema.
EXAM TIP Within a schema, tables, views, and synonyms cannot have the
same names.
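
The namespace rules can be demonstrated with a short session (the object names
here are invented for illustration):

```sql
create table t1 (c1 number);

-- Tables and views share a namespace, so this fails with
-- ORA-00955: name is already used by an existing object.
create view t1 as select * from dual;

-- Indexes have their own namespace, so an index may legally
-- share its table's name (though this invites confusion).
create index t1 on t1(c1);
```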
Exercise 7-1: Determine What Objects Are Accessible to Your
Session In this exercise, query various data dictionary views as user HR to determine
what objects are in the HR schema and what objects in other schemas HR has access to.
1. Connect to the database with SQL*Plus or SQL Developer as user HR.
2. Determine how many objects of each type are in the HR schema:
select object_type,count(*) from user_objects group by object_type;

The USER_OBJECTS view lists all objects owned by the schema to which the
current session is connected, in this case HR.
3. Determine how many objects in total HR has permissions on:
select object_type,count(*) from all_objects group by object_type;

The ALL_OBJECTS view lists all objects to which the user has some sort of
access.
4. Determine who owns the objects HR can see:
select distinct owner from all_objects;

List the Data Types That Are Available
for Columns
When creating tables, each column must be assigned a data type, which determines
the nature of the values that can be inserted into the column. These data types are also
used to specify the nature of the arguments for PL/SQL procedures and functions.
When selecting a data type, you must consider the data that you need to store and the
operations you will want to perform upon it. Space is also a consideration: some data
types are fixed length, taking up the same number of bytes no matter what data is

actually in it; others are variable. If a column is not populated, then Oracle will not
give it any space at all. If you later update the row to populate the column, then the
row will get bigger, no matter whether the data type is fixed length or variable.
The following are the data types for alphanumeric data:
VARCHAR2: Variable-length character data, from 1 byte to 4KB. The data is stored
in the database character set.
NVARCHAR2: Like VARCHAR2, but the data is stored in the alternative national
language character set, one of the permitted Unicode character sets.
CHAR: Fixed-length character data, from 1 byte to 2KB, in the database character
set. If the data is not the length of the column, then it will be padded with spaces.

TIP For ISO/ANSI compliance, you can specify a VARCHAR data type, but any
columns of this type will be automatically converted to VARCHAR2.
The following are the data types for numeric data, all variable length:
NUMBER: Numeric data, for which you can specify precision and scale. The
precision can range from 1 to 38; the scale can range from –84 to 127.
FLOAT: An ANSI data type: a floating-point number with a precision of 126
binary digits (about 38 decimal). Oracle also provides BINARY_FLOAT and
BINARY_DOUBLE as alternatives.
INTEGER: Equivalent to NUMBER, with scale zero.

The following are the data types for date and time data, all fixed length:
DATE: Either length zero, if the column is empty, or 7 bytes. All DATE data
includes century, year, month, day, hour, minute, and second. The valid range is
from January 1, 4712 BC to December 31, 9999 AD.
TIMESTAMP: Length zero if the column is empty, or up to 11 bytes, depending
on the precision specified. Similar to DATE, but with precision of up to 9 decimal
places for the seconds, 6 places by default.
TIMESTAMP WITH TIMEZONE: Like TIMESTAMP, but the data is stored with a
record kept of the time zone to which it refers. The length may be up to 13 bytes,
depending on precision. This data type lets Oracle determine the difference
between two times by normalizing them to UTC, even if the times are for
different time zones.
TIMESTAMP WITH LOCAL TIMEZONE: Like TIMESTAMP, but the data is
normalized to the database time zone on saving. When retrieved, it is normalized
to the time zone of the user process selecting it.
INTERVAL YEAR TO MONTH: Used for recording a period in years and months
between two DATEs or TIMESTAMPs.
INTERVAL DAY TO SECOND: Used for recording a period in days and seconds
between two DATEs or TIMESTAMPs.

The following are the large object data types:
CLOB: Character data stored in the database character set, size effectively
unlimited: 4GB multiplied by the database block size.
NCLOB: Like CLOB, but the data is stored in the alternative national language
character set, one of the permitted Unicode character sets.
BLOB: Like CLOB, but binary data that will not undergo character set conversion
by Oracle Net.
BFILE: A locator pointing to a file stored on the operating system of the database
server. The size of the files is limited to 4GB.
LONG: Character data in the database character set, up to 2GB. All the
functionality of LONG (and more) is provided by CLOB; LONGs should not be
used in a modern database, and if your database has any columns of this type,
they should be converted to CLOB. There can be only one LONG column in a table.
LONG RAW: Like LONG, but binary data that will not be converted by Oracle Net.
Any LONG RAW columns should be converted to BLOBs.

The following are the RAW and ROWID data types:
RAW: Variable-length binary data, from 1 byte to 4KB. Unlike the CHAR and
VARCHAR2 data types, RAW data is not converted by Oracle Net from the
database's character set to the user process's character set on SELECT or the
other way on INSERT.
ROWID: A value coded in base 64 that is the pointer to the location of a row in a
table. Within it is the exact physical address. ROWID is an Oracle proprietary
data type, not visible unless specifically selected.

EXAM TIP All examinees will be expected to know about these data types:
VARCHAR2, CHAR, NUMBER, DATE, TIMESTAMP, INTERVAL, RAW, LONG,
LONG RAW, CLOB, BLOB, BFILE, and ROWID. Detailed knowledge will also
be needed for VARCHAR2, NUMBER, and DATE.
The VARCHAR2 data type must be qualified with a number indicating the maximum
length of the column. If a value is inserted into the column that is less than this, it is
not a problem: the value will only take up as much space as it needs. If the value is
longer than this maximum, the INSERT will fail with an error. If the value is updated
to a longer or shorter value, the length of the column (and therefore the row itself)
will change accordingly. If it is not entered at all or is updated to NULL, then it will
take up no space at all.
The NUMBER data type may optionally be qualified with a precision and a scale.
The precision sets the maximum number of digits in the number, and the scale is how
many of those digits are to the right of the decimal point. If the scale is negative, this has
the effect of replacing the last digits of any number inserted with zeros, which do not
count toward the number of digits specified for the precision. If the number of digits
exceeds the precision, there will be an error; if it is within the precision but outside the
scale, the number will be rounded (up or down) to the nearest value within the scale.
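
The rounding and precision rules can be verified with a quick test (the table and
column names are invented for illustration):

```sql
create table num_test (n number(5,2));

-- Within precision, outside scale: the value is rounded to 123.46.
insert into num_test values (123.456);

-- Exceeds precision (at most three digits left of the decimal point):
-- fails with ORA-01438: value larger than specified precision.
insert into num_test values (1234.5);
```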

The DATE data type always includes century, year, month, day, hour, minute, and
second—even if all these elements are not specified at insert time. Year, month, and
day must be specified; if the hours, minutes, and seconds are omitted, they will
default to midnight.
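
For example, inserting a date without a time component and then formatting the
stored value shows the midnight default (the table name is invented):

```sql
create table date_test (d date);
insert into date_test values (to_date('2009-11-19','yyyy-mm-dd'));

select to_char(d,'yyyy-mm-dd hh24:mi:ss') from date_test;
-- Returns 2009-11-19 00:00:00: the omitted time defaults to midnight.
```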
Exercise 7-2: Investigate the Data Types in the HR Schema In this
exercise, find out what data types are used in the tables in the HR schema, using two
techniques.
1. Connect to the database as user HR with SQL*Plus or SQL Developer.
2. Use the DESCRIBE command to show the data types in some tables:
describe employees;
describe departments;

3. Use a query against a data dictionary view to show what columns make up the
EMPLOYEES table, as the DESCRIBE command would:
select column_name,data_type,nullable,data_length,data_precision,data_scale
from user_tab_columns where table_name='EMPLOYEES';

The view USER_TAB_COLUMNS shows the detail of every column in every
table in the current user’s schema.

Create a Simple Table
Tables can be stored in the database in several ways. The simplest is the heap table.
A heap table contains variable-length rows in random order. There may be some
correlation between the order in which rows are entered and the order in which they
are stored, but this is not guaranteed and should not be relied upon. More advanced
table structures, such as the following, may impose ordering and grouping on the
rows or force a random distribution:
Index organized tables: Store rows in the order of an index key.
Index clusters: Can denormalize tables in parent-child relationships so that
related rows from different tables are stored together.
Hash clusters: Force a random distribution of rows, which will break down any
ordering based on the entry sequence.
Partitioned tables: Store rows in separate physical structures, the partitions,
allocating rows according to the value of a column.

Using the more advanced table structures has no effect whatsoever on SQL. Every
SQL statement executed against tables defined with these options will return exactly
the same results as though the tables were standard heap tables, so use of these
features will not affect your code. But while their use is transparent to programmers,
they can provide enormous benefits in performance.
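
As an illustration, an index organized table is created with a variation on the
CREATE TABLE syntax, and thereafter is queried exactly as a heap table would be
(the table name is invented; index organization requires a primary key):

```sql
create table iot_demo (
  id   number primary key,
  name varchar2(20))
organization index;

-- SQL against the table is unchanged: the storage structure is transparent.
select name from iot_demo where id = 1;
```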

Creating Tables with Column Specifications
To create a standard heap table, use this syntax:
CREATE TABLE [schema.]table [ORGANIZATION HEAP]
(column datatype [DEFAULT expression]
[,column datatype [DEFAULT expression]]);

As a minimum, specify the table name (it will be created in your own schema, if
you don't specify someone else's) and at least one column with a data type. There are
very few developers who ever specify ORGANIZATION HEAP, as this is the default
and is industry-standard SQL. The DEFAULT keyword in a column definition lets you
provide an expression that will generate a value for the column when a row is inserted
if a value is not provided by the INSERT statement.
Consider this statement:
CREATE TABLE SCOTT.EMP
(EMPNO NUMBER(4),
ENAME VARCHAR2(10),
HIREDATE DATE DEFAULT TRUNC(SYSDATE),
SAL NUMBER(7,2),
COMM NUMBER(7,2) DEFAULT 0.03);

This will create a table called EMP in the SCOTT schema. Either user SCOTT has
to issue the statement (in which case nominating the schema would not actually be
necessary), or another user could issue it if they have been granted permission to
create tables in SCOTT's schema. Taking the columns one by one:
• EMPNO can be four digits long, with no decimal places. If any decimals are
included in an INSERT statement, they will be rounded (up or down) to the
nearest integer.
• ENAME can store up to ten characters.
• HIREDATE will accept any date, optionally with the time, but if a value is not
provided, today's date will be entered as at midnight.
• SAL, intended for the employee's salary, will accept numeric values with up to
seven digits. If any digits over seven are to the right of the decimal point, they
will be rounded off.
• COMM (for commission percentage) has a default value of 0.03, which will
be entered if the INSERT statement does not include a value for this column.
Following creation of the table, these statements insert a row and select the result:
SQL> insert into scott.emp(empno,ename,sal) values(1000,'John',1000.789);
1 row created.
SQL> select * from emp;
     EMPNO ENAME      HIREDATE         SAL       COMM
---------- ---------- --------- ---------- ----------
      1000 John       19-NOV-07    1000.79        .03
Note that values for the columns not mentioned in the INSERT statement have
been generated by the DEFAULT clauses. Had those clauses not been defined in the
table definition, the column values would have been NULL. Also note the rounding
of the value provided for SAL.
TIP The DEFAULT clause can be useful, but it is of limited functionality.You
cannot use a subquery to generate the default value: you can only specify
literal values or functions.
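For example, the first of these statements succeeds, while the second (which attempts to use a subquery as the default) fails with a syntax error. Both sketches use invented table names:

create table t1 (created date default sysdate);
create table t2 (x number default (select max(x) from t1));

Only literals and functions such as SYSDATE, USER, or TRUNC(SYSDATE) are legal in a DEFAULT clause.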

Creating Tables from Subqueries
Rather than creating a table from nothing and then inserting rows into it (as in the
preceding section), tables can be created from other tables by using a subquery. This
technique lets you create the table definition and populate the table with rows with
just one statement. Any query at all can be used as the source of both the table
structure and the rows. The syntax is as follows:
CREATE TABLE [schema.]table AS subquery;

All queries return a two-dimensional set of rows; this result is stored as the new
table. A simple example of creating a table with a subquery is
create table employees_copy as select * from employees;

This statement will create a table EMPLOYEES_COPY, which is an exact copy of
the EMPLOYEES table, identical in both definition and the rows it contains. Any not-null and check constraints on the columns will also be applied to the new table, but
any primary key, unique, or foreign key constraints will not be. (Constraints are
discussed in a later section.) This is because these three types of constraints require
indexes that might not be desired.
The following is a more complex example:
create table emp_dept as select
last_name ename,department_name dname,round(sysdate - hire_date) service
from employees natural join departments order by dname,ename;

The rows in the new table will be the result of joining the two source tables, with
two of the selected columns having their names changed. The new SERVICE column
will be populated with the result of the arithmetic that computes the number of days
since the employee was hired. The rows will be inserted in the order specified. This
ordering will not be maintained by subsequent DML, but assuming the standard HR
schema data, the new table will look like this:
SQL> select * from emp_dept where rownum < 5;
ENAME           DNAME              SERVICE
--------------- --------------- ----------
Gietz           Accounting            4914
De Haan         Executive             5424
Kochhar         Executive             6634
Chen            Finance               3705
4 rows selected.

The subquery can of course include a WHERE clause to restrict the rows inserted
into the new table. To create a table with no rows, use a WHERE clause that will
exclude all rows:
create table no_emps as select * from employees where 1=2;

The WHERE clause 1=2 can never return TRUE, so the table structure will be
created ready for use, but no rows will be inserted at creation time.

Altering Table Definitions after Creation
There are many alterations that can be made to a table after creation. Those that affect
the physical storage fall into the domain of the database administrator, but many
changes are purely logical and will be carried out by the SQL developers. The
following are examples (for the most part self-explanatory):
• Adding columns:
alter table emp add (job_id number);
• Modifying columns:
alter table emp modify (commission_pct number(4,2) default 0.05);
• Dropping columns:
alter table emp drop column commission_pct;
• Marking columns as unused:
alter table emp set unused column job_id;
• Renaming columns:
alter table emp rename column hire_date to recruited;
• Marking the table as read-only:
alter table emp read only;

All of these changes are DDL commands with a built-in COMMIT. They are therefore
nonreversible and will fail if there is an active transaction against the table. They are also
virtually instantaneous, with the exception of dropping a column. Dropping a column
can be a time-consuming exercise, because as each column is dropped, every row must be
restructured to remove the column’s data. The SET UNUSED command, which makes
columns nonexistent as far as SQL is concerned, is often a better alternative, followed
when convenient by
ALTER TABLE tablename DROP UNUSED COLUMNS;

which will drop all the unused columns in one pass through the table.
Marking a table as read-only will cause errors for any attempted DML commands.
But the table can still be dropped. This can be disconcerting but is perfectly logical when
you think it through. A DROP command doesn’t actually affect the table: it affects the
tables in the data dictionary that define the table, and these are not read-only.

Dropping and Truncating Tables
The TRUNCATE TABLE command (discussed in detail in Chapter 8) has the effect
of removing every row from a table, while leaving the table definition intact. DROP
TABLE is more drastic in that the table definition is removed as well. The syntax is as
follows:
DROP TABLE [schema.]tablename ;

If schema is not specified, then the table called tablename in your currently logged-on
schema will be dropped.
As with a TRUNCATE, SQL will not produce a warning before the table is dropped,
and furthermore, as with any DDL command, it includes a COMMIT. A DROP is
therefore generally nonreversible. Under certain conditions, a DROP may be reversed
using flashback and other recovery techniques (discussed in Chapter 19). But there
are some restrictions: if any session (even your own) has a transaction in progress that
includes a row in the table, then the DROP will fail, and it is also impossible to drop
a table that is referred to in a foreign key constraint defined for another table. This
table (or the constraint) must be dropped first.
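As a preview of the recovery techniques in Chapter 19, a dropped table can often be retrieved from the recycle bin, assuming the recycle bin has not been disabled or purged:

drop table emp_dept;
flashback table emp_dept to before drop;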
Exercise 7-3: Create Tables This exercise marks the formal beginning of the
case study. By now, you should have a database installed on one of your machines,
and if you completed the exercises in Chapter 5, you should have a tablespace called
STOREDATA; otherwise, create it now.
In this exercise, use SQL Developer to create a heap table, insert some rows with a
subquery, and modify the table. Do some more modifications with SQL*Plus, and
then drop the table.
1. Connect to the database as user SYSTEM and create the WEBSTORE user with
default tablespace STOREDATA and temporary tablespace TEMP. Grant the
WEBSTORE user unlimited quota on the STOREDATA tablespace as well as
the privileges to create a session and create a table. The WEBSTORE schema
will be used in subsequent exercises.
2. Using SQL Developer, connect as the WEBSTORE user. Right-click the Tables
branch of the navigation tree, and click NEW TABLE.
3. Name the new table CUSTOMERS, and use the ADD COLUMN button to set it up
as in the following illustration:

4. Click the DDL tab to see if the statement has been constructed. It should look
like this:
CREATE TABLE CUSTOMERS
(
CUSTOMER_ID NUMBER(8, 0) NOT NULL,
JOIN_DATE DATE NOT NULL,
CUSTOMER_STATUS VARCHAR2(8) NOT NULL,
CUSTOMER_NAME VARCHAR2(20) NOT NULL,
CREDITRATING VARCHAR2(10)
)
;

Return to the Table tab (as in the preceding illustration) and click OK to create
the table.
5. Run these statements:
insert into customers(customer_id, customer_status, customer_name, creditrating)
values (1, 'NEW', 'Ameetha', 'Platinum');
insert into customers(customer_id, customer_status, customer_name, creditrating)
values (2, 'NEW', 'Coda', 'Bronze');

and commit the insert:
commit;

6. Right-click the CUSTOMERS table in the SQL Developer navigator; click
COLUMN and ADD.

7. Define a new column EMAIL, type VARCHAR2(50), as in the following
illustration; and click APPLY to create the column.

8. Connect to the database as WEBSTORE with SQL*Plus.
9. Define a default for the JOIN_DATE column in the CUSTOMERS table:
alter table customers modify (join_date default sysdate);

10. Insert a row without specifying a value for JOIN_DATE and check that the new
row does have a JOIN_DATE date but that the other rows do not:
insert into customers(customer_id, customer_status, customer_name,
creditrating) values (3, 'NEW', 'Sid', 'Gold');
select join_date, count(1) from customers group by join_date;

11. Create three additional tables as in the following illustration:

12. Add a column called QUANTITY with datatype NUMBER to the ORDER_
ITEMS table:
alter table order_items add (quantity number);

Create and Use Temporary Tables
A temporary table has a definition that is visible to all sessions, but the rows within it
are private to the session that inserted them. Programmers can use them as a private
storage area for manipulating large amounts of data. The syntax is
CREATE GLOBAL TEMPORARY TABLE temp_tab_name
(column datatype [,column datatype...] )
[ON COMMIT {DELETE | PRESERVE} ROWS] ;

The column definition is the same as for a regular table and can indeed be supplied
from a subquery. The optional clause at the end determines the lifetime of any rows
inserted. The default is to remove the rows the moment the transaction that inserted
them completes, but this behavior can be changed to preserve them until the session
that inserted them ends. Whichever option is chosen, the data will be private to each
session: different users can insert their own rows into their own copy of the table, and
they will never see each other’s rows.
In many ways, a temporary table is similar to a permanent table. You can execute
any DML or SELECT command against it. It can have indexes, constraints, and triggers
defined. It can be referenced in views and synonyms, or joined to other tables. The
difference is that the data is transient and private to the session, and that all SQL
commands against it will be far faster than commands against permanent tables.
The first reason for the speed is that temporary tables are not segments in permanent
tablespaces. Ideally, they exist only in the PGAs of the sessions that are using them, so
there is no disk activity or even database buffer cache activity involved. If the PGA cannot
grow sufficiently to store the temporary table (which will be the case if millions of rows
are being inserted—not unusual in complex report generation), then the table gets
written out to a temporary segment in the user’s temporary tablespace. I/O on temporary
tablespaces is much faster than I/O on permanent tablespaces, because it does not go
via the database buffer cache; it is all performed directly on disk by the session’s server
process.
A second reason for speed is that DML against temporary tables does not generate
redo. Since the data persists only for the duration of a session (perhaps only for the
duration of a transaction), there is no purpose in generating redo. This gives the dual
benefit of fast DML for the session working on the table and reduced strain on the
redo generation system, which can be a significant point of contention on busy
multiuser databases.
Figure 7-2 shows the creation and use of a temporary table with SQL*Plus. The
Database Control Table Creation Wizard can also create temporary tables.

Figure 7-2  Creation and use of a temporary table

Exercise 7-4: Create and Use Temporary Tables In this exercise, create a
temporary table to be used for reporting on current employees. Demonstrate, by using
two SQL*Plus sessions, that the data is private to each session.
1. Connect to your database with SQL*Plus as user HR.
2. Create a temporary table as follows:
create global temporary table tmp_emps on commit preserve rows
as select * from employees where 1=2;

3. Insert some rows and commit them:
insert into tmp_emps select * from employees where department_id=30;
commit;

4. Start a second SQL*Plus session as HR.
5. In the second session, confirm that the first insert is not visible, even though it
was committed in the first session, and insert some different rows:
select count(*) from tmp_emps;
insert into tmp_emps select * from employees where department_id=50;
commit;

6. In the first session, truncate the table:
truncate table tmp_emps;

7. In the second session, confirm that there are still rows in that session’s copy of
the table:
select count(*) from tmp_emps;

8. In the second session, demonstrate that terminating the session does clear the
rows. This will require disconnecting and connecting again:
disconnect;
connect hr/hr
select count(*) from tmp_emps;

9. Tidy up the environment by dropping the tables in both sessions.

Indexes
Indexes have two functions: to enforce primary key and unique constraints, and to
improve performance. An application’s indexing strategy is critical for performance.
There is no clear demarcation of whose domain index management lies within. When
the business analysts specify business rules that will be implemented as constraints,
they are in effect specifying indexes. The database administrators will be monitoring
the execution of code running in the database, and will make recommendations for
indexes. The developer, who should have the best idea of what is going on in the code
and the nature of the data, will also be involved in developing the indexing strategy.

Why Indexes Are Needed
Indexes are part of the constraint mechanism. If a column (or a group of columns)
is marked as a table’s primary key, then every time a row is inserted into the table,
Oracle must check that a row with the same value in the primary key does not already
exist. If the table has no index on the column(s), the only way to do this would be to
scan right through the table, checking every row. While this might be acceptable for a
table of only a few rows, for a table with thousands or millions (or billions) of rows
this is not feasible. An index gives (near) immediate access to key values, so the check
for existence can be made virtually instantaneously. When a primary key constraint is
defined, Oracle will automatically create an index on the primary key column(s), if
one does not exist already.
A unique constraint also requires an index. It differs from a primary key constraint
in that the column(s) of the unique constraint can be left null. This does not affect
the creation and use of the index. Foreign key constraints are enforced by indexes, but
the index must exist on the parent table, not necessarily on the table for which the
constraint is defined. A foreign key constraint relates a column in the child table to
the primary key or to a unique key in the parent table. When a row is inserted in the
child table, Oracle will do a lookup on the index on the parent table to confirm that
there is a matching row before permitting the insert. However, you should always
create indexes on the foreign key columns within the child table for performance
reasons: a DELETE on the parent table will be much faster if Oracle can use an index
to determine whether there are any rows in the child table referencing the row that is
being deleted.
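For example, if the EMP table’s DEPTNO column is a foreign key referencing DEPT, an index created as follows (the index name is invented for illustration) lets Oracle check for child rows without scanning EMP whenever a DEPT row is deleted:

create index emp_deptno_i on emp(deptno);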
Indexes are critical for performance. When executing any SQL statement that
includes a WHERE clause, Oracle has to identify which rows of the table are to be
selected or modified. If there is no index on the column(s) referenced in the WHERE
clause, the only way to do this is with a full table scan. A full table scan reads every row

of the table, in order to find the relevant rows. If the table has billions of rows, this
can take hours. If there is an index on the relevant column(s), Oracle can search the
index instead. An index is a sorted list of key values, structured in a manner that
makes the search very efficient. With each key value is a pointer to the row in the
table. Locating relevant rows via an index lookup is far faster than using a full table
scan, if the table is over a certain size and the proportion of the rows to be retrieved
is below a certain value. For small tables, or for a WHERE clause that will retrieve a
large fraction of the table’s rows, a full table scan will be quicker: you can (usually)
trust Oracle to make the correct decision regarding whether to use an index, based on
statistical information the database gathers about the tables and the rows within them.
A second circumstance where indexes can be used is for sorting. A SELECT
statement that includes the ORDER BY, GROUP BY, or UNION keyword (and a few
others) must sort the rows into order—unless there is an index, which can return the
rows in the correct order without needing to sort them first.
A third circumstance when indexes can improve performance is when tables are
joined, but again Oracle has a choice: depending on the size of the tables and the
memory resources available, it may be quicker to scan tables into memory and join
them there, rather than use indexes. The nested loop join technique passes through one
table using an index on the other table to locate the matching rows; this is usually a
disk-intensive operation. A hash join technique reads the entire table into memory,
converts it into a hash table, and uses a hashing algorithm to locate matching rows;
this is more memory and CPU intensive. A sort merge join sorts the tables on the join
column and then merges them together; this is often a compromise among disk,
memory, and CPU resources. If there are no indexes, then Oracle is severely limited
in the join techniques available.
TIP Indexes assist SELECT statements, and also any UPDATE, DELETE, or
MERGE statements that use a WHERE clause—but they will slow down
INSERT statements.

Types of Index
Oracle supports several types of index, which have several variations. The two index
types of concern here are the B*Tree index, which is the default index type, and the
bitmap index. As a general rule, indexes will improve performance for data retrieval
but reduce performance for DML operations. This is because indexes must be
maintained. Every time a row is inserted into a table, a new key must be inserted into
every index on the table, which places an additional strain on the database. For this
reason, on transaction processing systems it is customary to keep the number of
indexes as low as possible (perhaps no more than those needed for the constraints)
and on query-intensive systems such as a data warehouse to create as many as might
be helpful.

B*Tree Indexes
A B*Tree index (the “B” stands for “balanced”) is a tree structure. The root node of the
tree points to many nodes at the second level, which can point to many nodes at the

third level, and so on. The necessary depth of the tree will be largely determined by
the number of rows in the table and the length of the index key values.
TIP The B*Tree structure is very efficient. If the depth is greater than three
or four, then either the index keys are very long or the table has billions of
rows. If neither of these is the case, then the index is in need of a rebuild.

The leaf nodes of the index tree store the rows’ keys, in order, each with a pointer
that identifies the physical location of the row. So to retrieve a row with an index
lookup, if the WHERE clause is using an equality predicate on the indexed column,
Oracle navigates down the tree to the leaf node containing the desired key value
and then uses the pointer to find the row location. If the WHERE clause is using a
nonequality predicate (such as LIKE, BETWEEN, >, or <), then Oracle can navigate
down the tree to find the first matching key value and then navigate across the leaf
nodes of the index to find all the other matching values. As it does so, it will retrieve
the rows from the table, in order.
The pointer to the row is the rowid. The rowid is an Oracle-proprietary
pseudocolumn, which every row in every table has. Encoded within it is the physical
address of the row. As rowids are not part of the SQL standard, they are never visible
to a normal SQL statement, but you can see them and use them if you want. This is
demonstrated in Figure 7-3.
The rowid for each row is globally unique. Every row in every table in the entire
database will have a different rowid. The rowid encoding provides the physical
address of the row, from which Oracle can calculate which operating system file
contains the row and whereabouts in the file it is, and go straight to it.

Figure 7-3  Displaying and using rowids
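For example, rowids can be selected like any other column and then used in a WHERE clause for the fastest possible access to a single row. The rowid literal shown here is invented; every database generates its own values:

select rowid, ename from scott.emp;
select ename from scott.emp where rowid = 'AAAM+yAAEAAAAAGAAA';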

B*Tree indexes are a very efficient way of retrieving rows if the number of rows
needed is low in proportion to the total number of rows in the table, and if the table
is large. Consider this statement:
select count(*) from employees where last_name between 'A%' and 'Z%';

This WHERE clause is sufficiently broad that it will include every row in the table.
It would be much slower to search the index to find the rowids and then use the
rowids to find the rows than to scan the whole table. After all, it is the whole table
that is needed. Another example would be if the table were small enough that one
disk read could scan it in its entirety; there would be no point in reading an index first.
It is often said that if the query is going to retrieve more than two to four percent
of the rows, then a full table scan will be quicker. A special case is if the value specified
in the WHERE clause is NULL. NULLs do not go into B*Tree indexes, so a query such as
select * from employees where last_name is null;

will always result in a full table scan. There is little value in creating a B*Tree index on
a column with few unique values, as it will not be sufficiently selective: the proportion
of the table that will be retrieved for each distinct key value will be too high. In general,
B*Tree indexes should be used if
• The cardinality (the number of distinct values) in the column is high, and
• The number of rows in the table is high, and
• The column is used in WHERE clauses or JOIN conditions.

Bitmap Indexes
In many business applications, the nature of the data and the queries is such that
B*Tree indexes are not of much use. Consider the table of sales for a chain of
supermarkets, storing one year of historical data, which can be analyzed in several
dimensions. Figure 7-4 shows a simple entity-relationship diagram, with just four
of the dimensions.
Figure 7-4  A fact table with four dimensions: SALES, related to SHOP, CHANNEL, DATE, and PRODUCT

Chapter 7: DDL and Schema Objects

279
The cardinality of each dimension could be quite low. Make these assumptions:
SHOP       There are four shops.
PRODUCT    There are two hundred products.
DATE       There are 365 days.
CHANNEL    There are two channels (walk-in and delivery).

Assuming an even distribution of data, only two of the dimensions (PRODUCT
and DATE) have a selectivity that is better than the commonly used criterion of
2 percent to 4 percent, which makes an index worthwhile. But if queries use range
predicates (such as counting sales in a month, or of a class of ten or more products),
then not even these will qualify. This is a simple fact: B*Tree indexes are often useless
in a data warehouse environment. A typical query might want to compare sales
between two shops by walk-in customers of a certain class of product in a month.
There could well be B*Tree indexes on the relevant columns, but Oracle would ignore
them as being insufficiently selective. This is what bitmap indexes are designed for.
A bitmap index stores the rowids associated with each key value as a bitmap. The
bitmaps for the CHANNEL index might look like this:
WALK-IN    11010111000101011100010101.....
DELIVERY   00101000111010100010100010.....

This indicates that the first two rows were sales to walk-in customers, the third sale
was a delivery, the fourth sale was a walk-in, and so on.
The bitmaps for the SHOP index might be
LONDON     11001001001001101000010000.....
OXFORD     00100010010000010001001000.....
READING    00010000000100000100100010.....
GLASGOW    00000100100010000010000101.....

This indicates that the first two sales were in the London shop, the third was in
Oxford, the fourth in Reading, and so on. Now if this query is received:
select count(*) from sales where channel='WALK-IN' and shop='OXFORD';

Oracle can retrieve the two relevant bitmaps and combine them with a Boolean
AND operation:
WALK-IN            11010111000101011100010101.....
OXFORD             00100010010000010001001000.....
WALK-IN & OXFORD   00000010000000010000001000.....

The result of the bitwise-AND operation shows that only the seventh and sixteenth
rows qualify for selection. This merging of bitmaps is very fast and can be used to
implement complex Boolean operations with many conditions on many columns
using any combination of AND, OR, and NOT operators. A particular advantage that
bitmap indexes have over B*Tree indexes is that they include NULLs. As far as the
bitmap index is concerned, NULL is just another distinct value, which will have its
own bitmap.
In general, bitmap indexes should be used if
• The cardinality (the number of distinct values) in the column is low, and
• The number of rows in the table is high, and
• The column is used in Boolean algebra operations.
TIP If you knew in advance what the queries would be, then you could build
B*Tree indexes that would work, such as a composite index on SHOP and
CHANNEL. But usually you don’t know, which is where the dynamic merging
of bitmaps gives great flexibility.

Index Type Options
There are six commonly used options that can be applied when creating indexes:
• Unique or nonunique
• Reverse key
• Compressed
• Composite
• Function based
• Ascending or descending
All six of these variations apply to B*Tree indexes, but only the last three can be applied
to bitmap indexes.
A unique index will not permit duplicate values. Nonunique is the default. The
unique attribute of the index operates independently of a unique or primary key
constraint: the presence of a unique index will not permit insertion of a duplicate
value even if there is no such constraint defined. A unique or primary key constraint
can use a nonunique index; it will just happen to have no duplicate values. This is in
fact a requirement for a constraint that is deferrable, as there may be a period (before
transactions are committed) when duplicate values do exist. Constraints are discussed
in the next section.
A reverse key index is built on a version of the key column with its bytes reversed:
rather than indexing “John”, it will index “nhoJ”. When a SELECT is done, Oracle will
automatically reverse the value of the search string. This is a powerful technique for
avoiding contention in multiuser systems. For instance, if many users are concurrently
inserting rows with primary keys based on a sequentially increasing number, all their
index inserts will concentrate on the high end of the index. By reversing the keys, the
consecutive index key inserts will tend to be spread over the whole range of the index.
Even though “John” and “Jules” are close together, “nhoJ” and “seluJ” will be quite
widely separated.
A compressed index stores repeated key values only once. The default is not to
compress, meaning that if a key value is not unique, it will be stored once for each
occurrence, each having a single rowid pointer. A compressed index will store the key
once, followed by a string of all the matching rowids.
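The reverse key and compressed attributes are specified with keywords appended to the CREATE INDEX statement. These sketches use invented index and table names:

create unique index orders_pk_i on orders(order_id) reverse;
create index cust_surname_i on customers(surname) compress;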

Chapter 7: DDL and Schema Objects

281

Creating and Using Indexes
Indexes are created implicitly when primary key and unique constraints are defined, if
an index on the relevant column(s) does not already exist. The basic syntax for creating
an index explicitly is
CREATE [UNIQUE | BITMAP] INDEX [ schema.]indexname
ON [schema.]tablename (column [, column...] ) ;

The default type of index is a nonunique, noncompressed, non–reverse key B*Tree
index. It is not possible to create a unique bitmap index (and you wouldn’t want to if
you could—think about the cardinality issue). Indexes are schema objects, and it is
possible to create an index in one schema on a table in another, but most people
would find this somewhat confusing. A composite index is an index on several columns.
Composite indexes can be on columns of different data types, and the columns do
not have to be adjacent in the table.
TIP Many database administrators do not consider it good practice to rely on
implicit index creation. If the indexes are created explicitly, the creator has full
control over the characteristics of the index, which can make it easier for the
DBA to manage subsequently.
Consider this example of creating tables and indexes, and then defining constraints:
create table dept(deptno number,dname varchar2(10));
create table emp(empno number, surname varchar2(10),
forename varchar2(10), dob date, deptno number);
create unique index dept_i1 on dept(deptno);
create unique index emp_i1 on emp(empno);
create index emp_i2 on emp(surname,forename);
create bitmap index emp_i3 on emp(deptno);
alter table dept add constraint dept_pk primary key (deptno);
alter table emp add constraint emp_pk primary key (empno);
alter table emp add constraint emp_fk
foreign key (deptno) references dept(deptno);

PART II

A composite index is built on the concatenation of two or more columns. There are
no restrictions on mixing datatypes. If a search string does not include all the columns,
the index can still be used—but if it does not include the leftmost column, Oracle will
have to use a skip-scanning method that is much less efficient than if the leftmost
column is included.
A function-based index is built on the result of a function applied to one or more
columns, such as upper(last_name) or to_char(startdate, 'ccyy-mm-dd').
A query will have to apply the same function to the search string, or Oracle may not
be able to use the index.
By default, an index is ascending, meaning that the keys are sorted in order of lowest
value to highest. A descending index reverses this. In fact, the difference is often not
important: the entries in an index are stored as a doubly linked list, so it is possible
to navigate up or down with equal celerity, but this will affect the order in which rows
are returned if they are retrieved with an index full scan.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

The first two indexes created are flagged as UNIQUE, meaning that it will not be
possible to insert duplicate values. This is not defined as a constraint at this point but
is true nonetheless. The third index is not defined as UNIQUE and will therefore
accept duplicate values; this is a composite index on two columns. The fourth index
is defined as a bitmap index, because the cardinality of the column is likely to be low
in proportion to the number of rows in the table.
When the two primary key constraints are defined, Oracle will detect the preexisting
indexes and use them to enforce the constraints. Note that the index on DEPT.DEPTNO
has no purpose for performance because the table will in all likelihood be so small
that the index will never be used to retrieve rows (a scan will be quicker), but it is still
essential to have an index to enforce the primary key constraint.
Once created, indexes are used completely transparently and automatically. Before
executing a SQL statement, the Oracle server will evaluate all the possible ways of
executing it. Some of these ways may involve using whatever indexes are available;
others may not. Oracle will make use of the information it gathers on the tables and
the environment to make an intelligent decision about which (if any) indexes to use.
TIP The Oracle server should make the best decision about index use, but
if it is getting it wrong, it is possible for a programmer to embed instructions,
known as optimizer hints, in code that will force the use (or not) of certain
indexes.
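For example, an INDEX hint naming the EMP_I2 index created earlier directs the optimizer toward that index (whether this actually improves the plan depends on the data):

```sql
select /*+ INDEX(emp emp_i2) */ surname, forename
from emp
where surname = 'SMITH';
```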

Modifying and Dropping Indexes
The ALTER INDEX command cannot be used to change any of the characteristics
described in this chapter: the type (B*Tree or bitmap) of the index; the columns; or
whether it is unique or nonunique. The ALTER INDEX command lies in the database
administration domain and would typically be used to adjust the physical properties
of the index, not the logical properties that are of interest to developers. If it is necessary
to change any of these properties, the index must be dropped and recreated. Continuing
the example in the preceding section, to change the index EMP_I2 to include the
employees’ birthdays:
drop index emp_i2;
create index emp_i2 on emp(surname,forename,dob);

This composite index now includes columns with different data types. The columns
happen to be listed in the same order that they are defined in the table, but this is by
no means necessary.
When a table is dropped, all the indexes and constraints defined for the table are
dropped as well. If an index was created implicitly by creating a constraint, then dropping
the constraint will also drop the index. If the index had been created explicitly and the
constraint created later, then if the constraint were dropped the index would survive.
Exercise 7-5: Create Indexes In this exercise, add some indexes to the
CUSTOMERS table.
1. Connect to your database with SQL*Plus as user WEBSTORE.

Chapter 7: DDL and Schema Objects

2. Create a compound B*Tree index on the customer names and status:
create index cust_name_i on customers (customer_name, customer_status);

3. Create a bitmap index on a low-cardinality column:
create bitmap index creditrating_i on customers(creditrating);

4. Determine the name and some other characteristics of the indexes just created
by running this query:
select index_name,column_name,index_type,uniqueness
from user_indexes natural join user_ind_columns
where table_name='CUSTOMERS';

Constraints
Table constraints are a means by which the database can enforce business rules and
guarantee that the data conforms to the entity-relationship model determined by the
systems analysis that defines the application data structures. For example, the business
analysts of your organization may have decided that every customer and every order
must be uniquely identifiable by number, that no orders can be issued to a customer
before that customer has been created, and that every order must have a valid date
and a value greater than zero. These would be implemented by creating primary key
constraints on the CUSTOMER_ID column of the CUSTOMERS table and the ORDER_ID
column of the ORDERS table, a foreign key constraint on the ORDERS table referencing
the CUSTOMERS table, a not-null constraint on the DATE column of the ORDERS
table (the DATE data type will itself ensure that any dates are valid; it will not accept
invalid dates), and a check constraint on the ORDER_AMOUNT column of the
ORDERS table.
If any DML executed against a table with constraints defined violates a constraint,
then the whole statement will be rolled back automatically. Remember that a DML
statement that affects many rows might partially succeed before it hits a constraint
problem with a particular row. If the statement is part of a multistatement transaction,
then the statements that have already succeeded will remain intact but uncommitted.
EXAM TIP A constraint violation will force an automatic rollback of the
entire statement that hit the problem, not just the single action within the
statement, and not the entire transaction.

The Types of Constraint
The constraint types supported by the Oracle database are
• UNIQUE
• NOT NULL
• PRIMARY KEY
• FOREIGN KEY
• CHECK
Constraints have names. It is good practice to specify the names with a standard
naming convention, but if they are not explicitly named, Oracle will generate names.

Unique Constraints
A unique constraint nominates a column (or combination of columns) for which the
value must be different for every row in the table. If the constraint is based on a single
column, this is known as the key column. If the constraint is composed of more than
one column (known as a composite key unique constraint), the columns do not have to
be the same data type or be adjacent in the table definition.
An oddity of unique constraints is that it is possible to enter a NULL value into
the key column(s); it is indeed possible to have any number of rows with NULL
values in their key column(s). So selecting rows on a key column will guarantee that
only one row is returned—unless you search for NULL, in which case all the rows
where the key columns are NULL will be returned.
EXAM TIP It is possible to insert many rows with NULLs in a column with
a unique constraint. This is not possible for a column with a primary key
constraint.
Unique constraints are enforced by an index. When a unique constraint is defined,
Oracle will look for an index on the key column(s), and if one does not exist, it will
be created. Then whenever a row is inserted, Oracle will search the index to see if the
values of the key columns are already present; if they are, it will reject the insert. The
structure of these indexes (known as B*Tree indexes) does not include NULL values,
which is why many rows with NULL are permitted: they simply do not exist in the
index. While the first purpose of the index is to enforce the constraint, it has a
secondary effect: improving performance if the key columns are used in the WHERE
clauses of SQL statements. However, selecting WHERE key_column IS NULL cannot
use the index (because it doesn’t include the NULLs) and will therefore always result
in a scan of the entire table.
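This behavior can be demonstrated with a throwaway table (the names are invented for illustration):

```sql
create table contacts_demo (email varchar2(30) constraint contacts_email_uk unique);

insert into contacts_demo values ('jw@a.com');  -- succeeds
insert into contacts_demo values (null);        -- succeeds
insert into contacts_demo values (null);        -- also succeeds: NULLs are not in the index
insert into contacts_demo values ('jw@a.com');  -- fails with ORA-00001
```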

Not-Null Constraints
The not-null constraint forces values to be entered into the key column. Not-null
constraints are defined per column and are sometimes called mandatory columns;
if the business requirement is that a group of columns should all have values, you
cannot define one not-null constraint for the whole group but must define a not-null
constraint for each column.
Any attempt to insert a row without specifying values for the not-null-constrained
columns results in an error. It is possible to bypass the need to specify a value by
including a DEFAULT clause on the column when creating the table, as discussed in
the earlier section “Creating Tables with Column Specifications.”
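A minimal sketch, using invented names:

```sql
create table orders_demo (
  order_id   number not null,
  order_date date default sysdate not null);

-- ORDER_DATE is omitted, but the DEFAULT satisfies its not-null constraint.
insert into orders_demo (order_id) values (1);

-- Omitting ORDER_ID, which has no default, raises ORA-01400.
insert into orders_demo (order_date) values (sysdate);
```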

Primary Key Constraints
The primary key is the means of locating a single row in a table. The relational database
paradigm includes a requirement that every table should have a primary key: a column
(or combination of columns) that can be used to distinguish every row. The Oracle
database deviates from the paradigm (as do some other RDBMS implementations)
by permitting tables without primary keys.
The implementation of a primary key constraint is in effect the union of a unique
constraint and a not-null constraint. The key columns must have unique values, and
they may not be null. As with unique constraints, an index must exist on the constrained
column(s). If one does not exist already, an index will be created when the constraint
is defined. A table can have only one primary key. Try to create a second, and you will
get an error. A table can, however, have any number of unique constraints and not-null
columns, so if there are several columns that the business analysts have decided
must be unique and populated, one of these can be designated the primary key, and
the others made unique and not null. An example could be a table of employees,
where e-mail address, social security number, and employee number should all be
required and unique.

EXAM TIP Unique and primary key constraints need an index. If one does not
exist, one will be created automatically.

Foreign Key Constraints
A foreign key constraint is defined on the child table in a parent-child relationship. The
constraint nominates a column (or columns) in the child table that corresponds to
the primary key column(s) in the parent table. The columns do not have to have the
same names, but they must be of the same data type. Foreign key constraints define
the relational structure of the database: the many-to-one relationships that connect
the tables in their third normal form.
If the parent table has unique constraints as well as (or instead of) a primary key
constraint, these columns can be used as the basis of foreign key constraints, even if
they are nullable.
EXAM TIP A foreign key constraint is defined on the child table, but a unique
or primary key constraint must already exist on the parent table.
Just as a unique constraint permits null values in the constrained column, so does
a foreign key constraint. You can insert rows into the child table with null foreign key
columns—even if there is not a row in the parent table with a null value. This creates
orphan rows and can cause dreadful confusion. As a general rule, all the columns in a
unique constraint and all the columns in a foreign key constraint are best defined
with not-null constraints as well; this will often be a business requirement.
Attempting to insert a row in the child table for which there is no matching row
in the parent table will give an error. Similarly, deleting a row in the parent table will
give an error if there are already rows referring to it in the child table. There are two
techniques for changing this behavior. First, the constraint may be created as ON
DELETE CASCADE. This means that if a row in the parent table is deleted, Oracle will
search the child table for all the matching rows and delete them too. This will happen
automatically. A less drastic technique is to create the constraint as ON DELETE SET
NULL. In this case, if a row in the parent table is deleted, Oracle will search the child
table for all the matching rows and set the foreign key columns to null. This means
that the child rows will be orphaned but will still exist. If the columns in the child
table also have a not-null constraint, then the deletion from the parent table will fail.
It is not possible to drop or truncate the parent table in a foreign key relationship,
even if there are no rows in the child table. This still applies if the ON DELETE SET
NULL or ON DELETE CASCADE clause was used.
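A hedged sketch of the ON DELETE SET NULL behavior, using invented table names:

```sql
create table regions_demo (region_id number primary key);
create table shops_demo (
  shop_id   number primary key,
  region_id number constraint shops_region_fk
            references regions_demo(region_id) on delete set null);

insert into regions_demo values (1);
insert into shops_demo values (10, 1);

-- Succeeds: the child row survives with REGION_ID set to NULL.
-- Had the constraint been ON DELETE CASCADE, the child row would be deleted instead.
delete from regions_demo where region_id = 1;
```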
A variation on the foreign key constraint is the self-referencing foreign key constraint.
This defines a condition where the parent and child rows exist in the same table. An
example would be a table of employees, which includes a column for the employee’s
manager. The manager is himself an employee and must exist in the table. So if the
primary key is the EMPLOYEE_ID column, and the manager is identified by a column
MANAGER_ID, then the foreign key constraint will state that the value of the MANAGER_
ID column must refer back to a valid EMPLOYEE_ID. If an employee is his own manager,
then the row would refer to itself.

Check Constraints
A check constraint can be used to enforce simple rules, such as that the value entered
in a column must be within a range of values. The rule must be an expression that
will evaluate to TRUE or FALSE. The rules can refer to absolute values entered as
literals, or to other columns in the same row, and they may make use of some functions.
As many check constraints as you want can be applied to one column, but it is not
possible to use a subquery to evaluate whether a value is permissible, or to use
functions such as SYSDATE.
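A minimal sketch of both styles, using invented names: a literal range check declared in line, and a cross-column rule declared as a table-level constraint:

```sql
create table pay_demo (
  salary     number constraint pay_sal_ck check (salary between 1000 and 99999),
  commission number,
  -- A check referring to another column must be declared out of line.
  constraint pay_comm_ck check (commission < salary));
```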
TIP The not-null constraint is in fact implemented as a preconfigured check
constraint.

Defining Constraints
Constraints can be defined when creating a table, or added to the table later. When
defining constraints at table creation time, the constraint can be defined in line with
the column to which it refers, or at the end of the table definition. There is more
flexibility to using the latter technique. For example, it is impossible to define a
foreign key constraint that refers to two columns or a check constraint that refers
to any column other than that being constrained if the constraint is defined in line,
but both these are possible if the constraint is defined at the end of the table.
For the constraints that require an index (the unique and primary key constraints),
the index will be created with the table if the constraint is defined at table creation time.
Consider these two table creation statements (to which line numbers have been
added):

1   create table dept(
2   deptno number(2,0) constraint dept_deptno_pk primary key
3   constraint dept_deptno_ck check (deptno between 10 and 90),
4   dname varchar2(20) constraint dept_dname_nn not null);
5   create table emp(
6   empno number(4,0) constraint emp_empno_pk primary key,
7   ename varchar2(20) constraint emp_ename_nn not null,
8   mgr number (4,0) constraint emp_mgr_fk references emp (empno),
9   dob date,
10  hiredate date,
11  deptno number(2,0) constraint emp_deptno_fk references dept(deptno)
12  on delete set null,
13  email varchar2(30) constraint emp_email_uk unique,
14  constraint emp_hiredate_ck check (hiredate >= dob + 365*16),
15  constraint emp_email_ck
16  check ((instr(email,'@') > 0) and (instr(email,'.') > 0)));

Taking these statements line by line:
1. The first table created is DEPT, intended to have one row for each department.
2. DEPTNO is numeric, two digits, no decimals. This is the table’s primary key.
The constraint is named DEPT_DEPTNO_PK.
3. A second constraint applied to DEPTNO is a check limiting it to numbers in
the range 10 to 90. The constraint is named DEPT_DEPTNO_CK.
4. The DNAME column is variable-length characters, with a constraint
DEPT_DNAME_NN making it not nullable.
5. The second table created is EMP, intended to have one row for every employee.
6. EMPNO is numeric, up to four digits with no decimals. Constraint EMP_
EMPNO_PK marks this as the table’s primary key.
7. ENAME is variable-length characters, with a constraint EMP_ENAME_NN
making it not nullable.
8. MGR is the employee’s manager, who must himself be an employee. The
column is defined in the same way as the table’s primary key column
of EMPNO. The constraint EMP_MGR_FK defines this column as a
self-referencing foreign key, so any value entered must refer to an already-extant
row in EMP (though it is not constrained to be not null, so it can be left blank).
9. DOB, the employee’s birthday, is a date and not constrained.
10. HIREDATE is the date the employee was hired and is not constrained. At least,
not yet.
11. DEPTNO is the department with which the employee is associated. The
column is defined in the same way as the DEPT table’s primary key column
of DEPTNO, and the constraint EMP_DEPTNO_FK enforces a foreign key
relationship; it is not possible to assign an employee to a department that
does not exist. This is nullable, however.
12. The EMP_DEPTNO_FK constraint is further defined as ON DELETE SET NULL,
so if the parent row in DEPT is deleted, all matching child rows in EMP
will have DEPTNO set to NULL.
13. EMAIL is variable-length character data and must be unique if entered
(though it can be left empty).

14. This defines an additional table level constraint EMP_HIREDATE_CK. The
constraint checks for use of child labor, by rejecting any rows where the date
of hiring is not at least 16 years later than the birthday. This constraint could
not be defined in line with HIREDATE, because the syntax does not allow
references to other columns at that point.
15. An additional constraint EMP_EMAIL_CK is added to the EMAIL column,
which makes two checks on the e-mail address. The INSTR functions search
for “@” and “.” characters (which will always be present in a valid e-mail
address); if both are not found, the check condition will return
FALSE and the row will be rejected.
The preceding examples show several possibilities for defining constraints at table
creation time. Further possibilities not covered include:
• Controlling the index creation for the unique and primary key constraints
• Defining whether the constraint should be checked at insert time (which it is
by default) or later on, when the transaction is committed
• Stating whether the constraint is in fact being enforced at all (which is the
default) or is disabled
It is possible to create tables with no constraints and then to add them later with
an ALTER TABLE command. The end result will be the same, but this technique does
make the code less self-documenting, as the complete table definition will then be
spread over several statements rather than being in one.
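For example (names invented for illustration):

```sql
-- Create the table bare, then bolt the constraints on afterward.
create table dept2 (deptno number(2,0), dname varchar2(20));

alter table dept2 add constraint dept2_pk primary key (deptno);
alter table dept2 modify dname not null;
```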

Constraint State
At any time, every constraint is either enabled or disabled, and validated or not
validated. Any combination of these is syntactically possible:
• ENABLE VALIDATE It is not possible to enter rows that would violate the
constraint, and all rows in the table conform to the constraint.
• DISABLE NOVALIDATE Any data (conforming or not) can be entered, and
there may already be nonconforming data in the table.
• ENABLE NOVALIDATE There may already be nonconforming data in the
table, but all data entered now must conform.
• DISABLE VALIDATE An impossible situation: all data in the table conforms
to the constraint, but new rows need not. The end result is that the table is
locked against DML commands.
The ideal situation (and the default when a constraint is defined) is ENABLE
VALIDATE. This will guarantee that all the data is valid, and no invalid data can
be entered. The other extreme, DISABLE NOVALIDATE, can be very useful when
uploading large amounts of data into a table. It may well be that the data being
uploaded does not conform to the business rules, but rather than have a large upload
fail because of a few bad rows, putting the constraint in this state will allow the
upload to succeed. Immediately following the upload, transition the constraint into
the ENABLE NOVALIDATE state. This will prevent the situation from deteriorating
further while the data is checked for conformance before transitioning the constraint
to the ideal state.
As an example, consider this script, which reads data from a source table of live
data into a table of archive data. The assumption is that there is a NOT NULL
constraint on a column of the target table that may not have been enforced on
the source table:

alter table sales_archive modify constraint sa_nn1 disable novalidate;
insert into sales_archive select * from sales_current;
alter table sales_archive modify constraint sa_nn1 enable novalidate;
update sales_archive set channel='NOT KNOWN' where channel is null;
alter table sales_archive modify constraint sa_nn1 enable validate;

Constraint Checking
Constraints can be checked as a statement is executed (an IMMEDIATE constraint) or
when a transaction is committed (a DEFERRED constraint). By default, all constraints
are IMMEDIATE and not deferrable. An alternative approach to the previous example
would have been possible had the constraint been created as deferrable:

set constraint sa_nn1 deferred;
insert into sales_archive select * from sales_current;
update sales_archive set channel='NOT KNOWN' where channel is null;
commit;
set constraint sa_nn1 immediate;

For the constraint to be deferrable, it must have been created with appropriate
syntax:

alter table sales_archive add constraint sa_nn1
check (channel is not null) deferrable initially immediate;

It is not possible to make a constraint deferrable later, if it was not created that
way. The constraint SA_NN1 will by default be enforced when a row is inserted (or
updated), but the check can be postponed until the transaction commits. A common
use for deferrable constraints is with foreign keys. If a process inserts or updates rows
in both the parent and the child tables and the foreign key constraint is not deferred,
the process may fail unless the rows are processed in the correct order.
Changing the status of a constraint between ENABLED/DISABLED and VALIDATE/
NOVALIDATE is an operation that will affect all sessions. The status change is a data
dictionary update. Switching a deferrable constraint between IMMEDIATE and
DEFERRED is session specific, though the initial state will apply to all sessions.

EXAM TIP By default, constraints are enabled and validated, and they are not
deferrable.
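A sketch of the foreign key case, with invented table and constraint names:

```sql
create table parent_demo (id number primary key);
create table child_demo (
  id        number primary key,
  parent_id number constraint child_parent_fk
            references parent_demo(id)
            deferrable initially immediate);

set constraint child_parent_fk deferred;
insert into child_demo values (1, 99);   -- child first: tolerated until COMMIT
insert into parent_demo values (99);     -- parent arrives later
commit;                                  -- the deferred check passes here
```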
Exercise 7-6: Manage Constraints In this exercise, use SQL Developer and
SQL*Plus to define and adjust some constraints on the table created in Exercise 7-3.
1. In SQL Developer, navigate to the listing of WEBSTORE’s tables and click the
CUSTOMERS table.
2. Take the Constraints tab to view the four NOT NULL constraints that were
created with the table. Note that their names are not helpful—this will be
fixed in Step 8.
3. Click the Actions button and choose Constraints: Add Primary Key.
4. In the Add Primary Key window name the constraint: PK_CUSTOMER_ID,
choose the CUSTOMER_ID column, and click Apply.
5. Choose the Show SQL tab to see the constraint creation statement, and then
click the Apply button to run the statement.
6. Connect to your database as user WEBSTORE with SQL*Plus.
7. Run this query to find the names of the constraints:
select constraint_name,constraint_type,column_name
from user_constraints natural join user_cons_columns
where table_name='CUSTOMERS';

8. Rename the constraints to something more meaningful, using the original
constraint names retrieved in Step 7, with ALTER TABLE commands:
ALTER TABLE CUSTOMERS RENAME CONSTRAINT old_name TO new_name ;

9. Add the following constraints to the WEBSTORE schema:
alter table ORDERS add constraint PK_ORDER_ID primary key(ORDER_ID);
alter table PRODUCTS add constraint PK_PRODUCT_ID primary key(PRODUCT_ID);
alter table ORDER_ITEMS add constraint FK_PRODUCT_ID foreign key(PRODUCT_ID)
references PRODUCTS(PRODUCT_ID);
alter table ORDER_ITEMS add constraint FK_ORDER_ID foreign key(ORDER_ID)
references ORDERS(ORDER_ID);
alter table ORDERS add constraint FK_CUSTOMER_ID foreign key(CUSTOMER_ID)
references CUSTOMERS(CUSTOMER_ID);

Views
To the user, a view looks like a table: a two-dimensional structure of rows and columns,
against which the user can run SELECT and DML statements. The programmer knows
the truth: a view is just a named SELECT statement. Any SELECT statement returns a
two-dimensional set of rows. If the SELECT statement is saved as a view, then whenever
the users query or update rows in the view (under the impression that it is a table), the
statement runs, and the result is presented to users as though it were a table. The SELECT
statement on which a view is based can be anything. It can join tables, perform
aggregations, or do sorts; absolutely any legal SELECT command can be used as the
basis for a view.
EXAM TIP Views share the same namespace as tables: anywhere that a table
name can be used, a view name is also syntactically correct.

Why Use Views at All?
Possible reasons include: security, simplifying user SQL statements, preventing errors,
improving performance, and making data comprehensible. Table and column names are
often long and pretty meaningless. The view and its columns can be much more obvious.

Views to Enforce Security
It may be that users should only see certain rows or columns of a table. There
are several ways of enforcing this, but a view is often the simplest. Consider the
HR.EMPLOYEES table. This includes personal details that should not be visible
to staff outside the personnel department. But finance staff will need to be able to
see the costing information. This view will depersonalize the data:

create view hr.emp_fin as select
hire_date,job_id,salary,commission_pct,department_id from hr.employees;

Note the use of schema qualifiers for the table as the source of the data (often
referred to as either the base or the detail table) and the view: views are schema objects
and can draw their data from tables in the same schema or in other schemas. If the
schema is not specified, it will of course be in the current schema.
Finance staff can then be given permission to see the view but not the table and
can issue statements such as this:

select * from emp_fin where department_id=50;

They will see only the five columns that make up the view, not the remaining
columns of EMPLOYEES with the personal information. The view can be joined
to other tables or aggregated as though it were a table:

select department_name, sum(salary) from departments natural join emp_fin
group by department_name;

A well-constructed set of views can implement a whole security structure within
the database, giving users access to data they need to see while concealing data they
do not need to see.

Views to Simplify User SQL
It will be much easier for users to query data if the hard work (such as joins or
aggregations) is done for them by the code that defines the view. In the last example,
the user had to write code that joined the EMP_FIN view to the DEPARTMENTS table
and summed the salaries per department. This could all be done in a view:

create view dept_sal as
select d.department_name, sum(e.salary) from
departments d left outer join employees e on d.department_id=e.department_id
group by department_name order by department_name;

Then the users can select from DEPT_SAL without needing to know anything
about joins, or even how to sort the results:

select * from dept_sal;
In particular, they do not need to know how to make sure that all departments
are listed, even those with no employees. The example in the preceding section would
have missed these.

Views to Prevent Errors
It is impossible to prevent users from making errors, but well-constructed views can
prevent some errors arising from a lack of understanding of how data should be
interpreted. The preceding section already introduced this concept by constructing
a view that will list all departments, whether or not they currently have staff assigned
to them.
A view can help to present data in a way that is unambiguous. For example, many
applications never actually delete rows. Consider this table:
create table emp(empno number constraint emp_empno_pk primary key,
ename varchar2(10),deptno number,active varchar2(1) default 'Y');

The column ACTIVE is a flag indicating that the employee is currently employed
and will default to ‘Y’ when a row is inserted. When a user, through the user interface,
“deletes” an employee, the underlying SQL statement will be an update that sets
ACTIVE to ‘N’. If users who are not aware of this query the table, they may severely
misinterpret the results. It will often be better to give them access to a view:
create view current_staff as select * from emp where active='Y';

Queries addressed to this view cannot possibly see “deleted” staff members.

Views to Make Data Comprehensible
The data structures in a database will be normalized tables. It is not reasonable to
expect users to understand normalized structures. To take an example from the Oracle
E-Business Suite, a “customer” in the Accounts Receivable module is in fact an entity
consisting of information distributed across the tables HZ_PARTIES, HZ_PARTY_SITES,
HZ_CUST_ACCTS_ALL, and many more. All these tables are linked by primary key–
to–foreign key relationships, but these are not defined on any identifiers visible to
users (such as a customer number): they are based on columns the users never see
that have values generated internally from sequences. The forms and reports used
to retrieve customer information never address these tables directly; they all work
through views.
As well as presenting data to users in a comprehensible form, the use of views to
provide a layer of abstraction between the objects seen by users and the objects stored
within the database can be invaluable for maintenance work. It becomes possible to
redesign the data structures without having to recode the application. If tables are
changed, then adjusting the view definitions may make any changes to the SQL and
PL/SQL code unnecessary. This can be a powerful technique for making applications
portable across different databases.
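A minimal sketch of the idea (names invented): if a base table column is renamed, redefining the view preserves the interface the application expects:

```sql
-- Application code selects SURNAME from EMP_V.
-- Suppose the base column EMP.SURNAME is later renamed to LAST_NAME:
create or replace view emp_v as
select empno, last_name as surname, deptno from emp;
-- Queries against EMP_V continue to work unchanged.
```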

Views for Performance
The SELECT statement behind a view can be optimized by programmers, so that users
don’t need to worry about tuning code. There may be many possibilities for getting
the same result, but some techniques can be much slower than others. For example,
when joining two tables there is usually a choice between the nested loop join and the
hash join. A nested loop join uses an index to get to individual rows; a hash join reads
the whole table into memory. The choice between the two will be dependent on the
state of the data and the hardware resources available.
Theoretically, one can always rely on the Oracle optimizer to work out the best way
to run a SQL statement, but there are cases where it gets it wrong. If the programmers
know which technique is best, they can instruct the optimizer. This example forces use
of the hash technique:

create view dept_emp as
select /*+USE_HASH (employees departments)*/ department_name, last_name
from departments natural join employees;

Whenever users query the DEPT_EMP view, the join will be performed by scanning
the detail tables into memory. The users need not know the syntax for forcing use of
this join method. You do not need to know it, either: this is beyond the scope of the
OCP examination, but the concept of tuning with view design should be known.

Simple and Complex Views
For practical purposes, classification of a view as simple or complex is related to whether
DML statements can be executed against it: simple views can (usually) accept DML
statements; complex views cannot. The strict definitions are as follows:
• A simple view draws data from one detail table, uses no functions, and does
no aggregation.
• A complex view can join detail tables, use functions, and perform aggregations.
Applying these definitions shows that of the four views used as examples in the
preceding sections, the first and third are simple and the second and fourth are complex.
It is not possible to execute INSERT, UPDATE, or DELETE commands against a
complex view. The mapping of the rows in the view back to the rows in the detail
table(s) cannot always be established on a one-to-one basis, which is necessary for
DML operations. It is usually possible to execute DML against a simple view, but not
always. For example, if the view does not include a column that has a NOT NULL
constraint, then an INSERT through the view cannot succeed (unless the column has
a default value). This can produce a disconcerting effect, because the error message
will refer to a table and a column that are not mentioned in the statement, as
demonstrated in the first example in Figure 7-5.
The first view in the figure, RNAME_V, does conform to the definition of a simple
view, but an INSERT cannot be performed through it because it is missing a mandatory
column. The second view, RUPPERNAME_V, is a complex view because it includes a
function. This makes an INSERT impossible, because there is no way the database can
work out what should actually be inserted: it can’t reverse-engineer the effect of the
UPPER function in a deterministic fashion. But the DELETE succeeds, because that is
not dependent on the function.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

294

Figure 7-5  DML against simple and complex views
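The first failure described can be reproduced with a short script (a sketch assuming the SCOTT.DEPT table, in which DEPTNO is mandatory; this is not necessarily the exact script behind the figure):

create view rname_v as select dname, loc from dept;
insert into rname_v values ('SUPPORT','OXFORD');

The INSERT fails with ORA-01400: cannot insert NULL into ("SCOTT"."DEPT"."DEPTNO"), an error that names a table and a column the statement never mentions.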

CREATE VIEW, ALTER VIEW, and DROP VIEW
The syntax to create a view is as follows:
CREATE [OR REPLACE] [FORCE | NOFORCE] VIEW
[schema.]viewname [(alias [,alias]…)]
AS subquery
[WITH CHECK OPTION [CONSTRAINT constraintname]]
[WITH READ ONLY [CONSTRAINT constraintname]] ;

Note that views are schema objects. There is no reason not to have a view owned
by one user referencing detail tables owned by another user. By default, the view will
be created in the current schema. The optional keywords, none of which have been
used in the examples so far, are as follows:
• OR REPLACE If the view already exists, it will be dropped before being
created.

• FORCE or NOFORCE The FORCE keyword will create the view even if the
detail table(s) in the subquery does not exist. NOFORCE is the default and
will cause an error if the detail table does not exist.
• WITH CHECK OPTION This has to do with DML. If the subquery includes
a WHERE clause, then this option will prevent insertion of rows that wouldn’t
be seen in the view or updates that would cause a row to disappear from the
view. By default, this option is not enabled, which can give disconcerting results.
• WITH READ ONLY Prevents any DML through the view.

• CONSTRAINT constraintname This can be used to name the WITH CHECK
OPTION and WITH READ ONLY restrictions so that error messages when the
restrictions cause statements to fail will be more comprehensible.
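To see WITH CHECK OPTION and the named constraint together in one statement (the view and constraint names here are invented for illustration):

create view dept_50_v as
select employee_id, last_name, department_id
from employees where department_id=50
with check option constraint dept_50_v_chk;

An INSERT or UPDATE through DEPT_50_V that would give a row any DEPARTMENT_ID other than 50 is rejected with ORA-01402 (view WITH CHECK OPTION where-clause violation).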

In addition, a set of alias names can be provided for the names of the view's
columns. If not provided, the columns will be named after the table's columns or
with aliases specified in the subquery.
The main use of the ALTER VIEW command is to compile the view. A view must
be compiled successfully before it can be used. When a view is created, Oracle will
check that the detail tables and the necessary columns on which the view is based do
exist. If they do not, the compilation fails and the view will not be created—unless
you use the FORCE option. In that case, the view will be created but will be unusable
until the tables or columns to which it refers are created and the view is successfully
compiled. When an invalid view is queried, Oracle will attempt to compile it
automatically. If the compilation succeeds because the problem has been fixed, users
won't know there was ever a problem—except that their query may take a little longer
than usual. Generally speaking, you should manually compile views to make sure
they do compile successfully, rather than having users discover errors.
It is not possible to adjust a view's column definitions after creation in the way
that a table's columns can be changed. The view must be dropped and recreated. The
DROP command is as follows:

DROP VIEW [schema.]viewname ;

By using the OR REPLACE keywords with the CREATE VIEW command, the view
will be automatically dropped (if it exists at all) before being created.

Exercise 7-7: Create Views  In this exercise, you will create some simple and
complex views, using data in the HR schema. Either SQL*Plus or SQL Developer can
be used.
1. Connect to your database as user HR.
2. Create views on the EMPLOYEES and DEPARTMENTS tables that remove all
personal information:
create view emp_anon_v as
select hire_date,job_id,salary,commission_pct,department_id from employees;
create view dept_anon_v as
select department_id,department_name,location_id from departments;

3. Create a complex view that will join and aggregate the two simple views. Note
that there is no reason not to have views of views.
create view dep_sum_v as
select e.department_id,count(1) staff, sum(e.salary) salaries,
d.department_name from emp_anon_v e join dept_anon_v d
on e.department_id=d.department_id
group by e.department_id,d.department_name;

4. Confirm that the view works by querying it.

Synonyms
A synonym is an alternative name for an object. If synonyms exist for objects, then any
SQL statement can address the object either by its actual name or by its synonym. This
may seem trivial. It isn't. Use of synonyms means that an application can function for
any user, irrespective of which schema owns the views and tables or even in which
database the tables reside. Consider this statement:
select * from hr.employees@prod;

The user issuing the statement must know that the EMPLOYEES table is owned by
the HR schema in the database identified by the database link PROD (do not worry
about database links—they are a means of accessing objects in a database other than
the one you are logged on to). If a public synonym has been created with this
statement:
create public synonym emp for hr.employees@prod;

then all the user (any user!) need enter is the following:
select * from emp;

This gives both data independence and location transparency. Tables and views
can be renamed or relocated without ever having to change code; only the synonyms
need to be adjusted.
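For instance, if the table were later moved to another database, only the synonym definition would need to change (NEWPROD is a hypothetical database link name):

create or replace public synonym emp for hr.employees@newprod;

No application SQL that references EMP has to be touched.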
As well as SELECT statements, DML statements can address synonyms as though
they were the object to which they refer.
Private synonyms are schema objects. Either they must be in your own schema, or
they must be qualified with the schema name. Public synonyms exist independently
of a schema. A public synonym can be referred to by any user to whom permission
has been granted to see it without the need to qualify it with a schema name. Private
synonyms must have unique names within their schema. Public synonyms can have
the same name as schema objects. When executing statements that address objects
without a schema qualifier, Oracle will first look for the object in the local schema,
and only if it cannot be found will it look for a public synonym. Thus, in the
preceding example, if the user happened to own a table called EMP it would be
this that would be seen—not the table pointed to by the public synonym.
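This resolution order can be demonstrated with a sketch like the following, assuming the public synonym EMP from the earlier example exists:

select count(*) from emp;        -- resolves to the public synonym
create table emp (c1 number);    -- create a local object with the same name
select count(*) from emp;        -- now resolves to the local table EMP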
The syntax to create a synonym is as follows:
CREATE [PUBLIC] SYNONYM synonym FOR object ;

A user will need to have been granted permission to create private synonyms and
further permission to create public synonyms. Usually, only the database administrator
can create (or drop) public synonyms. This is because their presence (or absence) will
affect every user.
EXAM TIP The “public” in “public synonym” means that it is not a schema
object and cannot therefore be prefixed with a schema name. It does not
mean that everyone has permissions against it.
To drop a synonym:
DROP [PUBLIC] SYNONYM synonym ;

If the object to which a synonym refers (the table or view) is dropped, the synonym
continues to exist. Any attempt to use it will return an error. In this respect, synonyms
behave in the same way as views. If the object is recreated, the synonym must be
recompiled before use. As with views, this will happen automatically the next time
the synonym is addressed, or it can be done explicitly with
ALTER SYNONYM synonym COMPILE;

Exercise 7-8: Create and Use Synonyms  In this exercise, you will create and
use private synonyms, using objects in the HR schema. Either SQL*Plus or SQL
Developer can be used.
1. Connect to your database as user HR.
2. Create synonyms for the three views created in Exercise 7-7:
create synonym emp_s for emp_anon_v;
create synonym dept_s for dept_anon_v;
create synonym dsum_s for dep_sum_v;

3. Confirm that the synonyms are identical to the underlying objects:
describe emp_s;
describe emp_anon_v;

4. Confirm that the synonyms work (even to the extent of producing the same
errors) by running the statements in Exercise 7-7 against the synonyms
instead of the views:
select * from dsum_s;
insert into dept_s values(99,'Temp Dept',1800);
insert into emp_s values(sysdate,'AC_MGR',10000,0,99);
update emp_s set salary=salary*1.1;
rollback;
select max(salaries / staff) from dsum_s;

5. Drop two of the views:
drop view emp_anon_v;
drop view dept_anon_v;

6. Query the complex view that is based on the dropped views:
select * from dep_sum_v;

Note that the query fails.
7. Attempt to recompile the broken view:
alter view dep_sum_v compile;

This will fail as well.
8. Drop the DEP_SUM_V view:
drop view dep_sum_v;

9. Query the synonym for a dropped view:
select * from emp_s;

This will fail.
10. Recompile the broken synonym:
alter synonym emp_s compile;

Note that this does not give an error, but rerun the query from Step 9. It is
definitely still broken.
11. Tidy up by dropping the synonyms:
drop synonym emp_s;
drop synonym dept_s;
drop synonym dsum_s;

Sequences
A sequence is a structure for generating unique integer values. Only one session at a
time can read the next value and thus force it to increment. This is a point of
serialization, so each value generated will be unique.
Sequences are an invaluable tool for generating primary keys. Many applications
will need automatically generated primary key values. Examples in everyday business
data processing are customer numbers or order numbers: the business analysts will
have stated that every order must have a unique number, which should continually
increment. Other applications may not have such a requirement in business terms,
but it will be needed to enforce relational integrity. Consider a telephone billing
system: in business terms the unique identifier of a telephone is the telephone
number (which is a string) and that of a call will be the source telephone number
and the time the call began (which is a timestamp). These data types are unnecessarily
complex to use as primary keys for the high volumes that go through a telephone
switching system. While this information will be recorded, it will be much faster to
use simple numeric columns to define the primary and foreign keys. The values in
these columns can be sequence based.
The sequence mechanism is independent of tables, the row locking mechanism,
and commit or rollback processing. This means that a sequence can issue thousands
of unique values a minute—far faster than any method involving selecting a column
from a table, updating it, and committing the change.
Figure 7-6 shows two sessions selecting values from a sequence SEQ1.
Note that in the figure, each selection of SEQ1.NEXTVAL generates a unique
number. The numbers are issued consecutively in order of the time the selection
was made, and the number increments globally, not just within one session.

Creating Sequences
The full syntax for creating a sequence is as follows:
CREATE SEQUENCE [schema.]sequencename
[INCREMENT BY number]
[START WITH number]
[MAXVALUE number | NOMAXVALUE]
[MINVALUE number | NOMINVALUE]
[CYCLE | NOCYCLE]
[CACHE number | NOCACHE]
[ORDER | NOORDER] ;

Figure 7-6 Use of a sequence by two sessions concurrently

It can be seen that creating a sequence can be very simple. For example, the
sequence used in Figure 7-6 was created with
create sequence seq1;

The options are shown in the following table.

INCREMENT BY   How much higher (or lower) than the last number issued should
               the next number be? Defaults to +1 but can be any positive number
               (or negative number for a descending sequence).
START WITH     The starting point for the sequence: the number issued by the
               first selection. Defaults to 1 but can be anything.
MAXVALUE       The highest number an ascending sequence can go to before
               generating an error or returning to its START WITH value. The
               default is no maximum.
MINVALUE       The lowest number a descending sequence can go to before
               generating an error or returning to its START WITH value. The
               default is no minimum.
CYCLE          Controls the behavior on reaching MAXVALUE or MINVALUE. The
               default behavior is to give an error, but if CYCLE is specified
               the sequence will return to its starting point and repeat.
CACHE          For performance, Oracle can preissue sequence values in batches
               and cache them for issuing to users. The default is to generate
               and cache the next 20 values.
ORDER          Only relevant for a clustered database: ORDER forces all
               instances in the cluster to coordinate incrementing the sequence,
               so that numbers issued are always in order even when issued to
               sessions against different instances.
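A sequence combining several of these options might be created as follows (the name and values are illustrative only):

create sequence invoice_seq
start with 1000
increment by 10
maxvalue 99999
nocycle
cache 1000;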

Appropriate settings for INCREMENT BY, START WITH, and MAXVALUE or
MINVALUE will come from your business analysts.
It is very rare for CYCLE to be used, because it lets the sequence issue duplicate
values. If the sequence is being used to generate primary key values, CYCLE only
makes sense if there is a routine in the database that will delete old rows faster than
the sequence will reissue numbers.
Caching sequence values is vital for performance. Selecting from a sequence is a
point of serialization in the application code: only one session can do this at once.
The mechanism is very efficient: it is much faster than locking a row, updating the
row, and then unlocking it with a COMMIT. But even so, selecting from a sequence
can be a cause of contention between sessions. The CACHE keyword instructs Oracle
to pregenerate sequence numbers in batches. This means that they can be issued faster
than if they had to be generated on demand.
TIP The default number of values to cache is only 20. Experience shows that
this is usually not enough. If your application selects from the sequence ten times
a second, then set the cache value to 50,000. Don't be shy about this.
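Following that advice for a hypothetical busy sequence:

create sequence order_num_seq cache 50000;

The same clause can be applied to an existing sequence with ALTER SEQUENCE.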

Using Sequences
To use a sequence, a session can select either the next value with the NEXTVAL
pseudocolumn, which forces the sequence to increment, or the last (or “current”)
value issued to that session with the CURRVAL pseudocolumn. The NEXTVAL will be
globally unique: each session that selects it will get a different, incremented value for
each SELECT. The CURRVAL will be constant for one session until it selects NEXTVAL
again. There is no way to find out what the last value issued by a sequence was: you
can always obtain the next value by incrementing it with NEXTVAL, and you can
always recall the last value issued to your session with CURRVAL, but you cannot
find the last value issued.
EXAM TIP The CURRVAL of a sequence is the last value issued to the
current session, not necessarily the last value issued. You cannot select
the CURRVAL until after selecting the NEXTVAL.
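The tip can be verified from a brand-new session, using the SEQ1 sequence shown in Figure 7-6:

select seq1.currval from dual;   -- fails with ORA-08002: CURRVAL not yet defined in this session
select seq1.nextval from dual;   -- increments SEQ1 and defines CURRVAL for this session
select seq1.currval from dual;   -- now succeeds, returning the value just issued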
A typical use of sequences is for primary key values. This example uses a sequence
ORDER_SEQ to generate unique order numbers and LINE_SEQ to generate unique
line numbers for the line items of the order. First create the sequences, which is a
once-off operation:
create sequence order_seq start with 10;
create sequence line_seq start with 10;

Then insert the orders with their lines as a single transaction:
insert into orders (order_id,order_date,customer_id)
values (order_seq.nextval,sysdate,'1000');
insert into order_items (order_id,order_item_id,product_id)
values (order_seq.currval,line_seq.nextval,'A111');
insert into order_items (order_id,order_item_id,product_id)
values (order_seq.currval,line_seq.nextval,'B111');
commit;

The first INSERT statement raises an order with a unique order number drawn
from the sequence ORDER_SEQ for customer number 1000. The second and third
statements insert the two lines of the order, using the previously issued order number
from ORDER_SEQ as the foreign key to connect the line items to the order, and the
next values from LINE_SEQ to generate a unique identifier for each line. Finally,
the transaction is committed.
A sequence is not tied to any one table. In the preceding example, there would be
no technical reason not to use one sequence to generate values for the primary keys of
the order and of the lines.
A COMMIT is not necessary to make the increment of a sequence permanent:
it is permanent and made visible to the rest of the world the moment it happens. It
can't be rolled back, either. Sequence updates occur independently of the transaction
management system. For this reason, there will always be gaps in the series. The gaps
will be larger if the database has been restarted and the CACHE clause was used. All
numbers that have been generated and cached but not yet issued will be lost when the
database is shut down. At the next restart, the current value of the sequence will be
the last number generated, not the last issued. So, with the default CACHE of 20,
every shutdown/startup will lose up to 20 numbers.
If the business analysts have stated that there must be no gaps in a sequence,
then another means of generating unique numbers must be used. For the preceding
example of raising orders, the current order number could be stored in this table and
initialized to 10:

create table current_on(order_number number);
insert into current_on values(10);
commit;

Then the code to create an order would have to become:

update current_on set order_number=order_number + 1;
insert into orders (order_number,order_date,customer_number)
values ((select order_number from current_on),sysdate,'1000');
commit;

This will certainly work as a means of generating unique order numbers, and
because the increment of the order number is within the transaction that inserts the
order, it can be rolled back with the insert if necessary: there will be no gaps in order
numbers, unless an order is deliberately deleted. But it is far less efficient than using
a sequence, and code like this is famous for causing dreadful contention problems. If
many sessions try to lock and increment the one row containing the current number,
the whole application will hang as they queue up to take their turn.
After creating and using a sequence, it can be modified. The syntax is as follows:

ALTER SEQUENCE sequencename
[INCREMENT BY number]
[MAXVALUE number | NOMAXVALUE]
[MINVALUE number | NOMINVALUE]
[CYCLE | NOCYCLE]
[CACHE number | NOCACHE]
[ORDER | NOORDER] ;

This ALTER command is the same as the CREATE command, with one exception:
there is no way to set the starting value. If you want to restart the sequence, the only
way is to drop it and recreate it. To adjust the cache value from default to improve
performance of the preceding order entry example:
alter sequence order_seq cache 1000;

However, if you want to reset the sequence to its starting value, the only way is to drop it:
drop sequence order_seq;

and create it again.
Exercise 7-9: Create and Use Sequences In this exercise, you will create
some sequences and use them. You will need two concurrent sessions, either SQL
Developer or SQL*Plus.
1. Log on to your database twice, as WEBSTORE in separate sessions. Consider
one to be your A session and the other to be your B session.
2. In your A session, create a sequence as follows:
create sequence seq1 start with 10 nocache maxvalue 15 cycle;

The use of NOCACHE is deleterious to performance. If MAXVALUE is specified,
then CYCLE will be necessary to prevent errors when MAXVALUE is reached.
3. Execute the following commands in the appropriate session in the correct
order to observe the use of NEXTVAL and CURRVAL and the cycling of the
sequence:
        In Your A Session                   In Your B Session
1st     select seq1.nextval from dual;
2nd     select seq1.nextval from dual;
3rd     select seq1.nextval from dual;
4th     select seq1.nextval from dual;
5th     select seq1.currval from dual;
6th     select seq1.nextval from dual;
7th     select seq1.nextval from dual;
8th     select seq1.currval from dual;
9th     select seq1.nextval from dual;
10th                                        select seq1.nextval from dual;

4. Create a table with a primary key:
create table seqtest(c1 number,c2 varchar2(10));
alter table seqtest add constraint seqtest_pk primary key (c1);

5. Create a sequence to generate primary key values:
create sequence seqtest_pk_s;

6. In your A session, insert a row into the new table and commit:
insert into seqtest values(seqtest_pk_s.nextval,'first');
commit;

7. In your B session, insert a row into the new table and do not commit it:
insert into seqtest values(seqtest_pk_s.nextval,'second');

8. In your A session, insert a third row and commit:
insert into seqtest values(seqtest_pk_s.nextval,'third');
commit;

9. In your B session, roll back the second insertion:
rollback;

10. In your B session, see the contents of the table:
select * from seqtest;

This demonstrates that sequences are incremented and the next value
published immediately, outside the transaction control mechanism.
11. Tidy up:
drop table seqtest;
drop sequence seqtest_pk_s;
drop sequence seq1;

12. Connect to the WEBSTORE schema with either SQL Developer or SQL*Plus
and create three sequences which will be used in later exercises. (You may
have to connect first as a privileged user like SYSTEM and grant the "CREATE
SEQUENCE" privilege to the WEBSTORE user.)
create sequence prod_seq;
create sequence cust_seq;
create sequence order_seq;

Two-Minute Drill
Categorize the Main Database Objects
• Some objects contain data, principally tables and indexes.
• Programmatic objects such as stored procedures and functions are executable
code.
• Views and synonyms are objects that give access to other objects.
• Tables are two-dimensional structures, storing rows defined with columns.
• Tables exist within a schema. The schema name together with the table name
makes a unique identifier.

List the Data Types That Are Available for Columns
• The most commonly used data types are VARCHAR2, NUMBER, and DATE.
• There are many other data types.

Create a Simple Table
• Tables can be created from nothing or with a subquery.
• After creation, column definitions can be added, dropped, or modified.
• The table definition can include default values for columns.

Create and Use Temporary Tables
• Rows in a temporary table are visible only to the session that inserted them.
• DML on temporary tables does not generate redo.
• Temporary tables exist only in sessions’ PGAs or in temporary segments.
• A temporary table can keep rows for the duration of a session or of a
transaction, depending on how it was created.

Constraints
• Constraints can be defined at table creation time or added later.
• A constraint can be defined inline with its column or at the table level after
the columns.
• Table-level constraints can be more complex than those defined inline.
• A table may only have one primary key but can have many unique keys.
• A primary key is functionally equivalent to unique plus not null.
• A unique constraint does not stop insertion of many null values.
• Foreign key constraints define the relationships between tables.

Indexes
• Indexes are required for enforcing unique and primary key constraints.
• NULLs are not included in B*Tree indexes but are included in bitmap indexes.
• B*Tree indexes can be unique or nonunique, which determines whether they
can accept duplicate key values.
• B*Tree indexes are suitable for high cardinality columns, bitmap indexes for
low cardinality columns.
• Bitmap indexes can be compound, function based, or descending; B*Tree
indexes can also be unique, compressed, and reverse key.

Views
• A simple view has one detail (or base) table and uses neither functions nor
aggregation.
• A complex view can be based on any SELECT statement, no matter how
complicated.

• Views exist only as data dictionary constructs. Whenever you query a view, the
underlying SELECT statement must be run.
• Views are schema objects. To use a view in another schema, the view name
must be qualified with the schema name.
• A view can be queried exactly as though it were a table.
• Views can be joined to other views or to tables, they can be aggregated, and in
some cases they can accept DML statements.

Synonyms
• A synonym is an alternative name for a view or a table.
• Private synonyms are schema objects; public synonyms exist outside user
schemas and can be used without specifying a schema name as a qualifier.
• Synonyms share the same namespace as views and tables and can therefore be
used interchangeably with them.

Sequences
• A sequence generates unique values—unless either MAXVALUE or MINVALUE
and CYCLE have been specified.
• Incrementing a sequence need not be committed and cannot be rolled back.
• Any session can increment the sequence by reading its next value. It is possible
to obtain the last value issued to your session but not the last value issued.

Self Test
1. If a table is created without specifying a schema, in which schema will it be?
(Choose the best answer.)
A. It will be an orphaned table, without a schema.
B. The creation will fail.
C. It will be in the SYS schema.
D. It will be in the schema of the user creating it.
E. It will be in the PUBLIC schema.
2. Several object types share the same namespace and therefore cannot have the
same name in the same schema. Which of the following object types is not in
the same namespace as the others? (Choose the best answer.)
A. Index
B. PL/SQL stored procedure
C. Synonym
D. Table
E. View

3. Which of these statements will fail because the table name is not legal?
(Choose two answers.)
A. create table "SELECT" (col1 date);
B. create table "lowercase" (col1 date);
C. create table number1 (col1 date);
D. create table 1number(col1 date);
E. create table update(col1 date);
4. What are distinguishing characteristics of heap tables? (Choose two answers.)
A. A heap table can store variable-length rows.
B. More than one table can store rows in a single heap.
C. Rows in a heap are in random order.
D. Heap tables cannot be indexed.
E. Tables in a heap do not have a primary key.
5. Which of the following data types are variable length? (Choose all correct
answers.)
A. BLOB
B. CHAR
C. LONG
D. NUMBER
E. RAW
F. VARCHAR2
6. Study these statements:
create table tab1 (c1 number(1), c2 date);
alter session set nls_date_format='dd-mm-yy';
insert into tab1 values (1.1,'31-01-07');

Will the insert succeed? (Choose the best answer.)
A. The insert will fail because the 1.1 is too long.
B. The insert will fail because the '31-01-07' is a string, not a date.
C. The insert will fail for both reasons A and B.
D. The insert will succeed.
7. Which of the following is not supported by Oracle as an internal data type?
(Choose the best answer.)
A. CHAR
B. FLOAT
C. INTEGER
D. STRING

8. Consider this statement:
create table t1 as select * from regions where 1=2;

What will be the result? (Choose the best answer.)
A. There will be an error because of the impossible condition.
B. No table will be created because the condition returns FALSE.
C. The table T1 will be created but no rows inserted because the condition
returns FALSE.
D. The table T1 will be created and every row in REGIONS inserted because
the condition returns a NULL as a row filter.
9. When a table is created with a statement such as the following:
create table newtab as select * from tab;

will there be any constraints on the new table? (Choose the best answer.)
A. The new table will have no constraints, because constraints are not copied
when creating tables with a subquery.
B. All the constraints on TAB will be copied to NEWTAB.
C. Primary key and unique constraints will be copied, but not check and
not-null constraints.
D. Check and not-null constraints will be copied, but not unique or
primary keys.
E. All constraints will be copied, except foreign key constraints.
10. Which types of constraint require an index? (Choose all that apply.)
A. CHECK
B. NOT NULL
C. PRIMARY KEY
D. UNIQUE
11. A transaction consists of two statements. The first succeeds, but the second
(which updates several rows) fails partway through because of a constraint
violation. What will happen? (Choose the best answer.)
A. The whole transaction will be rolled back.
B. The second statement will be rolled back completely, and the first will be
committed.
C. The second statement will be rolled back completely, and the first will
remain uncommitted.
D. Only the one update that caused the violation will be rolled back;
everything else will be committed.
E. Only the one update that caused the violation will be rolled back;
everything else will remain uncommitted.
12. Which of the following statements is correct about indexes? (Choose the best
answer.)
A. An index can be based on multiple columns of a table, but the columns
must be of the same datatype.
B. An index can be based on multiple columns of a table, but the columns must
be adjacent and specified in the order that they are defined in the table.
C. An index cannot have the same name as a table, unless the index and the
table are in separate schemas.
D. None of the above statements is correct.
13. Which of the following options can be applied to B*Tree indexes, but not to
bitmap indexes? (Choose all correct answers.)
A. Compression
B. Descending order
C. Function-based key expressions
D. Reverse key indexing
E. Uniqueness
F. Use of compound keys
14. Data in temporary tables has restricted visibility. If a user logs on as HR and
inserts rows into a temporary table, to whom will the rows be visible?
A. To no session other than the one that did the insert
B. To all sessions connected as HR
C. To all sessions, until the session that inserted them terminates
D. To all sessions, until the session that inserted them commits the transaction
15. Where does the data in a temporary table get written to disk? (Choose the
best answer.)
A. It is never written to disk
B. To the user’s temporary tablespace
C. To the temporary tablespace of the user in whose schema the table resides
D. To a disk local to the session’s user process
16. Which of these is a defining characteristic of a complex view, rather than a
simple view? (Choose one or more correct answers.)
A. Restricting the projection by selecting only some of the table’s columns
B. Naming the view’s columns with column aliases
C. Restricting the selection of rows with a WHERE clause
D. Performing an aggregation
E. Joining two tables

Chapter 7: DDL and Schema Objects

309
17. Consider these three statements:
create view v1 as select department_id,department_name,last_name from
departments join employees using (department_id);
select department_name,last_name from v1 where department_id=20;
select d.department_name,e.last_name from departments d, employees e
where d.department_id=e.department_id and
d.department_id=20;

The first query will be quicker than the second because (choose the best answer):

A. The view has already done the work of joining the tables.
B. The view uses ISO standard join syntax, which is faster than the Oracle
join syntax used in the second query.
C. The view is precompiled, so the first query requires less dynamic
compilation than the second query.
D. There is no reason for the first query to be quicker.
18. Study this view creation statement:
create view dept30 as
select department_id,employee_id,last_name from employees
where department_id=30 with check option;

What might make the following statement fail? (Choose the best answer.)
update dept30 set department_id=10 where employee_id=114;

A. Unless specified otherwise, views will be created as WITH READ ONLY.
B. The view is too complex to allow DML operations.
C. The WITH CHECK OPTION will reject any statement that changes the
DEPARTMENT_ID.
D. The statement will succeed.
19. There is a simple view SCOTT.DEPT_VIEW on the table SCOTT.DEPT. This
insert fails with an error:
SQL> insert into dept_view values('SUPPORT','OXFORD');
insert into dept_view values('SUPPORT','OXFORD')
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SCOTT"."DEPT"."DEPTNO")

What might be the problem? (Choose the best answer.)
A. The INSERT violates a constraint on the detail table.
B. The INSERT violates a constraint on the view.
C. The view was created as WITH READ ONLY.
D. The view was created as WITH CHECK OPTION.

20. What are distinguishing characteristics of a public synonym rather than a
private synonym? (Choose two correct answers.)
A. Public synonyms are always visible to all users.
B. Public synonyms can be accessed by name without a schema name
qualifier.
C. Public synonyms can be selected from without needing any permissions.
D. Public synonyms can have the same names as tables or views.
21. Consider these three statements:
create synonym s1 for employees;
create public synonym s1 for departments;
select * from s1;

Which of the following statements is correct? (Choose the best answer.)
A. The second statement will fail because an object S1 already exists.
B. The third statement will show the contents of EMPLOYEES.
C. The third statement will show the contents of DEPARTMENTS.
D. The third statement will show the contents of the table S1, if such a table
exists in the current schema.
22. A view and a synonym are created as follows:
create view dept_v as select * from dept;
create synonym dept_s for dept_v;

Subsequently the table DEPT is dropped. What will happen if you query the
synonym DEPT_S? (Choose the best answer.)
A. There will not be an error because the synonym addresses the view, which
still exists, but there will be no rows returned.
B. There will not be an error if you first recompile the view with the
command ALTER VIEW DEPT_V COMPILE FORCE;.
C. There will be an error because the synonym will be invalid.
D. There will be an error because the view will be invalid.
E. There will be an error because the view will have been dropped implicitly
when the table was dropped.
23. A sequence is created as follows:
create sequence seq1 maxvalue 50;

If the current value is already 50, when you attempt to select SEQ1.NEXTVAL
what will happen? (Choose the best answer.)
A. The sequence will cycle and issue 0.
B. The sequence will cycle and issue 1.
C. The sequence will reissue 50.
D. There will be an error.

24. You create a sequence as follows:
create sequence seq1 start with 1;

After selecting from it a few times, you want to reinitialize it to reissue the
numbers already generated. How can you do this? (Choose the best answer.)
A. You must drop and re-create the sequence.
B. You can’t. Under no circumstances can numbers from a sequence be
reissued once they have been used.

C. Use the command ALTER SEQUENCE SEQ1 START WITH 1; to reset the
next value to 1.
D. Use the command ALTER SEQUENCE SEQ1 CYCLE; to reset the sequence
to its starting value.

Self Test Answers
1. þ D. The schema will default to the current user.
ý A, B, C, and E. A is wrong because all tables must be in a schema. B is
wrong because the creation will succeed. C is wrong because the SYS schema
is not a default schema. E is wrong because while there is a notional user
PUBLIC, it does not have a schema at all.
2. þ A. Indexes have their own namespace.
ý B, C, D, and E. Stored procedures, synonyms, tables, and views exist in the
same namespace.
3. þ D and E. D violates the rule that a table name must begin with a letter,
and E violates the rule that a table name cannot be a reserved word. Both
rules can be bypassed by using double quotes.
ý A, B, and C. These are wrong because all will succeed (though A and B are
not exactly sensible).
4. þ A and C. A heap is a table of variable-length rows in random order.
ý B, D, and E. B is wrong because a heap table can only be one table. D and
E are wrong because a heap table can (and usually will) have indexes and a
primary key.
5. þ A, C, D, E, and F. All these are variable-length data types.
ý B. CHAR columns are fixed length.
6. þ D. The number will be rounded to one digit, and the string will be cast
as a date.
ý A, B, and C. Automatic rounding and typecasting will correct the “errors,”
though ideally they would not occur.


7. þ D. STRING is not an internal data type.
ý A, B, and C. CHAR, FLOAT, and INTEGER are all internal data types,
though not as widely used as some others.
8. þ C. The condition applies only to the rows selected for insert, not to the
table creation.
ý A, B, and D. A is wrong because the statement is syntactically correct. B is
wrong because the condition does not apply to the DDL, only to the DML.
D is wrong because the condition will exclude all rows from selection.
9. þ D. Check and not-null constraints are not dependent on any structures
other than the table to which they apply and so can safely be copied to a new
table.
ý A, B, C, and E. A is wrong because not-null and check constraints will be
applied to the new table. B, C, and E are wrong because these constraints need
other objects (indexes or a parent table) and so are not copied.
10. þ C and D. Unique and primary key constraints are enforced with indexes.
ý A and B. Check and not-null constraints do not rely on indexes.
11. þ C. A constraint violation will force a rollback of the current statement but
nothing else.
ý A, B, D, and E. A is wrong because all statements that have succeeded
remain intact. B and D are wrong because there is no commit of anything
until it is specifically requested. E is wrong because the whole statement will
be rolled back, not just the failed row.
12. þ D. All the statements are wrong.
ý A, B, and C. A is wrong because compound indexes need not be
on columns of the same datatype. B is wrong because the columns in a
compound index need not be physically adjacent. C is wrong because indexes
and tables do not share the same namespace.
13. þ A, D, and E. Compression, reverse key, and unique can only be applied to
B*Tree indexes.
ý B, C, and F. Descending, function-based, and compound indexes can be
either B*Tree or bitmap.
14. þ A. Rows in a temporary table are visible only to the inserting session.
ý B, C, and D. All these incorrectly describe the scope of visibility of rows in
a temporary table.
15. þ B. If a temporary table cannot fit in a session’s PGA, it will be written to
the session’s temporary tablespace.
ý A, C, and D. A is wrong because temporary tables can be written out
to temporary segments. C is wrong because the location of the temporary

segment is session specific, not table specific. D is wrong because it is the
session server process that writes the data, not the user process.
16. þ D and E. Aggregations and joins make a view complex and make DML
impossible.
ý A, B, and C. Selection and projection or renaming columns does not
make the view complex.
17. þ D. Sad but true. Views will not help performance, unless they include
tuning hints.
ý A, B, and C. A is wrong because a view is only a SELECT statement;
it doesn’t prerun the query. B is wrong because the Oracle optimizer will
sort out any differences in syntax. C is wrong because, although views are
precompiled, this doesn’t affect the speed of compiling a user’s statement.

18. þ C. The WITH CHECK OPTION will prevent DML that would cause a row
to disappear from the view.
ý A, B, and D. A is wrong because views are by default created read/write.
B is wrong because the view is a simple view. D is wrong because the
statement cannot succeed because the check option will reject it.
19. þ A. There is a NOT NULL or PRIMARY KEY constraint on DEPT.DEPTNO.
ý B, C, and D. B is wrong because constraints are enforced on detail tables,
not on views. C and D are wrong because the error message would be different.
20. þ B and D. Public synonyms are not schema objects and so can only be
addressed directly. They can have the same names as schema objects.
ý A and C. These are wrong because users must be granted privileges on a
public synonym before they can see it or select from it.
21. þ B. The order of priority is to search the schema namespace before the
public namespace, so it will be the private synonym (to EMPLOYEES) that
will be found.
ý A, C, and D. A is wrong because a synonym can exist in both the public
namespace and the schema namespace. C is wrong because the order of
priority will find the private synonym first. D is wrong because it would not
be possible to have a table and a private synonym in the same schema with
the same name.
22. þ D. The synonym will be fine, but the view will be invalid. Oracle will
attempt to recompile the view, but this will fail.
ý A, B, C, and E. A is wrong because the view will be invalid. B is wrong
because the FORCE keyword can only be applied when creating a view (and
it would still be invalid, even so). C is wrong because the synonym will be
fine. E is wrong because views are not dropped implicitly (unlike indexes and
constraints).


23. þ D. The default is NOCYCLE, and the sequence cannot advance further.
ý A, B, and C. A and B are wrong because CYCLE is disabled by default. If it
were enabled, the next number issued would be 1 (not zero) because 1 is the
default for START WITH. C is wrong because under no circumstances will a
sequence issue repeating values.
24. þ A. It is not possible to change the next value of a sequence, so you must
re-create it.
ý B, C, and D. B is wrong because, while a NOCYCLE sequence can never
reissue numbers, there is no reason why a new sequence (with the same
name) cannot do so. C is wrong because START WITH can only be specified
at creation time. D is wrong because this will not force an instant cycle, it
will only affect what happens when the sequence reaches its MAXVALUE or
MINVALUE.

CHAPTER 8
DML and Concurrency

Exam Objectives
In this chapter you will learn to
• 051.9.1 Describe Each Data Manipulation Language (DML) Statement
• 051.9.2 Insert Rows into a Table
• 051.9.3 Update Rows in a Table
• 051.9.4 Delete Rows from a Table
• 051.9.5 Control Transactions
• 052.9.1 Manage Data Using DML
• 052.9.2 Identify and Administer PL/SQL Objects
• 052.9.3 Monitor and Resolve Locking Conflicts
• 052.10.1 Explain the Purpose of Undo
• 052.10.2 Understand How Transactions Generate Undo
• 052.10.3 Manage Undo


Data in a relational database is managed with the DML (Data Manipulation Language)
commands of SQL. These are INSERT, UPDATE, DELETE, and (with more recent
versions of SQL) MERGE. This chapter discusses what happens in memory, and on
disk, when you execute INSERT, UPDATE, or DELETE statements—the manner in
which changed data is written to blocks of table and index segments and the old
version of the data is written out to blocks of an undo segment. The theory behind
this, summarized as the ACID test, which every relational database must pass, is
explored, and you will see the practicalities of how undo data is managed.
The transaction control statements COMMIT and ROLLBACK, which are closely
associated with DML commands, are explained along with a discussion of some basic
PL/SQL objects. The chapter ends with a detailed examination of concurrent data
access and table and row locking.

Data Manipulation Language (DML) Statements
Strictly speaking, there are five DML commands:
• SELECT
• INSERT
• UPDATE
• DELETE
• MERGE
In practice, most database professionals never include SELECT as part of DML.
It is considered to be a separate language in its own right, which is not unreasonable
when you consider that the next five chapters are dedicated to describing it. The
MERGE command is often dropped as well, not because it isn’t clearly a data
manipulation command but because it doesn’t do anything that cannot be done
with other commands. MERGE can be thought of as a shortcut for executing either
an INSERT or an UPDATE or a DELETE, depending on some condition. A command
often considered with DML is TRUNCATE. This is actually a DDL (Data Definition
Language) command, but as the effect for end users is the same as for a DELETE
(though its implementation is totally different), it does fit with DML.
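The MERGE syntax itself is not shown at this point, so a brief sketch may help. The following is an illustrative example only: REGIONS_COPY is an assumed table with the same two columns as HR.REGIONS. Rows that match on REGION_ID are updated; rows missing from the copy are inserted.

```sql
-- Hypothetical sketch: synchronize REGIONS_COPY with REGIONS.
merge into regions_copy c
using regions r
on (c.region_id = r.region_id)
when matched then
  update set c.region_name = r.region_name
when not matched then
  insert (region_id, region_name)
  values (r.region_id, r.region_name);
```

The WHEN MATCHED branch can also carry a DELETE WHERE clause, which is how a MERGE can remove rows as well as insert or update them.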

INSERT
Oracle stores data in the form of rows in tables. Tables are populated with rows (just as
a country is populated with people) in several ways, but the most common method is
with the INSERT statement. SQL is a set-oriented language, so any one command can
affect one row or a set of rows. It follows that one INSERT statement can insert an
individual row into one table or many rows into many tables. The basic versions
of the statement do insert just one row, but more complex variations can, with one
command, insert multiple rows into multiple tables.

TIP There are much faster techniques than INSERT for populating a table
with large numbers of rows. These are the SQL*Loader utility, which can
upload data from files produced by an external feeder system, and Data Pump,
which can transfer data in bulk from one Oracle database to another, either
via disk files or through a network link.

The simplest form of the INSERT statement inserts one row into one table, using
values provided in line as part of the command. The syntax is as follows:
INSERT INTO table [(column [,column...])] VALUES (value [,value...]);

For example:
insert into hr.regions values (10,'Great Britain');
insert into hr.regions (region_name, region_id) values ('Australasia',11);
insert into hr.regions (region_id) values (12);
insert into hr.regions values (13,null);

The first of the preceding commands provides values for both columns of the
REGIONS table. If the table had a third column, the statement would fail because it
relies upon positional notation. The statement does not say which value should be
inserted into which column; it relies on the position of the values: their ordering in
the command. When the database receives a statement using positional notation, it
will match the order of the values to the order in which the columns of the table are
defined. The statement would also fail if the column order was wrong: the database
would attempt the insertion but would fail because of data type mismatches.
The second command nominates the columns to be populated and the values
with which to populate them. Note that the order in which columns are mentioned
now becomes irrelevant—as long as the order of the columns is the same as the order
of the values.
The third example lists one column, and therefore only one value. All other
columns will be left null. This statement will fail if the REGION_NAME column is
not nullable. The fourth example will produce the same result, but because there
is no column list, some value (even a NULL) must be provided for each column.
TIP It is often considered good practice not to rely on positional notation
and instead always to list the columns. This is more work but makes the code
self-documenting (always a good idea!) and also makes the code more resilient
against table structure changes. For instance, if a column is added to a table, all
the INSERT statements that rely on positional notation will fail until they are
rewritten to include a NULL for the new column. INSERT code that names
the columns will continue to run.


EXAM TIP An INSERT command can insert one row, with column values
specified in the command, or a set of rows created by a SELECT statement.

To insert many rows with one INSERT command, the values for the rows must
come from a query. The syntax is as follows:
INSERT INTO table [(column [,column...])] subquery;

Note that this syntax does not use the VALUES keyword. If the column list is
omitted, then the subquery must provide values for every column in the table. To
copy every row from one table to another, if the tables have the same column
structure, a command such as this is all that is needed:
insert into regions_copy select * from regions;

This presupposes that the table REGIONS_COPY does exist. The SELECT subquery
reads every row from the source table, which is REGIONS, and the INSERT inserts
them into the target table, which is REGIONS_COPY.
EXAM TIP Any SELECT statement, specified as a subquery, can be used as
the source of rows passed to an INSERT. This enables insertion of many rows.
Alternatively, using the VALUES clause will insert one row. The values can be
literals or prompted for as substitution variables.
To conclude the description of the INSERT command, it should be mentioned
that it is possible to insert rows into several tables with one statement. This is not
part of the OCP examination, but for completeness here is an example:
insert all
when 1=1 then
into emp_no_name (department_id,job_id,salary,commission_pct,hire_date)
values (department_id,job_id,salary,commission_pct,hire_date)
when department_id <> 80 then
into emp_non_sales (employee_id,department_id,salary,hire_date)
values (employee_id,department_id,salary,hire_date)
when department_id = 80 then
into emp_sales (employee_id,salary,commission_pct,hire_date)
values (employee_id,salary,commission_pct,hire_date)
select employee_id,department_id,job_id,salary,commission_pct,hire_date
from employees where hire_date > sysdate - 30;

To read this statement, start at the bottom. The subquery retrieves all employees
recruited in the last 30 days. Then go to the top. The ALL keyword means that every
row selected will be considered for insertion into all the tables following, not just into
the first table for which the condition applies. The first condition is 1=1, which is
always true, so every source row will create a row in EMP_NO_NAME. This is a copy
of the EMPLOYEES table with the personal identifiers removed. The second condition
is DEPARTMENT_ID <> 80, which will generate a row in EMP_NON_SALES for every
employee who is not in the sales department; there is no need for this table to have
the COMMISSION_PCT column. The third condition generates a row in EMP_SALES

for all the salesmen; there is no need for the DEPARTMENT_ID column, because they
will all be in department 80.
This is a simple example of a multitable insert, but it should be apparent that with
one statement, and therefore only one pass through the source data, it is possible to
populate many target tables. This can take an enormous amount of strain off the
database.
Exercise 8-1: Use the INSERT Command In this exercise, use various
techniques to insert rows into a table.

1. Connect to the WEBSTORE schema with either SQL Developer or SQL*Plus.
2. Query the PRODUCTS, ORDERS, and ORDER_ITEMS tables, to confirm what
data is currently stored:
select * from products;
select * from orders;
select * from order_items;

3. Insert two rows into the PRODUCTS table, providing the values in line:
insert into products values (prod_seq.nextval, '11G SQL Exam Guide',
'ACTIVE',60,sysdate, 20);
insert into products
values (prod_seq.nextval, '11G All-in-One Guide',
'ACTIVE',100,sysdate, 40);

4. Insert two rows into the ORDERS table, explicitly providing the column
names:
insert into orders (order_id, order_date, order_status, order_amount, customer_id)
values (order_seq.nextval, sysdate, 'COMPLETE', 3, 2);
insert into orders (order_id, order_date, order_status, order_amount, customer_id)
values (order_seq.nextval, sysdate, 'PENDING', 5, 3);

5. Insert three rows into the ORDER_ITEMS table, using substitution variables:
insert into order_items values (&item_id, &order_id, &product_id, &quantity);

When prompted, provide the values: {1,1,2,5}, {2,1,1,3}, and {1,2,2,4}.
6. Insert a row into the PRODUCTS table, calculating the PRODUCT_ID to be
100 higher than the current high value. This will need a scalar subquery:
insert into products values ((select max(product_id)+100 from products),
'11G DBA2 Exam Guide', 'INACTIVE', 40, sysdate-365, 0);

7. Confirm the insertion of the rows:
select * from products;
select * from orders;
select * from order_items;

8. Commit the insertions:
commit;


The following illustration shows the results of the exercise, using SQL*Plus:

UPDATE
The UPDATE command is used to change rows that already exist—rows that have
been created by an INSERT command, or possibly by a tool such as Data Pump. As
with any other SQL command, an UPDATE can affect one row or a set of rows. The
size of the set affected by an UPDATE is determined by a WHERE clause, in exactly the
same way that the set of rows retrieved by a SELECT statement is defined by a WHERE
clause. The syntax is identical. All the rows updated will be in one table; it is not
possible for a single update command to affect rows in multiple tables.
When updating a row or a set of rows, the UPDATE command specifies which
columns of the row(s) to update. It is not necessary (or indeed common) to update
every column of the row. If the column being updated already has a value, then this
value is replaced with the new value specified by the UPDATE command. If the
column was not previously populated—which is to say, its value was NULL—then
it will be populated after the UPDATE with the new value.
A typical use of UPDATE is to retrieve one row and update one or more columns
of the row. The retrieval will be done using a WHERE clause that selects a row by its
primary key, the unique identifier that will ensure that only one row is retrieved. Then
the columns that are updated will be any columns other than the primary key column.
It is very unusual to change the value of the primary key. The lifetime of a row begins
when it is inserted, then may continue through several updates, until it is deleted.
Throughout this lifetime, it will not usually change its primary key.
To update a set of rows, use a less restrictive WHERE clause than the primary key.
To update every row in a table, do not use any WHERE clause at all. This set behavior
can be disconcerting when it happens by accident. If you select the rows to be updated
with any column other than the primary key, you may update several rows, not just one.
If you omit the WHERE clause completely, you will update the whole table—perhaps
millions of rows updated with just one statement—when you meant to change just one.

Chapter 8: DML and Concurrency

321
EXAM TIP One UPDATE statement can change rows in only one table, but it
can change any number of rows in that table.

An UPDATE command must honor any constraints defined for the table, just
as the original INSERT would have. For example, it will not be possible to update a
column that has been marked as mandatory to a NULL value or to update a primary
key column so that it will no longer be unique. The basic syntax is the following:

UPDATE table SET column=value [,column=value...] [WHERE condition];

The more complex form of the command uses subqueries for one or more of the
column values and for the WHERE condition. Figure 8-1 shows updates of varying
complexity, executed from SQL*Plus.
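Figure 8-1 is an image in the printed book, so its exact statements are not reproduced here. The sketches below are assumed reconstructions based on the descriptions that follow; the literal values (10000, 206, 'Abel', the &dept variable) are illustrative guesses, not the figure's actual content.

```sql
-- First example: a literal value, one row selected by primary key equality.
update employees set salary=10000 where employee_id=206;

-- Second example: arithmetic on an existing column; the selection is not
-- on the primary key, so more than one row may be updated.
update employees set salary=salary*1.1 where last_name='Abel';

-- Third example: a subquery (lines 3 and 4) defines the set of rows, with
-- a substitution variable prompting for part of the subquery's condition.
update employees
set salary=salary*1.1
where department_id in (select department_id from departments
                        where department_name like '%&dept%');

-- Fourth example: the subquery (lines 3 and 4) supplies the new value;
-- the WHERE clause (line 5) identifies one employee by primary key.
update employees
set department_id=80, commission_pct=
  (select min(commission_pct) from employees
   where department_id=80)
where employee_id=206;
```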
The first example is the simplest. One column of one row is set to a literal value.
Because the row is chosen with a WHERE clause that uses the equality predicate on
the table’s primary key, there is an absolute guarantee that at most only one row will
be affected. No row will be changed if the WHERE clause fails to find any rows at all.
The second example shows use of arithmetic and an existing column to set the new
value, and the row selection is not done on the primary key column. If the selection is
not done on the primary key, or if a nonequality predicate (such as BETWEEN) is used,
then the number of rows updated may be more than one. If the WHERE clause is
omitted entirely, the update will be applied to every row in the table.
The third example in Figure 8-1 introduces the use of a subquery to define the set
of rows to be updated. A minor additional complication is the use of a replacement
variable to prompt the user for a value to use in the WHERE clause of the subquery.

Figure 8-1 Examples of using the UPDATE statement


In this example, the subquery (lines 3 and 4) will select every employee who is in a
department whose name includes the string ‘IT’ and increment their current salary
by 10 percent (unlikely to happen in practice).
It is also possible to use subqueries to determine the value to which a column will
be set, as in the fourth example. In this case, one employee (identified by primary key,
in line 5) is transferred to department 80 (the sales department), and then the subquery
in lines 3 and 4 sets his commission rate to whatever the lowest commission rate in
the department happens to be.
The syntax of an update that uses subqueries is as follows:
UPDATE table
SET column=[subquery] [,column=subquery...]
WHERE column = (subquery) [AND column=subquery...] ;

There is a rigid restriction on the subqueries using update columns in the SET
clause: the subquery must return a scalar value. A scalar value is a single value of
whatever data type is needed: the query must return one row, with one column. If
the query returns several values, the UPDATE will fail. Consider these two examples:
update employees
set salary=(select salary from employees where employee_id=206);
update employees
set salary=(select salary from employees where last_name='Abel');

The first example, using an equality predicate on the primary key, will always
succeed. Even if the subquery does not retrieve a row (as would be the case if there
were no employee with EMPLOYEE_ID equal to 206), the query will still return a
scalar value: a null. In that case, all the rows in EMPLOYEES would have their SALARY
set to NULL—which might not be desired but is not an error as far as SQL is concerned.
The second example uses an equality predicate on the LAST_NAME, which is not
guaranteed to be unique. The statement will succeed if there is only one employee
with that name, but if there were more than one it would fail with the error
“ORA-01427: single-row subquery returns more than one row.” For code that will
work reliably, no matter what the state of the data, it is vital to ensure that the
subqueries used for setting column values are scalar.
TIP A common fix for making sure that queries are scalar is to use MAX or
MIN. This version of the statement will always succeed:
update employees
set salary=(select max(salary) from employees where
last_name='Abel');
However, just because it will work, doesn’t necessarily mean that it does what
is wanted.
The subqueries in the WHERE clause must also be scalar, if it is using the equality
predicate (as in the preceding examples) or the greater/less than predicates. If it is
using the IN predicate, then the query can return multiple rows, as in this example
which uses IN:

update employees
set salary=10000
where department_id in (select department_id from departments
where department_name like '%IT%');

This will apply the update to all employees in a department whose name includes
the string ‘IT’. There are several of these. But even though the query can return several
rows, it must still return only one column.

EXAM TIP The subqueries used to SET column values must be scalar
subqueries. The subqueries used to select the rows must also be scalar,
unless they use the IN predicate.

Exercise 8-2: Use the UPDATE Command In this exercise, use various
techniques to update rows in a table. It is assumed that the WEBSTORE.PRODUCTS
table is as seen in the illustration at the end of Exercise 8-1. If not, adjust the values
as necessary.
1. Connect to the WEBSTORE schema using SQL Developer or SQL*Plus.
2. Update a single row, identified by primary key:
update products set product_description='DBA1 Exam Guide'
where product_id=102;

This statement should return the message “1 row updated.”
3. Update a set of rows, using a subquery to select the rows and to provide values:
update products
set product_id=(1+(select max(product_id) from products where product_id <> 102))
where product_id=102;

This statement should return the message “1 row updated.”
4. Confirm the state of the rows:
select * from products;

5. Commit the changes made:
commit;

DELETE
Previously inserted rows can be removed from a table with the DELETE command.
The command will remove one row or a set of rows from the table, depending on a
WHERE clause. If there is no WHERE clause, every row in the table will be removed
(which can be a little disconcerting if you left out the WHERE clause by mistake).
TIP There are no “warning” prompts for any SQL commands. If you instruct
the database to delete a million rows, it will do so. Immediately. There is none
of that “Are you sure?” business that some environments offer.


A deletion is all or nothing. It is not possible to nominate columns. When rows
are inserted, you can choose which columns to populate. When rows are updated, you
can choose which columns to update. But a deletion applies to the whole row—the
only choice is which rows in which table. This makes the DELETE command syntactically
simpler than the other DML commands. The syntax is as follows:
DELETE FROM table [WHERE condition];

This is the simplest of the DML commands, particularly if the condition is omitted.
In that case, every row in the table will be removed with no prompt. The only
complication is in the condition. This can be a simple match of a column to a literal:
delete from employees where employee_id=206;
delete from employees where last_name like 'S%';
delete from employees where department_id=&Which_department;
delete from employees where department_id is null;

The first statement identifies a row by primary key. One row only will be
removed—or no row at all, if the value given does not find a match. The second
statement uses a nonequality predicate that could result in the deletion of many
rows: every employee whose surname begins with an uppercase “S.” The third
statement uses an equality predicate but not on the primary key. It prompts for
a department number with a substitution variable, and all employees in that
department will go. The final statement removes all employees who are not
currently assigned to a department.
The condition can also be a subquery:
delete from employees where department_id in
  (select department_id from departments where location_id in
    (select location_id from locations where country_id in
      (select country_id from countries where region_id in
        (select region_id from regions where region_name='Europe'))));

This example uses a subquery for row selection that navigates the HR geographical
tree (with more subqueries) to delete every employee who works for any department
that is based in Europe. The same rule for the number of values returned by the
subquery applies as for an UPDATE command: if the row selection is based on an
equality predicate, the subquery must be scalar, but if it uses IN (as in the
preceding example) the subquery can return several rows.
If the DELETE command finds no rows to delete, this is not an error. The
command will return the message “0 rows deleted” rather than an error message
because the statement did complete successfully—it just didn’t find anything to do.
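This behavior is easy to observe from client code. Below is a minimal sketch using Python's sqlite3 module rather than Oracle (the table and data are invented for illustration; the statement-level behavior of matching zero rows is the same):

```python
import sqlite3

# In-memory database standing in for the HR schema used in the text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (employee_id INTEGER PRIMARY KEY, last_name TEXT)")
conn.execute("INSERT INTO employees VALUES (206, 'Gietz')")

# A DELETE that matches nothing is not an error: it completes with "0 rows deleted".
cur = conn.execute("DELETE FROM employees WHERE employee_id = 999")
print(cur.rowcount)   # 0 -- no exception is raised

# A DELETE that matches one row reports "1 row deleted".
cur = conn.execute("DELETE FROM employees WHERE employee_id = 206")
print(cur.rowcount)   # 1
```

The same pattern applies in any Oracle client library: a zero rowcount, not an error, signals that nothing matched.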
Exercise 8-3: Use the DELETE Command In this exercise, use various
techniques to delete rows in a table. It is assumed that the WEBSTORE.PRODUCTS
table has been modified during the previous two exercises. If not, adjust the values
as necessary.

Chapter 8: DML and Concurrency

325
1. Connect to the WEBSTORE schema using SQL Developer or SQL*Plus.
2. Remove one row, using the equality predicate on the primary key:
delete from products where product_id=3;

This should return the message “1 row deleted.”
3. Attempt to remove every row in the table by omitting a WHERE clause:
delete from products;

This will fail, due to a constraint violation, because there are child records
in the ORDER_ITEMS table that reference PRODUCT_ID values in the
PRODUCTS table via the foreign key constraint FK_PRODUCT_ID.
4. Commit the deletion:
commit;

To remove rows from a table, there are two options: the DELETE command and
the TRUNCATE command. DELETE is less drastic, in that a deletion can be rolled
back whereas a truncation cannot be. DELETE is also more controllable, in that it is
possible to choose which rows to delete, whereas a truncation always affects the whole
table. DELETE is, however, a lot slower and can place a lot of strain on the database.
TRUNCATE is virtually instantaneous and effortless.
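The difference in reversibility can be demonstrated from any client tool. A sketch with Python's sqlite3 follows (SQLite has no TRUNCATE command, so only the DELETE half can be shown; the table name follows the exercise above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (product_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO products VALUES (?)", [(1,), (2,), (3,)])
conn.commit()

# Delete every row, then roll the deletion back: DELETE is part of a transaction.
conn.execute("DELETE FROM products")
conn.rollback()

count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
print(count)   # 3 -- all rows restored; a truncation could never be reversed
```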

TRUNCATE
The TRUNCATE command is not a DML command; it is a DDL command. The
difference is enormous. When DML commands affect data, they insert, update, and
delete rows as part of transactions. Transactions are defined later in this chapter, in
the section “Control Transactions.” For now, let it be said that a transaction can be
controlled, in the sense that the user has the choice of whether to make the work
done in a transaction permanent, or whether to reverse it. This is very useful but
forces the database to do additional work behind the scenes that the user is not aware
of. DDL commands are not user transactions (though within the database, they are in
fact implemented as transactions—but developers cannot control them), and there is
no choice about whether to make them permanent or to reverse them. Once executed,
they are done. However, in comparison to DML, they are very fast.
EXAM TIP Transactions, consisting of INSERT, UPDATE, and DELETE (or even
MERGE) commands, can be made permanent (with a COMMIT) or reversed
(with a ROLLBACK). A TRUNCATE command, like any other DDL command,
is immediately permanent: it can never be reversed.
From the user’s point of view, a truncation of a table is equivalent to executing a
DELETE of every row: a DELETE command without a WHERE clause. But whereas
a deletion may take some time (possibly hours, if there are many rows in the table), a
truncation will go through instantly. It makes no difference whether the table contains
one row or billions; a TRUNCATE will be virtually instantaneous. The table will still
exist, but it will be empty.


TIP DDL commands, such as TRUNCATE, will fail if there is any DML command
active on the table. A transaction will block the DDL command until the DML
command is terminated with a COMMIT or a ROLLBACK.
EXAM TIP TRUNCATE completely empties the table. There is no concept of
row selection, as there is with a DELETE.
One part of the definition of a table as stored in the data dictionary is the table’s
physical location. When first created, a table is allocated a single area of space, of fixed
size, in the database’s datafiles. This is known as an extent and will be empty. Then, as
rows are inserted, the extent fills up. Once it is full, more extents will be allocated to
the table automatically. A table therefore consists of one or more extents, which hold
the rows. As well as tracking the extent allocation, the data dictionary also tracks how
much of the space allocated to the table has been used. This is done with the high
water mark. The high water mark is the last position in the last extent that has been
used; all space below the high water mark has been used for rows at one time or
another, and none of the space above the high water mark has been used yet.
Note that it is possible for there to be plenty of space below the high water mark
that is not being used at the moment; this is because of rows having been removed
with a DELETE command. Inserting rows into a table pushes the high water mark up.
Deleting them leaves the high water mark where it is; the space they occupied remains
assigned to the table but is freed up for inserting more rows.
Truncating a table resets the high water mark. Within the data dictionary, the
recorded position of the high water mark is moved to the beginning of the table’s first
extent. As Oracle assumes that there can be no rows above the high water mark, this
has the effect of removing every row from the table. The table is emptied and remains
empty until subsequent insertions begin to push the high water mark back up again.
In this manner, one DDL command, which does little more than make an update in
the data dictionary, can annihilate billions of rows in a table.
The syntax to truncate a table couldn’t be simpler:
TRUNCATE TABLE table;

Figure 8-2 shows access to the TRUNCATE command through the SQL Developer
navigation tree, but of course it can also be executed from SQL*Plus.

MERGE
There are many occasions where you want to take a set of data (the source) and
integrate it into an existing table (the target). If a row in the source data already exists
in the target table, you may want to update the target row, or you may want to replace it
completely, or you may want to leave the target row unchanged. If a row in the source
does not exist in the target, you will want to insert it. The MERGE command lets you
do this. A MERGE passes through the source data, for each row attempting to locate a
matching row in the target. If no match is found, a row can be inserted; if a match is


Figure 8-2 The TRUNCATE command in SQL Developer, from the command line and from the menus

found, the matching row can be updated. The release 10g enhancement means that
the target row can even be deleted, after being matched and updated. The end result
is a target table into which the data in the source has been merged.
A MERGE operation does nothing that could not be done with INSERT, UPDATE,
and DELETE statements—but with one pass through the source data, it can do all
three. Alternative code without a MERGE would require three passes through the data,
one for each command.
The source data for a MERGE statement can be a table or any subquery. The
condition used for finding matching rows in the target is similar to a WHERE clause.
The clauses that update or insert rows are as complex as an UPDATE or an INSERT
command. It follows that MERGE is the most complicated of the DML commands,
which is not unreasonable, as it is (arguably) the most powerful. Use of MERGE is
not on the OCP syllabus, but for completeness here is a simple example:
merge into employees e using new_employees n
on (e.employee_id = n.employee_id)
when matched then
update set e.salary=n.salary
when not matched then
insert (employee_id,last_name,salary)
values (n.employee_id,n.last_name,n.salary);

The preceding statement uses the contents of a table NEW_EMPLOYEES to update
or insert rows in EMPLOYEES. The situation could be that EMPLOYEES is a table of
all staff, and NEW_EMPLOYEES is a table with rows for new staff and for salary changes
for existing staff. The command will pass through NEW_EMPLOYEES and, for each
row, attempt to find a row in EMPLOYEES with the same EMPLOYEE_ID. If there is
a row found, its SALARY column will be updated with the value of the row in
NEW_EMPLOYEES. If there is no such row, one will be inserted. Variations on the syntax
allow the use of a subquery to select the source rows, and it is even possible to delete
matching rows.
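MERGE is Oracle syntax, but the update-or-insert pattern exists in other databases. As a rough analogue (not Oracle's MERGE), here is a sketch using SQLite's INSERT ... ON CONFLICT upsert via Python's sqlite3; the WHERE true clause is SQLite's documented way to disambiguate the parse when combining SELECT with ON CONFLICT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (employee_id INTEGER PRIMARY KEY, last_name TEXT, salary INTEGER)")
conn.execute("CREATE TABLE new_employees (employee_id INTEGER PRIMARY KEY, last_name TEXT, salary INTEGER)")
conn.execute("INSERT INTO employees VALUES (1, 'King', 24000)")
conn.executemany("INSERT INTO new_employees VALUES (?, ?, ?)",
                 [(1, 'King', 25000), (2, 'Kochhar', 17000)])

# One pass through the source: matched rows are updated, unmatched rows inserted.
conn.execute("""
    INSERT INTO employees (employee_id, last_name, salary)
    SELECT employee_id, last_name, salary FROM new_employees WHERE true
    ON CONFLICT(employee_id) DO UPDATE SET salary = excluded.salary
""")

rows = conn.execute("SELECT employee_id, salary FROM employees ORDER BY employee_id").fetchall()
print(rows)   # [(1, 25000), (2, 17000)]
```

As with MERGE, the existing employee's salary is updated and the new employee is inserted in a single statement.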

DML Statement Failures
Commands can fail for many reasons, including the following:
• Syntax errors
• References to nonexistent objects or columns
• Access permissions
• Constraint violations
• Space issues
Figure 8-3 shows several attempted executions of a statement with SQL*Plus.

Figure 8-3 Some examples of statement failure


In Figure 8-3, a user connects as SUE (password, SUE—not an example of good
security) and queries the EMPLOYEES table. The statement fails because of a simple
syntax error, correctly identified by SQL*Plus. Note that SQL*Plus never attempts
to correct such mistakes, even when it knows exactly what you meant to type. Some
third-party tools may be more helpful, offering automatic error correction.
The second attempt to run the statement fails with an error stating that the object
does not exist. This is because it does not exist in the current user’s schema; it exists in
the HR schema. Having corrected that, the third run of the statement succeeds—but
only just. The value passed in the WHERE clause is a string, ‘21-APR-2000’, but the
column HIRE_DATE is not defined in the table as a string, it is defined as a date. To
execute the statement, the database had to work out what the user really meant and
cast the string as a date. In the last example, the typecasting fails. This is because the
string passed is formatted as a European-style date, but the database has been set up
as American: the attempt to match “21” to a month fails. The statement would have
succeeded if the string had been ‘04/21/2007’.
If a statement is syntactically correct and has no errors with the objects to which
it refers, it can still fail because of access permissions. If the user attempting to execute
the statement does not have the relevant permissions on the tables to which it refers,
the database will return an error identical to that which would be returned if the
object did not exist. As far as the user is concerned, it does not exist.
Errors caused by access permissions are a case where SELECT and DML statements
may return different results: it is possible for a user to have permission to see the rows
in a table, but not to insert, update, or delete them. Such an arrangement is not
uncommon; it often makes business sense. Perhaps more confusingly, permissions
can be set up in such a manner that it is possible to insert rows that you are not
allowed to see. And, perhaps worst of all, it is possible to delete rows that you can
neither see nor update. However, such arrangements are not common.
A constraint violation can cause a DML statement to fail. For example, an INSERT
command can insert several rows into a table, and for every row the database will
check whether a row already exists with the same primary key. This occurs as each row
is inserted. It could be that the first few rows (or the first few million rows) go in
without a problem, and then the statement hits a row with a duplicate value. At this
point it will return an error, and the statement will fail. This failure will trigger a
reversal of all the insertions that had already succeeded. This is part of the SQL
standard: a statement must succeed in total, or not at all. The reversal of the work
is a rollback. The mechanisms of a rollback are described in the next section of this
chapter, titled “Control Transactions.”
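Statement-level atomicity can be seen in most databases. A sketch with Python's sqlite3 follows (SQLite's default ABORT conflict handling, which, like Oracle, backs out only the failed statement rather than the whole transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (pk INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# The multirow insert hits a duplicate primary key part-way through...
try:
    conn.execute("INSERT INTO t VALUES (2), (3), (1)")   # (1) is a duplicate
except sqlite3.IntegrityError as e:
    print(e)   # UNIQUE constraint failed: t.pk

# ...and the rows that had already gone in (2 and 3) are reversed as well:
# the statement succeeds in total, or not at all.
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)   # 1
```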
If a statement fails because of space problems, the effect is similar. A part of the
statement may have succeeded before the database ran out of space. The part that did
succeed will be automatically rolled back. Rollback of a statement is a serious matter.
It forces the database to do a lot of extra work and will usually take at least as long as
the statement has taken already (sometimes much longer).


Control Transactions
The concepts behind a transaction are a part of the relational database paradigm. A
transaction consists of one or more DML statements, followed by either a ROLLBACK
or a COMMIT command. It is possible to use the SAVEPOINT command to give a degree
of control within the transaction. Before going into the syntax, it is necessary to review
the concept of a transaction. A related topic is read consistency; this is automatically
implemented by the Oracle server, but to a certain extent programmers can manage
it by the way they use the SELECT statement.

Database Transactions
Oracle’s mechanism for assuring transactional integrity is the combination of undo
segments and redo log files: this mechanism is undoubtedly the best of any database
yet developed and conforms perfectly with the international standards for data
processing. Other database vendors comply with the same standards with their own
mechanisms, but with varying levels of effectiveness. In brief, any relational database
must be able to pass the ACID test: it must guarantee atomicity, consistency, isolation,
and durability.

A is for Atomicity
The principle of atomicity states that either all parts of a transaction must successfully
complete or none of them. (The reasoning behind the term is that an atom cannot be
split—now well known to be a false assumption.) For example, if your business analysts
have said that every time you change an employee’s salary you must also change the
employee’s grade, then the atomic transaction will consist of two updates. The database
must guarantee that both go through or neither. If only one of the updates were to
succeed, you would have an employee on a salary that was incompatible with his grade:
a data corruption, in business terms. If anything (anything at all!) goes wrong before
the transaction is complete, the database itself must guarantee that any parts that did
go through are reversed; this must happen automatically. But although an atomic
transaction sounds small (like an atom), it can be enormous. To take another example,
it is logically impossible for an accounting suite nominal ledger to be half in August
and half in September: the end-of-month rollover is therefore (in business terms)
one atomic transaction, which may affect millions of rows in thousands of tables and
take hours to complete (or to roll back, if anything goes wrong). The rollback of an
incomplete transaction may be manual (as when you issue the ROLLBACK command),
but it must be automatic and unstoppable in the case of an error.
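The salary-and-grade example can be coded directly. A sketch with Python's sqlite3 follows (table and column names are invented; the point is that the two updates commit or roll back as one unit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, salary INTEGER, grade INTEGER)")
conn.execute("INSERT INTO emp VALUES (1, 1000, 1)")
conn.commit()

def raise_salary(conn, emp_id, new_salary, new_grade):
    """Both updates succeed together, or neither does: an atomic transaction."""
    try:
        conn.execute("UPDATE emp SET salary = ? WHERE id = ?", (new_salary, emp_id))
        conn.execute("UPDATE emp SET grade  = ? WHERE id = ?", (new_grade, emp_id))
        conn.commit()
    except sqlite3.Error:
        conn.rollback()   # reverse any part that did go through
        raise

raise_salary(conn, 1, 2000, 2)
print(conn.execute("SELECT salary, grade FROM emp").fetchone())   # (2000, 2)
```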

C is for Consistency
The principle of consistency states that the results of a query must be consistent with
the state of the database at the time the query started. Imagine a simple query that
averages the value of a column of a table. If the table is large, it will take many
minutes to pass through the table. If other users are updating the column while the
query is in progress, should the query include the new or the old values? Should it
include rows that were inserted or deleted after the query started? The principle of
consistency requires that the database ensure that changed values are not seen by the
query; it will give you an average of the column as it was when the query started, no
matter how long the query takes or what other activity is occurring on the tables
concerned. Oracle guarantees that if a query succeeds, the result will be consistent.
However, if the database administrator has not configured the database appropriately,
the query may not succeed: there is a famous Oracle error, “ORA-1555 snapshot too
old,” that is raised. This used to be an extremely difficult problem to fix with earlier
releases of the database, but with recent versions the database administrator should
always be able to prevent this.

I is for Isolation
The principle of isolation states that an incomplete (that is, uncommitted) transaction
must be invisible to the rest of the world. While the transaction is in progress, only
the one session that is executing the transaction is allowed to see the changes; all
other sessions must see the unchanged data, not the new values. The logic behind
this is, first, that the full transaction might not go through (remember the principle
of atomicity and automatic or manual rollback?) and that therefore no other users
should be allowed to see changes that might be reversed. And second, during the
progress of a transaction the data is (in business terms) incoherent: there is a short
time when the employee has had their salary changed but not their grade. Transaction
isolation requires that the database must conceal transactions in progress from other
users: they will see the preupdate version of the data until the transaction completes,
when they will see all the changes as a consistent set. Oracle guarantees transaction
isolation: there is no way any session (other than that making the changes) can see
uncommitted data. A read of uncommitted data is known as a dirty read, which Oracle
does not permit (though some other databases do).

D is for Durability
The principle of durability states that once a transaction completes, it must be impossible
for the database to lose it. During the time that the transaction is in progress, the principle
of isolation requires that no one (other than the session concerned) can see the changes
it has made so far. But the instant the transaction completes, it must be broadcast to the
world, and the database must guarantee that the change is never lost; a relational database
is not allowed to lose data. Oracle fulfills this requirement by writing out all change
vectors that are applied to data to log files as the changes are done. By applying this log
of changes to backups taken earlier, it is possible to repeat any work done in the event
of the database being damaged. Of course, data can be lost through user error such as
inappropriate DML, or dropping or truncating tables. But as far as Oracle and the DBA
are concerned, such events are transactions like any other: according to the principle of
durability, they are absolutely nonreversible.

Executing SQL Statements
The entire SQL language consists of only a dozen or so commands. The ones we are
concerned with here are: SELECT, INSERT, UPDATE, and DELETE.

Executing a SELECT Statement
The SELECT command retrieves data. The execution of a SELECT statement is a staged
process: the server process executing the statement will first check whether the blocks
containing the data required are already in memory, in the database buffer cache. If
they are, then execution can proceed immediately. If they are not, the server process
must locate them on disk and copy them into the database buffer cache.
EXAM TIP Always remember that server processes read blocks from datafiles
into the database buffer cache; DBWn writes blocks from the database buffer
cache to the datafiles.
Once the data blocks required for the query are in the database buffer cache, any
further processing (such as sorting or aggregation) is carried out in the PGA of the
session. When the execution is complete, the result set is returned to the user process.
How does this relate to the ACID test just described? For consistency, if the query
encounters a block that has been changed since the time the query started, the server
process will go to the undo segment that protected the change, locate the old version
of the data, and (for the purposes of the current query only) roll back the change.
Thus any changes initiated after the query commenced will not be seen. A similar
mechanism guarantees transaction isolation, though this is based on whether the
change has been committed, not only on whether the data has been changed. Clearly,
if the data needed to do this rollback is no longer in the undo segments, this
mechanism will not work. That is when you get the “snapshot too old” error.
Figure 8-4 shows a representation of the way a SELECT statement is processed.

Figure 8-4 The stages of execution of a SELECT (the user process, the server process,
the database buffer cache within the system global area, and the datafiles; the
numbered steps 1 to 5 are described in the text that follows)

In the figure, Step 1 is the transmission of the SELECT statement from the user
process to the server process. The server will search the database buffer cache to
determine if the necessary blocks are already in memory, and if they are, proceed to
Step 4. If they are not, Step 2 is to locate the blocks in the datafiles, and Step 3 is to
copy them into the database buffer cache. Step 4 transfers the data to the server
process, where there may be some further processing before Step 5 returns the result
of the query to the user process.

Executing an UPDATE Statement
For any DML operation, it is necessary to work on both data blocks and undo blocks,
and also to generate redo: the A, C, and I of the ACID test require generation of undo;
the D requires generation of redo.
EXAM TIP Undo is not the opposite of redo! Redo protects all block changes,
no matter whether it is a change to a block of a table segment, an index segment,
or an undo segment. As far as redo is concerned, an undo segment is just
another segment, and any changes to it must be made durable.
The first step in executing DML is the same as executing SELECT: the required
blocks must be found in the database buffer cache, or copied into the database buffer
cache from the datafiles. The only change is that an empty (or expired) block of an
undo segment is needed too. From then on, things are a bit more complicated.
First, locks must be placed on any rows and associated index keys that are going
to be affected by the operation. This is covered later in this chapter.
Then the redo is generated: the server process writes to the log buffer the change
vectors that are going to be applied to the data blocks. This generation of redo is
applied both to table block changes and to undo block changes: if a column of a
row is to be updated, then the rowid and the new value of the column are written
to the log buffer (which is the change that will be applied to the table block), and
also the old value (which is the change that will be applied to the undo block). If the
column is part of an index key, then the changes to be applied to the index are also
written to the log buffer, together with a change to be applied to an undo block to
protect the index change.
Having generated the redo, the update is carried out in the database buffer cache: the
block of table data is updated with the new version of the changed column, and
the old version of the changed column is written to the block of undo segment. From
this point until the update is committed, all queries from other sessions addressing
the changed row will be redirected to the undo data. Only the session that is doing the
update will see the actual current version of the row in the table block. The same
principle applies to any associated index changes.

Executing INSERT and DELETE Statements
Conceptually, INSERT and DELETE are managed in the same fashion as an UPDATE.
The first step is to locate the relevant blocks in the database buffer cache, or to copy
them into it if they are not there.
Redo generation is exactly the same: all change vectors to be applied to data and
undo blocks are first written out to the log buffer. For an INSERT, the change vector
to be applied to the table block (and possibly index blocks) is the bytes that make up
the new row (and possibly the new index keys). The vector to be applied to the undo
block is the rowid of the new row. For a DELETE, the change vector to be written to
the undo block is the entire row.
A crucial difference between INSERT and DELETE is in the amount of undo generated.
When a row is inserted, the only undo generated is writing out the new rowid to the
undo block. This is because to roll back an INSERT, the only information Oracle
requires is the rowid, so that this statement can be constructed:
delete from table_name where rowid = rowid_of_the_new_row;

Executing this statement will reverse the original change.
For a DELETE, the whole row (which might be several kilobytes) must be written
to the undo block, so that the deletion can be rolled back if need be by constructing a
statement that will insert the complete row back into the table.

The Start and End of a Transaction
A session begins a transaction the moment it issues any DML. The transaction
continues through any number of further DML commands until the session issues
either a COMMIT or a ROLLBACK statement. Only committed changes will be made
permanent and become visible to other sessions. It is impossible to nest transactions.
The SQL standard does not allow a user to start one transaction and then start another
before terminating the first. This can be done with PL/SQL (Oracle’s proprietary third-generation language), but not with industry-standard SQL.
The explicit transaction control statements are COMMIT, ROLLBACK, and
SAVEPOINT. There are also circumstances other than a user-issued COMMIT or
ROLLBACK that will implicitly terminate a transaction:
• Issuing a DDL or DCL statement
• Exiting from the user tool (SQL*Plus or SQL Developer or anything else)
• If the client session dies
• If the system crashes
If a user issues a DDL (CREATE, ALTER, or DROP) or DCL (GRANT or REVOKE)
command, the transaction in progress (if any) will be committed: it will be made
permanent and become visible to all other users. This is because the DDL and DCL
commands are themselves transactions. As it is not possible to nest transactions in
SQL, if the user already has a transaction running, the statements the user has run will
be committed along with the statements that make up the DDL or DCL command.
If you start a transaction by issuing a DML command and then exit from the tool
you are using without explicitly issuing either a COMMIT or a ROLLBACK, the
transaction will terminate—but whether it terminates with a COMMIT or a ROLLBACK
is entirely dependent on how the tool is written. Many tools will have different

behavior, depending on how the tool is exited. (For instance, in the Microsoft Windows
environment, it is common to be able to terminate a program either by selecting the
File | Exit options from a menu on the top left of the window, or by clicking an “X” in
the top-right corner. The programmers who wrote the tool may well have coded different
logic into these functions.) In either case, it will be a controlled exit, so the programmers
should issue either a COMMIT or a ROLLBACK, but the choice is up to them.
If a client’s session fails for some reason, the database will always roll back the
transaction. Such failure could be for a number of reasons: the user process can die or
be killed at the operating system level, the network connection to the database server
may go down, or the machine where the client tool is running can crash. In any of
these cases, there is no orderly issue of a COMMIT or ROLLBACK statement, and it is
up to the database to detect what has happened. The behavior is that the session is
killed, and an active transaction is rolled back. The behavior is the same if the failure
is on the server side. If the database server crashes for any reason, when it next starts
up all transactions from any sessions that were in progress will be rolled back.

Transaction Control: COMMIT, ROLLBACK, SAVEPOINT,
SELECT FOR UPDATE
Oracle’s implementation of the relational database paradigm begins a transaction
implicitly with the first DML statement. The transaction continues until a COMMIT or
ROLLBACK statement. The SAVEPOINT command is not part of the SQL standard and
is really just an easy way for programmers to back out some statements, in reverse
order. It need not be considered separately, as it does not terminate a transaction.

COMMIT
Commit processing is where many people (and even some experienced DBAs) show an
incomplete, or indeed completely inaccurate, understanding of the Oracle architecture.
When you say COMMIT, all that happens physically is that LGWR flushes the log buffer
to disk. DBWn does absolutely nothing. This is one of the most important performance
features of the Oracle database.
EXAM TIP What does DBWn do when you issue a COMMIT command?
Answer: absolutely nothing.
To make a transaction durable, all that is necessary is that the changes that make
up the transaction are on disk: there is no need whatsoever for the actual table data to
be on disk, in the datafiles. If the changes are on disk, in the form of multiplexed redo
log files, then in the event of damage to the database the transaction can be reinstantiated
by restoring the datafiles from a backup taken before the damage occurred and applying
the changes from the logs. This process is covered in detail in later chapters—for now,
just hang on to the fact that a COMMIT involves nothing more than flushing the log
buffer to disk, and flagging the transaction as complete. This is why a transaction
involving millions of updates in thousands of files over many minutes or hours can
be committed in a fraction of a second. Because LGWR writes in very nearly real time,
virtually all the transaction’s changes are on disk already. When you say COMMIT,
LGWR actually does write in real time: your session will hang until the write is complete.
This delay will be the length of time it takes to flush the last bit of redo from the log
buffer to disk, which will take milliseconds. Your session is then free to continue, and
from then on all other sessions will no longer be redirected to the undo blocks when
they address the changed table, unless the principle of consistency requires it.
The change vectors written to the redo log are all the change vectors: those applied
to data blocks (tables and indexes) and those applied to undo segments.
EXAM TIP The redo log stream includes all changes: those applied to data
segments and to undo segments, for both committed and uncommitted
transactions.
Where there is often confusion is that the stream of redo written out to the
log files by LGWR will contain changes for both committed and uncommitted
transactions. Furthermore, at any given moment DBWn may or may not have written
out changed blocks of data segments or undo segments to the datafiles for both
committed and uncommitted transactions. So in principle, your database on disk
is corrupted: the datafiles may well be storing uncommitted work, and be missing
committed changes. But in the event of a crash, the stream of redo on disk always has
enough information to reinstantiate any committed transactions that are not in the
datafiles (by use of the changes applied to data blocks), and to reinstantiate the undo
segments (by use of the changes applied to undo blocks) needed to roll back any
uncommitted transactions that are in the datafiles.
EXAM TIP Any DDL command, or a GRANT or REVOKE, will commit the
current transaction.

ROLLBACK
While a transaction is in progress, Oracle keeps an image of the data as it was before
the transaction. This image is presented to other sessions that query the data while the
transaction is in progress. It is also used to roll back the transaction automatically if
anything goes wrong, or deliberately if the session requests it. The syntax to request a
rollback is as follows:
ROLLBACK [TO SAVEPOINT savepoint] ;

The optional use of savepoints is detailed in the section following.
Before the rollback, the data has been changed, but the information needed to
reverse the changes is available. This information is presented to all other
sessions, in order to implement the principle of isolation. The rollback
will discard all the changes by restoring the prechange image of the data; any rows the
transaction inserted will be deleted, any rows the transaction deleted will be inserted

Chapter 8: DML and Concurrency

back into the table, and any rows that were updated will be returned to their original
state. Other sessions will not be aware that anything has happened at all; they never
saw the changes. The session that did the transaction will now see the data as it was
before the transaction started.
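As a minimal sketch of this behavior, using the REGIONS table from the HR sample schema (an assumption; any table would do):

```sql
-- Sketch: reversing an uncommitted change
delete from regions where region_id = 5;
-- Other sessions still see the row, because the change is uncommitted.
rollback;
-- The row is restored; other sessions never saw it missing.
```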

SAVEPOINT
Savepoints allow a programmer to set a marker in a transaction that can be used to
control the effect of the ROLLBACK command. Rather than rolling back the whole
transaction and terminating it, it becomes possible to reverse all changes made after a
particular point but leave changes made before that point intact. The transaction itself
remains in progress: still uncommitted, still able to be rolled back, and still invisible
to other sessions.
The syntax is as follows:
SAVEPOINT savepoint;

This creates a named point in the transaction that can be used in a subsequent
ROLLBACK command. The following table illustrates the number of rows in a table at
various stages in a transaction. The table is a very simple table called TAB, with one
column.

Command                              Rows Visible     Rows Visible
                                     to the User      to Others
truncate table tab;                  0                0
insert into tab values ('one');      1                0
savepoint first;                     1                0
insert into tab values ('two');      2                0
savepoint second;                    2                0
insert into tab values ('three');    3                0
rollback to savepoint second;        2                0
rollback to savepoint first;         1                0
commit;                              1                1
delete from tab;                     0                1
rollback;                            1                1

The example in the table shows two transactions: the first terminated with a
COMMIT, the second with a ROLLBACK. It can be seen that the use of savepoints is
visible only within the transaction: other sessions see nothing that is not committed.

SELECT FOR UPDATE
One last transaction control statement is SELECT FOR UPDATE. Oracle, by default,
provides the highest possible level of concurrency: readers do not block writers, and
writers do not block readers. Or in plain language, there is no problem with one
session querying data that another session is updating, or one session updating data
that another session is querying. However, there are times when you may wish to
change this behavior and prevent changes to data that is being queried.
It is not unusual for an application to retrieve a set of rows with a SELECT
command, present them to a user for perusal, and prompt them for any changes.
Because Oracle is a multiuser database, it is not impossible that another session has
also retrieved the same rows. If both sessions attempt to make changes, there can be
some rather odd effects. The following table depicts such a situation.
First User                          Second User
select * from regions;
                                    select * from regions;
                                    delete from regions
                                    where region_id=5;
                                    commit;
update regions set
region_name='GB' where
region_id=5;

This is what the first user will see, from a SQL*Plus prompt:
SQL> select * from regions;
 REGION_ID REGION_NAME
---------- -------------------------
         5 UK
         1 Europe
         2 Americas
         3 Asia
         4 Middle East and Africa
SQL> update regions set region_name='GB' where region_id=5;
0 rows updated.

This is a bit disconcerting. One way around this problem is to lock the rows in
which one is interested:
select * from regions for update;

The FOR UPDATE clause will place a lock on all the rows retrieved. No changes
can be made to them by any session other than that which issued the command, and
therefore the subsequent updates will succeed: it is not possible for the rows to have
been changed. This means that one session will have a consistent view of the data (it
won’t change), but the price to be paid is that other sessions will hang if they try to
update any of the locked rows (they can, of course, query them).
The locks placed by a FOR UPDATE clause will be held until the session issuing
the command issues a COMMIT or ROLLBACK. This must be done to release the
locks, even if no DML commands have been executed.
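A minimal sketch of the pattern, again using the REGIONS table:

```sql
-- Lock the rows of interest before presenting them to the user
select * from regions for update;
-- ...the user peruses the data; no other session can now change these rows...
update regions set region_name = 'GB' where region_id = 5;
commit;   -- releases the locks taken by FOR UPDATE
```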

The So-Called “Autocommit”
To conclude this discussion of commit processing, it is necessary to remove any
confusion about what is often called autocommit, or sometimes implicit commit. You
will often hear it said that in some situations Oracle will autocommit. One of these
situations is when doing DDL, which is described in the preceding section; another is
when you exit from a user process such as SQL*Plus.
Quite simply, there is no such thing as an automatic commit. When you execute a
DDL statement, there is a perfectly normal COMMIT included in the source code that
implements the DDL command. But what about when you exit from your user process?
If you are using SQL*Plus on a Windows terminal and you issue a DML statement
followed by an EXIT, your transaction will be committed. This is because built into
the SQL*Plus EXIT command there is a COMMIT statement. But what if you click in
the top-right corner of the SQL*Plus window? The window will close, and if you log
in again, you will see that the transaction has been rolled back. This is because the
programmers who wrote SQL*Plus for Microsoft Windows included a ROLLBACK
statement in the code that is executed when you close the window. The behavior of
SQL*Plus on other platforms may well be different; the only way to be sure is to test
it. So whether you get an “autocommit” when you exit from a program in various
ways is entirely dependent on how your programmers wrote your user process. The
Oracle server will simply do what it is told to do.
There is a SQL*Plus command SET AUTOCOMMIT ON. This will cause SQL*Plus
to modify its behavior: it will append a COMMIT to every DML statement issued. So
all statements are committed immediately as soon as they are executed and cannot
be rolled back. But this is happening purely on the user process side; there is still no
autocommit in the database, and the changes made by a long-running statement will
be isolated from other sessions until the statement completes. Of course, a disorderly
exit from SQL*Plus in these circumstances, such as killing it with an operating system
utility while the statement is running, will be detected by PMON and the active
transaction will always be rolled back.

Exercise 8-4: Manage Data Using DML  In this exercise, you will demonstrate
transaction isolation and control. Use two SQL*Plus sessions (or SQL Developer if
you prefer), each connected as user SYSTEM. Run the commands in the steps that
follow in the two sessions in the correct order.

Step  In Your First Session                    In Your Second Session
1     create table t1 as select *
      from all_users;
2     select count(*) from t1;                 select count(*) from t1;
      Results are the same in both sessions.
3     delete from t1;
4     select count(*) from t1;                 select count(*) from t1;
      Results differ because transaction isolation conceals the changes.
5     rollback;
6     select count(*) from t1;                 select count(*) from t1;
      Results are the same in both sessions.
7     delete from t1;
8     select count(*) from t1;                 select count(*) from t1;
9     create view v1 as select * from t1;
10    select count(*) from t1;                 select count(*) from t1;
11    rollback;
12    select count(*) from t1;                 select count(*) from t1;
      Oh dear! The DDL statement committed the DELETE, so it can’t be rolled back.
13    drop view v1;
14    drop table t1;

Identify and Administer PL/SQL Objects
PL/SQL is Oracle’s proprietary third-generation language that runs within the
database. You can use it to retrieve and manipulate data with SQL, while using
procedural constructs such as IF . . . THEN . . . ELSE or FOR or WHILE. The PL/SQL
code can be stored on a client machine and sent to the server for execution, or it can
be stored within the database as a named block of code.
EXAM TIP PL/SQL always executes within the database, no matter where it is
stored. Java can run either within the database or on the user machine.

Stored and Anonymous PL/SQL
PL/SQL runs within the database, but it can be stored on either the client or the
server. PL/SQL code can also be entered interactively from a SQL*Plus prompt.
Stored PL/SQL is loaded into the database and stored within the data dictionary
as a named PL/SQL object. When it is saved to the database, it is compiled: the
compilation process checks for syntactical errors and also picks up errors relating
to the data objects the code addresses. This saves time when the code is actually run,
and means that programmers should pick up errors at compilation time, before users
encounter them. Code stored remotely, or ad hoc code issued at the SQL*Plus prompt,
is called anonymous PL/SQL. It is compiled dynamically, which impacts performance
and also raises the possibility that unexpected errors might occur.
Figure 8-5 shows an example of running an anonymous PL/SQL block and of
creating and running a stored procedure.

Figure 8-5  Anonymous and stored PL/SQL

The anonymous block in Figure 8-5 creates a variable called INCREASE with the
DECLARE statement and sets it to 10. Then the procedural code (within the BEGIN . . .
END statements) uses the variable within a SQL statement that updates a column of
a table.
The second example in the figure creates a procedure called INC_SAL, stored
within the data dictionary. It takes a numeric argument called INCREASE and uses
this in a SQL UPDATE statement. Then the procedure is invoked with the EXECUTE
command, passing in a value for the argument.
These examples are very simple, but they should illustrate how anonymous PL/SQL
runs just once and therefore must be compiled at execution time, whereas stored PL/SQL
can be compiled in advance and then executed many times.
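The blocks in Figure 8-5 are along these lines (the table and column names here are assumptions, not necessarily those shown in the figure):

```sql
-- Anonymous block: compiled dynamically each time it is run
declare
  increase number := 10;
begin
  update emp set salary = salary + increase;
end;
/

-- Stored procedure: compiled once, stored in the data dictionary
create or replace procedure inc_sal(increase in number) as
begin
  update emp set salary = salary + increase;
end;
/
execute inc_sal(10)
```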

PL/SQL Objects
There are six commonly used types of PL/SQL objects: Procedure, Function, Package,
Package body, Trigger, and Type body.
All are schema objects stored within the data dictionary. Procedures and functions
are subprograms usually intended for performing repetitive instructions. Packages are
collections of procedures and functions, grouped together for manageability. Triggers
cannot be packaged: they are associated with tables and run whenever an appropriate
DML statement is executed against the tables. Object types are beyond the scope of
the OCP examinations.
TIP SQL*Plus and Database Control are only suitable for small-scale PL/SQL
development. For real work, your programmers will need a proper IDE
(integrated development environment) tool that will assist with syntax
checking, debugging, and source code management.

Procedures and Functions
A procedure is a block of code that carries out some action. It can, optionally, be
defined with a number of arguments. These arguments are replaced with the actual
parameters given when the procedure is invoked. The arguments can be IN arguments,

meaning that they are used to pass data into the procedure, or OUT arguments, meaning
that they are modified by the procedure and after execution the new values are passed
out of the procedure. Arguments can also be IN-OUT, where the one variable serves
both purposes. Within a procedure, you can define any number of variables that,
unlike the arguments, are private to the procedure. To run a procedure, either call
it from within a PL/SQL block or use the interactive EXECUTE command.
A function is similar in concept to a procedure, but it does not have OUT arguments
and cannot be invoked with EXECUTE. It returns a single value, with the RETURN
statement.
Anything that a function can do, a procedure could do also. Functions are generally
used for relatively simple operations: small code blocks that will be used many times.
Procedures are more commonly used to divide code into modules, and may contain
long and complex processes.
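A sketch of a simple function, returning its single value with RETURN and invoked from within a SQL expression (something EXECUTE cannot do):

```sql
create or replace function odd_even(v1 number) return varchar2 as
begin
  if mod(v1, 2) = 0 then
    return 'even';
  else
    return 'odd';
  end if;
end;
/
select odd_even(5) from dual;   -- returns 'odd'
```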

Packages
To group related procedures and functions together, your programmers create packages.
A package consists of two objects: a specification and a body. A package specification
lists the functions and procedures in the package, with their call specifications: the
arguments and their data types. It can also define variables and constants accessible to
all the procedures and functions in the package. The package body contains the PL/SQL
code that implements the package: the code that creates the procedures and functions.
To create a package specification, use the CREATE PACKAGE command. For example,
SQL> create or replace package numbers
2 as
3 function odd_even(v1 number) return varchar2;
4 procedure ins_ints(v1 in number);
5 end numbers;
6 /
Package created.

Then to create the package body, use the CREATE OR REPLACE PACKAGE BODY
statement to create the individual functions and procedures.
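For the NUMBERS specification above, the body might look like this (the implementation logic is an assumption; the INTEGERS table is the one created in Exercise 8-5):

```sql
create or replace package body numbers as

  function odd_even(v1 number) return varchar2 as
  begin
    if mod(v1, 2) = 0 then
      return 'even';
    else
      return 'odd';
    end if;
  end odd_even;

  procedure ins_ints(v1 in number) as
  begin
    -- Insert the integers 1..v1, each tagged as odd or even
    for i in 1..v1 loop
      insert into integers values (i, odd_even(i));
    end loop;
  end ins_ints;

end numbers;
/
```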
There are several hundred PL/SQL packages provided as standard with the Oracle
database. These supplied packages are, for the most part, created when you create a
database. To invoke a packaged procedure, you must prefix the procedure name with
the package name. For example,
SQL> exec numbers.ins_ints(5);

This will run the INS_INTS procedure in the NUMBERS package (ODD_EVEN, being a
function, must instead be invoked from within an expression). The package must
exist in the schema to which the user is connected, or it would be necessary to prefix
the package name with the schema name. The user would also need to have the
EXECUTE privilege on the package.

Database Triggers
Database triggers are a special category of PL/SQL object, in that they cannot be invoked
manually. A trigger runs (or “fires”) automatically, when a particular action is carried

out, or when a certain situation arises; this is the triggering event. There are a number
of possible triggering events. For many of them the trigger can be configured to fire
either before or after the event. It is also possible to have both before and after triggers
defined for the same event. The DML triggers, which fire when rows are inserted, updated,
or deleted, can be configured to fire once for each affected row, or once per statement
execution.
All triggers have one factor in common: their execution is completely beyond the
control of the user who caused the triggering event. The user may not even know that
the trigger fired. This makes triggers admirably suited to auditing user actions and
implementing security.
The following table describes the commonly used triggering events.

Event                            Before or After?
DML triggers:                    Before and/or after. Can fire once per statement,
INSERT, UPDATE, DELETE           or once per row. A MERGE command will fire whatever
                                 triggers are appropriate to the action carried out.
DDL triggers:                    Before and/or after
CREATE, ALTER, DROP, TRUNCATE
Database operations:
SERVERERROR                      After
LOGON                            After
LOGOFF                           Before
STARTUP                          After
SHUTDOWN                         Before
SUSPEND                          After; fires after a resumable operation is
                                 suspended because of a space error.

Note that there is no such thing as a trigger on SELECT, though Chapter 6 showed
how fine-grained auditing can be used to produce a similar effect.
There are numerous uses for triggers. These might include:
• Auditing users’ actions  A trigger can capture full details of what was done
and who did it, and write them out to an audit table.
• Executing complex edits  An action on one row may, in business terms,
require a number of associated actions on other tables. The trigger can
perform these automatically.
• Security  A trigger can check the time, the user’s IP address, the program they
are running, and any other factors that should limit what the session can do.
• Enforcing complex constraints  An action may be fine in terms of the
constraints on one table but may need to be validated against the contents
of several other tables.
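As a sketch of the auditing use case (the EMP and EMP_AUDIT tables here are assumptions):

```sql
-- Row-level trigger that records every change to EMP in an audit table
create or replace trigger emp_audit_trg
after insert or update or delete on emp
for each row
begin
  insert into emp_audit (changed_by, changed_at, action)
  values (user, systimestamp,
          case
            when inserting then 'INSERT'
            when updating  then 'UPDATE'
            else                'DELETE'
          end);
end;
/
```

The user issuing the DML never invokes the trigger explicitly and need not even know it exists.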
EXAM TIP It is impossible to run a trigger by any means other than its
triggering event.
Exercise 8-5: Create PL/SQL Objects In this exercise, you will use Database
Control to create PL/SQL objects, and execute them with SQL*Plus.
1. Connect to your database as user SYSTEM with SQL*Plus.
2. Create a table to be used for this exercise:
create table integers(c1 number, c2 varchar2(5));

3. Connect to your database as user SYSTEM with Database Control.
4. From the database home page, take the Schema tab and then the Packages
link in the Programs section. Click CREATE.
5. In the Create Package window, enter NUMBERS as the package name, and
the source code for the package as shown in the next illustration. Click OK to
create the package.

6. From the database home page, take the Schema tab and then the Packages
Bodies link in the Programs section. Click CREATE.
7. In the Create Package Body window, enter NUMBERS as the package name,
and the source code for the package body as in the next illustration. Click OK
to create the package body.

8. In your SQL*Plus session, describe the package, execute the procedure, and
check the results, as in this illustration:

9. Tidy up by dropping the package and table:
drop package numbers;
drop table integers;

Note that this first DROP will COMMIT the insert of the rows.


Monitor and Resolve Locking Conflicts
In any multiuser database application it is inevitable that, eventually, two users will
wish to work on the same row at the same time. The database must ensure that this
is a physical impossibility. The principle of transaction isolation—the I of the ACID test—
requires that the database guarantee that one session cannot see or be affected by
another session’s transaction until the transaction has completed. To accomplish this,
the database must serialize concurrent access to data; it must ensure that even though
multiple sessions have requested access to the same rows, they actually queue up, and
take turns.
Serialization of concurrent access is accomplished by record and table locking
mechanisms. Locking in an Oracle database is completely automatic. Generally
speaking, problems only arise if software tries to interfere with the automatic locking
mechanism with poorly written code, or if the business analysis is faulty and results
in a business model where sessions will collide.

Shared and Exclusive Locks
The standard level of locking in an Oracle database guarantees the highest possible
level of concurrency. This means that if a session is updating one row, the one row is
locked; nothing else. Furthermore, the row is only locked to prevent other sessions
from updating it—other sessions can read it at any time. The lock is held until the
transaction completes, either with a COMMIT or a ROLLBACK. This is an exclusive
lock: the first session to request the lock on the row gets it, and any other sessions
requesting write access must wait. Read access is permitted—though if the row has
been updated by the locking session, as will usually be the case, then any reads will
involve the use of undo data to make sure that reading sessions do not see any
uncommitted changes.
Only one session can take an exclusive lock on a row, or a whole table, at a time—
but shared locks can be taken on the same object by many sessions. It would not make
any sense to take a shared lock on one row, because the only purpose of a row lock is
to gain the exclusive access needed to modify the row. Shared locks are taken on whole
tables, and many sessions can have a shared lock on the same table. The purpose of
taking a shared lock on a table is to prevent another session acquiring an exclusive
lock on the table: you cannot get an exclusive lock if anyone else already has a shared
lock. Exclusive locks on tables are required to execute DDL statements. You cannot
issue a statement that will modify an object (for instance, dropping a column of a
table) if any other session already has a shared lock on the table.
To execute DML on rows, a session must acquire exclusive locks on the rows to
be changed, and shared locks on the tables containing the rows. If another session
already has exclusive locks on the rows, the session will hang until the locks are
released by a COMMIT or a ROLLBACK. If another session already has a shared lock
on the table and exclusive locks on other rows, that is not a problem. An exclusive
lock on the table would be, but the default locking mechanism does not lock whole
tables unless this is necessary for DDL statements.

All DML statements require at least two locks: an exclusive lock on each row
affected, and a shared lock on the table containing the row. The exclusive lock
prevents another session from interfering with the row, and the shared lock prevents
another session from changing the table definition with a DDL statement. These locks
are requested automatically. If a DML statement cannot acquire the exclusive row
locks it needs, then it will hang until it gets them.
To execute DDL commands requires an exclusive lock on the object concerned.
This cannot be obtained until all DML transactions against the table have finished,
thereby releasing both their exclusive row locks and their shared table locks. The
exclusive lock required by any DDL statement is requested automatically, but if it
cannot be obtained—typically, because another session already has the shared lock
granted for DML—then the statement will terminate with an error immediately.

The Enqueue Mechanism
Requests for locks are queued. If a session requests a lock and cannot get it because
another session already has the row or object locked, the session will wait. It may be
that several sessions are waiting for access to the same row or object—in that case,
Oracle will keep track of the order in which the sessions requested the lock. When the
session with the lock releases it, the next session will be granted it, and so on. This is
known as the enqueue mechanism.
If you do not want a session to queue up if it cannot get a lock, the only way to
avoid this is to use the WAIT or NOWAIT clauses of the SELECT . . . FOR UPDATE
command. A normal SELECT will always succeed, because SELECT does not require any
locks—but a DML statement will hang. The SELECT . . . FOR UPDATE command will
select rows and lock them in exclusive mode. If any of the rows are locked already, the
SELECT . . . FOR UPDATE statement will be queued and the session will hang until
the locks are released, just as a DML statement would. To avoid sessions hanging,
use either SELECT . . . FOR UPDATE NOWAIT or SELECT . . . FOR UPDATE WAIT n,
where n is a number of seconds. Having obtained the locks with either of the
SELECT . . . FOR UPDATE options, you can then issue the DML commands with no
possibility of the session hanging.
TIP It is possible to append the keywords SKIP LOCKED to a SELECT FOR
UPDATE statement, which will return and lock only rows that are not already
locked by another session. This command existed with earlier releases but is
only supported from release 11g.

Lock Contention
When a session requests a lock on a row or object and cannot get it because another
session has an exclusive lock on the row or object, it will hang. This is lock contention,
and it can cause the database performance to deteriorate appallingly as all the sessions
queue up waiting for locks. Some lock contention may be inevitable, as a result of
normal activity: the nature of the application may be such that different users will
require access to the same data. But in many cases, lock contention is caused by
program and system design.
The Oracle database provides utilities for detecting lock contention, and it is also
possible to solve the problem in an emergency. A special case of lock contention is the
deadlock, which is always resolved automatically by the database itself.

The Causes of Lock Contention
It may be that the nature of the business is such that users do require write access to
the same rows at the same time. If this is a limiting factor in the performance of the
system, the only solution is business process reengineering, to develop a more
efficient business model. But although some locking is a necessary part of business
data processing, there are some faults in application design that can exacerbate the
problem.
Long-running transactions will cause problems. An obvious case is where a user
updates a row and then does not commit the change. Perhaps the user even goes off
to lunch, leaving the transaction unfinished. You cannot stop this happening if users
have access to the database with tools such as SQL*Plus, but it should never occur
with well-written software. The application should take care that a lock is only
imposed just before an update occurs, and released (with a COMMIT or ROLLBACK)
immediately afterward.
Third-party user process products may impose excessively high locking levels. For
example, there are some application development tools that always do a SELECT . . .
FOR UPDATE to avoid the necessity of requerying the data and checking for changes.
Some other products cannot do row-level locking: if a user wants to update one row,
the tool locks a group of rows—perhaps dozens or even hundreds. If your application
software is written with tools such as these, the Oracle database will simply do what it
is told to do: it will impose numerous locks that are unnecessary in business terms. If
you suspect that the software is applying more locks than necessary, investigate
whether it has configuration options to change this behavior.

Detecting and Resolving Lock Contention
There are views that will tell you what is going on with locking in the database, but this
is one case where even very experienced DBAs will often prefer to use the graphical
tools. To reach the Database Control lock manager, take the Performance tab from the
database home page, then the Instance Locks link in the Additional Monitoring Links
section. Figure 8-6 shows the Instance Locks window, with Blocking Locks selected.
There may be any number of locks within the database, but it is usually only the locks
that are causing sessions to hang that are of interest. These are known as blocking locks.
In Figure 8-6, there are two problems. Session number 116, logged on as user
SCOTT, is holding an exclusive lock on one or more rows of the table HR.EMPLOYEES.
This session is not hanging—it is operating normally. But session number 129, logged
on as user MPHO, is blocked—it is waiting for an exclusive lock on one or more of
the rows locked by session 116. Session 129 is hanging at this moment and will
continue to hang until session 116 releases its lock(s) by terminating its transaction,
with a COMMIT or a ROLLBACK. The second problem is worse: JON is blocking two
sessions, those of ISAAC and ROOP.

Figure 8-6  Showing locks with Database Control

Lock contention is a natural consequence of many users accessing the same data
concurrently. The problem can be exacerbated by badly designed software, but in
principle lock contention is part of normal database activity. It is therefore not possible
for the DBA to resolve it completely—he can only identify that it is a problem, and
suggest to system and application designers that they bear in mind the impact of lock
contention when designing data structures and programs.
If locks are becoming an issue, as in Figure 8-6, they must be investigated. Database
Control can provide the necessary information. Clicking the values in the “SQL ID”
column will let you see what statements being executed caused the lock contention.
In the figure, SCOTT and MPHO have both executed one statement. JON, ISAAC, and
ROOP have executed another. The “ROWID” column can be used to find the exact
row for which the sessions are contending. You cannot drill down to the row from
this window, but the rowid can be used in a SELECT statement to retrieve the row in
another (unblocked) session. When the code and the rows that cause the contention
are known, a solution can be discussed with the system designers and developers.
In an emergency, however, it is possible for the DBA to solve the problem—by
terminating the session, or sessions, that are holding too many locks for too long.
When a session is terminated forcibly, any locks it holds will be released as its active
transaction is rolled back. The blocked sessions will then become free and can continue.
To terminate a session, either use Database Control, or the ALTER SYSTEM KILL
SESSION command. In the preceding example, if you decided that the SCOTT session
is holding its lock for an absurd period of time, you would select the radio button for
the session and click the KILL SESSION button. SCOTT’s transaction will be rolled back,
and MPHO’s session will then be able to take the lock(s) it requires and continue
working. In the case of the second problem in the figure, killing JON’s session would
free up ISAAC, who would then be blocking ROOP.
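If you prefer the data dictionary to the graphical tools, a query along these lines against V$SESSION identifies blocked sessions and their blockers, and ALTER SYSTEM KILL SESSION resolves the emergency (the sid,serial# pair below is an example):

```sql
-- Identify blocked sessions and the sessions blocking them
select sid, serial#, username, blocking_session, seconds_in_wait
from   v$session
where  blocking_session is not null;

-- Terminate the blocking session; its transaction is rolled back
alter system kill session '116,1318' immediate;
```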

Deadlocks
It is possible to encounter a situation where two sessions block each other in such a
fashion that both will hang, each waiting for the other to release its lock. This is a deadlock.
Deadlocks are not the DBA’s problem; they are caused by bad program design and
resolved automatically by the database itself. Information regarding deadlocks is written
out to the alert log, with full details in a trace file—part of your daily monitoring will pick
up the occurrence of deadlocks and inform your developers that they are happening.
If a deadlock occurs, both sessions will hang—but only for a brief moment. One of
the sessions will detect the deadlock within seconds, and it will roll back the statement
that caused the problem. This will free up the other session, returning the message
“ORA-00060 Deadlock detected.” This message must be trapped by your programmers
in their exceptions clauses, which should take appropriate action.
It must be emphasized that deadlocks are a program design fault. They occur
because the code attempts to do something that is logically impossible. Well-written
code will always request locks in a sequence that cannot cause deadlocks to occur, or
will test whether incompatible locks already exist before requesting them.
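The classic case is two sessions updating the same two rows in opposite orders; a sketch (table name and values are assumptions):

```sql
-- Session A:
update t set c1 = 1 where id = 1;
-- Session B:
update t set c1 = 1 where id = 2;
-- Session A (hangs, waiting for B's lock on row 2):
update t set c1 = 1 where id = 2;
-- Session B (deadlock: within seconds one session receives ORA-00060
-- and its current statement is rolled back):
update t set c1 = 1 where id = 1;
```

Code that always locks rows in the same order, say ascending by key, can never reach this state.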
Exercise 8-6: Detect and Resolve Lock Contention In this exercise, you
will first use SQL*Plus to cause a problem, and detect and solve it with Database
Control.
1. Using SQL*Plus, connect to your database in two sessions as user WEBSTORE.
2. In your first session, lock all the rows in the PRODUCTS table:
select * from products for update;

3. In your second session, attempt to update a row. The session will hang:
update products set stock_count=stock_count-1;

4. Connect to your database as user SYSTEM with Database Control.
5. Navigate to the Instance Locks window, by taking the Performance tab from
the database home page, and then the Database Locks link in the Additional
Monitoring Links section.
6. Observe that the second WEBSTORE session is shown as waiting for an EXCLUSIVE
lock. Select the radio button for the first, blocking, session and click KILL SESSION.
7. In the confirmation window, click SHOW SQL. This will show a command
something like
ALTER SYSTEM KILL SESSION '120,1318' IMMEDIATE

8. Click RETURN and YES to execute the KILL SESSION command.
9. Returning to your SQL*Plus sessions, you will find that the second session is
now working, but that the first session can no longer run any commands.

Chapter 8: DML and Concurrency


Overview of Undo

Undo data is the information needed to reverse the effects of DML statements. It is
often referred to as rollback data, but try to avoid that term. In earlier releases of Oracle,
the terms rollback data and undo data were used interchangeably, but from 9i onward
they are different: their function is the same, but their management is not. Whenever
a transaction changes data, the preupdate version of the data is written out to a rollback
segment or to an undo segment. The difference is crucial. Rollback segments can still
exist in an 11g database, but with release 9i of the database Oracle introduced the
undo segment as an alternative. Oracle strongly advises that all databases should
use undo segments—rollback segments are retained for backward compatibility, but
they are not referenced in the OCP exam and are therefore not covered in this book.

EXAM TIP Use of undo segments is incompatible with use of rollback
segments: it is one or the other, depending on the setting of the UNDO_
MANAGEMENT parameter.
To roll back a transaction means to use data from the undo segments to construct
an image of the data as it was before the transaction occurred. This is usually done
automatically to satisfy the requirements of the ACID test, but the flashback query
capability (detailed in Chapter 19) leverages the power of the undo mechanism by
giving you the option of querying the database as it was at some time in the past.
And of course, any user can use the ROLLBACK command interactively to back out
any DML statements that were issued and not committed.
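For example (a sketch assuming the WEBSTORE schema used in the exercises; the PRODUCT_ID column and the ten-minute window are illustrative assumptions), a flashback query reads undo data to reconstruct past rows:

```sql
-- Query the PRODUCTS table as it was ten minutes ago.
-- This succeeds only if the relevant undo data is still available.
select product_id, stock_count
from   products
as of timestamp (systimestamp - interval '10' minute);
```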
The ACID test requires, first, that the database should keep preupdate versions
of data in order that incomplete transactions can be reversed—either automatically in
the case of an error or on demand through the use of the ROLLBACK command. This
type of rollback is permanent and published to all users. Second, for consistency, the
database must be able to present a query with a version of the database as it was at
the time the query started. The server process running the query will go to the undo
segments and construct what is called a read-consistent image of the blocks being
queried, if they were changed after the query started. This type of rollback is temporary
and only visible to the session running the query. Third, undo segments are also used
for transaction isolation. This is perhaps the most complex use of undo data. The
principle of isolation requires that no transaction can be in any way dependent upon
another, incomplete transaction. In effect, even though a multiuser database will have
many transactions in progress at once, the end result must be as though the transactions
were executing one after another. The use of undo data combined with row and table
locks guarantees transaction isolation: the impossibility of incompatible transactions.
Even though several transactions may be running concurrently, isolation requires that
the end result must be as if the transactions were serialized.

Exercise 8-7: Use Undo Data In this exercise, you will investigate the undo
configuration and usage in your database. Use either SQL*Plus or SQL Developer.
1. Connect to the database as user SYSTEM.
2. Determine whether the database is using undo segments or rollback segments
with this query:
select value from v$parameter where name='undo_management';

This should return the value AUTO. If it does not, issue this command, and
then restart the instance:
alter system set undo_management=auto scope=spfile;

3. Determine what undo tablespaces have been created, and which one is being
used with these two queries:
select tablespace_name from dba_tablespaces where contents='UNDO';
select value from v$parameter where name='undo_tablespace';

4. Determine what undo segments are in use in the database, and how big they are:
select tablespace_name,segment_name,segment_id,status from dba_rollback_segs;
select usn,rssize from v$rollstat;

Note that the identifying number for a segment has a different column name
in the two views.
5. Find out how much undo data was being generated in your database in the
recent past:
alter session set nls_date_format='dd-mm-yy hh24:mi:ss';
select begin_time, end_time,
(undoblks * (select value from v$parameter where name='db_block_size'))
undo_bytes from v$undostat;

Transactions and Undo Data
When a transaction starts, Oracle will assign it to one (and only one) undo segment.
Any one transaction can only be protected by one undo segment—it is not possible for
the undo data generated by one transaction to cut across multiple undo segments. This
is not a problem, because undo segments are not of a fixed size. So if a transaction does
manage to fill its undo segment, Oracle will automatically add another extent to the
segment, so that the transaction can continue. It is possible for multiple transactions
to share one undo segment, but in normal running this should not occur. A tuning
problem common with rollback segments was estimating how many rollback segments
would be needed to avoid excessive interleaving of transactions within rollback
segments without creating so many as to waste space. One feature of undo management
is that Oracle will automatically spawn new undo segments on demand, in an attempt
to ensure that it is never necessary for transactions to share undo segments. If Oracle
has found it necessary to extend its undo segments or to generate additional segments,
when the workload drops Oracle will shrink and drop the segments, again automatically.

EXAM TIP No transaction can ever span multiple undo segments, but one
undo segment can support multiple transactions.

As a transaction updates table or index data blocks, the information needed to
roll back the changes is written out to blocks of the assigned undo segment. All this
happens in the database buffer cache. Oracle guarantees absolutely the A, for atomicity,
of the ACID test, meaning that all the undo data must be retained until a transaction
commits. If necessary, the DBWn will write the changed blocks of undo data to the
undo segment in the datafiles. By default, Oracle does not, however, guarantee the C,
for consistency, of the ACID test. Oracle guarantees consistency to the extent that if a
query succeeds, the results will be consistent with the state of the database at the time
the query started—but it does not guarantee that the query will actually succeed. This
means that undo data can be categorized as having different levels of necessity. Active
undo is undo data that might be needed to roll back transactions in progress. This
data can never be overwritten, until the transaction completes. At the other extreme,
expired undo is undo data from committed transactions, which Oracle is no longer
obliged to store. This data can be overwritten if Oracle needs the space for another
active transaction. Unexpired undo is an intermediate category; it is neither active
nor expired: the transaction has committed, but the undo data might be needed for
consistent reads, if there are any long-running queries in progress. Oracle will attempt
not to overwrite unexpired undo.

EXAM TIP Active undo can never be overwritten; expired undo can be
overwritten. Unexpired undo can be overwritten, but only if there is a
shortage of undo space.

The fact that undo information becomes inactive on commit means that the extents
of undo segments can be used in a circular fashion. Eventually, the whole of the undo
tablespace will be filled with undo data, so when a new transaction starts, or a running
transaction generates some more undo, the undo segment will “wrap” around, and
the oldest undo data within it will be overwritten—always assuming that this oldest
data is not part of a long-running uncommitted transaction, in which case it would
be necessary to extend the undo segment instead.
With the old manually managed rollback segments, a critical part of tuning was
to control which transactions were protected by which rollback segments. A rollback
segment might even be created and brought online specifically for one large transaction.
Automatically managed undo segments make all of that unnecessary, because you
as DBA have no control over which undo segment will protect any one transaction.
Don’t worry about this—Oracle does a better job than you ever could. But if you
wish you can still find out which segment has been assigned to each transaction by
querying the view V$TRANSACTION, which has join columns to V$SESSION and
DBA_ROLLBACK_SEGS, thus letting you build up a complete picture of transaction
activity in your database: how many transactions there are currently running, who is
running them, which undo segments are protecting those transactions, when the
transactions started, and how many blocks of undo each transaction has generated.
A related dynamic performance view is V$ROLLSTAT, which gives information on the
size of the segments.
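A query along these lines (a sketch, not the exact statement shown in the figure) joins the views to show who is running each transaction and which undo segment protects it:

```sql
-- Sketch: current transactions, their sessions, and their undo segments.
-- V$TRANSACTION.ADDR joins to V$SESSION.TADDR; the undo segment number
-- XIDUSN joins to DBA_ROLLBACK_SEGS.SEGMENT_ID.
select s.username, t.start_time, t.used_ublk, r.segment_name
from   v$transaction t
join   v$session s         on t.addr   = s.taddr
join   dba_rollback_segs r on t.xidusn = r.segment_id;
```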
Figure 8-7 shows queries to investigate transactions in progress. The first query
shows that there are currently two transactions. JON’s transaction has been assigned
to the segment with SEGMENT_ID number 7 and is currently using 277 blocks of
undo space. SCOTT’s much smaller transaction is protected by segment 2. The second
query shows the segment information. The size of each segment will depend on the
size of the transactions that happen to have been assigned to them previously. Note
that the join column to DBA_ROLLBACK_SEGS is called USN.

Figure 8-7 Query showing details of transactions in progress

Managing Undo
A major feature of undo segments is that they are managed automatically, but you
must set the limits within which Oracle will do its management. After considering the
nature and volume of activity in your database, you set certain instance parameters
and adjust the size of your undo tablespace in order to achieve your objectives.

Error Conditions Related to Undo
The principles are simple: first, there should always be sufficient undo space available
to allow all transactions to continue, and second, there should always be sufficient
undo data available for all queries to succeed. The first principle requires that your undo
tablespace must be large enough to accommodate the worst case for undo demand. It
should have enough space allocated for the peak usage of active undo data generated by
your transaction workload. Note that this might not be during the highest number of
concurrent transactions; it could be that during normal running you have many small
transactions, but the total undo they generate might be less than that generated by a
single end-of-month batch job. The second principle requires that there be additional
space in the undo tablespace to store unexpired undo data that might be needed for
read consistency.
If a transaction runs out of undo space, it will fail with the error ORA-30036,
“unable to extend segment in undo tablespace.” The statement that hit the problem
is rolled back, but the rest of the transaction remains intact and uncommitted. The
algorithm that assigns space within the undo tablespace to undo segments means that
this error condition will only arise if the undo tablespace is absolutely full of active
undo data.

EXAM TIP If a DML statement runs out of undo space, it will be rolled back.
The rest of the transaction that had already succeeded remains intact and
uncommitted.

If a query encounters a block that has been changed since the query started, it will
go to the undo segment to find the preupdate version of the data. If, when it goes to
the undo segment, that bit of undo data has been overwritten, the query fails on
consistent read with a famous Oracle error ORA-1555, “snapshot too old.”
If the undo tablespace is undersized for the transaction volume and the length of
queries, Oracle has a choice: either let transactions succeed and risk queries failing
with ORA-1555, or let queries succeed and risk transactions failing with ORA-30036.
The default behavior is to let the transactions succeed: to allow them to overwrite
unexpired undo.

Parameters for Undo Management,
and Retention Guarantee
There are three parameters controlling undo: UNDO_MANAGEMENT, UNDO_
TABLESPACE, and UNDO_RETENTION.
UNDO_MANAGEMENT defaults to AUTO with release 11g. It is possible to set
this to MANUAL, meaning that Oracle will not use undo segments at all. This is for
backward compatibility, and if you use this, you will have to do a vast amount of
work creating and tuning rollback segments. Don’t do it. Oracle Corporation strongly
advises setting this parameter to AUTO, to enable use of undo segments. This parameter
is static, meaning that if it is changed the change will not come into effect until the
instance is restarted. The other parameters are dynamic—they can be changed while
the running instance is executing.
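The three undo parameters might be set as follows (a sketch; the tablespace name UNDOTBS2 and the retention value are illustrative assumptions):

```sql
-- UNDO_MANAGEMENT is static: the change takes effect only after restart.
alter system set undo_management=auto scope=spfile;

-- The other two parameters are dynamic and take effect immediately.
alter system set undo_tablespace=undotbs2;
alter system set undo_retention=1800;
```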

If you are using UNDO_MANAGEMENT=AUTO, you must also specify UNDO_
TABLESPACE. This parameter nominates a tablespace, which must have been created
as an undo tablespace, as the active undo tablespace. All the undo segments within it
will be brought online (that is, made available for use) automatically.
Lastly, UNDO_RETENTION, set in seconds, is usually optional. It specifies a target
for keeping inactive undo data and determines when it becomes classified as expired
rather than unexpired. If, for example, your longest running query is thirty minutes,
you would set this parameter to 1800. Oracle will then attempt to keep all undo data
for at least 1800 seconds, and your query should therefore never fail with ORA-1555.
If, however, you do not set this parameter, or set it to zero, Oracle will still keep data
for as long as it can anyway. The algorithm controlling which expired undo data is
overwritten first will always choose to overwrite the oldest data; the effective undo
retention is therefore always the maximum that the size of the tablespace allows.
The one case where the UNDO_RETENTION parameter is not optional is when you
have configured guaranteed undo retention. The default mode of operation for undo is that Oracle
will favor transactions over queries. If the sizing of the undo tablespace is such that a
choice has to be made between the possibility of a query failing with ORA-1555 and
the certainty of a transaction failing with ORA-30036, Oracle will choose to let the
transaction continue by overwriting undo data that a query might need. In other
words, the undo retention is only a target that Oracle will try to achieve. But there
may be circumstances when successful queries are considered more important than
successful transactions. An example might be the end-of-month billing run for a
utilities company, when it might be acceptable to risk transactions being blocked for
a few hours while the reports are generating. Another case is if you are making use of
flashback queries, which rely on undo data.
Guaranteed undo retention, meaning that undo data will never be overwritten
until the time specified by the undo retention has passed, is enabled at the tablespace
level. This attribute can be specified at tablespace creation time, or an undo tablespace can
be altered later to enable it. Once you activate an undo tablespace for which retention
guarantee has been specified, all queries will complete successfully, provided they
finish within the undo retention time; you will never have “snapshot too old” errors
again. The downside is that transactions may fail for lack of undo space.
If the UNDO_RETENTION parameter has been set, and the datafiles making
up the undo tablespace are set to autoextend, then Oracle will increase the size of the
datafile automatically if necessary to keep to the undo retention target. This combination
of guaranteed undo retention and autoextending datafiles means that both queries
and transactions will always succeed—assuming you have enough disk space. If you
don’t, the automatic extension will fail.
A database might have one tablespace used in normal operations where undo
retention is not guaranteed, and another to be used during month-end reporting
where retention is guaranteed.

Sizing and Monitoring the Undo Tablespace
The undo tablespace should be large enough to store the worst case of all the undo
generated by concurrent transactions, which will be active undo, plus enough unexpired
undo to satisfy the longest running query. In an advanced environment, you may also
have to add space to allow for flashback queries as well. The algorithm is simple:
calculate the rate at which undo is being generated at your peak workload, and multiply
by the length of your longest query.
The V$UNDOSTAT view will tell you all you need to know. There is also an
advisor within Database Control that will present the information in an immediately
comprehensible way.
Figure 8-8 shows the undo management screen of Database Control. To reach this,
take the Server tab from the database home page, then the Automatic Undo Management
link in the Database Configuration section.

Figure 8-8 Undo management settings, through Database Control
The configuration section of the screen shows that the undo tablespace currently
in use is called UNDO1, and it is 100MB in size. Undo guarantee has not been set,
but the datafile(s) for the tablespace is auto-extensible. Making your undo datafiles
auto-extensible will ensure that transactions will never run out of space, but Oracle
will not extend them merely to meet the UNDO_RETENTION target; it is therefore
still possible for a query to fail with “snapshot too old.” However, you should not
rely on the auto-extend capability; your tablespace should be the correct size to begin
with. The Change Tablespace button will issue an ALTER SYSTEM command to
activate an alternative undo tablespace.
Figure 8-9 Undo activity, summarized by Database Control

Further information given on the System Activity tab, shown in Figure 8-9, tells
you that the peak rate for undo generation was only 1664KB per minute, and the
longest running query was 25 minutes. It follows that the minimum size of the undo
tablespace to be absolutely sure of preventing errors would be, in kilobytes,

1664 * 25 = 41600

which is just over 40MB. If the current size were less than that, this would be pointed
out in the Undo Advisor section. There have been no transaction errors caused by lack
of undo space, and no query failures caused by lack of undo data.
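The same arithmetic can be applied directly against V$UNDOSTAT. This query is a sketch of the calculation: peak undo rate multiplied by the block size and the longest query length:

```sql
-- Sketch: estimate the minimum undo tablespace size, in bytes, as
-- (peak undo blocks per second) * (block size) * (longest query, seconds).
-- END_TIME - BEGIN_TIME is in days, so multiply by 86400 for seconds.
select max(undoblks / ((end_time - begin_time) * 86400))
       * (select value from v$parameter where name = 'db_block_size')
       * max(maxquerylen) as min_undo_bytes
from   v$undostat;
```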

Creating and Managing Undo Tablespaces
So far as datafile management is concerned, an undo tablespace is the same as any
other tablespace: files can be added, resized, taken online and offline, and moved or
renamed. But it is not possible to specify any options regarding storage: you cannot
specify automatic segment space management or a uniform extent
size. To create an undo tablespace, use the keyword UNDO:
CREATE UNDO TABLESPACE tablespace_name
DATAFILE datafile_name SIZE size
[ RETENTION NOGUARANTEE | GUARANTEE ] ;

By default, the tablespace will not guarantee undo retention. This characteristic
can be specified at tablespace creation time, or set later:
ALTER TABLESPACE tablespace_name
RETENTION [ GUARANTEE | NOGUARANTEE ] ;
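For example (the tablespace name, file path, and size are illustrative assumptions), you might create a guaranteed-retention undo tablespace and then activate it:

```sql
-- Create an undo tablespace with guaranteed retention, then switch to it.
create undo tablespace undo2
datafile '/u01/oradata/orcl/undo2_01.dbf' size 200m
retention guarantee;

alter system set undo_tablespace=undo2;
```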

EXAM TIP Unless specified at creation time in the datafile clause, the
datafile(s) of an undo tablespace will not be set to autoextend. But if your
database is created with DBCA, it will enable automatic extension for the
undo tablespace’s datafile with maximum size unlimited. Automatic extension
can be enabled or disabled at any time, as it can be for any datafile.

It is not possible to create segments in an undo tablespace, other than the undo
segments that will be created automatically. Initially, there will be a pool of ten undo
segments created in an undo tablespace. More will be created if there are more than
ten concurrent transactions. Oracle will monitor the concurrent transaction rate and
adjust the number of segments as necessary.
No matter how many undo tablespaces there may be in a database, generally
speaking only one will be in use at a time. The undo segments in this tablespace will
have a status of online (meaning that they are available for use); the segments in any
other undo tablespaces will have status offline, meaning that they will not be used. If
the undo tablespace is changed, all the undo segments in the old undo tablespace will
be taken offline, and those in the new undo tablespace will be brought online. There
are two exceptions to this:
• In a RAC database, every instance opening the database must have its own
undo tablespace. This can be controlled by setting the UNDO_TABLESPACE
parameter to a different value for each instance. Each instance will bring its
own undo segments online.
• If the undo tablespace is changed by changing the UNDO_TABLESPACE
parameter, any segments in the previously nominated tablespace that were
supporting a transaction at the time of the change will remain online until
the transaction finishes.

Two-Minute Drill
Describe Each Data Manipulation Language (DML) Statement
• INSERT enters rows into a table.
• UPDATE adjusts the values in existing rows.
• DELETE removes rows.
• MERGE can combine the functions of INSERT, UPDATE, and DELETE.
• Even though TRUNCATE is not DML, it does remove all rows in a table.
• It is possible for an INSERT to enter rows into multiple tables.
• Subqueries can be used to generate the rows to be inserted, updated, or
deleted.
• An INSERT, UPDATE, or DELETE is not permanent until it is committed.

• TRUNCATE removes every row from a table.
• A TRUNCATE is immediately permanent: it cannot be rolled back.

Control Transactions
• A transaction is a logical unit of work, possibly comprising several DML
statements.
• Transactions are invisible to other sessions until committed.
• Until committed, transactions can be rolled back.
• A SAVEPOINT lets a session roll back part of a transaction.

Manage Data Using DML
• All DML commands generate undo and redo.
• Redo protects all changes to segments—undo segments, as well as data
segments.
• Server processes read from datafiles; DBWn writes to datafiles.

Identify and Administer PL/SQL Objects
• Anonymous PL/SQL is stored on the client; stored PL/SQL in the data
dictionary.
• Procedures and functions can be packaged; triggers cannot be packaged.
• PL/SQL code can call SQL code.

Monitor and Resolve Locking Conflicts
• The default level of locking is row level.
• Locks are required for all DML commands and are optional for SELECT.
• A DML statement requires shared locks on the objects involved and exclusive
locks on the rows involved.
• A DDL lock requires an exclusive lock on the object it affects.
• Deadlocks are resolved automatically.

Overview of Undo
• All DML statements generate undo data.
• Undo data is used for transaction rollback and isolation and to provide read
consistency, and also for flashback queries.
• Automatic undo management using undo segments is the default with
release 11g.

Transactions and Undo Data
• Undo data will always be kept until the transaction that generated it
completes with a COMMIT or a ROLLBACK. This is active undo.
• Undo data will be retained for a period after it becomes inactive to satisfy any
read consistency requirements of long running queries; this is unexpired undo.
• Expired undo is data no longer needed for read consistency and may be
overwritten at any time as space in undo segments is reused.

Managing Undo
• An instance will use undo segments in one, nominated, undo tablespace.
• More undo tablespaces may exist, but only one will be used at a time.
• The undo tablespace should be large enough to take into account the
maximum rate of undo generation and the longest running query.
• Undo tablespace datafiles are datafiles like any others.

Self Test
1. Which of the following commands can be rolled back? (Choose all correct
answers.)
A. COMMIT
B. DELETE
C. INSERT
D. MERGE
E. TRUNCATE
F. UPDATE
2. If an UPDATE or DELETE command has a WHERE clause that gives it a
scope of several rows, what will happen if there is an error partway through
execution? The command is one of several in a multistatement transaction.
(Choose the best answer.)
A. The command will skip the row that caused the error and continue.
B. The command will stop at the error, and the rows that have been updated
or deleted will remain updated or deleted.
C. Whatever work the command had done before hitting the error will be
rolled back, but work done already by the transaction will remain.
D. The whole transaction will be rolled back.

3. Study the result of this SELECT statement:
SQL> select * from t1;
        C1         C2         C3         C4
---------- ---------- ---------- ----------
         1          2          3          4
         5          6          7          8

If you issue this statement:
insert into t1 (c1,c2) values(select c1,c2 from t1);

why will it fail? (Choose the best answer.)
A. Because values are not provided for all the table’s columns: there should
be NULLs for C3 and C4.
B. Because the subquery returns multiple rows: it requires a WHERE clause to
restrict the number of rows returned to one.
C. Because the subquery is not scalar: it should use MAX or MIN to generate
scalar values.
D. Because the VALUES keyword is not used with a subquery.
E. It will succeed, inserting two rows with NULLs for C3 and C4.
4. You want to insert a row and then update it. What sequence of steps should
you follow? (Choose the best answer.)
A. INSERT, UPDATE, COMMIT
B. INSERT, COMMIT, UPDATE, COMMIT
C. INSERT, SELECT FOR UPDATE, UPDATE, COMMIT
D. INSERT, COMMIT, SELECT FOR UPDATE, UPDATE, COMMIT
5. Which of these commands will remove every row in a table? (Choose one or
more correct answers.)
A. A DELETE command with no WHERE clause
B. A DROP TABLE command
C. A TRUNCATE command
D. An UPDATE command, setting every column to NULL and with no
WHERE clause
6. User JOHN updates some rows and asks user ROOPESH to log in and check
the changes before he commits them. Which of the following statements is
true? (Choose the best answer.)
A. ROOPESH can see the changes but cannot alter them because JOHN will
have locked the rows.
B. ROOPESH will not be able to see the changes.
C. JOHN must commit the changes so that ROOPESH can see them and, if
necessary, roll them back.
D. JOHN must commit the changes so that ROOPESH can see them, but only
JOHN can roll them back.

7. There are several steps involved in executing a DML statement. Place these in
the correct order:
A. Apply the change vectors to the database buffer cache.
B. Copy blocks from datafiles into buffers.
C. Search for the relevant blocks in the database buffer cache.
D. Write the change vectors to the log buffer.
8. When a COMMIT is issued, what will happen? (Choose the best answer.)
A. All the change vectors that make up the transaction are written to disk.
B. DBWn writes the change blocks to disk.
C. LGWR writes the log buffer to disk.
D. The undo data is deleted, so that the changes can no longer be rolled back.
9. What types of segment are protected by redo? (Choose all correct answers.)
A. Index segments
B. Table segments
C. Temporary segments
D. Undo segments
10. Which of these commands will terminate a transaction? (Choose all correct
answers.)
A. CREATE
B. GRANT
C. SAVEPOINT
D. SET AUTOCOMMIT ON
11. What type of PL/SQL objects cannot be packaged? (Choose the best answer.)
A. Functions
B. Procedures
C. Triggers
D. All PL/SQL objects can be packaged, except anonymous blocks
12. If several sessions request an exclusive lock on the same row, what will
happen? (Choose the best answer.)
A. The first session will get the lock; after it releases the lock there is a
random selection of the next session to get the lock.
B. The first session will get an exclusive lock, and the other sessions will get
shared locks.
C. The sessions will be given an exclusive lock in the sequence in which they
requested it.
D. Oracle will detect the conflict and roll back the statements that would
otherwise hang.

13. When a DML statement executes, what happens? (Choose the best answer.)
A. Both the data and the undo blocks on disk are updated, and the changes
are written out to the redo stream.
B. The old version of the data is written to an undo segment, and the new
version is written to the data segments and the redo log buffer.
C. Both data and undo blocks are updated in the database buffer cache, and
the updates also go to the log buffer.
D. The redo log buffer is updated with information needed to redo the
transaction, and the undo blocks are updated with information needed to
reverse the transaction.
14. Your undo tablespace consists of one datafile on one disk, and transactions
are failing for lack of undo space. The disk is full. You have enabled retention
guarantee. Any of the following options could solve the problem, but which
would cause downtime for your users? (Choose the best answer.)
A. Create another, larger, undo tablespace and use alter system set
undo_tablespace= . . . to switch to it.
B. Move the datafile to a disk with more space, and use alter database
datafile . . . resize to make it bigger.
C. Reduce the undo_retention setting with alter system set
undo_retention= . . . .
D. Disable retention guarantee with alter tablespace . . .
retention noguarantee.
15. Examine this query and result set:
SQL> select BEGIN_TIME,END_TIME,UNDOBLKS,MAXQUERYLEN from V$UNDOSTAT;
BEGIN_TIME        END_TIME            UNDOBLKS MAXQUERYLEN
----------------- ----------------- ---------- -----------
02-01-08:11:35:55 02-01-08:11:41:33      14435          29
02-01-08:11:25:55 02-01-08:11:35:55     120248         296
02-01-08:11:15:55 02-01-08:11:25:55     137497          37
02-01-08:11:05:55 02-01-08:11:15:55     102760        1534
02-01-08:10:55:55 02-01-08:11:05:55     237014         540
02-01-08:10:45:55 02-01-08:10:55:55     156223        1740
02-01-08:10:35:55 02-01-08:10:45:55     145275         420
02-01-08:10:25:55 02-01-08:10:35:55      99074         120

The blocksize of the undo tablespace is 4KB. Which of the following would be
the optimal size for the undo tablespace? (Choose the best answer.)
A. 1GB
B. 2GB
C. 3GB
D. 4GB


Self Test Answers
1. þ B, C, D, and F. These are the DML commands: they can all be rolled back.
ý A and E. COMMIT terminates a transaction, which can then never be
rolled back. TRUNCATE is a DDL command and includes a built-in COMMIT.

2. þ C. This is the expected behavior: the statement is rolled back, and the rest
of the transaction remains uncommitted.
ý A, B, and D. A is wrong because, while this behavior is in fact configurable,
it is not enabled by default. B is wrong because, while this is in fact possible in
the event of space errors, it is not enabled by default. D is wrong because only
the one statement will be rolled back, not the whole transaction.
3. þ D. The syntax is wrong: use either the VALUES keyword or a subquery, but
not both. Remove the VALUES keyword, and it will run. C3 and C4 would be
populated with NULLs.
ý A, B, C, and E. A is wrong because there is no need to provide values for
columns not listed. B and C are wrong because an INSERT can insert a set of
rows, so there is no need to restrict the number with a WHERE clause or by
using MAX or MIN to return only one row. E is wrong because the statement
is not syntactically correct.
4. þ A. This is the simplest (and therefore the best) way.
ý B, C, and D. All these will work, but they are all needlessly complicated:
no programmer should use unnecessary statements.
5. þ A and C. The TRUNCATE will be faster, but the DELETE will get there too.
ý B and D. B is wrong because this will remove the table as well as the rows
within it. D is wrong because the rows will still be there—even though they
are populated with NULLs.
6. þ B. The principle of isolation means that only JOHN can see his
uncommitted transaction.
ý A, C, and D. A is wrong because transaction isolation means that no
other session will be able to see the changes. C and D are wrong because a
committed transaction can never be rolled back.
7. þ C, B, D, and A. This is the sequence. All others are wrong.
8. þ C. A COMMIT is implemented by placing a COMMIT record in the log
buffer, and LGWR flushing the log buffer to disk.
ý A, B, and D. A is wrong because many of the change vectors (perhaps
all of them) will be on disk already. B is wrong because DBWn does not
participate in commit processing. D is wrong because the undo data may well
persist for some time; a COMMIT is not relevant to this.

OCA/OCP Oracle Database 11g All-in-One Exam Guide

366
9. ✓ A, B, and D. Changes to any of these will generate redo.
   ✗ C. Changes to temporary segments do not generate redo.
10. ✓ A and B. Both DDL and access control commands include a COMMIT.
    ✗ C and D. C is wrong because a savepoint is only a marker within a
    transaction. D is wrong because this is a SQL*Plus command that acts locally
    on the user process; it has no effect on an active transaction.
11. ✓ C. Triggers cannot be packaged.
    ✗ A, B, and D. A and B are wrong because functions and procedures can be
    packaged. D is wrong because neither anonymous blocks nor triggers can be
    packaged.
12. ✓ C. This correctly describes the operation of the enqueue mechanism.
    ✗ A, B, and D. A is wrong because locks are granted sequentially, not
    randomly. B is wrong because the shared locks apply to the object; row locks
    must be exclusive. D is wrong because this is more like a description of how
    deadlocks are managed.
13. ✓ C. All DML occurs in the database buffer cache, and changes to both data
    blocks and undo blocks are protected by redo.
    ✗ A, B, and D. A is wrong because writing to disk is independent of
    executing the statement. B and D are incomplete: redo protects changes to
    both data blocks and undo blocks.
14. ✓ B. This is the option that would require downtime, because the datafile
    would have to be taken offline during the move, and you cannot take it offline
    while the database is open.
    ✗ A, C, and D. These are wrong because they are all operations that can be
    carried out during normal running without end users being aware.
15. ✓ C. To calculate, take the largest figure for UNDOBLKS, which is for a
    ten-minute period. Divide by 600 to get the rate of undo generation in blocks
    per second, and multiply by the block size to get the figure in bytes. Multiply
    by the largest figure for MAXQUERYLEN, to find the space needed if the
    highest rate of undo generation coincided with the longest query, and divide
    by a billion to get the answer in gigabytes:
    237014 / 600 * 4096 * 1740 = 2.8GB (approximately)
    ✗ A, B, and D. The following algorithm should be followed when sizing an
    undo tablespace: calculate the rate at which undo is being generated at your
    peak workload, and multiply by the length of your longest query.
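The same arithmetic can be scripted. The following is a quick sketch (Python, not part of the book; the figures come from the V$UNDOSTAT output in the question):

```python
# Undo tablespace sizing: peak undo generation rate (blocks/second),
# multiplied by the block size (bytes) and the longest query time (seconds).
def undo_tablespace_bytes(peak_undoblks, interval_secs, block_bytes, max_query_secs):
    """Space needed if the peak undo rate coincided with the longest query."""
    return peak_undoblks / interval_secs * block_bytes * max_query_secs

# Figures from the V$UNDOSTAT output: 237014 blocks in a ten-minute interval,
# 4KB (4096-byte) blocks, and a longest query of 1740 seconds.
size_gb = undo_tablespace_bytes(237014, 600, 4096, 1740) / 10**9
print(round(size_gb, 1))  # approximately 2.8, so the 3GB option is the best fit
```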

CHAPTER 9
Retrieving, Restricting, and Sorting Data Using SQL

Exam Objectives
In this chapter you will learn to
• 051.1.1 List the Capabilities of SQL SELECT Statements
• 051.1.2 Execute a Basic SELECT Statement
• 051.2.1 Limit the Rows Retrieved by a Query
• 051.2.2 Sort the Rows Retrieved by a Query
• 051.2.3 Use Ampersand Substitution

This chapter contains several sections that are not directly tested by the exam but are
considered prerequisite knowledge for every student. Two tools used extensively for
exercises are SQL*Plus and SQL Developer, which are covered in Chapter 2. Oracle
specialists use these every day in their work. The exercises and many of the examples
are based on two demonstration sets of data. The first, known as the HR schema, is
supplied by Oracle, while the second, the WEBSTORE schema, is designed, created,
and populated later in this chapter. There are instructions on how to launch the tools
and create the demonstration schemas.
The exam-testable sections cover the concepts behind the relational paradigm,
the normalization of data into relational structures, and the retrieval of data stored
in relational tables using the SELECT statement. The statement is introduced in its basic
form and is progressively built on to extend its core functionality. This chapter also
discusses the WHERE clause, which specifies one or more conditions that the Oracle
server evaluates to restrict the rows returned by the statement. A further language
enhancement is introduced by the ORDER BY clause, which provides data sorting
capabilities. The chapter closes by discussing ampersand substitution: a mechanism
that provides a way to reuse the same statement to execute different queries by
substituting query elements at runtime.

List the Capabilities of SQL SELECT Statements
Knowing how to retrieve data in a set format using a query language is the first step
toward understanding the capabilities of SELECT statements. Describing the relations
involved provides a tangible link between the theory of how data is stored in tables
and the practical visualization of the structure of these tables. These topics form an
important precursor to the discussion of the capabilities of the SELECT statement.
The three primary areas explored are as follows:
• Introducing the SQL SELECT statement
• The DESCRIBE table command
• Capabilities of the SELECT statement

Introducing the SQL SELECT Statement
The SELECT statement from Structured Query Language (SQL) has to be the single most
powerful nonspoken language construct. It is an elegant, flexible, and highly extensible
mechanism created to retrieve information from a database table. A database would
serve little purpose if it could not be queried to answer all sorts of interesting questions.
For example, you may have a database that contains personal financial records like your
bank statements, your utility bills, and your salary statements. You could easily ask the
database for a date-ordered list of your electrical utility bills for the last six months or
query your bank statement for a list of payments made to a certain account over the

same period. The beauty of the SELECT statement is encapsulated in its simple, English-like format that allows questions to be asked of the database in a natural manner.

The DESCRIBE Table Command

To get the answers one seeks, one must ask the correct questions. An understanding
of the terms of reference, which in this case are relational tables, is essential for the
formulation of the correct questions. A structural description of a table is useful to
establish what questions can be asked of it. The Oracle server stores information
about all tables in a special set of relational tables called the data dictionary, in order
to manage them. The data dictionary is quite similar to a regular language dictionary.
It stores definitions of database objects in a centralized, ordered, and structured format.
The data dictionary is discussed in detail in Chapter 1.
A clear distinction must be drawn between storing the definition and the contents
of a table. The definition of a table includes information like table name, table owner,
details about the columns that compose the table, and its physical storage size on disk.
This information is also referred to as metadata. The contents of a table are stored in
rows and are referred to as data.
The structural metadata of a table may be obtained by querying the database for
the list of columns that compose it using the DESCRIBE command. The general form
of the syntax for this command is intuitively

DESC[RIBE] schema.tablename

This command will be systematically unpacked. The DESCRIBE keyword can be
shortened to DESC. All tables belong to a schema or owner. If you are describing a
table that belongs to the schema to which you have connected, the schema portion
of the command may be omitted. Figure 9-1 shows how the EMPLOYEES table is
described from SQL*Plus after connecting to the database as the HR user with the
DESCRIBE EMPLOYEES command, and how the DEPARTMENTS table is described
using the shorthand notation DESC HR.DEPARTMENTS. The HR. notational prefix
could be omitted, since the DEPARTMENTS table belongs to the HR schema. The HR
schema (and every other schema) has access to a special table called DUAL, which
belongs to the SYS schema. This table can be structurally described with the command
DESCRIBE SYS.DUAL.
Describing tables yields interesting and useful results. You know which columns
of a table can be selected, since their names are exposed. You also know the nature of
the data contained in these columns, since the column data type is exposed. Chapter 7
details column types.
Mandatory columns, which are forced to store data for each row, are exposed by
the "Null?" column of the DESCRIBE output having the value NOT NULL. You are
guaranteed that any column restricted by the NOT NULL constraint contains some data.
It is important to note that NULL has special meaning for the Oracle server. NULL
refers to an absence of data. Blank spaces do not count as NULL, since they are present
in the row and have some length even though they are not visible.

Figure 9-1 Describing the EMPLOYEES, DEPARTMENTS, and DUAL tables
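Outside Oracle, most databases expose similar structural metadata. As a rough analogy (a SQLite sketch driven from Python, not Oracle syntax), PRAGMA table_info plays the role of DESCRIBE, including a not-null flag comparable to the "Null?" column:

```python
import sqlite3

# A DESCRIBE-like listing from SQLite's catalog: column name, declared type,
# and a NOT NULL indicator, analogous to the "Null?" column of DESCRIBE.
conn = sqlite3.connect(":memory:")
conn.execute("""create table departments(
                  department_id   integer primary key,
                  department_name text not null,
                  manager_id      integer,
                  location_id     integer)""")
for cid, name, coltype, notnull, default, pk in conn.execute(
        "pragma table_info(departments)"):
    print(f"{name:16} {coltype:8} {'NOT NULL' if notnull or pk else 'NULL'}")
```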

Capabilities of the SELECT Statement
Relational database tables are built on a mathematical foundation called relational
theory. In this theory, relations or tables are operated on by a formal language called
relational algebra. Relational algebra uses some specialized terms: relations store tuples,
which have attributes. Or in Oracle-speak, tables store rows, which have columns.
SQL is a commercial interpretation of the relational algebra constructs. Three concepts
from relational theory encompass the capability of the SELECT statement: projection,
selection, and joining.
Projection refers to the restriction of columns selected from a table. When requesting
information from a table, you can ask to view all the columns. You can retrieve all data
from the HR.DEPARTMENTS table with a simple SELECT statement. This query will
return DEPARTMENT_ID, DEPARTMENT_NAME, MANAGER_ID, and LOCATION_ID
information for every department record stored in the table. What if you wanted a list
containing only the DEPARTMENT_NAME and MANAGER_ID columns? Well, you
would request just those two columns from the table. This restriction of columns is
called projection.
Selection refers to the restriction of the rows selected from a table. It is often not
desirable to retrieve every row from a table. Tables may contain many rows, and instead
of requesting all of them, selection provides a means to restrict the rows returned.
Perhaps you have been asked to identify only the employees who belong to department
30. With selection it is possible to limit the results set to those rows of data with a
DEPARTMENT_ID value of 30.
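Projection and selection (along with joining, discussed next) can be seen in miniature with toy tables. This is a sketch in Python with SQLite, not the real HR data; the rows are invented:

```python
import sqlite3

# Projection, selection, and joining on two toy tables shaped like
# HR.DEPARTMENTS and HR.EMPLOYEES (the rows are invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table departments(department_id integer, department_name text);
create table employees(employee_id integer, email text, department_id integer);
insert into departments values (80, 'Sales'), (90, 'Executive');
insert into employees values (100, 'SKING', 90), (145, 'JRUSSEL', 80);
""")

# Projection: restrict the columns returned.
names = conn.execute("select department_name from departments").fetchall()

# Selection: restrict the rows returned.
sales_staff = conn.execute(
    "select * from employees where department_id = 80").fetchall()

# Joining: relate the two tables on the common DEPARTMENT_ID column.
emails = conn.execute("""
    select e.email
    from employees e join departments d on e.department_id = d.department_id
    where d.department_name = 'Sales'""").fetchall()
print(emails)  # the e-mail addresses of Sales employees
```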
Joining, as a relational concept, refers to the interaction of tables with each other
in a query. Third normal form presents the notion of separating different types of data
into autonomous tables to avoid duplication and maintenance anomalies and to
associate related data using primary and foreign key relationships. These relationships
provide the mechanism to join tables with each other (discussed in Chapter 12).
Assume there is a need to retrieve the e-mail addresses for employees who work
in the Sales department. The EMAIL column belongs to the EMPLOYEES table, while
the DEPARTMENT_NAME column belongs to the DEPARTMENTS table. Projection and
selection from the DEPARTMENTS table may be used to obtain the DEPARTMENT_ID
value that corresponds to the Sales department. The matching rows in the EMPLOYEES
table may be joined to the DEPARTMENTS table based on this common DEPARTMENT_
ID value. The EMAIL column may then be projected from this set of results.
The SQL SELECT statement is mathematically governed by these three tenets. An
unlimited combination of projections, selections, and joins provides the language to
extract the relational data required.

EXAM TIP The three concepts of projection, selection, and joining, which
form the underlying basis for the capabilities of the SELECT statement, are
usually measured in the exam. You may be asked to choose the correct three
fundamental concepts or to choose a statement that demonstrates one or
more of these concepts.

Data Normalization
The process of modeling data into relational tables is known as normalization. There
are commonly said to be three levels of normalization: the first, second, and third
normal forms. There are higher levels of normalization: fourth and fifth normal forms
are well defined, but not commonly used. It is possible for SQL to address un-normalized
data, but this will usually be inefficient, as that is not what the language is designed to
do. In most cases, data stored in a relational database and accessed with SQL should
be normalized to the third normal form.
TIP There are often several possible normalized models for an application.
It is important to use the most appropriate—if the systems analyst gets this
wrong, the implications can be serious for performance, storage needs, and
development effort.
As an example of normalization, consider an un-normalized table called BOOKS
that stores details of books, authors, and publishers, using the ISBN number as the
primary key. A primary key is the one attribute (or attributes) that can uniquely identify
a record. These are two entries:

ISBN   Title                                          Authors                         Publisher
12345  Oracle 11g OCP SQL Fundamentals 1 Exam Guide   John Watson, Roopesh Ramklass   McGraw-Hill, Spear Street, San Francisco, CA 94105
67890  Oracle 11g New Features Exam Guide             Sam Alapati                     McGraw-Hill, Spear Street, San Francisco, CA 94105

Storing the data in this table gives rise to several anomalies. First, here is the insertion
anomaly: it is impossible to enter details of authors who are not yet published, because

there will be no ISBN number under which to store them. Second, a book cannot be
deleted without losing the details of the publisher: a deletion anomaly. Third, if a
publisher’s address changes, it will be necessary to update the rows for every book it
has published: an update anomaly. Furthermore, it will be very difficult to identify every
book written by one author. The fact that a book may have several authors means that
the “author” field must be multivalued, and a search will have to search all the values.
Related to this is the problem of having to restructure the table if a book comes along
with more authors than the original design can handle. Also, the storage is very inefficient
due to replication of address details across rows, and the possibility of error as this data is
repeatedly entered is high. Normalization should solve all these issues.
The first normal form is to remove the repeating groups, in this case, the multiple
authors: pull them out into a separate table called AUTHORS. The data structures will
now look like the following.
Two rows in the BOOKS table:

ISBN   TITLE                                          PUBLISHER
12345  Oracle 11g OCP SQL Fundamentals 1 Exam Guide   McGraw-Hill, Spear Street, San Francisco, California
67890  Oracle 11g New Features Exam Guide             McGraw-Hill, Spear Street, San Francisco, California

And three rows in the AUTHORS table:

NAME              ISBN
John Watson       12345
Roopesh Ramklass  12345
Sam Alapati       67890

The first row in the BOOKS table is now linked to two rows in the AUTHORS table.
This solves the insertion anomaly (there is no reason not to insert as many unpublished
authors as necessary), the retrieval problem of identifying all the books by one author
(one can search the AUTHORS table on just one name), and the problem of a fixed
maximum number of authors for any one book (simply insert as many AUTHORS
rows as are needed).
This is the first normal form: no repeating groups.
The second normal form removes columns from the table that are not dependent
on the primary key. In this example, that is the publisher’s address details: these are
dependent on the publisher, not the ISBN. The BOOKS table and a new PUBLISHERS
table will then look like this:
BOOKS

ISBN   TITLE                                          PUBLISHER
12345  Oracle 11g OCP SQL Fundamentals 1 Exam Guide   McGraw-Hill
67890  Oracle 11g New Features Exam Guide             McGraw-Hill

PUBLISHERS

PUBLISHER    STREET        CITY           STATE
McGraw-Hill  Spear Street  San Francisco  California

All the books published by one publisher will now point to a single record in
PUBLISHERS. This solves the problem of storing the address many times, and it also
solves the consequent update anomalies and the data consistency errors caused by
inaccurate multiple entries.
Third normal form removes all columns that are interdependent. In the PUBLISHERS
table, this means the address columns: the street exists in only one city, and the city
can be in only one state; one column should do, not three. This could be achieved by
adding an address code, pointing to a separate address table:

PUBLISHERS

PUBLISHER    ADDRESS CODE
McGraw-Hill  123

ADDRESSES

ADDRESS CODE  STREET        CITY           STATE
123           Spear Street  San Francisco  California

One characteristic of normalized data that should be emphasized now is the use
of primary keys and foreign keys. A primary key is the unique identifier of a row in a
table, either one column or a concatenation of several columns (known as a composite
key). Every table should have a primary key defined. This is a requirement of the
relational paradigm. Note that the Oracle database deviates from this standard: it is
possible to define tables without a primary key—though it is usually not a good idea,
and some other RDBMSs do not permit this.
A foreign key is a column (or a concatenation of several columns) that can be used
to identify a related row in another table. A foreign key in one table will match a primary
key in another table. This is the basis of the many-to-one relationship. A many-to-one
relationship is a connection between two tables, where many rows in one table refer
to a single row in another table. This is sometimes called a parent-child relationship: one
parent can have many children. In the BOOKS example so far, the keys are as follows:

TABLE       KEYS
BOOKS       Primary key: ISBN
            Foreign key: Publisher
AUTHORS     Primary key: Name + ISBN
            Foreign key: ISBN
PUBLISHERS  Primary key: Publisher
            Foreign key: Address code
ADDRESSES   Primary key: Address code

These keys define relationships such as that one book can have several authors.
There are various standards for documenting normalized data structures, developed
by different organizations as structured formal methods. Generally speaking, it really
doesn’t matter which method one uses as long as everyone reading the documents
understands it. Part of the documentation will always include a listing of the
attributes that make up each entity (also known as the columns that make up
each table) and an entity-relationship diagram representing graphically the foreign
to primary key connections. A widely used standard is as follows:
• Primary key columns identified with a hash (#)
• Foreign key columns identified with a backslash (\)
• Mandatory columns (those that cannot be left empty) with an asterisk (*)
• Optional columns with a lowercase “o”
The second necessary part of documenting the normalized data model is the entity-relationship diagram. This represents the connections between the tables graphically.
There are different standards for these; Figure 9-2 shows the entity-relationship diagram
for the BOOKS example using a very simple notation limited to showing the direction
of the one-to-many relationships, using what are often called crow’s feet to indicate
which sides of the relationship are the many and the one. It can be seen that one BOOK
can have multiple AUTHORS, one PUBLISHER can publish many books. Note that the
diagram also states that both AUTHORS and PUBLISHERS have exactly one ADDRESS.
More complex notations can be used to show whether the link is required or optional,
information that will match that given in the table columns listed previously.
This is a very simple example of normalization, and it is not in fact complete. If one
author were to write several books, this would require multiple values in the ISBN column
of the AUTHORS table. That would be a repeating group, which would have to be
removed because repeating groups break the rule for first normal form. A challenging
exercise with data normalization is ensuring that the structures can handle all possibilities.
A table in a real-world application may have hundreds of columns and dozens of
foreign keys. Entity-relationship diagrams for applications with hundreds or thousands
of entities can be challenging to interpret.
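The fully normalized BOOKS model can be sketched as working DDL. The following is an illustrative SQLite script driven from Python (Oracle DDL differs slightly; the table and column names follow the example above):

```python
import sqlite3

# The normalized BOOKS model as tables with primary and foreign keys.
# SQLite is used for illustration; Oracle DDL syntax differs slightly.
conn = sqlite3.connect(":memory:")
conn.execute("pragma foreign_keys = on")  # SQLite enforces FKs only on request
conn.executescript("""
create table addresses(address_code integer primary key,
                       street text, city text, state text);
create table publishers(publisher text primary key,
                        address_code integer references addresses);
create table books(isbn text primary key, title text,
                   publisher text references publishers);
create table authors(name text, isbn text references books,
                     primary key (name, isbn));
insert into addresses values (123, 'Spear Street', 'San Francisco', 'California');
insert into publishers values ('McGraw-Hill', 123);
insert into books values ('12345', 'Oracle 11g OCP SQL Fundamentals 1 Exam Guide',
                          'McGraw-Hill');
insert into authors values ('John Watson', '12345'), ('Roopesh Ramklass', '12345');
""")
# The foreign keys now reject orphan rows: a book with an unknown publisher fails.
try:
    conn.execute("insert into books values ('99999', 'Orphan Title', 'Acme Press')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```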

Figure 9-2 An entity-relationship diagram relating AUTHORS, BOOKS, PUBLISHERS, and ADDRESSES


Create the Demonstration Schemas

Throughout this book, there are many examples of SQL code that run against tables.
The examples use tables in the HR schema, which is sample data that simulates a
simple human resources application, and the WEBSTORE schema, which simulates
an order entry application.
The HR schema can be created when the database is created; it is an option
presented by the Database Configuration Assistant. If it was not created then, it can
be created later by running some scripts that will exist in the database Oracle Home.

The HR and WEBSTORE Schemas
The HR demonstration schema consists of seven tables, linked by primary key to foreign
key relationships. Figure 9-3 illustrates the relationships between the tables, as an
entity-relationship diagram.

Figure 9-3 The HR entity-relationship diagram (REGIONS, COUNTRIES, LOCATIONS, DEPARTMENTS, JOBS, EMPLOYEES, JOB_HISTORY)

Two of the relationships shown in Figure 9-3 may not be immediately
comprehensible. First, there is a many-to-one relationship from EMPLOYEES to
EMPLOYEES. This is what is known as a self-referencing foreign key. This means that
many employees can be connected to one employee, and it's based on the fact that
many employees may have one manager, but the manager is also an employee. The
relationship is implemented by the column manager_id being a foreign key to
employee_id, which is the table’s primary key.
The second relationship that may require explanation is between DEPARTMENTS
and EMPLOYEES, which is bidirectional. The one department–to–many employees
relationship simply states that there may be many staff members in each department,
based on the EMPLOYEES department_id column being a foreign key to the
DEPARTMENTS primary key department_id column. The one employee–to–many
departments relationship shows that one employee could be the manager of several
departments and is implemented by the manager_id column in DEPARTMENTS being
a foreign key to the primary key employee_id column in EMPLOYEES.
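The self-referencing foreign key can be sketched as follows (a SQLite illustration with invented rows, not the real HR data):

```python
import sqlite3

# A self-referencing foreign key: manager_id in EMPLOYEES points back at
# employee_id in the same table (the rows here are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table employees(
  employee_id integer primary key,
  last_name   text not null,
  manager_id  integer references employees(employee_id));
insert into employees values (100, 'King', null);      -- the top manager
insert into employees values (101, 'Kochhar', 100);
insert into employees values (102, 'De Haan', 100);
""")
# Join the table to itself to list each employee alongside his or her manager.
rows = conn.execute("""
    select e.last_name, m.last_name
    from employees e join employees m on e.manager_id = m.employee_id
    order by e.employee_id""").fetchall()
print(rows)  # each (employee, manager) pair
```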
Table 9-1 shows the columns of each table in the HR schema, using the notation
described in the earlier section “Data Normalization” to indicate primary keys (#),
foreign keys (\), and whether columns are optional (o) or mandatory (*).
The tables are as follows:
• REGIONS has rows for major geographical areas.
• COUNTRIES has rows for each country, which are optionally assigned to a
region.
• LOCATIONS includes individual addresses, which are optionally assigned to
a country.
• DEPARTMENTS has a row for each department, optionally assigned to a
location and optionally with a manager (who must exist as an employee).
• EMPLOYEES has a row for every employee, each of whom must be assigned to
a job and optionally to a department and to a manager. The managers must
themselves be employees.
• JOBS lists all possible jobs in the organization. It is possible for many
employees to have the same job.
• JOB_HISTORY lists previous jobs held by employees, uniquely identified by
employee_id and start_date; it is not possible for an employee to hold two jobs
concurrently. Each job history record will refer to one employee, who will have
had one job at that time and may have been a member of one department.
This HR schema is used for many of the exercises and examples embedded in the
chapters of this book and does need to be available.
The WEBSTORE schema might already have been created if you worked through
this book from Chapter 1. In this chapter, the entities and their relationships will be
defined and we will create the schema and the necessary objects. The WEBSTORE schema
consists of four tables, linked by primary key to foreign key relationships. Figure 9-4
illustrates the relationships between the tables, as an entity-relationship diagram.

Table 9-1 The Tables and Columns of the HR Schema

Table         Columns
REGIONS       #* region_id
              o  region_name
COUNTRIES     #* country_id
              o  country_name
              \o region_id
LOCATIONS     #* location_id
              o  street_address
              o  postal_code
              *  city
              o  state_province
              \o country_id
DEPARTMENTS   #* department_id
              *  department_name
              \o manager_id
              \o location_id
EMPLOYEES     #* employee_id
              o  first_name
              *  last_name
              *  email
              o  phone_number
              *  hire_date
              \* job_id
              o  salary
              o  commission_pct
              \o manager_id
              \o department_id
JOBS          #* job_id
              *  job_title
              o  min_salary
              o  max_salary
JOB_HISTORY   #* employee_id
              #* start_date
              *  end_date
              \* job_id
              \o department_id


Figure 9-4 The WEBSTORE entity-relationship diagram

The store maintains product, customer, and order details in the appropriately
named tables. Each order may consist of multiple products with various quantities,
and these records are stored in the ORDER_ITEMS table. Each table has a primary key
except for ORDER_ITEMS: its order_item_id column stores the line item number for
each distinct product that is part of an order, and each order is associated with one
or more rows in the ORDER_ITEMS table.
The tables are as follows:
• PRODUCTS has rows for each item, including description, status, price, and
stock information. ORDER_ITEMS may be associated with only one product.
A foreign key relationship exists between these tables, ensuring that only valid
products can appear in records in the ORDER_ITEMS table.
• CUSTOMERS stores information for each customer.

• ORDERS stores customer order information. One customer may be associated
with many orders. A foreign key constraint governs this relationship, ensuring
that orders cannot be placed by nonexistent customers.
• ORDER_ITEMS stores the detail line items associated with each order.

Demonstration Schema Creation

If the database you are using was created specifically for studying for the OCP SQL
examination, the demonstration schemas should have been created already. They are an
option presented by the Database Configuration Assistant when it creates a database.
If the schemas were not created at database creation time, they can be created by
running scripts installed into the Oracle Home of the database. These scripts will need
to be run from SQL*Plus or SQL Developer as a user with SYSDBA privileges. The script
will prompt for certain values as it runs. For example, on Linux, first launch SQL*Plus
from an operating system prompt:

sqlplus / as sysdba

There are various options for this connection, but the preceding syntax will usually
work if the database is running on the same machine where you are running SQL*Plus.
Then invoke the script from the SQL> prompt:

SQL> @?/demo/schema/human_resources/hr_main.sql

The "?" character is a variable that SQL*Plus will expand into the path to the
Oracle Home directory. The script will prompt for HR's password, default tablespace,
and temporary tablespace; the SYS password; and a destination for the logfile of the
script's running. Typical values for the default tablespace and temporary tablespace are
USERS and TEMP, but these will have to have been created already. After completion,
you will be connected to the database as the new HR user. To verify this, run this
statement:

SQL> show user;

You will see that you are currently connected as HR; then run

SQL> select table_name from user_tables;

You will see a list of the seven tables in the HR schema.
To create the WEBSTORE schema (if it does not already exist), run the following
statements to create the necessary objects and insert a dataset that will be used in later
exercises and examples:

sqlplus / as sysdba
create user webstore identified by admin123
default tablespace users temporary tablespace temp quota unlimited on users;
grant create session, create table, create sequence to webstore;
connect webstore/admin123

create table customers(
  customer_id     number(8) not null constraint pk_customer_id primary key,
  join_date       date default sysdate not null,
  customer_status varchar2(8) not null,
  customer_name   varchar2(20) not null,
  creditrating    varchar2(10) not null,
  email           varchar2(50) not null);
create table products(
  product_id          number(8) not null constraint pk_product_id primary key,
  product_description varchar2(20) not null,
  product_status      varchar2(8) not null,
  price               number(10,2) not null,
  price_date          date not null,
  stock_count         number(8) not null);
create table orders(
  order_id     number(8) not null constraint pk_order_id primary key,
  order_date   date not null,
  order_status varchar2(8) not null,
  order_amount number(10,2) not null,
  customer_id  number(8) constraint fk_customer_id references customers(customer_id));
create table order_items(
  order_item_id number(8) not null,
  order_id      number(8) constraint fk_order_id references orders(order_id),
  product_id    number(8) constraint fk_prod_id references products(product_id),
  quantity      number);
create sequence cust_seq;
create sequence order_seq;
create sequence prod_seq;

Once these schema objects are created, use the following INSERT statements that
make use of substitution variables to populate (or seed) the tables with several rows
of data based on the sample data in Table 9-2.
insert into customers
(customer_id, customer_status, customer_name, creditrating, email) values
(cust_seq.nextval, '&cust_status', '&cust_name', '&creditrating', '&email');
insert into products(product_id, product_description,
product_status, price, price_date, stock_count)
values (prod_seq.nextval, '&product_description',
'&product_status', &price, sysdate, &stock_count);
insert into orders(order_id, order_date, order_status,
order_amount, customer_id)
values (order_seq.nextval, sysdate, '&order_status',
&order_amount, &customer_id);
insert into order_items values (&item_id, &order_id, &product_id, &quantity);
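When each INSERT is executed, SQL*Plus pauses and prompts for every substitution variable it encounters. A sample interaction for the first customer row might look like this (a sketch assuming SET VERIFY OFF is in effect, so the old/new substitution echo is suppressed; the values entered are taken from the first row of Table 9-2):

```sql
SQL> insert into customers
  2  (customer_id, customer_status, customer_name, creditrating, email) values
  3  (cust_seq.nextval, '&cust_status', '&cust_name', '&creditrating', '&email');
Enter value for cust_status: NEW
Enter value for cust_name: Ameetha
Enter value for creditrating: Platinum
Enter value for email: ameetha@largecorp.com

1 row created.
```

Repeat each INSERT once per row of sample data, supplying the values from the corresponding table, and remember to COMMIT when the seeding is complete.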

Chapter 9: Retrieving, Restricting, and Sorting Data Using SQL

381
Table: Customers
Customer Status   Customer Name   Credit Rating   Email
NEW               Ameetha         Platinum        ameetha@largecorp.com
OLD               Sid             Gold            sid@mediumcorp.com
OLD               Coda            Bronze          coda@largecorp.com

Table: Products
Product Description    Product Status   Price   Stock Count
11G SQL Exam Guide     ACTIVE           60      20
11G All-in-One Guide   ACTIVE           100     40

Table: Orders
Order Status   Order Amount   Customer Id
COMPLETE       680            2
PENDING        400            3

Table: Order Items
Order Item Id   Order Id   Product Id   Quantity
1               1          2            5
2               1          1            3
1               2          2            4

Table 9-2 Sample Data for the WEBSTORE Schema

Execute a Basic SELECT Statement
The practical capabilities of the SELECT statement are realized in its execution. The
key to executing any query language statement is a thorough understanding of its
syntax and the rules governing its usage. You will learn more about this topic first, then
about the execution of a basic query, and finally about expressions and operators,
which exponentially increase the utility of data stored in relational tables. Next, the
concept of a null value is demystified, as its pitfalls are exposed. These topics are
covered in the following four sections:
• Syntax of the primitive SELECT statement
• Rules are meant to be followed
• SQL expressions and operators
• NULL is nothing

PART II

Syntax of the Primitive SELECT Statement
In its most primitive form, the SELECT statement supports the projection of columns and
the creation of arithmetic, character, and date expressions. It also facilitates the elimination
of duplicate values from the results set. The basic SELECT statement syntax is as follows:
SELECT *|{[DISTINCT] column|expression [alias],...}
FROM table;

The special keywords or reserved words of the SELECT statement syntax appear in
uppercase. When using the commands, however, the case of the reserved words in your
query statement does not matter. Reserved words cannot be used as column names or
other database object names. SELECT, DISTINCT, and FROM are three keywords. A
SELECT statement always contains two or more clauses. The two mandatory clauses are
the SELECT clause and the FROM clause. The pipe symbol (|) is used to denote OR. So
you can read the first form of the preceding SELECT statement as
SELECT *
FROM table;

In this format, the asterisk symbol (*) is used to denote all columns. SELECT *
is a succinct way of asking Oracle to return all possible columns. It is used as a
shorthand, time-saving symbol instead of typing in SELECT column1, column2,
column3, column4,…,columnX, to select all the columns. The FROM clause specifies
which table to query to fetch the columns requested in the SELECT clause.
You can issue the following SQL command to retrieve all the columns and all the
rows from the REGIONS table in the HR schema:
select * from regions;

When this command is executed, it returns all the rows of data and all the columns
belonging to this table. Use of the asterisk in a SELECT statement is sometimes referred
to as a “blind” query because the exact columns to be fetched are not specified.
The second form of the basic SELECT statement has the same FROM clause as the
first form, but the SELECT clause is different:
SELECT {[DISTINCT] column|expression [alias],…} FROM table;

This SELECT clause can be simplified into two formats:
SELECT column1 (possibly other columns or expressions) [alias optional]
OR
SELECT DISTINCT column1 (possibly other columns or expressions) [alias optional]

An alias is an alternative name for referencing a column or expression. Aliases
are typically used for displaying output in a user-friendly manner. They also serve as
shorthand when referring to columns or expressions to reduce typing. Aliases will be
discussed in detail later in this chapter. By explicitly listing only the relevant columns in the
SELECT clause you, in effect, project the exact subset of the results you wish to retrieve. The
following statement will return just the REGION_NAME column of the REGIONS table:
select region_name from regions;

You may be asked to obtain all the job roles in the organization that employees have
historically fulfilled. For this you can issue the command: SELECT * FROM JOB_
HISTORY. However, in addition, the SELECT * construct returns the EMPLOYEE_ID,
START_DATE, and END_DATE columns. The uncluttered results set containing only
the JOB_ID and DEPARTMENT_ID columns can be obtained with the following
statement:

select job_id, department_id from job_history;
Using the DISTINCT keyword allows duplicate rows to be eliminated from the
results set. In numerous situations a unique set of rows is required. It is important to
note that the criterion employed by the Oracle server in determining whether a row is
unique or distinct depends entirely on what is specified after the DISTINCT keyword
in the SELECT clause. Selecting distinct JOB_ID values from the JOB_HISTORY table
with the following query will return the eight distinct job types.
select distinct job_id from job_history;

An important feature of the DISTINCT keyword is the elimination of duplicate
values from combinations of columns.
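To illustrate (a sketch using the standard HR schema), the following query treats each (JOB_ID, DEPARTMENT_ID) pair as the unit of distinctness. A JOB_ID value may therefore appear more than once in the results, provided each occurrence is paired with a different DEPARTMENT_ID:

```sql
-- Each distinct combination of the two columns appears exactly once
select distinct job_id, department_id
from job_history;
```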

Rules Are Meant to Be Followed
SQL is a fairly strict language in terms of syntax rules, but it remains simple and
flexible enough to support a variety of programming styles. This section discusses
some of the basic rules governing SQL statements.

Uppercase or Lowercase
The case in which SQL statements are submitted to the database is a matter of
personal taste. Many developers, including the authors of this book, prefer to write
their SQL statements in lowercase. There is also a common misconception that SQL
reserved words need to be specified in uppercase. Again, this is up to you. Adhering
to a consistent and standardized format is advised.
There is one caveat regarding case sensitivity. When interacting with literal values,
case does matter. Consider the JOB_ID column from the JOB_HISTORY table. This
column contains rows of data that happen to be stored in the database in uppercase;
for example, SA_REP and ST_CLERK. When restricting the results set by comparing
a column with a character literal, the case is critical. The Oracle server treats the
request for the rows in the JOB_HISTORY table that contain a value of St_Clerk in
the JOB_ID column differently from the request for rows that have a value of
ST_CLERK in the JOB_ID column.
Metadata about database objects is stored by default in uppercase in the data
dictionary. If you query a data dictionary view to return a list of tables owned by
the HR schema, the table names returned will most likely be in uppercase. This does
not mean that a table cannot be created with a lowercase name; it can be. It is just
more common, and the default behavior of the Oracle server, to create and store
table, column, and other database object metadata in uppercase in the data
dictionary.


EXAM TIP SQL statements may be submitted to the database in any case.
You must pay careful attention to case when interacting with character literal
data and aliases. Requesting a column called JOB_ID or job_id returns the
same column, but asking for rows where the JOB_ID value is PRESIDENT
is different from asking for rows where the value is President.

Statement Terminators
Semicolons are generally used as SQL statement terminators. SQL*Plus always
requires a statement terminator, and usually a semicolon is used. A single SQL
statement or even groups of associated statements are often saved as script files for
future use. Individual statements in SQL scripts are commonly terminated by a line
break (or carriage return) and a forward slash on the next line, instead of a semicolon.
You can create a SELECT statement, terminate it with a line break, include a forward
slash to execute the statement, and save it in a script file. The script file can then be
called from within SQL*Plus. Note that SQL Developer does not require a statement
terminator if only a single statement is present, but it will not object if one is used. It
is good practice to always terminate your SQL statements with a semicolon. Several
examples of SQL*Plus statements follow:
select country_name, country_id, location_id from countries;
select city, location_id,
state_province, country_id
from locations
/

The first example demonstrates two important rules. First, the statement is terminated
by a semicolon. Second, the entire statement is written on one line. It is entirely
acceptable for a SQL statement either to be written on one line or to span multiple
lines as long as no words in the statement span multiple lines. The second sample of
code demonstrates a statement that spans three lines that is terminated by a new line
and executed with a forward slash.

Indentation, Readability, and Good Practice
Consider the following query:
select city, location_id,
state_province, country_id
from locations
/

This example highlights the benefits of indenting your SQL statement to enhance the
readability of your code. The Oracle server does not object if the entire statement is
written on one line without indentation. It is good practice to separate different
clauses of the SELECT statement onto different lines. When an expression in a clause
is particularly complex, it often enhances readability to separate that term of the
statement onto a new line. When developing SQL to meet your reporting needs, the
process is often iterative. The SQL interpreter is far more useful during development if
complex expressions are isolated on separate lines, since errors are usually thrown in
the format of: “ERROR at line X:” This makes the debugging process much simpler.

Exercise 9-1: Answer Your First Questions with SQL In this step-by-step
exercise, you make a connection using SQL*Plus as the HR user to answer two
questions using the SELECT statement.
Question 1: How many unique departments have employees currently working in
them?
1. Start SQL*Plus and connect to the HR schema.
2. You may initially be tempted to find the answer in the DEPARTMENTS table.
A careful examination reveals that the question asks for information about
employees. This information is contained in the EMPLOYEES table.
3. The word “unique” should guide you to use the DISTINCT keyword.
4. Combining Steps 2 and 3, you can construct the following SQL statement:
select distinct department_id
from employees;

5. As shown in the following illustration, this query returns 12 rows. Notice that
the third row is empty. This is a null value in the DEPARTMENT_ID column.

6. The answer to the first question is therefore: Eleven unique departments have
employees working in them, but at least one employee has not been assigned
to a department.
Question 2: How many countries are there in the Europe region?
1. This question comprises two parts. Consider the REGIONS table, which
contains four regions each uniquely identified by a REGION_ID value, and
the COUNTRIES table, which has a REGION_ID column indicating which
region a country belongs to.

2. The first query needs to identify the REGION_ID of the Europe region. This is
accomplished by the following SQL statement, which shows that the Europe
region has a REGION_ID value of 1.
select * from regions;

3. To identify which countries have 1 as their REGION_ID, you can execute the
SQL query
select region_id, country_name from countries;

4. Manually counting the returned country rows with a REGION_ID of 1 shows
that there are eight countries in the Europe region, as far as the HR data model
is concerned.

SQL Expressions and Operators
The general form of the SELECT statement introduced the notion that columns and
expressions are selectable. An expression usually consists of an operation being
performed on one or more column values or expressions. The operators that can act
upon values to form an expression depend on the underlying data type. They are the
four cardinal arithmetic operators (addition, subtraction, multiplication, and division)
for numeric columns; the concatenation operator for character or string columns; and
the addition and subtraction operators for date and timestamp columns. As in regular
arithmetic, there is a predefined order of evaluation (operator precedence) when more
than one operator occurs in an expression. Round brackets have the highest precedence.
Division and multiplication operations are next in the hierarchy and are evaluated
before addition and subtraction, which have lowest precedence.
Operators with the same level of precedence are evaluated from left to right.
Round brackets may therefore be used to enforce nondefault operator precedence.
Using brackets generously when constructing complex expressions is good practice
and is encouraged. It leads to readable code that is less prone to error. Expressions
expose a large number of useful data manipulation possibilities.
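You can observe this precedence directly by evaluating numeric literal expressions (the DUAL table used here is described later in this chapter):

```sql
select 2+3*4   as no_brackets,   -- multiplication evaluated first: 2+12 = 14
       (2+3)*4 as with_brackets  -- brackets evaluated first: 5*4 = 20
from dual;
```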

Arithmetic Operators
Consider the JOB_HISTORY table, which stores the start date and end date of an
employee’s term in a previous job role. It may be useful for tax or pension purposes to
calculate how long an employee worked in that role. This information can be obtained
using an arithmetic expression. Several elements of both the SQL statement and the
results returned from Figure 9-5 warrant further discussion.
The SELECT clause specifies five elements. The first four are regular columns of
the JOB_HISTORY table, while the latter provides the source information required to
calculate the number of days that an employee filled a particular position. Consider
employee number 176 on the ninth row of output. This employee started as a Sales
Manager on January 1, 1999, and ended employment on December 31, 1999.
Therefore, this employee worked for exactly one year, which, in 1999, consisted of
365 days.


Figure 9-5 Arithmetic expression to calculate number of days worked

The number of days for which an employee was employed can be calculated by
using the fifth element in the SELECT clause, which is an expression. This expression
demonstrates that arithmetic performed on columns containing date information
returns numeric values that represent a certain number of days.
To enforce operator precedence of the subtraction operation, the subexpression
end_date-start_date is enclosed in round brackets. Adding 1 makes the result inclusive
of the final day.
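The statement behind Figure 9-5 takes roughly the following form (a sketch; the figure may differ slightly in column order or formatting):

```sql
-- The fifth element is a date-arithmetic expression yielding a number of days
select employee_id, job_id, start_date, end_date,
       (end_date-start_date)+1 "Days Employed"
from job_history;
```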
TIP As you practice SQL on your test database environment, you may
encounter two infamous Oracle errors: “ORA-00923: FROM keyword not
found where expected” and “ORA-00942: table or view does not exist.”
These usually indicate spelling or punctuation errors, such as missing
enclosing quotes around character literals.

Expression and Column Aliasing
Figure 9-5 introduced a new concept called column aliasing. Notice that the expression
has a meaningful heading named Days Employed. This heading is an alias. An alias is
an alternate name for a column or an expression. If this expression did not make use
of an alias, the column heading would be (END_DATE-START_DATE)+1, which is not
very user friendly. Aliases are especially useful with expressions or calculations and
may be implemented in several ways. There are a few rules governing the use of column
aliases in SELECT statements. The alias “Days Employed” in Figure 9-5 was specified
by leaving a space and entering the alias in double quotation marks. These quotation
marks are necessary for two reasons. First, this alias is made up of more than one word.
Second, case preservation of an alias is only possible if the alias is double quoted. If a
multiworded space-separated alias is specified, an “ORA-00923: FROM keyword not
found where expected” error is returned if it is not double quoted. SQL offers a more
formal way of specifying aliases, by inserting the AS keyword between the column
or expression and the alias, as shown in the first line of this query:
SELECT EMPLOYEE_ID AS "Employee ID",
JOB_ID AS "Occupation",
START_DATE, END_DATE,
(END_DATE-START_DATE)+1 "Days Employed"
FROM JOB_HISTORY;

Character and String Concatenation Operator
The double pipe symbols || represent the character concatenation operator. This
operator is used to join character expressions or columns together to create a larger
character expression. Columns of a table may be linked to each other or to strings of
literal characters to create one resultant character expression.
The concatenation operator is flexible enough to be used multiple times and
almost anywhere in a character expression. Consider the following query:
SELECT 'The '||REGION_NAME||' region is on Planet Earth' "Planetary Location"
FROM REGIONS;

Here, the character literal “The” is concatenated to the contents of the REGION_NAME
column. This new string of characters is further concatenated to the character literal
“region is on Planet Earth”, and the entire expression is aliased with the friendly
heading “Planetary Location”.

Literals and the DUAL Table
Literals are commonly used in expressions and refer to numeric, character, or date and
time values found in SELECT clauses that do not originate from any database object.
Concatenating character literals to existing column data can be useful, but what about
processing literals that have nothing to do with existing column data? To ensure
relational consistency, Oracle offers a clever solution to the problem of using the database
to evaluate expressions that have nothing to do with any tables or columns. To get the
database to evaluate an expression, a syntactically legal SELECT statement must be
submitted. What if you wanted to know the sum of two numeric literals? Oracle solves
the problem of relational interaction with the database operating on literal expressions
by providing a special single-rowed, single-columned table called DUAL.

Recall the DUAL table described in Figure 9-1. It contains one column called
DUMMY of the character data type. You can execute the query SELECT * FROM
DUAL, and the data value “X” is returned as the contents of the DUMMY column.
Testing complex expressions during development, by querying the dual table, is an
effective method to evaluate whether these expressions are correct. Literal expressions
can be queried from any table, but remember that the expression will be processed
for every row in the table, while querying the DUAL table returns only one row.

select 'literal '||'processing using the REGIONS table'
from regions;
select 'literal '||'processing using the DUAL table'
from dual;

The first statement will return four lines in the results set, since there are four rows of
data in the REGIONS table, while the second returns only one row.

Two Single Quotes or the Alternative Quote Operator
The literal character strings concatenated so far have been singular words prepended
and appended to column expressions. These character literals are specified using
single quotation marks. For example:
select 'I am a character literal string' from dual;

What about character literals that contain single quotation marks? Plurals pose a
particular problem for character literal processing. Consider the following statement:
select 'Plural's have one quote too many' from dual;

Executing this statement causes an ORA-00923 Oracle error to be generated. So,
how are words that contain single quotation marks dealt with? There are essentially
two mechanisms available. The most popular of these is to add an additional single
quotation mark next to each naturally occurring single quotation mark in the character
string. The following statement demonstrates how the previous error is avoided by
replacing the character literal 'Plural's with the literal 'Plural''s.
select 'Plural''s have one quote too many' from dual;

Using two single quotes to handle each naturally occurring single quote in a
character literal can become messy and error prone as the number of affected literals
increases. Oracle offers a neat way to deal with this type of character literal in the form
of the alternative quote (q) operator. The problem is that Oracle chose the single
quote character as the special symbol with which to enclose or wrap other character
literals. These character-enclosing symbols could have been anything other than single
quotation marks.
Bearing this in mind, consider the alternative quote (q) operator. The q operator
enables you to choose from a set of possible pairs of wrapping symbols for character
literals as alternatives to the single quote symbols. The options are any single-byte or
multibyte character or the four bracket pairs: (round brackets), {curly braces},
[square brackets], or <angle brackets>. Using the q operator, the character delimiter
can effectively be changed from a single quotation mark to any other character, as
shown here:
SELECT q'<Plural's are no problem with angle bracket delimiters>' "q<>"
FROM DUAL;
SELECT q'[Even square brackets' [] can be used for Plural's]' "q[]"
FROM DUAL;
SELECT q'XWhat about UPPER CASE X for Plural'sX' "qX"
FROM DUAL;

The syntax of the alternative quote operator is as follows:
q'delimiter character literal which may include single quotes delimiter'

where delimiter can be any character or bracket. The first and second examples show
the use of angle and square brackets as character delimiters, while the third example
demonstrates how an uppercase “X” has been used as the special character delimiter
symbol through the alternative quote operator. Note that the “X” character can itself
be included in the string—so long as it is not followed by a quotation mark.

NULL Is Nothing
Null refers to an absence of data. A column that holds a null value in a given row
contains no data for that row. Null is formally defined as a value that is unavailable,
unassigned, unknown,
or inapplicable. Failure to heed the special treatment that null values require will
almost certainly lead to an error, or worse, an inaccurate answer. This section focuses
on interacting with null column data with the SELECT statement and its impact on
expressions.

Not Null and Nullable Columns
Tables store rows of data that are divided into one or more columns. These columns
have names and data types associated with them. Some of them are constrained by
database rules to be mandatory columns. It is compulsory for some data to be stored
in the NOT NULL columns in each row. When columns of a table, however, are not
compelled by the database constraints to hold data for a row, these columns run the
risk of being empty.
TIP Any arithmetic calculation with a NULL value always returns NULL.

Oracle offers a mechanism for interacting arithmetically with NULL values using
the general functions discussed in Chapter 10. Division by a null value results in null,
unlike division by zero, which results in an error. When a null is encountered by the
character concatenation operator, however, it is simply ignored. The character
concatenation operators ignore null, while the arithmetic operations involving null
values always result in null.
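Both rules can be confirmed with a quick query against DUAL:

```sql
select 100 + null     as arith_null,  -- arithmetic with null yields null
       'A'||null||'B' as concat_null  -- concatenation ignores null: 'AB'
from dual;
```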

Foreign Keys and Nullable Columns
Data model design sometimes leads to problematic situations when tables are related
to each other via a primary and foreign key relationship, but the column that the
foreign key is based on is nullable.
The DEPARTMENTS table has, as its primary key, the DEPARTMENT_ID column.
The EMPLOYEES table has a DEPARTMENT_ID column that is constrained by its
foreign key relationship to the DEPARTMENT_ID column in the DEPARTMENTS
table. This means that no record in the EMPLOYEES table is allowed to have in its
DEPARTMENT_ID column a value that is not in the DEPARTMENTS table. This
referential integrity is fundamental to the relational model and critical to overall
database integrity.
But what about NULL values? Can the DEPARTMENT_ID column in the
DEPARTMENTS table contain nulls? The answer is no. Oracle insists that any column
that is a primary key is implicitly constrained to be mandatory. But what about
implicit constraints on foreign key columns? This is a quandary for Oracle, since in
order to remain flexible and cater to the widest audience, it cannot insist that columns
related through referential integrity constraints must be mandatory. Further, not all
situations demand this functionality.
The DEPARTMENT_ID column in the EMPLOYEES table is actually nullable.
Therefore, the risk exists that there are records with null DEPARTMENT_ID values
present in this table. In fact, there are such records in the EMPLOYEES table. The HR
data model allows employees, correctly or not, to belong to no department. When
performing relational joins between tables, it is entirely possible to miss or exclude
certain records that contain nulls in the join column. Chapter 12 discusses ways to
deal with this challenge.

Exercise 9-2: Construct Expressions In this exercise you will construct two
queries to display results with an appropriate layout, one from the WEBSTORE schema
and the other from the HR schema.
1. Query the WEBSTORE.CUSTOMERS table to retrieve a list of the format: X
has been a member for Y days, where X is the CUSTOMER_NAME and Y is
the number of days between today and the day the customer joined. Alias the
expression: Customer Loyalty.
2. Add a character string expression that concatenates string literals around the
CUSTOMER_NAME value and the date expression. A possible solution is
select customer_name||' has been a member for: '||
(sysdate-join_date)||' days.' "Customer Loyalty"
from customers;

3. Query the HR.JOBS table and return a single expression of the form The Job
Id for the <job_title> job is: <job_id>. Take note that the job_title should
have an apostrophe and an “s” appended to it to read more naturally. A
sample of this output for the organization president is: “The Job Id for the
President’s job is: AD_PRES”. Alias this column expression: Job Description
using the AS keyword. There are multiple solutions to this problem. The
approach chosen here is to handle the naturally occurring single quotation
mark with an additional single quote. You could also make use of the
alternative quote operator to delimit the naturally occurring quote with
another character.
4. A single expression aliased as Job Description is required; you may construct
it by concatenating the literal “The Job Id for the” to the JOB_TITLE column.
This string is then concatenated to the literal “’s job is: ”, which is further
concatenated to the JOB_ID column. An additional single quotation mark is
added to yield the SELECT statement that follows:
select 'The Job Id for the '||job_title||'''s job is: '||job_id
AS "Job Description" from jobs;
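The alternative quote operator mentioned earlier offers another way to write this solution. A possible variant using square bracket delimiters (same alias assumed) is:

```sql
-- q'[...]' lets the naturally occurring single quote stand unescaped
select q'[The Job Id for the ]'||job_title||q'['s job is: ]'||job_id
AS "Job Description" from jobs;
```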

Limit the Rows Retrieved by a Query
One of the cornerstone principles in relational theory is selection. Selection is actualized
using the WHERE clause of the SELECT statement, sometimes referred to as the predicate.
Conditions that restrict the dataset returned take many forms and operate on columns as
well as expressions. Only rows that conform to these conditions are returned. Conditions
restrict rows using comparison operators in conjunction with columns and literal values.
Boolean operators provide a mechanism to specify multiple conditions to restrict the
rows returned. Boolean, conditional, concatenation, and arithmetic operators are
discussed to establish their order of precedence when they are encountered in a SELECT
statement.

The WHERE Clause
The WHERE clause extends the SELECT statement by providing the ability to restrict
rows returned based on one or more conditions. Querying a table with just the SELECT
and FROM clauses results in every row of data stored in the table being returned. Using
the DISTINCT keyword, duplicate values are excluded, and the resultant rows are
restricted to some degree. What if very specific information is required from a table,
for example, only the data where a column contains a specific value? How would you
retrieve the countries that belong to the Europe region from the COUNTRIES table?
What about retrieving just those employees who work as sales representatives? These
questions are answered using the WHERE clause to specify exactly which rows must be
returned. The format of the SQL SELECT statement that includes the WHERE clause is
SELECT *|{[DISTINCT] column|expression [alias],...}
FROM table
[WHERE condition(s)];

The WHERE clause always follows the FROM clause. The square brackets indicate
that the WHERE clause is optional. One or more conditions may be simultaneously
applied to restrict the result set. A condition is specified by comparing two terms using
a conditional operator. These terms may be column values, literals, or expressions. The
equality operator is most commonly used to restrict result sets. An example of using a
WHERE clause is shown next:
select country_name
from countries
where region_id=3;

This example projects the COUNTRY_NAME column from the COUNTRIES table.
Instead of selecting every row, the WHERE clause restricts the rows returned to only
those containing a 3 in the REGION_ID column.

Numeric-Based Conditions
Conditions must be formulated appropriately for different column data types. The
conditions restricting rows based on numeric columns can be specified in several
different ways. Consider the SALARY column in the EMPLOYEES table. This column
has a data type of NUMBER(8,2). The SALARY column can be restricted as follows:
select last_name, salary from employees where salary = 10000;

The LAST_NAME and SALARY values of the employees who earn $10,000 are retrieved,
since the data types on either side of the operator match and are compatible.
A numeric column can be compared to another numeric column in the same row
to construct a WHERE clause condition, as the following query demonstrates:
select last_name, salary from employees
where salary = department_id;

This WHERE clause is too restrictive and results in no rows being selected because the
range of SALARY values is 2100 to 999999.99, and the range of DEPARTMENT_ID
values is 10 to 110. Since there is no overlap in the range of DEPARTMENT_ID and
SALARY values, there are no rows that satisfy this condition and therefore nothing is
returned.
WHERE clause conditions may also be used to compare numeric columns and
expressions or to compare expressions to other expressions:
select last_name, salary from employees
where salary = department_id*100;
select last_name, salary from employees
where salary/10 = department_id*10;

The first example compares the SALARY column with DEPARTMENT_ID*100 for
each row. The second example compares two expressions. Notice that the conditions
in both examples are algebraically identical, and the same dataset is retrieved when
both are executed.

Character-Based Conditions
Conditions determining which rows are selected based on character data are specified
by enclosing character literals in the conditional clause, within single quotes. The JOB_ID
column in the EMPLOYEES table has a data type of VARCHAR2(10). Suppose you
wanted a list of the LAST_NAME values of those employees currently employed as
sales representatives. The JOB_ID value for a sales representative is SA_REP. The
following statement produces such a list:
select last_name from employees where job_id='SA_REP';

If you tried specifying the character literal without the quotes, an Oracle error would
be raised. Remember that character literal data is case sensitive, so the following WHERE
clauses are not equivalent.
Clause 1: where job_id=SA_REP
Clause 2: where job_id='Sa_Rep'
Clause 3: where job_id='sa_rep'
Clause 1 generates an “ORA-00904: ‘SA_REP’: invalid identifier” error, since the literal
SA_REP is not wrapped in single quotes. Clause 2 and Clause 3 are syntactically
correct but not equivalent. Further, neither of these clauses yields any data, since there
are no rows in the EMPLOYEES table having JOB_ID column values that are either
Sa_Rep or sa_rep.
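The case sensitivity of character equality tests can be observed outside Oracle as well. The sketch below uses Python's built-in sqlite3 module purely as a stand-in database (the table and rows are hypothetical, not the real HR sample schema); like Oracle, sqlite compares TEXT values with = case-sensitively:

```python
import sqlite3

# Minimal stand-in for the HR EMPLOYEES table (hypothetical data;
# sqlite3 is used here only to illustrate the logic).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT, job_id TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("King", "SA_REP"), ("Abel", "SA_REP"), ("Kochhar", "AD_VP")])

def names_with_job(job_id):
    # The = operator performs an exact, case-sensitive character match.
    rows = conn.execute(
        "SELECT last_name FROM employees WHERE job_id = ?", (job_id,))
    return [r[0] for r in rows]

print(sorted(names_with_job("SA_REP")))  # ['Abel', 'King']
print(names_with_job("sa_rep"))          # [] -- lowercase literal matches nothing
```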
Character-based conditions are not limited to comparing column values with
literals. They may also be specified using other character columns and expressions.
Character-based expressions may form either one or both parts of a condition
separated by a conditional operator. These expressions can be formed by
concatenating literal values with one or more character columns. The following four
clauses demonstrate some of the options for character-based conditions:
Clause 1: where 'A '||last_name||first_name = 'A King'
Clause 2: where first_name||' '||last_name = last_name||' '||first_name
Clause 3: where 'SA_REP'||'King' = job_id||last_name
Clause 4: where job_id||last_name ='SA_REP'||'King'
Clause 1 concatenates the string literal “A” to the LAST_NAME and FIRST_NAME
columns. This expression is compared to the literal “A King”. Clause 2 demonstrates
that character expressions may be placed on both sides of the conditional operator.
Clause 3 illustrates that literal expressions may also be placed on the left of the
conditional operator. It is logically equivalent to clause 4, which has swapped the
operands in clause 3 around. Both clauses 3 and 4 identically restrict the results.

Date-Based Conditions
DATE columns are useful for storing date and time information. Date literals must be
enclosed in single quotation marks just like character data. When used in conditional
WHERE clauses, date columns may be compared to other date columns, literals, or
expressions. The literals are automatically converted into DATE values based on the
default date format, which is DD-MON-RR. If a literal occurs in an expression
involving a DATE column, it is automatically converted into a date value using the
default format mask. DD represents days, MON represents the first three letters of a
month, and RR represents a Year 2000–compliant year (that is, if RR is between 50

Chapter 9: Retrieving, Restricting, and Sorting Data Using SQL

and 99, then the Oracle server returns the previous century, or else it returns the
current century). The full four-digit year, YYYY, can also be specified. Consider the
following four WHERE clauses:
Clause 1: where start_date = end_date;
Clause 2: where start_date = '01-JAN-2001';
Clause 3: where start_date = '01-JAN-01';
Clause 4: where start_date = '01-JAN-99';

The first clause tests equality between two DATE columns. Rows that contain the
same values in their START_DATE and END_DATE columns will be returned. Note,
however, that DATE values are only equal to each other if there is an exact match
between all their components, including day, month, year, hours, minutes, and
seconds. Chapter 10 discusses the details of storing DATE values. Until then, don’t
worry about the hours, minutes, and seconds components. In the second WHERE
clause, the START_DATE column is compared to the character literal: ‘01-JAN-2001’.
The entire four-digit year component (YYYY) has been specified. This is acceptable
to the Oracle server. The third condition is equivalent to the second, since the literal
‘01-JAN-01’ is converted to the date value 01-JAN-2001. This is due to the RR component
being less than 50, so the current (twenty-first) century, 20, is prefixed to the year
RR component to provide a century value. The century component for the literal
‘01-JAN-99’ becomes the previous century (19) and is converted to a date value of
01-JAN-1999 for the fourth condition, since the RR component, 99, is greater than 50.
Date arithmetic using the addition and subtraction operators is supported. An
expression like END_DATE – START_DATE returns the number of days between
START_DATE and END_DATE. START_DATE + 30 returns a date 30 days later than
START_DATE.
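The RR windowing rule described above can be sketched as a small function. This is a simplified model of the behavior described here (an assumption, not Oracle's full algorithm, which also inspects the two-digit part of the current year); it holds while the current year falls in the 2000-2049 range:

```python
def rr_to_year(rr):
    """Expand a two-digit RR year using the simplified rule in the text:
    50-99 maps to the previous century (19xx), 00-49 to the current
    century (20xx). Simplified sketch; Oracle's full RR logic also
    considers the current year's two-digit part."""
    return 1900 + rr if rr >= 50 else 2000 + rr

print(rr_to_year(1))    # '01-JAN-01' -> 2001
print(rr_to_year(99))   # '01-JAN-99' -> 1999
print(rr_to_year(50))   # boundary case -> 1950
```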
EXAM TIP Conditional clauses compare two terms using comparison
operators. Knowing the data types of the terms is important so that they
can be enclosed in single quotes, if necessary.

Comparison Operators
The equality operator is generally used to illustrate the concept of restricting rows
using a WHERE clause. There are several alternative operators that may also be used.
The inequality operators like “less than” or “greater than or equal to” may be used to
return rows conforming to inequality conditions. The BETWEEN operator facilitates
range-based comparison to test whether a column value lies between two values. The
IN operator tests set membership, so a row is returned if the column value tested in
the condition is a member of a set of literals. The pattern matching comparison
operator LIKE is extremely powerful, allowing components of character column
data to be matched to literals conforming to a specific pattern. The last comparison
operator discussed in this section is the IS NULL operator, which returns rows where
the column value contains a null value. These operators may be used in any combination
in the WHERE clause.

Equality and Inequality
Limiting the rows returned by a query involves specifying a suitable WHERE clause. If
the clause is too restrictive, then few or no rows are returned. If the conditional clause
is too broadly specified, then more rows than are required are returned. Exploring the
different available operators should equip you with the language to request exactly
those rows you are interested in. Testing for equality in a condition is both natural and
intuitive. Such a condition is formed using the “is equal to” (=) operator. A row is
returned if the equality condition is true for that row. Consider the following query:
select last_name, salary from employees where job_id='SA_REP';

The JOB_ID column of every row in the EMPLOYEES table is tested for equality
with the character literal SA_REP. For character information to be equal, there must be
an exact case-sensitive match. When such a match is encountered, the values for the
projected columns, LAST_NAME and SALARY, are returned for that row. Note that
although the conditional clause is based on the JOB_ID column, it is not necessary
for this column to be projected by the query.
Inequality-based conditions enhance the WHERE clause specification. Range and
pattern matching comparisons are possible using inequality and equality operators,
but it is often preferable to use the BETWEEN and LIKE operators for these comparisons.
The inequality operators are described in Table 9-3.
Inequality operators allow range-based queries to be fulfilled. You may be required
to provide a set of results where a column value is greater than another value. The
following query may be issued to obtain a list of LAST_NAME and SALARY values
for employees who earn more than $5000:
select last_name, salary from employees where salary > 5000;

The composite inequality operators (made up of more than one symbol) are utilized
in the following clauses:
Clause 1: where salary <= 3000;
Clause 2: where salary <> department_id;
Clause 1 returns those rows that contain a SALARY value that is less than or equal
to 3000. Clause 2 demonstrates one of the two forms of the “not equal to” operators.
Clause 2 returns the rows that have SALARY column values that are not equal to the
DEPARTMENT_ID values.
Table 9-3  Inequality Operators

Operator    Description
<           Less than
>           Greater than
<=          Less than or equal to
>=          Greater than or equal to
<>          Not equal to
!=          Not equal to

Numeric inequality is naturally intuitive. The comparison of character and date
terms, however, is more complex. Testing character inequality is interesting because
the strings being compared on either side of the inequality operator are converted to
a numeric representation of its characters. Based on the database character set and
NLS (National Language Support) settings, each character string is assigned a numeric
value. These numeric values form the basis for the evaluation of the inequality
comparison. Consider the following statement:
select last_name from employees where last_name < 'King';

The character literal ‘King’ is compared with each LAST_NAME value character by
character, from left to right. Assuming a US7ASCII database character set with
AMERICAN NLS settings, each character is assigned a numeric code: K=75, i=105,
n=110, g=103. The first position at which the two strings differ decides the
comparison, so a row is selected if its LAST_NAME value sorts before ‘King’
according to these codes. In this way, the familiar process of comparing numeric
data with the inequality operators extends naturally to character data.
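Python's ord() function returns the same US7ASCII codes quoted above, and Python compares strings position by position on those codes, the same way a binary collation does, so the behavior is easy to observe directly:

```python
# Character comparison works position by position on each character's
# numeric code, not on a sum of codes; ord() gives the US7ASCII values.
king_codes = [ord(c) for c in "King"]
print(king_codes)  # [75, 105, 110, 103]

# 'Abel' sorts before 'King': 'A' (65) < 'K' (75) at the first position,
# so the remaining characters are never consulted.
abel_before_king = "Abel" < "King"
# 'Kochhar' sorts after 'King': the first characters tie, but
# 'o' (111) > 'i' (105) at the second position.
kochhar_before_king = "Kochhar" < "King"
print(abel_before_king, kochhar_before_king)  # True False
```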
Inequality comparisons operating on date values follow a similar process to
character data. The Oracle server stores dates in an internal numeric format, and
these values are compared within the conditions. Consider the following query:
select last_name from employees where hire_date < '01-JAN-2000';

This query retrieves each employee record containing a HIRE_DATE value that is
earlier than ‘01-JAN-2000’.

Range Comparison with the BETWEEN Operator
The BETWEEN operator tests whether a column or expression value falls within
a range of two boundary values. For the condition to be true, the item must be
greater than or equal to the lower boundary value and less than or equal to the
higher boundary value; both boundaries are included in the range.
Suppose you want the last names of employees who earn a salary in the range of
$3400 and $4000. A possible solution using the BETWEEN operator is as follows:
select last_name from employees where salary between 3400 and 4000;

Conditions specified with the BETWEEN operator can be equivalently denoted
using two inequality-based conditions:
select last_name from employees where salary >=3400 and salary <=4000;

It is shorter and simpler to specify the range condition using the BETWEEN operator.
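The equivalence of the two formulations, and the inclusiveness of the BETWEEN boundaries, can be checked with a small experiment. Python's sqlite3 module stands in for the database here, and the table contents are invented for the demonstration:

```python
import sqlite3

# Hypothetical salary data; sqlite3 is a stand-in for the Oracle database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT, salary NUMERIC)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Khoo", 3100), ("Baida", 3400), ("Tobias", 3800),
                  ("Himuro", 4000), ("Colmenares", 4100)])

between = conn.execute(
    "SELECT last_name FROM employees "
    "WHERE salary BETWEEN 3400 AND 4000").fetchall()
inequalities = conn.execute(
    "SELECT last_name FROM employees "
    "WHERE salary >= 3400 AND salary <= 4000").fetchall()

# Both boundary rows (3400 and 4000) are included, and the two queries
# return identical results.
print(sorted(r[0] for r in between))  # ['Baida', 'Himuro', 'Tobias']
print(between == inequalities)        # True
```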

Set Comparison with the IN Operator
The IN operator tests whether an item is a member of a set of literal values. The set is
specified by comma-separating the literals and enclosing them in round brackets. If
the literals are character or date values, then these must be delimited using single quotes.
You may include as many literals in the set as you wish. Consider the following example:
select last_name from employees where salary in (1000,4000,6000);

The SALARY value in each row is compared for equality to the literals specified in
the set. If the SALARY value equals 1000, 4000, or 6000, the LAST_NAME value for
that row is returned. The following two statements demonstrate use of the IN operator
with DATE and CHARACTER data.
select last_name from employees
where last_name in ('Watson','Garbharran','Ramklass');
select last_name from employees
where hire_date in ('01-JAN-1998','01-DEC-1999');
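An IN condition behaves like a chain of equality tests joined with OR. The following sketch (sqlite3 as a stand-in database, hypothetical salary data) shows both forms returning identical results:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT, salary NUMERIC)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("King", 24000), ("Khoo", 1000),
                  ("Baida", 4000), ("Taylor", 8600)])

in_list = [r[0] for r in conn.execute(
    "SELECT last_name FROM employees WHERE salary IN (1000, 4000, 6000)")]
# Equivalent formulation as a chain of OR'd equality tests:
or_chain = [r[0] for r in conn.execute(
    "SELECT last_name FROM employees "
    "WHERE salary = 1000 OR salary = 4000 OR salary = 6000")]

print(sorted(in_list))      # ['Baida', 'Khoo']
print(in_list == or_chain)  # True
```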

Pattern Comparison with the LIKE Operator
The LIKE operator is designed exclusively for character data and provides a powerful
mechanism for searching for letters or words. LIKE is accompanied by two wildcard
characters: the percentage symbol (%) and the underscore character (_). The percentage
symbol is used to specify zero or more wildcard characters, while the underscore
character specifies one wildcard character. A wildcard may represent any character.
You can use the following query to provide a list of employees whose first names
begin with the letter “A”:
select first_name from employees where first_name like 'A%';

The character literal that the FIRST_NAME column is compared to is enclosed in
single quotes like a regular character literal. In addition, it has a percentage symbol,
which has a special meaning in the context of the LIKE operator. The percentage
symbol substitutes zero or more characters appended to the letter “A”. The wildcard
characters can appear at the beginning, the middle, or the end of the character literal.
They can even appear alone, as in
where first_name like '%';

In this case, every row containing a FIRST_NAME value that is not null will be
returned. Wildcard symbols are not mandatory when using the LIKE operator. In such
cases, LIKE behaves as an equality operator testing for exact character matches; so the
following two WHERE clauses are equivalent:
where last_name like 'King';
where last_name = 'King';

The underscore wildcard symbol substitutes exactly one other character in a literal.
Consider searching for employees whose last names are four letters long, begin with a
“K,” have an unknown second letter, and end with an “ng.” You may issue the following
statement:
where last_name like 'K_ng';

As Figure 9-6 shows, the two wildcard symbols can be used independently, together,
or even multiple times in a single WHERE condition. The first query retrieves those
records where COUNTRY_NAME begins with the letter “I” followed by one or more
characters, one of which must be a lowercase “a.”


Figure 9-6 The wildcard symbols of the LIKE operator

The second query retrieves those countries whose names contain the letter “i” as
its fifth character. The length of the COUNTRY_NAME values and the letter they begin
with are unimportant. The four underscore wildcard symbols preceding the lowercase
“i” in the WHERE clause represent exactly four characters (which could be any characters).
The fifth letter must be an “i,” and the percentage symbol specifies that the COUNTRY_
NAME can have zero or more characters from the sixth character onward.
What about when you are searching for a literal that contains a percentage or
underscore character? A naturally occurring underscore character may be escaped (or
treated as a regular nonspecial symbol) using the ESCAPE identifier in conjunction
with an ESCAPE character. In the following example, any JOB_ID values that begin
with the three characters “SA_” will be returned:
select job_id from jobs
where job_id like 'SA\_%' escape '\';

Traditionally, the ESCAPE character is the backslash symbol, but it does not have
to be. The following statement is equivalent to the preceding one but uses a dollar
symbol as the ESCAPE character instead.
select job_id from jobs
where job_id like 'SA$_%' escape '$';

The percentage symbol may be similarly escaped when it occurs naturally as
character data.
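The effect of the ESCAPE clause is easy to observe. In this sketch (sqlite3 as a stand-in database, which also supports LIKE ... ESCAPE, with made-up job IDs), the unescaped pattern 'SA_%' matches 'SALES' because the underscore acts as a wildcard, while the escaped pattern matches only values that literally begin with "SA_":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (job_id TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?)",
                 [("SA_REP",), ("SA_MAN",), ("SALES",)])

# Unescaped, the underscore is a single-character wildcard,
# so 'SA_%' also matches 'SALES'.
unescaped = [r[0] for r in conn.execute(
    "SELECT job_id FROM jobs WHERE job_id LIKE 'SA_%'")]
# Escaped, the underscore is treated as a literal character.
escaped = [r[0] for r in conn.execute(
    "SELECT job_id FROM jobs WHERE job_id LIKE 'SA\\_%' ESCAPE '\\'")]

print(sorted(unescaped))  # ['SALES', 'SA_MAN', 'SA_REP']
print(sorted(escaped))    # ['SA_MAN', 'SA_REP']
```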
Exercise 9-3: Use the LIKE Operator
Construct a query to retrieve a list of department names that end with the letters
“ing” from the DEPARTMENTS table.
1. Start SQL*Plus and connect to the HR schema.
2. The WHERE clause must perform a comparison between the DEPARTMENT_
NAME column values and a pattern beginning with zero or more characters but
ending with three specific characters, “ing”. The operator enabling character
pattern matching is the LIKE operator. The pattern the DEPARTMENT_NAME
column must conform to is ‘%ing’.
3. Thus, the correct query is
select department_name from departments where department_name like '%ing';

NULL Comparison with the IS NULL Operator
NULL values inevitably find their way into database tables. It is sometimes required
that only those records that contain a NULL value in a specific column are sought.
The IS NULL operator selects only the rows where a specific column value is NULL.
Testing column values for equality to NULL is performed using the IS NULL operator
instead of the “is equal to” operator (=).
Consider the following query, which fetches the LAST_NAME column from the
EMPLOYEES table for those rows that have NULL values stored in the COMMISSION_
PCT column:
select last_name from employees
where commission_pct is null;

This WHERE clause reads naturally and retrieves only the records that contain
NULL COMMISSION_PCT values.
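The reason IS NULL is required, rather than the equality operator, is that a comparison with NULL never evaluates to true. The sketch below (sqlite3 as a stand-in database, hypothetical commission data) shows the equality test silently returning no rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT, commission_pct NUMERIC)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("King", None), ("Abel", 0.3), ("Taylor", 0.2)])

# '= NULL' is never true: NULL compared with anything yields unknown,
# so this query returns no rows at all.
eq_null = conn.execute(
    "SELECT last_name FROM employees WHERE commission_pct = NULL").fetchall()
# IS NULL is the correct test for missing values.
is_null = [r[0] for r in conn.execute(
    "SELECT last_name FROM employees WHERE commission_pct IS NULL")]

print(eq_null)  # []
print(is_null)  # ['King']
```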

Boolean Operators
Boolean or logical operators enable multiple conditions to be specified in the WHERE
clause of the SELECT statement. This facilitates a more refined data extraction capability.
Consider isolating those employee records with FIRST_NAME values that begin with
the letter “J” and that earn a COMMISSION_PCT greater than 10 percent. First, the
data in the EMPLOYEES table must be restricted to FIRST_NAME values like “J%”,
and second, the COMMISSION_PCT values for the records must be tested to ascertain
if they are larger than 10 percent. These two separate conditions may be associated

using the Boolean AND operator and are applied consecutively in a WHERE clause.
A result set conforming to any or all conditions or to the negation of one or more
conditions may be specified using the OR, AND, and NOT Boolean operators respectively.

The AND Operator
The AND operator merges conditions into one large condition to which a row must
conform to be included in the results set. If two conditions specified in a WHERE
clause are joined with an AND operator, then a row is tested consecutively for
conformance to both conditions before being retrieved. If it conforms to neither or
only one of the conditions, the row is excluded. Employee records with FIRST_NAME
values beginning with the letter “J” and COMMISSION_PCT greater than 10 percent
can be retrieved using the following query:

select first_name, last_name, commission_pct, hire_date from employees
where first_name like 'J%' and commission_pct > 0.1;

Notice that the WHERE clause now has two conditions, but only one WHERE
keyword. The AND operator separates the two conditions. To specify further mandatory
conditions, simply add them and ensure that they are separated by additional AND
operators. You can specify as many conditions as you wish. Remember, though, the
more AND conditions specified, the more restrictive the query becomes.

The OR Operator
The OR operator separates multiple conditions, at least one of which must be satisfied
by the row selected to warrant inclusion in the results set. If two conditions specified
in a WHERE clause are joined with an OR operator, then a row is tested consecutively
for conformance to either or both conditions before being retrieved. Conforming to
just one of the OR conditions is sufficient for the record to be returned. If it conforms
to none of the conditions, the row is excluded. Retrieving employee records having
FIRST_NAME values beginning with the letter “B” or those with a COMMISSION_PCT
greater than 35 percent can be written as:
select first_name, last_name, commission_pct, hire_date from employees
where first_name like 'B%' or commission_pct > 0.35;

Notice that the two conditions are separated by the OR keyword. All employee
records with FIRST_NAME values beginning with an uppercase “B” will be returned
regardless of their COMMISSION_PCT values, even if they are NULL. Records with
COMMISSION_PCT values greater than 35 percent (regardless of what letter their
FIRST_NAME begins with) are also returned.
Further OR conditions may be specified by separating them with an OR operator.
The more OR conditions you specify, the less restrictive your query becomes.

The NOT Operator
The NOT operator negates conditional operators. A selected row must conform to the
logical opposite of the condition in order to be included in the results set. Conditional
operators may be negated by the NOT operator as shown by the WHERE clauses listed
in Table 9-4.

Positive                               Negative
where last_name='King'                 where NOT (last_name='King')
where first_name LIKE 'R%'             where first_name NOT LIKE 'R%'
where department_id IN (10,20,30)      where department_id NOT IN (10,20,30)
where salary BETWEEN 1 and 3000        where salary NOT BETWEEN 1 and 3000
where commission_pct IS NULL           where commission_pct IS NOT NULL

Table 9-4  Conditions Negated by the NOT Operator

The NOT operator negates the comparison operator in a condition, whether it’s
an equality, inequality, range-based, pattern matching, set membership, or null testing
operator.

Precedence Rules
Arithmetic, character, comparison, and Boolean expressions were examined in the
context of the WHERE clause. But how do these operators interact with each other? The
precedence hierarchy for the previously mentioned operators is shown in Table 9-5.
Operators at the same level of precedence are evaluated from left to right if they
are encountered together in an expression. When the NOT operator modifies the LIKE,
IS NULL, and IN comparison operators, their precedence level remains the same as
the positive form of these operators.
Consider the following SELECT statement that demonstrates the interaction of
various different operators:
select last_name,salary,department_id,job_id,commission_pct
from employees
where last_name like '%a%' and salary > department_id * 200
or
job_id in ('MK_REP','MK_MAN') and commission_pct is not null
Precedence Level   Operator Symbol                        Operation
1                  ()                                     Parentheses or brackets
2                  /, *                                   Division and multiplication
3                  +, -                                   Addition and subtraction
4                  ||                                     Concatenation
5                  =, <, >, <=, >=                        Equality and inequality comparison
6                  [NOT] LIKE, IS [NOT] NULL, [NOT] IN    Pattern, null, and set comparison
7                  [NOT] BETWEEN                          Range comparison
8                  !=, <>                                 Not equal to
9                  NOT                                    NOT logical condition
10                 AND                                    AND logical condition
11                 OR                                     OR logical condition

Table 9-5  Operator Precedence Hierarchy

The LAST_NAME, SALARY, DEPARTMENT_ID, JOB_ID, and COMMISSION_PCT
columns are projected from the EMPLOYEES table based on two discrete conditions.
The first condition retrieves the records containing the character “a” in the LAST_NAME
field AND with a SALARY value greater than 200 times the DEPARTMENT_ID value.
The product of DEPARTMENT_ID and 200 is processed before the inequality operator,
since the precedence of multiplication is higher than the inequality comparison.
The second condition fetches those rows with JOB_ID values of either MK_MAN or
MK_REP in which COMMISSION_PCT values are not null. For a row to be returned by
this query, either the first OR the second condition must be fulfilled. Changing the
order of the conditions in the WHERE clause changes its meaning due to the different
precedence of the operators. Consider the following query:


select last_name,salary,department_id,job_id,commission_pct
from employees
where last_name like '%a%' and salary > department_id * 100 and commission_pct is not
null
or
job_id = 'MK_MAN'

There are two composite conditions in this query. The first condition retrieves the
records with the character “a” in the LAST_NAME field AND a SALARY value greater
than 100 times the DEPARTMENT_ID value AND where the COMMISSION_PCT
value is not null. The second condition fetches those rows with JOB_ID values of
MK_MAN. A row is returned by this query if it conforms to either condition one
OR condition two, but not necessarily to both.
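The difference that precedence makes can be reduced to a minimal experiment. Using sqlite3 as a stand-in database with three made-up rows of flags, the unparenthesized predicate a=1 OR b=1 AND c=1 groups as a=1 OR (b=1 AND c=1), and adding parentheses changes the row count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 0, 0), (0, 1, 1), (0, 1, 0)])

# AND binds tighter than OR, so this means a=1 OR (b=1 AND c=1):
# rows (1,0,0) and (0,1,1) qualify.
no_parens = conn.execute(
    "SELECT COUNT(*) FROM t WHERE a=1 OR b=1 AND c=1").fetchone()[0]
# Parentheses force the OR to be evaluated first:
# only (0,1,1) satisfies (a=1 OR b=1) AND c=1.
parens = conn.execute(
    "SELECT COUNT(*) FROM t WHERE (a=1 OR b=1) AND c=1").fetchone()[0]

print(no_parens)  # 2
print(parens)     # 1
```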
EXAM TIP Boolean operators OR and AND allow multiple WHERE clause
conditions to be specified while the NOT operator negates a conditional
operator and may be used several times within the same condition. The
equality, inequality, BETWEEN, IN, and LIKE comparison operators test two
terms within a single condition. Only one comparison operator is used per
conditional clause.

Sort the Rows Retrieved by a Query
The usability of the retrieved datasets may be significantly enhanced with a mechanism to
order or sort the information. Information may be sorted alphabetically, numerically, and
chronologically in ascending or descending order. Further, the data may be sorted by one
or more columns, including columns that are not listed in the SELECT clause. Sorting is
usually performed once the results of a SELECT statement have been fetched. The sorting
parameters do not influence the records returned by a query, just the presentation of the
results. Exactly the same rows are returned by a statement including a sort clause as are
returned by a statement excluding a sort clause. Only the ordering of the output may
differ. Sorting the results of a query is accomplished using the ORDER BY clause.

The ORDER BY Clause
The ORDER BY clause is always the last clause in a SELECT statement. As the full
syntax of the SELECT statement is progressively exposed, you will observe new clauses

added, but none of them will be positioned after the ORDER BY clause. The format
of the ORDER BY clause in the context of the SQL SELECT statement is as follows:
SELECT *|{[DISTINCT] column|expression [alias],...}
FROM table
[WHERE condition(s)]
[ORDER BY {col(s)|expr|numeric_pos} [ASC|DESC] [NULLS FIRST|LAST]];

Ascending and Descending Sorting
Ascending sort order is natural for most types of data and is therefore the default sort
order used whenever the ORDER BY clause is specified. An ascending sort order for
numbers is lowest to highest, while it is earliest to latest for dates and alphabetically
for characters. The first form of the ORDER BY clause shows that results of a query
may be sorted by one or more columns or expressions:
ORDER BY col(s)|expr;

Suppose that a report is requested that must contain an employee’s LAST_NAME,
HIRE_DATE, and SALARY information, sorted alphabetically by the LAST_NAME
column for all sales representatives and marketing managers. This report could be
extracted with
select last_name, hire_date, salary from employees
where job_id in ('SA_REP','MK_MAN')
order by last_name;

The data selected may be ordered by any of the columns from the tables in the
FROM clause, including those that do not appear in the SELECT list. By appending
the keyword DESC to the ORDER BY clause, rows are returned sorted in descending
order. The optional NULLS LAST keywords specify that if the sort column contains
null values, then these rows are to be listed last after sorting the remaining NOT NULL
values. To specify that rows with null values in the sort column should be displayed
first, append the NULLS FIRST keywords to the ORDER BY clause. A dataset may be
sorted based on an expression as follows:
select last_name, salary, hire_date, sysdate-hire_date tenure
from employees order by tenure;

The smallest TENURE value appears first in the output, since the ORDER BY
clause specifies that the results will be sorted by the expression alias. Note that the
results could be sorted by the explicit expression and the alias could be omitted, but
using aliases renders the query easier to read.
Several implicit default options are selected when you use the ORDER BY clause. The
most important of these is that unless DESC is specified, the sort order is assumed to be
ascending. If null values occur in the sort column, the default sort order is assumed to be
NULLS LAST for ascending sorts and NULLS FIRST for descending sorts. If no ORDER BY
clause is specified, the same query executed at different times may return the same set of
results in different row order, so no assumptions should be made regarding the default
row order.

Positional Sorting
Oracle offers an alternate shorter way to specify the sort column or expression. Instead
of specifying the column name, the position of the column as it occurs in the SELECT
list is appended to the ORDER BY clause. Consider the following example:
select last_name, hire_date, salary from employees order by 2;

The ORDER BY clause specifies the numeric literal 2. This is equivalent to
specifying ORDER BY HIRE_DATE, since that is the second column in the SELECT
clause. Positional sorting applies only to columns in the SELECT list.

Composite Sorting
Results may be sorted by more than one column using composite sorting. Multiple
columns may be specified (either literally or positionally) as the composite sort key
by comma-separating them in the ORDER BY clause. To fetch the JOB_ID, LAST_
NAME, SALARY, and HIRE_DATE values from the EMPLOYEES table such that the
results must be sorted in reverse alphabetical order by JOB_ID first, then in ascending
alphabetical order by LAST_NAME, and finally in numerically descending order based
on the SALARY column, you can run the following query:
select job_id, last_name, salary, hire_date from employees
where job_id in ('SA_REP','MK_MAN') order by job_id desc, last_name, 3 desc;
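A miniature version of this composite sort (sqlite3 as a stand-in database, invented rows) shows the three sort keys applied in order: JOB_ID descending, then LAST_NAME ascending, then the third SELECT-list column (SALARY, referenced positionally) descending:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees (job_id TEXT, last_name TEXT, salary NUMERIC)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("SA_REP", "Tucker", 10000), ("SA_REP", "Abel", 11000),
                  ("MK_MAN", "Hartstein", 13000)])

# Composite sort key: JOB_ID descending, LAST_NAME ascending, then the
# third SELECT-list column (SALARY) descending.
rows = conn.execute(
    "SELECT job_id, last_name, salary FROM employees "
    "ORDER BY job_id DESC, last_name, 3 DESC").fetchall()
for r in rows:
    print(r)
# ('SA_REP', 'Abel', 11000)
# ('SA_REP', 'Tucker', 10000)
# ('MK_MAN', 'Hartstein', 13000)
```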

Exercise 9-4: Use the ORDER BY Clause
The JOBS table contains descriptions
of different types of jobs an employee in the organization may occupy. It contains the
JOB_ID, JOB_TITLE, MIN_SALARY, and MAX_SALARY columns. You are required to
write a query that extracts the JOB_TITLE, MIN_SALARY, and MAX_SALARY columns,
as well as an expression called VARIANCE, which is the difference between the MAX_
SALARY and MIN_SALARY values, for each row. The results must include only JOB_
TITLE values that contain either the word “President” or “Manager.” Sort the list in
descending order based on the VARIANCE expression. If more than one row has the
same VARIANCE value, then, in addition, sort these rows by JOB_TITLE in reverse
alphabetic order.
1. Start SQL Developer and connect to the HR schema.
2. Sorting is accomplished with the ORDER BY clause. Composite sorting is
required using both the VARIANCE expression and the JOB_TITLE column
in descending order.
3. Executing this statement returns a set of results matching the request:
SELECT JOB_TITLE, MIN_SALARY, MAX_SALARY, (MAX_SALARY - MIN_SALARY) VARIANCE
FROM JOBS WHERE JOB_TITLE LIKE '%President%' OR JOB_TITLE LIKE '%Manager%'
ORDER BY VARIANCE DESC, JOB_TITLE DESC;

Ampersand Substitution
As you develop and perfect SQL statements, they may be saved for future use. It is
sometimes desirable to have a generic form of a statement that has a variable or
placeholder defined that can be substituted at runtime. Oracle offers this functionality

in the form of ampersand substitution. Every element of the SELECT statement may be
substituted, and the reduction of queries to their core elements to facilitate reuse can
save you hours of tedious and repetitive work. This section examines substitution
variables and the DEFINE and VERIFY commands.

Substitution Variables
Substitution variables may be regarded as placeholders. A SQL query is composed
of two or more clauses. Each clause can be divided into subclauses, which are in turn
made up of character text. Any text, subclause, or clause element is a candidate for
substitution.

Single Ampersand Substitution
The most basic and popular form of SQL element is single ampersand substitution. The
ampersand character (&) is the symbol chosen to designate a substitution variable in
a statement and precedes the variable name with no spaces between them. When the
statement is executed, the Oracle server processes the statement, notices a substitution
variable, and attempts to resolve this variable’s value in one of two ways. First, it checks
whether the variable is defined in the user session. (The DEFINE command is discussed
later in this chapter.) If the variable is not defined, the user process prompts for a
value that will be substituted in place of the variable. Once a value is submitted, the
statement is complete and is executed by the Oracle server. Because the ampersand
substitution variable is resolved at execution time, this technique is sometimes
known as runtime binding or runtime substitution.
You may be required to look up contact information like PHONE_NUMBER data
given either LAST_NAME or EMPLOYEE_ID values. This generic query may be written as
select employee_id, last_name, phone_number from employees
where last_name = &LASTNAME or employee_id = &EMPNO;

When running this query, Oracle prompts you to input a value for the variable
called LASTNAME. You enter an employee’s last name, if you know it, for example,
‘King’. If you don’t know the last name but know the employee ID number, you can
type in any value and press the ENTER key to submit the value. Oracle then prompts
you to enter a value for the EMPNO variable. After typing in a value, for example, 0,
and hitting ENTER, there are no remaining substitution variables for Oracle to resolve
and the following statement is executed:
select employee_id, last_name, phone_number from employees
where last_name = 'King' or employee_id = 0;

Variables can be assigned any name that is a valid Oracle identifier.
The literal you substitute when prompted for a variable must be an appropriate data
type for that context; otherwise, an “ORA-00904: invalid identifier” error is returned.
If the variable is meant to substitute a character or date value, the literal needs to be
enclosed in single quotes. A useful technique is to enclose the ampersand substitution
variable in single quotes when dealing with character and date values. In this way, the
user is required to submit only the literal value, without worrying about enclosing it
in quotes.

Chapter 9: Retrieving, Restricting, and Sorting Data Using SQL
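For example, the earlier LASTNAME lookup can be rewritten with the variable already
enclosed in quotes, so that at the prompt the user types just King rather than 'King'
(a sketch against the HR sample schema used throughout this chapter):

```sql
select employee_id, last_name, phone_number
from employees
where last_name = '&LASTNAME';
```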

Double Ampersand Substitution
When a substitution variable is referenced multiple times in the same query, Oracle will
prompt you to enter a value for every occurrence of the single ampersand substitution
variable. For complex scripts, this can be very inefficient and tedious. The following
statement retrieves the FIRST_NAME and LAST_NAME data from the EMPLOYEES table
for those rows that contain the same set of characters in both these fields:

select first_name, last_name from employees
where last_name like '%&SEARCH%' and first_name like '%&SEARCH%';

The two conditions are identical but apply to different columns. When this statement
is executed, you are first prompted to enter a substitution value for the SEARCH variable
used in the comparison with the LAST_NAME column. Thereafter, you are prompted to
enter a substitution value for the SEARCH variable used in the comparison with the
FIRST_NAME column. This poses two problems. First, it is inefficient to enter the same
value twice; second, and more important, typographical errors may confound the
query, since Oracle does not verify that the same literal value is entered each time
substitution variables with the same name are used. In this example, the logical
assumption is that the contents of the variables substituted should be the same, but the
fact that the variables have the same name has no meaning to the Oracle server, and it
makes no such assumption. The first example in Figure 9-7 shows the results of running
the preceding query and submitting two distinct values for the SEARCH substitution
variable. In this particular example, the results are incorrect, since the requirement was
to retrieve FIRST_NAME and LAST_NAME pairs that contain the identical string of
characters.
When a substitution variable is referenced multiple times in the same query and
your intention is that it must have the same value at each occurrence in the
statement, it is preferable to make use of double ampersand substitution. This involves
prefixing the first occurrence of the repeated substitution variable with two ampersand
symbols instead of one. When Oracle encounters a double ampersand substitution
variable, a session value is defined for that variable, and you are not prompted to enter
a value to be substituted for it in subsequent references.
The second example in Figure 9-7 demonstrates how the SEARCH variable is
preceded by two ampersands in the condition with the FIRST_NAME column and
thereafter is prefixed by one ampersand in the condition with the LAST_NAME
column. When the statement is executed, you are prompted to enter a value to be
substituted for the SEARCH variable only once, for the condition with the FIRST_NAME
column. This value is then automatically resolved from the session value of
the variable in subsequent references to it, as in the condition with the LAST_NAME
column. To undefine the SEARCH variable, you need to use the UNDEFINE command
described later in this chapter.

Figure 9-7

Double ampersand substitution
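The second query in Figure 9-7 is not reproduced in the running text; based on the
description above, it is presumably of this form (a sketch, with the double ampersand
on the first occurrence of SEARCH):

```sql
select first_name, last_name from employees
where first_name like '%&&SEARCH%' and last_name like '%&SEARCH%';
```

Here SQL*Plus prompts for SEARCH only once; the single-ampersand reference in the
LAST_NAME condition is resolved from the session value just defined.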

TIP Whether you work as a developer, database administrator, or business
end user, all SQL queries you encounter may be broadly classified as either
ad hoc or repeated queries. Ad hoc queries are usually one-off statements
written during some data investigation exercise that are unlikely to be reused.
The repeated queries are those that are run frequently or periodically, which
are usually saved as script files and run with little to no modification whenever
required. Reuse prevents costly redevelopment time and allows these consistent
queries to potentially benefit from Oracle’s native automatic tuning features
geared toward improving query performance.

Substituting Column Names
Literal elements of the WHERE clause have been the focus of the discussion on
substitution thus far, but virtually any element of a SQL statement is a candidate for
substitution. In the following statement, the FIRST_NAME and JOB_ID columns are
static and will always be retrieved, but the third column selected is variable and
specified as a substitution variable named COL. The result set is further sorted by this
variable column in the ORDER BY clause:
select first_name, job_id, &&col
from employees
where job_id in ('MK_MAN','SA_MAN')
order by &col;

Unlike character and date literals, column name references do not require single
quotes either when explicitly specified or when substituted via ampersand substitution.

Substituting Expressions and Text
Almost any element of a SQL statement may be substituted at runtime. The constraint
is that Oracle requires at least the first word to be static. In the case of the SELECT
statement, at the very minimum, the SELECT keyword is required, and the remainder
of the statement may be substituted as follows:

select &rest_of_statement;

When the statement is executed, you are prompted to submit a value for the
variable called REST_OF_STATEMENT, which, when appended to the SELECT keyword,
forms any legitimate query. Useful candidates for ampersand substitution are statements
that are run multiple times and differ only slightly from each other.

Define and Verify
Double ampersand substitution is used to avoid repetitive input when the same variable
occurs multiple times in a statement. When a double ampersand substitution occurs,
the variable is stored as a session variable. As the statement executes, all further
occurrences of the variable are automatically resolved using the stored session variable.
Any subsequent executions of the statement within the same session automatically
resolve the substitution variables from stored session values. This is not always desirable
and indeed limits the usefulness of substitution variables. Oracle does, however, provide
a mechanism to UNDEFINE these session variables. The VERIFY command is specific
to SQL*Plus and controls whether or not substituted elements are echoed on the user's
screen prior to executing a SQL statement that uses substitution variables.

The DEFINE and UNDEFINE Commands
Session-level variables are implicitly created when they are initially referenced in SQL
statements using double ampersand substitution. They persist, or remain available, for
the duration of the session or until they are explicitly undefined. A session ends when
the user exits the client tool, such as SQL*Plus, or when the user process is terminated.
The problem with persistent session variables is that they tend to detract from the
generic nature of statements that use ampersand substitution variables. Fortunately,
these session variables can be removed with the UNDEFINE command. Within a
script or at the command line of SQL*Plus or SQL Developer, the syntax to undefine
session variables is

UNDEFINE variable;

Consider a simple generic example that selects a static column and a variable column
from the EMPLOYEES table and sorts the output based on the variable column:

select last_name, &&COLNAME
from employees where department_id=30 order by &COLNAME;
The first time this statement executes, you are prompted to supply a value for
the COLNAME variable. Assume you enter SALARY. This value is substituted and the
statement executes. A subsequent execution of this statement within the same session
does not prompt for any COLNAME values, since it is already defined as SALARY in
the context of this session and can only be undefined with the UNDEFINE COLNAME
command. Once the variable has been undefined, the next execution of the statement
prompts the user for a value for the COLNAME variable.
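The session flow just described can be sketched as follows (assuming SALARY is the
value entered at the first prompt):

```sql
select last_name, &&COLNAME
from employees where department_id=30 order by &COLNAME;
-- First run: prompts for COLNAME; entering SALARY defines it for the session

select last_name, &&COLNAME
from employees where department_id=30 order by &COLNAME;
-- Second run: no prompt; SALARY is substituted automatically

UNDEFINE COLNAME
-- The next execution of the statement will prompt for COLNAME again
```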
The DEFINE command serves two purposes. It can be used to retrieve a list of all
the variables currently defined in your SQL session; it can also be used to explicitly
define a value for a variable referenced as a substitution variable by one or more
statements during the lifetime of that session. The syntax for the two variants of the
DEFINE command is as follows:
DEFINE;
DEFINE variable=value;
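For example, a variable can be defined explicitly and then referenced by later
statements without any prompting (a sketch; any valid identifier may be used as the
variable name):

```sql
DEFINE EMPNAME = King
DEFINE
-- The stand-alone DEFINE lists all session variables, including EMPNAME

select first_name, last_name from employees
where last_name = '&EMPNAME';
-- No prompt: EMPNAME resolves to King from the session value

UNDEFINE EMPNAME
```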

As Figure 9-8 demonstrates, a variable called EMPNAME is defined explicitly to
have the value 'King'. The stand-alone DEFINE command in SQL*Plus then returns
a number of session variables prefixed with an underscore character as well as other
substitution variables defined earlier. Two different but simple queries are executed,
and the explicitly defined substitution variable EMPNAME is referenced by both
queries. Finally, the variable is removed with the UNDEFINE command.

Figure 9-8

The DEFINE command

Support of session-persistent variables may be switched off and on as required
using the SET DEFINE OFF command. The SET command is not a SQL language
command, but rather a SQL environment control command. When you specify SET
DEFINE OFF, the client tool (for example, SQL*Plus) does not save session variables
or attach special meaning to the ampersand symbol. This allows the ampersand
symbol to be used as an ordinary literal character if necessary. The SET DEFINE
ON|OFF command therefore determines whether or not ampersand substitution is
available in your session. The following query uses the ampersand symbol as a literal
value. When it is executed, you are prompted to submit a value for the substitution
variable SID:

select 'Coda & Sid' from dual;

By turning off the ampersand substitution functionality, this query may be
executed without prompts:

SET DEFINE OFF
select 'Coda & Sid' from dual;
SET DEFINE ON

Once the statement executes, the SET DEFINE ON command may be used to
switch the substitution functionality back on. If DEFINE is OFF and an ampersand is
used in a statement in a context that cannot be resolved literally, Oracle returns an error.

The VERIFY Command
Two categories of commands are available when dealing with the Oracle server: SQL
language commands and SQL client control commands. The SELECT statement is
a language command, while the SET command controls the SQL client environment.
There are many different language and control commands available, but the control
commands relevant to substitution are DEFINE and VERIFY.
The VERIFY command controls whether the substitution variable submitted is
displayed onscreen so that you can verify that the correct substitution has occurred. A
message is displayed showing the old clause followed by the new clause containing the
substituted value. The VERIFY command is switched ON and OFF with the command
SET VERIFY ON|OFF. If VERIFY is first switched OFF and a query that uses ampersand
substitution is executed, you are prompted to input a value. The value is then substituted,
the statement runs, and its results are displayed. If VERIFY is then switched ON and the
same query is executed, then after you input a value but before the statement commences
execution, Oracle displays the clause containing the reference to the substitution variable
as the old clause, with its line number, and immediately below it the new clause
containing the substituted value.
Exercise 9-5: Using Ampersand Substitution  You are required to write a
reusable query using the current tax rate and the EMPLOYEE_ID number as inputs and
return the EMPLOYEE_ID, FIRST_NAME, SALARY, ANNUAL SALARY (SALARY * 12),
TAX_RATE, and TAX (TAX_RATE * ANNUAL SALARY) information for use by the HR
department clerks.
1. Start SQL*Plus and connect to the HR schema.
2. The select list must include the four specified columns as well as two expressions.
The first expression, aliased as ANNUAL SALARY, is a simple calculation, while
the second expression, aliased as TAX, depends on the tax rate. Since the
tax rate may vary, this value must be substituted at runtime.
3. A possible solution is
SELECT &&EMPLOYEE_ID, FIRST_NAME, SALARY, SALARY * 12 AS "ANNUAL SALARY",
&&TAX_RATE, (&TAX_RATE * (SALARY * 12)) AS "TAX"
FROM EMPLOYEES WHERE EMPLOYEE_ID = &EMPLOYEE_ID;

4. The double ampersand preceding EMPLOYEE_ID and TAX_RATE in the
SELECT clause stipulates that when the statement is executed, the user must be
prompted once for each substitution variable; the submitted values are then used
wherever the variables are subsequently referenced as &EMPLOYEE_ID and
&TAX_RATE, respectively.

Two-Minute Drill
List the Capabilities of SQL SELECT Statements
• The three fundamental operations that SELECT statements are capable of are
projection, selection, and joining.
• Projection refers to the restriction of columns selected from a table. Using
projection, you retrieve only the columns of interest and not every possible
column.
• Selection refers to the extraction of rows from a table. Selection includes the
further restriction of the extracted rows based on various criteria or conditions.
This allows you to retrieve only the rows that are of interest and not every row
in the table.
• Joining involves linking two or more tables based on common attributes.
Joining allows data to be stored in third normal form in discrete tables,
instead of in one large table.
• The DESCRIBE command lists the names, data types, and nullable status of all
columns in a table.
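For example, against the HR sample schema (output abridged, as a sketch):

```sql
DESCRIBE employees
-- Name            Null?     Type
-- EMPLOYEE_ID     NOT NULL  NUMBER(6)
-- FIRST_NAME                VARCHAR2(20)
-- ...
```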

Execute a Basic SELECT Statement
• The SELECT clause determines the projection of columns. In other words, the
SELECT clause specifies which columns are included in the results returned.

• The DISTINCT keyword preceding items in the SELECT clause causes duplicate
combinations of these items to be excluded from the returned results set.
• Expressions and regular columns may be aliased using the AS keyword or by
leaving a space between the column or expression and the alias.
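• For example, both aliasing forms in one statement (a sketch):

```sql
select salary * 12 AS annual_salary, last_name surname
from employees;
```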

Limit the Rows Retrieved by a Query
• One or more conditions constitute a WHERE clause. These conditions specify
rules to which the data in a row must conform to be eligible for selection.
• For each row tested in a condition, there are terms on the left and right of a
comparison operator. Terms in a condition can be column values, literals, or
expressions.
• Comparison operators may test two terms in many ways. Equality or inequality
tests are very common, but range, set, and pattern comparisons are also available.
• Boolean ope