Oracle® Database
Administrator's Guide
10g Release 2 (10.2)
B14231-02

May 2006

Oracle Database Administrator’s Guide, 10g Release 2 (10.2)
B14231-02
Copyright © 2001, 2006, Oracle. All rights reserved.
Primary Author: Steve Fogel
Contributing Author: Paul Lane

Contributors: David Austin, Prasad Bagal, Cathy Baird, Mark Bauer, Eric Belden, Bill Bridge, Allen
Brumm, Sudip Datta, Mark Dilman, Jacco Draaijer, Harvey Eneman, Amit Ganesh, Vira Goorah, Carolyn
Gray, Joan Gregoire, Daniela Hansell, Lilian Hobbs, Wei Huang, Robert Jenkins, Bhushan Khaladkar, Balaji
Krishnan, Vasudha Krishnaswamy, Sushil Kumar, Bill Lee, Sue K. Lee, Yunrui Li, Ilya Listvinsky, Bryn
Llewellyn, Rich Long, Catherine Luu, Scott Lynn, Raghu Mani, Colin McGregor, Mughees Minhas, Valarie
Moore, Niloy Mukherjee, Sujatha Muthulingam, Gary Ngai, Waleed Ojeil, Rod Payne, Hanlin Qian, Ananth
Raghavan, Ravi Ramkissoon, Ann Rhee, Tom Sepez, Vikram Shukla, Bipul Sinha, Wayne Smith, Jags
Srinivasan, Deborah Steiner, Janet Stern, Michael Stewart, Mahesh Subramaniam, Anh-Tuan Tran, Alex
Tsukerman, Kothanda Umamageswaran, Eric Voss, Daniel M. Wong, Wanli Yang, Wei Zhang
The Programs (which include both the software and documentation) contain proprietary information; they
are provided under a license agreement containing restrictions on use and disclosure and are also protected
by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly,
or decompilation of the Programs, except to the extent required to obtain interoperability with other
independently created software or as specified by law, is prohibited.
The information contained in this document is subject to change without notice. If you find any problems in
the documentation, please report them to us in writing. This document is not warranted to be error-free.
Except as may be expressly permitted in your license agreement for these Programs, no part of these
Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any
purpose.
If the Programs are delivered to the United States Government or anyone licensing or using the Programs on
behalf of the United States Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data
delivered to U.S. Government customers are "commercial computer software" or "commercial technical data"
pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As
such, use, duplication, disclosure, modification, and adaptation of the Programs, including documentation
and technical data, shall be subject to the licensing restrictions set forth in the applicable Oracle license
agreement, and, to the extent applicable, the additional rights set forth in FAR 52.227-19, Commercial
Computer Software--Restricted Rights (June 1987). Oracle USA, Inc., 500 Oracle Parkway, Redwood City, CA
94065.
The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently
dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup,
redundancy and other measures to ensure the safe use of such applications if the Programs are used for such
purposes, and we disclaim liability for any damages caused by such use of the Programs.
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.
The Programs may provide links to Web sites and access to content, products, and services from third
parties. Oracle is not responsible for the availability of, or any content provided on, third-party Web sites.
You bear all risks associated with the use of such content. If you choose to purchase any products or services
from a third party, the relationship is directly between you and the third party. Oracle is not responsible for:
(a) the quality of third-party products or services; or (b) fulfilling any of the terms of the agreement with the
third party, including delivery of products or services and warranty obligations related to purchased
products or services. Oracle is not responsible for any loss or damage of any sort that you may incur from
dealing with any third party.

Contents
Send Us Your Comments .................................................................................................................... xxv
Preface ............................................................................................................................................................ xxvii
Audience ................................................................................................................................... xxvii
Documentation Accessibility ................................................................................................. xxvii
Structure .................................................................................................................................. xxviii
Related Documents ................................................................................................................... xxxi
Conventions .............................................................................................................................. xxxii

What's New in Oracle Database Administrator's Guide? .............................................. xxxvii
Oracle Database 10g Release 2 (10.2) New Features in the Administrator's Guide.................... xxxvii
Oracle Database 10g Release 1 (10.1) New Features in the Administrator's Guide.......................... xli

Part I  Basic Database Administration

1  Overview of Administering an Oracle Database
Types of Oracle Database Users ........................................................................................................ 1-1
Database Administrators .................................................................................................................... 1-2
Security Officers ................................................................................................................................... 1-2
Network Administrators..................................................................................................................... 1-2
Application Developers....................................................................................................................... 1-2
Application Administrators................................................................................................................ 1-3
Database Users ..................................................................................................................................... 1-3
Tasks of a Database Administrator ................................................................................................... 1-3
Task 1: Evaluate the Database Server Hardware ............................................................................ 1-4
Task 2: Install the Oracle Database Software ................................................................................... 1-4
Task 3: Plan the Database.................................................................................................................... 1-4
Task 4: Create and Open the Database ............................................................................................. 1-5
Task 5: Back Up the Database............................................................................................................. 1-5
Task 6: Enroll System Users................................................................................................................ 1-5
Task 7: Implement the Database Design........................................................................................... 1-5
Task 8: Back Up the Fully Functional Database .............................................................................. 1-5
Task 9: Tune Database Performance ................................................................................................. 1-6
Task 10: Download and Install Patches ............................................................................................ 1-6
Task 11: Roll Out to Additional Hosts .............................................................................................. 1-6

Selecting an Instance with Environment Variables........................................................................... 1-7
Identifying Your Oracle Database Software Release ........................................................................ 1-7
Release Number Format.................................................................................................................... 1-8
Checking Your Current Release Number....................................................................................... 1-8
Database Administrator Security and Privileges............................................................................... 1-9
The Database Administrator's Operating System Account ......................................................... 1-9
Database Administrator Usernames ............................................................................................... 1-9
Database Administrator Authentication .......................................................................................... 1-11
Administrative Privileges .............................................................................................................. 1-11
Selecting an Authentication Method............................................................................................ 1-13
Using Operating System Authentication..................................................................................... 1-14
Using Password File Authentication............................................................................................ 1-15
Creating and Maintaining a Password File...................................................................................... 1-16
Using ORAPWD.............................................................................................................................. 1-16
Setting REMOTE_LOGIN_PASSWORDFILE ............................................................................... 1-18
Adding Users to a Password File.................................................................................................. 1-18
Maintaining a Password File ......................................................................................................... 1-19
Server Manageability ........................................................................................................................... 1-20
Automatic Manageability Features .............................................................................................. 1-20
Data Utilities .................................................................................................................................... 1-22

2  Creating an Oracle Database
Deciding How to Create an Oracle Database ..................................................................................... 2-1
Manually Creating an Oracle Database ............................................................................................... 2-2
Considerations Before Creating the Database ............................................................................... 2-2
Creating the Database........................................................................................................................ 2-4
Understanding the CREATE DATABASE Statement ..................................................................... 2-10
Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM .................... 2-10
Creating a Locally Managed SYSTEM Tablespace..................................................................... 2-11
Creating the SYSAUX Tablespace ................................................................................................ 2-12
Using Automatic Undo Management: Creating an Undo Tablespace.................................... 2-13
Creating a Default Permanent Tablespace .................................................................................. 2-14
Creating a Default Temporary Tablespace.................................................................................. 2-14
Specifying Oracle-Managed Files at Database Creation ........................................................... 2-15
Supporting Bigfile Tablespaces During Database Creation...................................................... 2-16
Specifying the Database Time Zone and Time Zone File.......................................................... 2-17
Specifying FORCE LOGGING Mode ........................................................................................... 2-18
Understanding Initialization Parameters ......................................................................................... 2-19
Determining the Global Database Name..................................................................................... 2-21
Specifying a Flash Recovery Area ................................................................................................ 2-22
Specifying Control Files ................................................................................................................ 2-22
Specifying Database Block Sizes ................................................................................................... 2-23
Managing the System Global Area (SGA) .................................................................................. 2-24
Specifying the Maximum Number of Processes......................................................................... 2-33
Specifying the Method of Undo Space Management ................................................................ 2-33
The COMPATIBLE Initialization Parameter and Irreversible Compatibility ........................ 2-34
Setting the License Parameter ....................................................................................................... 2-34

iv

Troubleshooting Database Creation ............................................................................................... 2-35
Dropping a Database ......................................................................................................................... 2-35
Managing Initialization Parameters Using a Server Parameter File ......................................... 2-35
What Is a Server Parameter File? .................................................................................................... 2-36
Migrating to a Server Parameter File .............................................................................................. 2-36
Creating a Server Parameter File ..................................................................................................... 2-37
The SPFILE Initialization Parameter ............................................................................................... 2-38
Using ALTER SYSTEM to Change Initialization Parameter Values ......................................... 2-38
Exporting the Server Parameter File ............................................................................................... 2-39
Backing Up the Server Parameter File ............................................................................................ 2-40
Errors and Recovery for the Server Parameter File...................................................................... 2-40
Viewing Parameter Settings ............................................................................................................. 2-40
Defining Application Services for Oracle Database 10g .............................................................. 2-41
Deploying Services ............................................................................................................................ 2-42
Configuring Services ......................................................................................................................... 2-42
Using Services .................................................................................................................................... 2-43
Considerations After Creating a Database..................................................................................... 2-44
Some Security Considerations.......................................................................................................... 2-44
Enabling Transparent Data Encryption .......................................................................................... 2-45
Creating a Secure External Password Store ................................................................................... 2-46
Installing the Oracle Database Sample Schemas ........................................................................... 2-46
Viewing Information About the Database ..................................................................................... 2-46

3  Starting Up and Shutting Down
Starting Up a Database............................................................................................................................ 3-1
Options for Starting Up a Database................................................................................................. 3-1
Understanding Initialization Parameter Files................................................................................ 3-2
Preparing to Start Up an Instance.................................................................................................... 3-4
Starting Up an Instance ..................................................................................................................... 3-4
Altering Database Availability .............................................................................................................. 3-7
Mounting a Database to an Instance ............................................................................................... 3-7
Opening a Closed Database.............................................................................................................. 3-7
Opening a Database in Read-Only Mode....................................................................................... 3-8
Restricting Access to an Open Database......................................................................................... 3-8
Shutting Down a Database..................................................................................................................... 3-9
Shutting Down with the NORMAL Clause ................................................................................... 3-9
Shutting Down with the IMMEDIATE Clause .............................................................................. 3-9
Shutting Down with the TRANSACTIONAL Clause ............................................................... 3-10
Shutting Down with the ABORT Clause ..................................................................................... 3-10
Shutdown Timeout ......................................................................................................................... 3-11
Quiescing a Database ........................................................................................................................... 3-11
Placing a Database into a Quiesced State .................................................................................... 3-12
Restoring the System to Normal Operation ................................................................................ 3-12
Viewing the Quiesce State of an Instance .................................................................................... 3-13
Suspending and Resuming a Database ............................................................................................ 3-13

4  Managing Oracle Database Processes
About Dedicated and Shared Server Processes.................................................................................. 4-1
Dedicated Server Processes .............................................................................................................. 4-1
Shared Server Processes .................................................................................................................... 4-2
Configuring Oracle Database for Shared Server ............................................................................... 4-4
Initialization Parameters for Shared Server ................................................................................... 4-4
Enabling Shared Server ..................................................................................................................... 4-5
Configuring Dispatchers ................................................................................................................... 4-7
Monitoring Shared Server.............................................................................................................. 4-12
About Oracle Database Background Processes............................................................................... 4-13
Managing Processes for Parallel SQL Execution ............................................................................ 4-14
About Parallel Execution Servers ................................................................................................. 4-15
Altering Parallel Execution for a Session..................................................................................... 4-15
Managing Processes for External Procedures .................................................................................. 4-16
Terminating Sessions............................................................................................................................ 4-16
Identifying Which Session to Terminate...................................................................................... 4-17
Terminating an Active Session...................................................................................................... 4-17
Terminating an Inactive Session ................................................................................................... 4-18
Monitoring the Operation of Your Database ................................................................................... 4-18
Server-Generated Alerts................................................................................................................. 4-18
Monitoring the Database Using Trace Files and the Alert Log ................................................ 4-21
Monitoring Locks ............................................................................................................................ 4-23
Monitoring Wait Events ................................................................................................................. 4-24
Process and Session Views ............................................................................................................ 4-24

Part II  Oracle Database Structure and Storage

5  Managing Control Files
What Is a Control File? ........................................................................................................................ 5-1
Guidelines for Control Files ............................................................................................................... 5-2
Provide Filenames for the Control Files ........................................................................................... 5-2
Multiplex Control Files on Different Disks ...................................................................................... 5-2
Back Up Control Files .......................................................................................................................... 5-3
Manage the Size of Control Files ....................................................................................................... 5-3
Creating Control Files ......................................................................................................................... 5-3
Creating Initial Control Files .............................................................................................................. 5-3
Creating Additional Copies, Renaming, and Relocating Control Files ...................................... 5-4
Creating New Control Files ................................................................................................................ 5-4
Troubleshooting After Creating Control Files................................................................................. 5-7
Checking for Missing or Extra Files .................................................................................................. 5-7
Handling Errors During CREATE CONTROLFILE ....................................................................... 5-7
Backing Up Control Files .................................................................................................................... 5-7
Recovering a Control File Using a Current Copy ........................................................................... 5-8
Recovering from Control File Corruption Using a Control File Copy........................................ 5-8
Recovering from Permanent Media Failure Using a Control File Copy..................................... 5-8
Dropping Control Files ....................................................................................................................... 5-9

Displaying Control File Information ................................................................................................... 5-9

6  Managing the Redo Log
What Is the Redo Log?............................................................................................................................. 6-1
Redo Threads ...................................................................................................................................... 6-1
Redo Log Contents............................................................................................................................. 6-2
How Oracle Database Writes to the Redo Log .............................................................................. 6-2
Planning the Redo Log ............................................................................................................................ 6-4
Multiplexing Redo Log Files ............................................................................................................ 6-4
Placing Redo Log Members on Different Disks............................................................................. 6-6
Setting the Size of Redo Log Members ........................................................................................... 6-7
Choosing the Number of Redo Log Files ....................................................................................... 6-7
Controlling Archive Lag .................................................................................................................. 6-8
Creating Redo Log Groups and Members........................................................................................... 6-9
Creating Redo Log Groups ............................................................................................................... 6-9
Creating Redo Log Members......................................................................................................... 6-10
Relocating and Renaming Redo Log Members .............................................................................. 6-10
Dropping Redo Log Groups and Members ..................................................................................... 6-12
Dropping Log Groups .................................................................................................................... 6-12
Dropping Redo Log Members....................................................................................................... 6-12
Forcing Log Switches............................................................................................................................ 6-13
Verifying Blocks in Redo Log Files ................................................................................................... 6-13
Clearing a Redo Log File...................................................................................................................... 6-14
Viewing Redo Log Information ......................................................................................................... 6-15

7  Managing Archived Redo Logs
What Is the Archived Redo Log?........................................................................................................... 7-1
Choosing Between NOARCHIVELOG and ARCHIVELOG Mode .............................................. 7-2
Running a Database in NOARCHIVELOG Mode ........................................................................ 7-2
Running a Database in ARCHIVELOG Mode ............................................................................... 7-3
Controlling Archiving ............................................................................................................................. 7-4
Setting the Initial Database Archiving Mode................................................................................. 7-4
Changing the Database Archiving Mode ...................................................................................... 7-4
Performing Manual Archiving......................................................................................................... 7-5
Adjusting the Number of Archiver Processes ............................................................................... 7-5
Specifying the Archive Destination ..................................................................................................... 7-6
Specifying Archive Destinations...................................................................................................... 7-6
Understanding Archive Destination Status ................................................................................... 7-8
Specifying the Mode of Log Transmission.......................................................................................... 7-9
Normal Transmission Mode............................................................................................................. 7-9
Standby Transmission Mode ........................................................................................................ 7-10
Managing Archive Destination Failure ............................................................................................ 7-11
Specifying the Minimum Number of Successful Destinations................................................. 7-11
Rearchiving to a Failed Destination ............................................................................................. 7-13
Controlling Trace Output Generated by the Archivelog Process ................................................ 7-13
Viewing Information About the Archived Redo Log .................................................................... 7-14


Dynamic Performance Views ........................................................................................................ 7-14
The ARCHIVE LOG LIST Command........................................................................................... 7-15

8  Managing Tablespaces
Guidelines for Managing Tablespaces................................................................................................. 8-1
Using Multiple Tablespaces.............................................................................................................. 8-1
Assigning Tablespace Quotas to Users ........................................................................................... 8-2
Creating Tablespaces ............................................................................................................................... 8-2
Locally Managed Tablespaces.......................................................................................................... 8-3
Bigfile Tablespaces ............................................................................................................................. 8-6
Temporary Tablespaces..................................................................................................................... 8-8
Multiple Temporary Tablespaces: Using Tablespace Groups .................................................. 8-11
Specifying Nonstandard Block Sizes for Tablespaces ................................................................... 8-12
Controlling the Writing of Redo Records......................................................................................... 8-13
Altering Tablespace Availability ....................................................................................................... 8-13
Taking Tablespaces Offline............................................................................................................ 8-13
Bringing Tablespaces Online ......................................................................................................... 8-15
Using Read-Only Tablespaces ............................................................................................................ 8-15
Making a Tablespace Read-Only .................................................................................................. 8-16
Making a Read-Only Tablespace Writable ................................................................................. 8-17
Creating a Read-Only Tablespace on a WORM Device ............................................................ 8-18
Delaying the Opening of Datafiles in Read-Only Tablespaces ................................................ 8-18
Renaming Tablespaces ......................................................................................................................... 8-19
Dropping Tablespaces ......................................................................................................................... 8-20
Managing the SYSAUX Tablespace ................................................................................................... 8-20
Monitoring Occupants of the SYSAUX Tablespace ................................................................... 8-21
Moving Occupants Out Of or Into the SYSAUX Tablespace.................................................... 8-21
Controlling the Size of the SYSAUX Tablespace ........................................................................ 8-21
Diagnosing and Repairing Locally Managed Tablespace Problems........................................... 8-22
Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap).............. 8-23
Scenario 2: Dropping a Corrupted Segment ............................................................................... 8-23
Scenario 3: Fixing Bitmap Where Overlap is Reported ............................................................. 8-23
Scenario 4: Correcting Media Corruption of Bitmap Blocks..................................................... 8-24
Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace........ 8-24
Migrating the SYSTEM Tablespace to a Locally Managed Tablespace ...................................... 8-24
Transporting Tablespaces Between Databases ................................................................................ 8-25
Introduction to Transportable Tablespaces................................................................................. 8-25
About Transporting Tablespaces Across Platforms................................................................... 8-26
Limitations on Transportable Tablespace Use............................................................................ 8-27
Compatibility Considerations for Transportable Tablespaces ................................................. 8-28
Transporting Tablespaces Between Databases: A Procedure and Example .......................... 8-29
Using Transportable Tablespaces: Scenarios .............................................................................. 8-37
Moving Databases Across Platforms Using Transportable Tablespaces ................................ 8-40
Viewing Tablespace Information ....................................................................................................... 8-40
Example 1: Listing Tablespaces and Default Storage Parameters ........................................... 8-41
Example 2: Listing the Datafiles and Associated Tablespaces of a Database......................... 8-41
Example 3: Displaying Statistics for Free Space (Extents) of Each Tablespace ...................... 8-42

9  Managing Datafiles and Tempfiles
Guidelines for Managing Datafiles...................................................................................................... 9-1
Determine the Number of Datafiles ................................................................................................ 9-2
Determine the Size of Datafiles ........................................................................................................ 9-3
Place Datafiles Appropriately .......................................................................................................... 9-4
Store Datafiles Separate from Redo Log Files................................................................................ 9-4
Creating Datafiles and Adding Datafiles to a Tablespace .............................................................. 9-4
Changing Datafile Size ........................................................................................................................... 9-5
Enabling and Disabling Automatic Extension for a Datafile....................................................... 9-5
Manually Resizing a Datafile ........................................................................................................... 9-6
Altering Datafile Availability ................................................................................................................ 9-6
Bringing Datafiles Online or Taking Offline in ARCHIVELOG Mode...................................... 9-7
Taking Datafiles Offline in NOARCHIVELOG Mode.................................................................. 9-7
Altering the Availability of All Datafiles or Tempfiles in a Tablespace .................................... 9-8
Renaming and Relocating Datafiles..................................................................................................... 9-8
Procedures for Renaming and Relocating Datafiles in a Single Tablespace ............................. 9-9
Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces ......................... 9-10
Dropping Datafiles ............................................................................................................................... 9-11
Verifying Data Blocks in Datafiles .................................................................................................... 9-12
Copying Files Using the Database Server ........................................................................................ 9-12
Copying a File on a Local File System.......................................................................................... 9-13
Third-Party File Transfer ............................................................................................................... 9-14
File Transfer and the DBMS_SCHEDULER Package................................................................. 9-14
Advanced File Transfer Mechanisms........................................................................................... 9-15
Mapping Files to Physical Devices .................................................................................................... 9-15
Overview of Oracle Database File Mapping Interface .............................................................. 9-16
How the Oracle Database File Mapping Interface Works......................................................... 9-16
Using the Oracle Database File Mapping Interface.................................................................... 9-20
File Mapping Examples.................................................................................................................. 9-23
Viewing Datafile Information ...................................................................................................... 9-25

10  Managing the Undo Tablespace
What Is Undo?..................................................................................................................................... 10-1
Introduction to Automatic Undo Management ............................................................................. 10-2
Overview of Automatic Undo Management .................................................................................. 10-2
Undo Retention ................................................................................................................................... 10-3
Setting the Undo Retention Period .................................................................................................. 10-5
Sizing the Undo Tablespace .............................................................................................................. 10-5
Using Auto-Extensible Tablespaces ................................................................................................. 10-5
Sizing Fixed-Size Undo Tablespaces ................................................................................................ 10-5
Managing Undo Tablespaces ............................................................................................................ 10-6
Creating an Undo Tablespace ........................................................................................................... 10-7
Altering an Undo Tablespace ........................................................................................................... 10-8
Dropping an Undo Tablespace ......................................................................................................... 10-8
Switching Undo Tablespaces ............................................................................................................ 10-9
Establishing User Quotas for Undo Space...................................................................................... 10-9

Migrating to Automatic Undo Management.................................................................................. 10-10
Viewing Information About Undo .................................................................................................. 10-10

Part III  Automated File and Storage Management

11  Using Oracle-Managed Files
What Are Oracle-Managed Files?..................................................................................................... 11-1
Who Can Use Oracle-Managed Files?.............................................................................................. 11-2
Benefits of Using Oracle-Managed Files.......................................................................................... 11-3
Oracle-Managed Files and Existing Functionality ......................................................................... 11-3
Enabling the Creation and Use of Oracle-Managed Files ............................................................. 11-3
Setting the DB_CREATE_FILE_DEST Initialization Parameter................................................... 11-4
Setting the DB_RECOVERY_FILE_DEST Parameter..................................................................... 11-5
Setting the DB_CREATE_ONLINE_LOG_DEST_n Initialization Parameter ............................ 11-5
Creating Oracle-Managed Files ........................................................................................................ 11-5
How Oracle-Managed Files Are Named ......................................................................................... 11-6
Creating Oracle-Managed Files at Database Creation................................................................... 11-7
Creating Datafiles for Tablespaces Using Oracle-Managed Files ............................................. 11-12
Creating Tempfiles for Temporary Tablespaces Using Oracle-Managed Files ...................... 11-13
Creating Control Files Using Oracle-Managed Files ................................................................... 11-14
Creating Redo Log Files Using Oracle-Managed Files................................................................ 11-16
Creating Archived Logs Using Oracle-Managed Files ............................................................... 11-17
Behavior of Oracle-Managed Files.................................................................................................. 11-17
Dropping Datafiles and Tempfiles ................................................................................................. 11-18
Dropping Redo Log Files ................................................................................................................. 11-18
Renaming Files .................................................................................................................................. 11-18
Managing Standby Databases ......................................................................................................... 11-18
Scenarios for Using Oracle-Managed Files ................................................................................... 11-18
Scenario 1: Create and Manage a Database with Multiplexed Redo Logs ............................... 11-19
Scenario 2: Create and Manage a Database with Database and Flash Recovery Areas ......... 11-22
Scenario 3: Adding Oracle-Managed Files to an Existing Database.......................................... 11-23

12  Using Automatic Storage Management
What Is Automatic Storage Management? ...................................................................................... 12-1
Overview of the Components of Automatic Storage Management ............................................ 12-3
Administering an Automatic Storage Management Instance ...................................................... 12-4
Installing ASM ..................................................................................................................................... 12-5
Authentication for Accessing an ASM Instance ............................................................................. 12-6
Setting Initialization Parameters for an ASM Instance.................................................................. 12-7
Starting Up an ASM Instance .......................................................................................................... 12-10
Shutting Down an ASM Instance.................................................................................................... 12-13
Administering Automatic Storage Management Disk Groups ................................................. 12-14
Considerations and Guidelines for Configuring Disk Groups .................................................. 12-14
Creating a Disk Group...................................................................................................................... 12-19
Altering the Disk Membership of a Disk Group .......................................................................... 12-20
Mounting and Dismounting Disk Groups .................................................................................... 12-25

Checking Internal Consistency of Disk Group Metadata ........................................................... 12-26
Dropping Disk Groups...................................................................................................................... 12-26
Managing Disk Group Directories ................................................................................................. 12-26
Managing Alias Names for ASM Filenames ................................................................................. 12-28
Dropping Files and Associated Aliases from a Disk Group....................................................... 12-29
Managing Disk Group Templates .................................................................................................. 12-29
Using Automatic Storage Management in the Database............................................................. 12-32
What Types of Files Does ASM Support?...................................................................................... 12-33
About ASM Filenames...................................................................................................................... 12-34
Starting the ASM and Database Instances..................................................................................... 12-39
Creating and Referencing ASM Files in the Database ................................................................. 12-40
Creating a Database in ASM............................................................................................................ 12-42
Creating Tablespaces in ASM.......................................................................................................... 12-42
Creating Redo Logs in ASM ............................................................................................................ 12-43
Creating a Control File in ASM....................................................................................................... 12-44
Creating Archive Log Files in ASM................................................................................................ 12-45
Recovery Manager (RMAN) and ASM .......................................................................................... 12-45
Migrating a Database to Automatic Storage Management ......................................................... 12-46
Accessing Automatic Storage Management Files with the XML DB Virtual Folder ............. 12-46
Inside /sys/asm ................................................................................................................................ 12-47
Sample FTP Session .......................................................................................................................... 12-48
Viewing Information About Automatic Storage Management.................................................. 12-48

Part IV  Schema Objects

13  Managing Schema Objects
Creating Multiple Tables and Views in a Single Operation ......................................................... 13-1
Analyzing Tables, Indexes, and Clusters......................................................................................... 13-2
Using DBMS_STATS to Collect Table and Index Statistics........................................................... 13-2
Validating Tables, Indexes, Clusters, and Materialized Views .................................................... 13-3
Listing Chained Rows of Tables and Clusters ................................................................................ 13-4
Truncating Tables and Clusters ........................................................................................................ 13-5
Using DELETE..................................................................................................................................... 13-6
Using DROP and CREATE ................................................................................................................ 13-6
Using TRUNCATE.............................................................................................................................. 13-6
Enabling and Disabling Triggers ...................................................................................................... 13-7
Enabling Triggers ................................................................................................................................ 13-8
Disabling Triggers............................................................................................................................... 13-8
Managing Integrity Constraints ....................................................................................................... 13-9
Integrity Constraint States ................................................................................................................ 13-9
Setting Integrity Constraints Upon Definition ............................................................................. 13-11
Modifying, Renaming, or Dropping Existing Integrity Constraints ......................................... 13-12
Deferring Constraint Checks ........................................................................................................... 13-13
Reporting Constraint Exceptions ................................................................................................... 13-14
Viewing Constraint Information..................................................................................................... 13-15
Renaming Schema Objects............................................................................................................... 13-16

Managing Object Dependencies ......................................................................................................
About Dependencies Among Schema Objects..........................................................................
Manually Recompiling Views .....................................................................................................
Manually Recompiling Procedures and Functions ..................................................................
Manually Recompiling Packages................................................................................................
Managing Object Name Resolution ................................................................................................
Switching to a Different Schema .....................................................................................................
Displaying Information About Schema Objects ..........................................................................
Using a PL/SQL Package to Display Information About Schema Objects ..........................
Using Views to Display Information About Schema Objects.................................................

14

Managing Space for Schema Objects
Managing Tablespace Alerts ...............................................................................................................
Setting Alert Thresholds ................................................................................................................
Viewing Alerts .................................................................................................................................
Limitations .......................................................................................................................................
Managing Space in Data Blocks.........................................................................................................
Specifying the INITRANS Parameter...........................................................................................
Managing Storage Parameters ............................................................................................................
Identifying the Storage Parameters ..............................................................................................
Specifying Storage Parameters at Object Creation.....................................................................
Setting Storage Parameters for Clusters ......................................................................................
Setting Storage Parameters for Partitioned Tables.....................................................................
Setting Storage Parameters for Index Segments .........................................................................
Setting Storage Parameters for LOBs, Varrays, and Nested Tables ........................................
Changing Values of Storage Parameters .....................................................................................
Understanding Precedence in Storage Parameters ....................................................................
Managing Resumable Space Allocation ...........................................................................................
Resumable Space Allocation Overview .......................................................................................
Enabling and Disabling Resumable Space Allocation.............................................................
Using a LOGON Trigger to Set Default Resumable Mode .....................................................
Detecting Suspended Statements................................................................................................
Operation-Suspended Alert.........................................................................................................
Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger ............
Reclaiming Wasted Space ..................................................................................................................
Understanding Reclaimable Unused Space ..............................................................................
Using the Segment Advisor.........................................................................................................
Shrinking Database Segments Online ........................................................................................
Deallocating Unused Space .........................................................................................................
Understanding Space Usage of Datatypes .....................................................................................
Displaying Information About Space Usage for Schema Objects ............................................
Using PL/SQL Packages to Display Information About Schema Object Space Usage ......
Using Views to Display Information About Space Usage in Schema Objects .....................
Capacity Planning for Database Objects .......................................................................................
Estimating the Space Use of a Table ..........................................................................................
Estimating the Space Use of an Index .......................................................................................
Obtaining Object Growth Trends ..............................................................................................

15 Managing Tables
About Tables...........................................................................................................................................
Guidelines for Managing Tables........................................................................................................
Design Tables Before Creating Them...........................................................................................
Consider Your Options for the Type of Table to Create............................................................
Specify the Location of Each Table ...............................................................................................
Consider Parallelizing Table Creation .........................................................................................
Consider Using NOLOGGING When Creating Tables ............................................................
Consider Using Table Compression When Creating Tables .....................................................
Estimate Table Size and Plan Accordingly..................................................................................
Restrictions to Consider When Creating Tables .........................................................................
Creating Tables ......................................................................................................................................
Creating a Table...............................................................................................................................
Creating a Temporary Table..........................................................................................................
Parallelizing Table Creation ..........................................................................................................
Loading Tables .......................................................................................................................................
Inserting Data with DML Error Logging.....................................................................................
Inserting Data Into Tables Using Direct-Path INSERT............................................................
Automatically Collecting Statistics on Tables ...............................................................................
Altering Tables.....................................................................................................................................
Reasons for Using the ALTER TABLE Statement ....................................................................
Altering Physical Attributes of a Table......................................................................................
Moving a Table to a New Segment or Tablespace ...................................................................
Manually Allocating Storage for a Table ...................................................................................
Modifying an Existing Column Definition................................................................................
Adding Table Columns ................................................................................................................
Renaming Table Columns............................................................................................................
Dropping Table Columns ...........................................................................................................
Redefining Tables Online..................................................................................................................
Features of Online Table Redefinition .......................................................................................
Performing Online Redefinition with DBMS_REDEFINITION.............................................
Results of the Redefinition Process.............................................................................................
Performing Intermediate Synchronization................................................................................
Aborting Online Table Redefinition and Cleaning Up After Errors .....................................
Restrictions for Online Redefinition of Tables ..........................................................................
Online Redefinition of a Single Partition...................................................................................
Online Table Redefinition Examples..........................................................................................
Privileges Required for the DBMS_REDEFINITION Package ...............................................
Auditing Table Changes Using Flashback Transaction Query ..................................................
Recovering Tables Using the Flashback Table Feature................................................................
Dropping Tables ..................................................................................................................................
Using Flashback Drop and Managing the Recycle Bin ...............................................................
What Is the Recycle Bin? ..............................................................................................................
Enabling and Disabling the Recycle Bin ....................................................................................
Viewing and Querying Objects in the Recycle Bin ..................................................................
Purging Objects in the Recycle Bin .............................................................................................
Restoring Tables from the Recycle Bin.......................................................................................
Managing Index-Organized Tables ................................................................................................
What Are Index-Organized Tables? ...........................................................................................
Creating Index-Organized Tables...............................................................................................
Maintaining Index-Organized Tables ........................................................................................
Creating Secondary Indexes on Index-Organized Tables.......................................................
Analyzing Index-Organized Tables ...........................................................................................
Using the ORDER BY Clause with Index-Organized Tables..................................................
Converting Index-Organized Tables to Regular Tables ..........................................................
Managing External Tables .................................................................................................................
Creating External Tables ..............................................................................................................
Altering External Tables...............................................................................................................
Dropping External Tables ............................................................................................................
System and Object Privileges for External Tables ....................................................................
Viewing Information About Tables ................................................................................................

16 Managing Indexes
About Indexes ........................................................................................................................................
Guidelines for Managing Indexes .....................................................................................................
Create Indexes After Inserting Table Data ..................................................................................
Index the Correct Tables and Columns .......................................................................................
Order Index Columns for Performance .......................................................................................
Limit the Number of Indexes for Each Table ..............................................................................
Drop Indexes That Are No Longer Required ............................................................................
Estimate Index Size and Set Storage Parameters........................................................................
Specify the Tablespace for Each Index .........................................................................................
Consider Parallelizing Index Creation.........................................................................................
Consider Creating Indexes with NOLOGGING ........................................................................
Consider Costs and Benefits of Coalescing or Rebuilding Indexes .........................................
Consider Cost Before Disabling or Dropping Constraints........................................................
Creating Indexes....................................................................................................................................
Creating an Index Explicitly ..........................................................................................................
Creating a Unique Index Explicitly ..............................................................................................
Creating an Index Associated with a Constraint........................................................................
Collecting Incidental Statistics When Creating an Index...........................................................
Creating a Large Index ...................................................................................................................
Creating an Index Online...............................................................................................................
Creating a Function-Based Index................................................................................................
Creating a Key-Compressed Index.............................................................................................
Altering Indexes ..................................................................................................................................
Altering Storage Characteristics of an Index.............................................................................
Rebuilding an Existing Index ......................................................................................................
Monitoring Index Usage ..............................................................................................................
Monitoring Space Use of Indexes ....................................................................................................
Dropping Indexes................................................................................................................................
Viewing Index Information...............................................................................................................

17 Managing Partitioned Tables and Indexes
About Partitioned Tables and Indexes ..............................................................................................
Partitioning Methods............................................................................................................................
When to Use Range Partitioning...................................................................................................
When to Use Hash Partitioning ....................................................................................................
When to Use List Partitioning ......................................................................................................
When to Use Composite Range-Hash Partitioning ...................................................................
When to Use Composite Range-List Partitioning .....................................................................
Creating Partitioned Tables.................................................................................................................
Creating Range-Partitioned Tables and Global Indexes ...........................................................
Creating Hash-Partitioned Tables and Global Indexes .............................................................
Creating List-Partitioned Tables .................................................................................................
Creating Composite Range-Hash Partitioned Tables ..............................................................
Creating Composite Range-List Partitioned Tables.................................................................
Using Subpartition Templates to Describe Composite Partitioned Tables ..........................
Using Multicolumn Partitioning Keys .......................................................................................
Using Table Compression with Partitioned Tables..................................................................
Using Key Compression with Partitioned Indexes ..................................................................
Creating Partitioned Index-Organized Tables..........................................................................
Partitioning Restrictions for Multiple Block Sizes....................................................................
Maintaining Partitioned Tables........................................................................................................
Updating Indexes Automatically................................................................................................
Adding Partitions..........................................................................................................................
Coalescing Partitions ....................................................................................................................
Dropping Partitions ......................................................................................................................
Exchanging Partitions...................................................................................................................
Merging Partitions ........................................................................................................................
Modifying Default Attributes......................................................................................................
Modifying Real Attributes of Partitions ....................................................................................
Modifying List Partitions: Adding Values ................................................................................
Modifying List Partitions: Dropping Values.............................................................................
Modifying a Subpartition Template ...........................................................................................
Moving Partitions..........................................................................................................................
Redefining Partitions Online .......................................................................................................
Rebuilding Index Partitions.........................................................................................................
Renaming Partitions .....................................................................................................................
Splitting Partitions ........................................................................................................................
Truncating Partitions ....................................................................................................................
Dropping Partitioned Tables.............................................................................................................
Partitioned Tables and Indexes Example........................................................................................
Viewing Information About Partitioned Tables and Indexes ....................................................

18 Managing Clusters
About Clusters .......................................................................................................................................
Guidelines for Managing Clusters ....................................................................................................
Choose Appropriate Tables for the Cluster ................................................................................
Choose Appropriate Columns for the Cluster Key....................................................................
Specify the Space Required by an Average Cluster Key and Its Associated Rows ..............
Specify the Location of Each Cluster and Cluster Index Rows ................................................
Estimate Cluster Size and Set Storage Parameters.....................................................................
Creating Clusters ...................................................................................................................................
Creating Clustered Tables..............................................................................................................
Creating Cluster Indexes................................................................................................................
Altering Clusters ...................................................................................................................................
Altering Clustered Tables ..............................................................................................................
Altering Cluster Indexes ................................................................................................................
Dropping Clusters.................................................................................................................................
Dropping Clustered Tables............................................................................................................
Dropping Cluster Indexes..............................................................................................................
Viewing Information About Clusters ...............................................................................................

19 Managing Hash Clusters
About Hash Clusters.............................................................................................................................
When to Use Hash Clusters.................................................................................................................
Situations Where Hashing Is Useful.............................................................................................
Situations Where Hashing Is Not Advantageous ......................................................................
Creating Hash Clusters ........................................................................................................................
Creating a Sorted Hash Cluster.....................................................................................................
Creating Single-Table Hash Clusters ...........................................................................................
Controlling Space Use Within a Hash Cluster............................................................................
Estimating Size Required by Hash Clusters................................................................................
Altering Hash Clusters .........................................................................................................................
Dropping Hash Clusters ......................................................................................................................
Viewing Information About Hash Clusters.....................................................................................

20 Managing Views, Sequences, and Synonyms
Managing Views....................................................................................................................................
About Views ....................................................................................................................................
Creating Views ................................................................................................................................
Replacing Views ..............................................................................................................................
Using Views in Queries .................................................................................................................
Updating a Join View .....................................................................................................................
Altering Views ...............................................................................................................................
Dropping Views ............................................................................................................................
Managing Sequences..........................................................................................................................
About Sequences ...........................................................................................................................
Creating Sequences .......................................................................................................................
Altering Sequences........................................................................................................................
Using Sequences............................................................................................................................
Dropping Sequences .....................................................................................................................
Managing Synonyms..........................................................................................................................
About Synonyms...........................................................................................................................
Creating Synonyms.......................................................................................................................
Using Synonyms in DML Statements ........................................................................................
Dropping Synonyms.....................................................................................................................
Viewing Information About Views, Synonyms, and Sequences...............................................

21 Using DBMS_REPAIR to Repair Data Block Corruption
Options for Repairing Data Block Corruption................................................................................
About the DBMS_REPAIR Package ..................................................................................................
DBMS_REPAIR Procedures...........................................................................................................
Limitations and Restrictions..........................................................................................................
Using the DBMS_REPAIR Package...................................................................................................
Task 1: Detect and Report Corruptions .......................................................................................
Task 2: Evaluate the Costs and Benefits of Using DBMS_REPAIR..........................................
Task 3: Make Objects Usable .........................................................................................................
Task 4: Repair Corruptions and Rebuild Lost Data ...................................................................
DBMS_REPAIR Examples...................................................................................................................
Examples: Building a Repair Table or Orphan Key Table ........................................................
Example: Detecting Corruption ....................................................................................................
Example: Fixing Corrupt Blocks ...................................................................................................
Example: Finding Index Entries Pointing to Corrupt Data Blocks ..........................................
Example: Skipping Corrupt Blocks ..............................................................................................

Part V Database Security

22 Managing Users and Securing the Database
The Importance of Establishing a Security Policy for Your Database ........................................
Managing Users and Resources..........................................................................................................
Managing User Privileges and Roles ................................................................................................
Auditing Database Use ........................................................................................................................

Part VI Database Resource Management and Task Scheduling

23 Managing Automatic System Tasks Using the Maintenance Window
Maintenance Windows ........................................................................................................................
Predefined Automatic System Tasks .................................................................................................
Automatic Statistics Collection Job...............................................................................................
Automatic Segment Advisor Job ..................................................................................................
Resource Management for Automatic System Tasks .....................................................................

24 Using the Database Resource Manager
What Is the Database Resource Manager? .......................................................................................
What Problems Does the Database Resource Manager Address? ...........................................
How Does the Database Resource Manager Address These Problems?.................................
What Are the Elements of the Database Resource Manager?...................................................
Understanding Resource Plans .....................................................................................................
Administering the Database Resource Manager ...........................................................................
Creating a Simple Resource Plan ......................................................................................................
Creating Complex Resource Plans .....................................................................................................
Using the Pending Area for Creating Plan Schemas ...............................................................
Creating Resource Plans...............................................................................................................
Creating Resource Consumer Groups ......................................................................................
Specifying Resource Plan Directives .........................................................................................
Managing Resource Consumer Groups .........................................................................................
Assigning an Initial Resource Consumer Group......................................................................
Changing Resource Consumer Groups .....................................................................................
Managing the Switch Privilege ...................................................................................................
Automatically Assigning Resource Consumer Groups to Sessions ......................................
Enabling the Database Resource Manager.....................................................................................
Putting It All Together: Database Resource Manager Examples ..............................................
Multilevel Schema Example ........................................................................................................
Example of Using Several Resource Allocation Methods .......................................................
An Oracle-Supplied Plan .............................................................................................................
Monitoring and Tuning the Database Resource Manager ..........................................................
Creating the Environment ...........................................................................................................
Why Is This Necessary to Produce Expected Results? ...........................................................
Monitoring Results........................................................................................................................
Interaction with Operating-System Resource Control ................................................................
Guidelines for Using Operating-System Resource Control ....................................................
Dynamic Reconfiguration ............................................................................................................
Viewing Database Resource Manager Information .....................................................................
Viewing Consumer Groups Granted to Users or Roles ..........................................................
Viewing Plan Schema Information.............................................................................................
Viewing Current Consumer Groups for Sessions ....................................................................
Viewing the Currently Active Plans...........................................................................................

25 Moving from DBMS_JOB to DBMS_SCHEDULER
Moving from DBMS_JOB to DBMS_SCHEDULER ......................................................................
Creating a Job...................................................................................................................................
Altering a Job ...................................................................................................................................
Removing a Job from the Job Queue ............................................................................................

26 Scheduler Concepts
Overview of the Scheduler..................................................................................................................
What Can the Scheduler Do?.........................................................................................................
Basic Scheduler Concepts ....................................................................................................................
Programs ..........................................................................................................................................
Schedules ..........................................................................................................................................
Jobs ....................................................................................................................................................
Events................................................................................................................................................
Chains ...............................................................................................................................................
How Programs, Jobs, and Schedules are Related.......................................................................
Advanced Scheduler Concepts ...........................................................................................................
Job Classes ........................................................................................................................................
Windows...........................................................................................................................................
Window Groups..............................................................................................................................
Scheduler Architecture.......................................................................................................................
The Job Table..................................................................................................................................
The Job Coordinator .....................................................................................................................
How Jobs Execute..........................................................................................................................
Job Slaves........................................................................................................................................
Using the Scheduler in Real Application Clusters Environments .........................................

27 Using the Scheduler
Scheduler Objects and Their Naming...............................................................................................
Using Jobs ...............................................................................................................................................
Job Tasks and Their Procedures....................................................................................................
Creating Jobs ....................................................................................................................................
Copying Jobs....................................................................................................................................
Altering Jobs ....................................................................................................................................
Running Jobs....................................................................................................................................
Stopping Jobs ...................................................................................................................................
Dropping Jobs..................................................................................................................................
Disabling Jobs ..................................................................................................................................
Enabling Jobs ...................................................................................................................................
Using Programs......................................................................................................................................
Program Tasks and Their Procedures ..........................................................................................
Creating Programs ..........................................................................................................................
Altering Programs.........................................................................................................................
Dropping Programs ......................................................................................................................
Disabling Programs ......................................................................................................................
Enabling Programs........................................................................................................................
Using Schedules ..................................................................................................................................
Schedule Tasks and Their Procedures .......................................................................................
Creating Schedules........................................................................................................................
Altering Schedules ........................................................................................................................
Dropping Schedules......................................................................................................................
Setting the Repeat Interval...........................................................................................................
Using Job Classes ................................................................................................................................
Job Class Tasks and Their Procedures .......................................................................................
Creating Job Classes......................................................................................................................
Altering Job Classes ......................................................................................................................
Dropping Job Classes....................................................................................................................
Using Windows....................................................................................................................................
Window Tasks and Their Procedures ........................................................................................
Creating Windows ........................................................................................................................
Altering Windows.........................................................................................................................
Opening Windows ........................................................................................................................
Closing Windows ..........................................................................................................................
Dropping Windows ......................................................................................................................
Disabling Windows ......................................................................................................................
Enabling Windows........................................................................................................................
Overlapping Windows .................................................................................................................
Using Window Groups.......................................................................................................................
Window Group Tasks and Their Procedures ...........................................................................
Creating Window Groups............................................................................................................
Dropping Window Groups..........................................................................................................
Adding a Member to a Window Group ....................................................................................
Dropping a Member from a Window Group............................................................................
Enabling a Window Group..........................................................................................................
Disabling a Window Group.........................................................................................................
Using Events.........................................................................................................................................
Using Events Raised by the Scheduler.......................................................................................
Using Events Raised by an Application.....................................................................................
Using Chains ........................................................................................................................................
Chain Tasks and Their Procedures.............................................................................................
Creating Chains .............................................................................................................................
Defining Chain Steps ....................................................................................................................
Adding Rules to a Chain ..............................................................................................................
Enabling Chains ............................................................................................................................
Creating Jobs for Chains ..............................................................................................................
Dropping Chains ...........................................................................................................................
Running Chains.............................................................................................................................
Dropping Rules from a Chain .....................................................................................................
Disabling Chains ...........................................................................................................................
Dropping Chain Steps ..................................................................................................................
Altering Chain Steps.....................................................................................................................
Handling Stalled Chains ..............................................................................................................
Allocating Resources Among Jobs...................................................................................................
Allocating Resources Among Jobs Using Resource Manager ................................................
Example of Resource Allocation for Jobs...................................................................................

28 Administering the Scheduler
Configuring the Scheduler ..................................................................................................................
Monitoring and Managing the Scheduler ........................................................................................
How to View Scheduler Information ...........................................................................................
How to View the Currently Active Window and Resource Plan ............................................
How to View Scheduler Privileges ...............................................................................................
How to Find Information About Currently Running Jobs .......................................................
How the Job Coordinator Works ................................................................................................
How to Monitor and Manage Window and Job Logs .............................................................
How to Manage Scheduler Privileges ........................................................................................
How to Drop a Job ........................................................................................................................
How to Drop a Running Job ........................................................................................................
Why Does a Job Fail to Run? .......................................................................................................
Job Recovery After a Failure........................................................................................................
How to Change Job Priorities......................................................................................................
How to Monitor Running Chains ...............................................................................................
Why Does a Program Become Disabled? ..................................................................................
Why Does a Window Fail to Take Effect? .................................................................................
How the Scheduler Guarantees Availability.............................................................................
How to Handle Scheduler Security ............................................................................................
How to Manage the Scheduler in a RAC Environment ..........................................................
Import/Export and the Scheduler .....................................................................................................
Examples of Using the Scheduler ....................................................................................................
Examples of Creating Jobs ...........................................................................................................
Examples of Creating Job Classes...............................................................................................
Examples of Creating Programs .................................................................................................
Examples of Creating Windows .................................................................................................
Example of Creating Window Groups ......................................................................................
Examples of Setting Attributes....................................................................................................
Examples of Creating Chains ......................................................................................................
Examples of Creating Jobs and Schedules Based on Events...................................................

Part VII Distributed Database Management

29 Distributed Database Concepts
Distributed Database Architecture ....................................................................................................
Homogenous Distributed Database Systems..............................................................................
Heterogeneous Distributed Database Systems...........................................................................
Client/Server Database Architecture...........................................................................................
Database Links.......................................................................................................................................
What Are Database Links?.............................................................................................................
What Are Shared Database Links? ...............................................................................................
Why Use Database Links?..............................................................................................................
Global Database Names in Database Links.................................................................................
Names for Database Links .............................................................................................................
Types of Database Links ..............................................................................................................
Users of Database Links ...............................................................................................................
Creation of Database Links: Examples.......................................................................................
Schema Objects and Database Links ..........................................................................................
Database Link Restrictions...........................................................................................................
Distributed Database Administration ............................................................................................
Site Autonomy ...............................................................................................................................
Distributed Database Security.....................................................................................................
Auditing Database Links .............................................................................................................
Administration Tools....................................................................................................................
Transaction Processing in a Distributed System...........................................................................
Remote SQL Statements ...............................................................................................................
Distributed SQL Statements ........................................................................................................
Shared SQL for Remote and Distributed Statements ..............................................................
Remote Transactions.....................................................................................................................
Distributed Transactions..............................................................................................................
Two-Phase Commit Mechanism .................................................................................................
Database Link Name Resolution ................................................................................................
Schema Object Name Resolution ................................................................................................
Global Name Resolution in Views, Synonyms, and Procedures ..........................................
Distributed Database Application Development .........................................................................
Transparency in a Distributed Database System......................................................................
Remote Procedure Calls (RPCs)..................................................................................................
Distributed Query Optimization ................................................................................................
Character Set Support for Distributed Environments .................................................................
Client/Server Environment.........................................................................................................
Homogeneous Distributed Environment ..................................................................................
Heterogeneous Distributed Environment .................................................................................

30 Managing a Distributed Database
Managing Global Names in a Distributed System ........................................................................
Understanding How Global Database Names Are Formed .....................................................
Determining Whether Global Naming Is Enforced ...................................................................
Viewing a Global Database Name................................................................................................
Changing the Domain in a Global Database Name ...................................................................
Changing a Global Database Name: Scenario ............................................................................
Creating Database Links......................................................................................................................
Obtaining Privileges Necessary for Creating Database Links..................................................
Specifying Link Types ....................................................................................................................
Specifying Link Users.....................................................................................................................
Using Connection Qualifiers to Specify Service Names Within Link Names......................
Using Shared Database Links...........................................................................................................
Determining Whether to Use Shared Database Links .............................................................
Creating Shared Database Links.................................................................................................
Configuring Shared Database Links ..........................................................................................
Managing Database Links .................................................................................................................
Closing Database Links ................................................................................................................
Dropping Database Links ............................................................................................................
Limiting the Number of Active Database Link Connections .................................................
Viewing Information About Database Links ................................................................................
Determining Which Links Are in the Database........................................................................
Determining Which Link Connections Are Open ....................................................................
Creating Location Transparency.......................................................................................................
Using Views to Create Location Transparency ........................................................................
Using Synonyms to Create Location Transparency.................................................................
Using Procedures to Create Location Transparency .............................................................
Managing Statement Transparency .................................................................................................
Managing a Distributed Database: Examples ...............................................................................
Example 1: Creating a Public Fixed User Database Link ........................................................
Example 2: Creating a Public Fixed User Shared Database Link...........................................
Example 3: Creating a Public Connected User Database Link ...............................................
Example 4: Creating a Public Connected User Shared Database Link..................................
Example 5: Creating a Public Current User Database Link ....................................................


31 Developing Applications for a Distributed Database System
Managing the Distribution of Application Data ............................................................................
Controlling Connections Established by Database Links ............................................................
Maintaining Referential Integrity in a Distributed System .........................................................
Tuning Distributed Queries ................................................................................................................
Using Collocated Inline Views ......................................................................................................
Using Cost-Based Optimization....................................................................................................
Using Hints ......................................................................................................................................
Analyzing the Execution Plan .......................................................................................................
Handling Errors in Remote Procedures ............................................................................................

32 Distributed Transactions Concepts
What Are Distributed Transactions? .................................................................................................
DML and DDL Transactions .........................................................................................................
Transaction Control Statements....................................................................................................
Session Trees for Distributed Transactions......................................................................................
Clients ..............................................................................................................................................
Database Servers .............................................................................................................................
Local Coordinators .........................................................................................................................
Global Coordinator .........................................................................................................................
Commit Point Site ..........................................................................................................................
Two-Phase Commit Mechanism ........................................................................................................
Prepare Phase...................................................................................................................................
Commit Phase................................................................................................................................
Forget Phase...................................................................................................................................
In-Doubt Transactions........................................................................................................................
Automatic Resolution of In-Doubt Transactions......................................................................
Manual Resolution of In-Doubt Transactions...........................................................................
Relevance of System Change Numbers for In-Doubt Transactions ......................................
Distributed Transaction Processing: Case Study ..........................................................................
Stage 1: Client Application Issues DML Statements ................................................................
Stage 2: Oracle Database Determines Commit Point Site .......................................................
Stage 3: Global Coordinator Sends Prepare Response ............................................................
Stage 4: Commit Point Site Commits .........................................................................................
Stage 5: Commit Point Site Informs Global Coordinator of Commit ....................................
Stage 6: Global and Local Coordinators Tell All Nodes to Commit......................................
Stage 7: Global Coordinator and Commit Point Site Complete the Commit .......................

33 Managing Distributed Transactions
Specifying the Commit Point Strength of a Node ..........................................................................
Naming Transactions ............................................................................................................................
Viewing Information About Distributed Transactions .................................................................
Determining the ID Number and Status of Prepared Transactions ........................................
Tracing the Session Tree of In-Doubt Transactions ...................................................................
Deciding How to Handle In-Doubt Transactions ...........................................................................
Discovering Problems with a Two-Phase Commit ....................................................................


Determining Whether to Perform a Manual Override ..............................................................
Analyzing the Transaction Data ...................................................................................................
Manually Overriding In-Doubt Transactions..................................................................................
Manually Committing an In-Doubt Transaction........................................................................
Manually Rolling Back an In-Doubt Transaction .......................................................................
Purging Pending Rows from the Data Dictionary..........................................................................
Executing the PURGE_LOST_DB_ENTRY Procedure ............................................................
Determining When to Use DBMS_TRANSACTION ...............................................................
Manually Committing an In-Doubt Transaction: Example ........................................................
Step 1: Record User Feedback .....................................................................................................
Step 2: Query DBA_2PC_PENDING..........................................................................................
Step 3: Query DBA_2PC_NEIGHBORS on Local Node ..........................................................
Step 4: Querying Data Dictionary Views on All Nodes ..........................................................
Step 5: Commit the In-Doubt Transaction.................................................................................
Step 6: Check for Mixed Outcome Using DBA_2PC_PENDING...........................................
Data Access Failures Due to Locks ..................................................................................................
Transaction Timeouts ...................................................................................................................
Locks from In-Doubt Transactions .............................................................................................
Simulating Distributed Transaction Failure ..................................................................................
Forcing a Distributed Transaction to Fail ..................................................................................
Disabling and Enabling RECO....................................................................................................
Managing Read Consistency.............................................................................................................

Index


Send Us Your Comments
Oracle Database Administrator’s Guide 10g Release 2 (10.2)
B14231-02

Oracle welcomes your comments and suggestions on the quality and usefulness of this
publication. Your input is an important part of the information used for revision.
■ Did you find any errors?
■ Is the information clearly presented?
■ Do you need more information? If so, where?
■ Are the examples correct? Do you need more examples?
■ What features did you like most about this manual?

If you find any errors or have any other suggestions for improvement, please indicate
the title and part number of the documentation and the chapter, section, and page
number (if available). You can send comments to us in the following ways:
■ Electronic mail: infodev_us@oracle.com
■ FAX: (650) 506-7227. Attn: Server Technologies Documentation Manager
■ Postal service:
  Oracle Corporation
  Server Technologies Documentation Manager
  500 Oracle Parkway, Mailstop 4op11
  Redwood Shores, CA 94065
  USA

If you would like a reply, please give your name, address, telephone number, and
electronic mail address (optional).
If you have problems with the software, please contact your local Oracle Support
Services.


Preface
This document describes how to create and administer an Oracle Database.
This preface contains these topics:
■ Audience
■ Documentation Accessibility
■ Structure
■ Related Documents
■ Conventions

Audience
This document is intended for database administrators who perform the following
tasks:
■ Create an Oracle Database
■ Ensure the smooth operation of an Oracle Database
■ Monitor the operation of an Oracle Database

To use this document, you need to be familiar with relational database concepts. You
should also be familiar with the operating system environment under which you are
running the Oracle Database.

Documentation Accessibility
Our goal is to make Oracle products, services, and supporting documentation
accessible, with good usability, to the disabled community. To that end, our
documentation includes features that make information available to users of assistive
technology. This documentation is available in HTML format, and contains markup to
facilitate access by the disabled community. Accessibility standards will continue to
evolve over time, and Oracle is actively engaged with other market-leading
technology vendors to address technical obstacles so that our documentation can be
accessible to all of our customers. For more information, visit the Oracle Accessibility
Program Web site at
http://www.oracle.com/accessibility/

Accessibility of Code Examples in Documentation
Screen readers may not always correctly read the code examples in this document. The
conventions for writing code require that closing braces should appear on an
otherwise empty line; however, some screen readers may not always read a line of text
that consists solely of a bracket or brace.
Accessibility of Links to External Web Sites in Documentation
This documentation may contain links to Web sites of other companies or
organizations that Oracle does not own or control. Oracle neither evaluates nor makes
any representations regarding the accessibility of these Web sites.
TTY Access to Oracle Support Services
Oracle provides dedicated Text Telephone (TTY) access to Oracle Support Services
within the United States of America 24 hours a day, seven days a week. For TTY
support, call 800.446.2398.

Structure
This document contains:
Part I, "Basic Database Administration"
This part contains information about creating a database, starting and shutting down a
database, and managing Oracle processes.
Chapter 1, "Overview of Administering an Oracle Database"
This chapter serves as an introduction to typical tasks performed by database
administrators, such as installing software and planning a database.
Chapter 2, "Creating an Oracle Database"
This chapter describes how to create a database. Consult this chapter when you are
planning a database.
Chapter 3, "Starting Up and Shutting Down"
This chapter describes how to start a database, alter its availability, and shut it down. It
also describes the parameter files related to starting up and shutting down.
Chapter 4, "Managing Oracle Database Processes"
This chapter describes how to identify different Oracle Database processes, such as
dedicated server processes and shared server processes. Consult this chapter when
configuring, modifying, tracking and managing processes.
Part II, "Oracle Database Structure and Storage"
This part describes the structure and management of the Oracle Database and its
storage.
Chapter 5, "Managing Control Files"
This chapter describes how to manage control files, including the following tasks:
naming, creating, troubleshooting, and dropping control files.
Chapter 6, "Managing the Redo Log"
This chapter describes how to manage the online redo log, including the following
tasks: planning, creating, renaming, dropping, or clearing redo log files.


Chapter 7, "Managing Archived Redo Logs"
This chapter describes archiving.
Chapter 8, "Managing Tablespaces"
This chapter provides guidelines for managing tablespaces. It describes how to create,
manage, alter, and drop tablespaces and how to move data between tablespaces.
Chapter 9, "Managing Datafiles and Tempfiles"
This chapter provides guidelines for managing datafiles. It describes how to create,
change, alter, and rename datafiles and how to view information about datafiles.
Chapter 10, "Managing the Undo Tablespace"
This chapter describes how to manage undo space using an undo tablespace.
Part III, "Automated File and Storage Management"
This part describes how to use Oracle-Managed Files and Automatic Storage
Management.
Chapter 11, "Using Oracle-Managed Files"
This chapter describes how to use the Oracle Database server to create and manage
database files.
Chapter 12, "Using Automatic Storage Management"
This chapter describes how to use Automatic Storage Management.
Part IV, "Schema Objects"
This section describes how to manage schema objects, including the following: tables,
indexes, clusters, hash clusters, views, sequences, and synonyms.
Chapter 13, "Managing Schema Objects"
This chapter describes management of schema objects. It contains information about
analyzing objects, truncation of tables and clusters, database triggers, integrity
constraints, and object dependencies.
Chapter 14, "Managing Space for Schema Objects"
This chapter describes common tasks such as setting storage parameters, deallocating
space, and managing space.
Chapter 15, "Managing Tables"
This chapter contains table management guidelines, as well as information about
creating, altering, maintaining and dropping tables.
Chapter 16, "Managing Indexes"
This chapter contains guidelines about indexes, including creating, altering,
monitoring and dropping indexes.
Chapter 17, "Managing Partitioned Tables and Indexes"
This chapter describes partitioned tables and indexes and how to create and manage
them.


Chapter 18, "Managing Clusters"
This chapter contains guidelines for creating, altering, or dropping clusters.
Chapter 19, "Managing Hash Clusters"
This chapter contains guidelines for creating, altering, or dropping hash clusters.
Chapter 20, "Managing Views, Sequences, and Synonyms"
This chapter describes how to manage views, sequences and synonyms.
Chapter 21, "Using DBMS_REPAIR to Repair Data Block Corruption"
This chapter describes methods for detecting and repairing data block corruption.
Part V, "Database Security"
This part discusses the importance of establishing a security policy for your database
and users.
Chapter 22, "Managing Users and Securing the Database"
This chapter discusses the importance of establishing a security policy for your
database and users.
Part VI, "Database Resource Management and Task Scheduling"
This part describes database resource management and task scheduling.
Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window"
This chapter describes how to use automatic system tasks.
Chapter 24, "Using the Database Resource Manager"
This chapter describes how to use the Database Resource Manager to allocate
resources.
Chapter 25, "Moving from DBMS_JOB to DBMS_SCHEDULER"
This chapter describes how to take statements created with DBMS_JOB and rewrite
them using DBMS_SCHEDULER.
Chapter 26, "Scheduler Concepts"
Oracle Database provides advanced scheduling capabilities through the database
Scheduler. This chapter introduces you to its concepts.
Chapter 27, "Using the Scheduler"
This chapter describes how to use the Scheduler.
Chapter 28, "Administering the Scheduler"
This chapter describes the tasks a database administrator needs to perform so end
users can schedule jobs using the Scheduler.
Part VII, "Distributed Database Management"
This part describes distributed database management.


Chapter 29, "Distributed Database Concepts"
This chapter describes the basic concepts and terminology of Oracle's distributed
database architecture.
Chapter 30, "Managing a Distributed Database"
This chapter describes how to manage and maintain a distributed database system.
Chapter 31, "Developing Applications for a Distributed Database System"
This chapter describes the considerations for developing an application to run in a
distributed database system.
Chapter 32, "Distributed Transactions Concepts"
This chapter describes what distributed transactions are and how Oracle Database
maintains their integrity.
Chapter 33, "Managing Distributed Transactions"
This chapter describes how to manage and troubleshoot distributed transactions.

Related Documents
For more information, see these Oracle resources:
■ Oracle Database 2 Day DBA
■ Oracle Database Concepts
■ Oracle Database SQL Reference
■ Oracle Database Reference
■ Oracle Database PL/SQL Packages and Types Reference
■ Oracle Database Error Messages
■ Oracle Database Net Services Administrator's Guide
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide
■ Oracle Database Performance Tuning Guide
■ Oracle Database Application Developer's Guide - Fundamentals
■ Oracle Database PL/SQL User's Guide and Reference
■ SQL*Plus User's Guide and Reference

Many of the examples in this book use the sample schemas, which are installed by
default when you select the Basic Installation option with an Oracle Database
installation. Refer to Oracle Database Sample Schemas for information on how these
schemas were created and how you can use them yourself.
Printed documentation is available for sale in the Oracle Store at
http://oraclestore.oracle.com/
To download free release notes, installation documentation, white papers, or other
collateral, please visit the Oracle Technology Network (OTN). You must register online
before using OTN; registration is free and can be done at
http://www.oracle.com/technology/membership/


If you already have a username and password for OTN, then you can go directly to the
documentation section of the OTN Web site at
http://www.oracle.com/technology/documentation/

Conventions
This section describes the conventions used in the text and code examples of this
documentation set. It describes:
■ Conventions in Text
■ Conventions in Code Examples
■ Conventions for Windows Operating Systems

Conventions in Text
We use various conventions in text to help you more quickly identify special terms.
The following table describes those conventions and provides examples of their use.
Convention: Bold
Meaning: Bold typeface indicates terms that are defined in the text or terms that appear in a glossary, or both.
Example: When you specify this clause, you create an index-organized table.

Convention: Italics
Meaning: Italic typeface indicates book titles or emphasis.
Examples: Oracle Database Concepts
Ensure that the recovery catalog and target database do not reside on the same disk.

Convention: UPPERCASE monospace (fixed-width) font
Meaning: Uppercase monospace typeface indicates elements supplied by the system. Such elements include parameters, privileges, datatypes, Recovery Manager keywords, SQL keywords, SQL*Plus or utility commands, packages and methods, as well as system-supplied column names, database objects and structures, usernames, and roles.
Examples: You can specify this clause only for a NUMBER column.
You can back up the database by using the BACKUP command.
Query the TABLE_NAME column in the USER_TABLES data dictionary view.
Use the DBMS_STATS.GENERATE_STATS procedure.

Convention: lowercase monospace (fixed-width) font
Meaning: Lowercase monospace typeface indicates executable programs, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names and connect identifiers, user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values.
Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples: Enter sqlplus to start SQL*Plus.
The password is specified in the orapwd file.
Back up the datafiles and control files in the /disk1/oracle/dbs directory.
The department_id, department_name, and location_id columns are in the hr.departments table.
Set the QUERY_REWRITE_ENABLED initialization parameter to true.
Connect as oe user.
The JRepUtil class implements these methods.

Convention: lowercase italic monospace (fixed-width) font
Meaning: Lowercase italic monospace font represents placeholders or variables.
Examples: You can specify the parallel_clause.
Run old_release.SQL where old_release refers to the release you installed prior to upgrading.

Conventions in Code Examples
Code examples illustrate SQL, PL/SQL, SQL*Plus, or other command-line statements.
They are displayed in a monospace (fixed-width) font and separated from normal text
as shown in this example:
SELECT username FROM dba_users WHERE username = 'MIGRATE';

The following table describes typographic conventions used in code examples and
provides examples of their use.
Convention: [ ]
Meaning: Brackets enclose one or more optional items. Do not enter the brackets.
Example: DECIMAL (digits [ , precision ])

Convention: { }
Meaning: Braces enclose two or more items, one of which is required. Do not enter the braces.
Example: {ENABLE | DISABLE}

Convention: |
Meaning: A vertical bar represents a choice of two or more options within brackets or braces. Enter one of the options. Do not enter the vertical bar.
Examples: {ENABLE | DISABLE}
[COMPRESS | NOCOMPRESS]

Convention: ...
Meaning: Horizontal ellipsis points indicate either:
■ That we have omitted parts of the code that are not directly related to the example
■ That you can repeat a portion of the code
Examples: CREATE TABLE ... AS subquery;
SELECT col1, col2, ... , coln FROM employees;

Convention: . . . (vertical)
Meaning: Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example.
Example:
SQL> SELECT NAME FROM V$DATAFILE;
NAME
------------------------------------
/fs1/dbs/tbs_01.dbf
/fs1/dbs/tbs_02.dbf
.
.
.
/fs1/dbs/tbs_09.dbf
9 rows selected.

Convention: Other notation
Meaning: You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown.
Examples: acctbal NUMBER(11,2);
acct CONSTANT NUMBER(4) := 3;

Convention: Italics
Meaning: Italicized text indicates placeholders or variables for which you must supply particular values.
Examples: CONNECT SYSTEM/system_password
DB_NAME = database_name

Convention: UPPERCASE
Meaning: Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase in order to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase.
Examples: SELECT last_name, employee_id FROM employees;
SELECT * FROM USER_TABLES;
DROP TABLE hr.employees;

Convention: lowercase
Meaning: Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files.
Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples: SELECT last_name, employee_id FROM employees;
sqlplus hr/hr
CREATE USER mjones IDENTIFIED BY ty3MU9;

Conventions for Windows Operating Systems
The following table describes conventions for Windows operating systems and
provides examples of their use.
Convention: Choose Start > menu item
Meaning: How to start a program. The '>' character indicates a hierarchical menu (submenu).
Example: To start the Database Configuration Assistant, choose Start > Programs > Oracle - HOME_NAME > Configuration and Migration Tools > Database Configuration Assistant.

Convention: File and directory names
Meaning: File and directory names are not case sensitive. The following special characters are not allowed: left angle bracket (<), right angle bracket (>), colon (:), double quotation marks ("), slash (/), pipe (|), and dash (-). The special character backslash (\) is treated as an element separator, even when it appears in quotes. If the filename begins with \\, then Windows assumes it uses the Universal Naming Convention.
Example: c:\winnt"\"system32 is the same as C:\WINNT\SYSTEM32

Convention: C:\>
Meaning: Represents the Windows command prompt of the current hard disk drive. The escape character in a command prompt is the caret (^). Your prompt reflects the subdirectory in which you are working. Referred to as the command prompt in this manual.
Example: C:\oracle\oradata>

Convention: Special characters
Meaning: The backslash (\) special character is sometimes required as an escape character for the double quotation mark (") special character at the Windows command prompt. Parentheses and the single quotation mark (') do not require an escape character. Refer to your Windows operating system documentation for more information on escape and special characters.
Example: C:\> exp HR/HR TABLES=emp QUERY=\"WHERE job='REP'\"

Convention: HOME_NAME
Meaning: Represents the Oracle home name. The home name can be up to 16 alphanumeric characters. The only special character allowed in the home name is the underscore.
Example: C:\> net start OracleHOME_NAMETNSListener

Convention: ORACLE_HOME and ORACLE_BASE
Meaning: In releases prior to Oracle8i release 8.1.3, when you installed Oracle components, all subdirectories were located under a top level ORACLE_HOME directory. The default for Windows NT was C:\orant.
This release complies with Optimal Flexible Architecture (OFA) guidelines. All subdirectories are not under a top level ORACLE_HOME directory. There is a top level directory called ORACLE_BASE that by default is C:\oracle\product\10.1.0. If you install the latest Oracle release on a computer with no other Oracle software installed, then the default setting for the first Oracle home directory is C:\oracle\product\10.1.0\db_n, where n is the latest Oracle home number. The Oracle home directory is located directly under ORACLE_BASE.
All directory path examples in this guide follow OFA conventions.
Refer to Oracle Database Installation Guide for Microsoft Windows (32-Bit) for additional information about OFA compliances and for information about installing Oracle products in non-OFA compliant directories.
Example: Go to the ORACLE_BASE\ORACLE_HOME\rdbms\admin directory.


What's New in Oracle Database Administrator's Guide?
This section describes new features of the Oracle Database 10g Release 2 (10.2) and
provides pointers to additional information. New features information from previous
releases is also retained to help those users migrating to the current release.
The following sections describe the new features in Oracle Database:
■ Oracle Database 10g Release 2 (10.2) New Features in the Administrator's Guide
■ Oracle Database 10g Release 1 (10.1) New Features in the Administrator's Guide

Oracle Database 10g Release 2 (10.2) New Features in the Administrator's Guide
■ Transparent data encryption
With the CREATE TABLE or ALTER TABLE commands, you can specify table columns for which data is encrypted before being stored in the datafile. If users attempt to circumvent the database access control mechanisms by looking inside datafiles directly with operating system tools, encryption prevents such users from viewing sensitive data.

See Also: Oracle Database Security Guide for information on Transparent Data Encryption.
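
For example, a minimal sketch of the column syntax (the table hr.employee_pay is a hypothetical name, and an encryption wallet and master key are assumed to be already configured as described in Oracle Database Security Guide):

CREATE TABLE hr.employee_pay (
   employee_id NUMBER(6),
   salary      NUMBER(8,2) ENCRYPT  -- data in this column is encrypted in the datafile
);

-- Encrypt a column in an existing table
ALTER TABLE hr.employee_pay MODIFY (salary ENCRYPT);
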
■ Increased maximum number of partitions per schema object
The maximum number of partitions and subpartitions that can be defined for a
table or index has been increased to 1024K - 1.

■ DML error logging
A new error logging clause for all DML statements enables certain types of errors (for example, constraint violations or data conversion errors) to be logged to an error logging table, allowing the statement to continue instead of terminating and rolling back.

See Also: "Inserting Data with DML Error Logging" on page 15-9

■ Enhancements to Automatic Shared Memory Management
The Streams Pool is now also automatically tuned when automatic shared
memory management is enabled. A new view, V$SGA_TARGET_ADVICE, provides
information to help with tuning SGA_TARGET.

See Also: "Using Automatic Shared Memory Management" on page 2-25
■ Improved automatic tuning of undo retention results in fewer "ORA-01555: snapshot too old" messages.
Automatic tuning of undo retention now always tunes for the maximum possible retention for the undo tablespace based on tablespace size and current system activity. This improves the success rate of Oracle Flashback operations and long-running queries.

Note: This new tuning method for maximum possible retention applies only to fixed size undo tablespaces. For AUTOEXTEND tablespaces, the behavior is unchanged.

See Also: "Undo Retention" on page 10-3

■ The Segment Advisor now reports tables with excessive row chaining.

See Also: "Using the Segment Advisor" on page 14-16
■ The Segment Advisor now runs automatically during the maintenance window.
Upon installation, a Scheduler job is automatically created to run the Segment
Advisor during the maintenance window. You can also continue to run the
Segment Advisor manually. The automatically run Segment Advisor (the
"Automatic Segment Advisor") examines statistics on the entire database and
selects segments and tablespaces to analyze. You can view recommendations with
Enterprise Manager, with the DBA_ADVISOR_* views, or with the new
DBMS_SPACE.ASA_RECOMMENDATIONS procedure.
See Also: "Using the Segment Advisor" on page 14-16

■ Enhancements to the online segment shrink capability
Online segment shrink now supports:
– LOB segments
– IOT overflow segments (both standalone and as a dependent object of an IOT index segment)

See Also: "Shrinking Database Segments Online" on page 14-29

■ Enhancements to online table redefinition
Online redefinition of tables now supports:

– Redefining a single partition of a partitioned table
– Clustered tables, Advanced Queuing queue tables, and materialized view logs
– Object data types (objects, VARRAYs, and nested tables, including nested table dependent objects)
– Check and NOT NULL constraints
– Preservation of table statistics
– Parallel execution for Long-to-LOB migration

In addition, dependent PL/SQL package recompilation is no longer required when the redefined table has the same number, types, and order of columns as the original table.

See Also: "Redefining Tables Online" on page 15-21

■ Support for XMLTypes in the transportable tablespace facility

See Also: "Transporting Tablespaces Between Databases" on page 8-25

■ Enhancements to space management
The DBMS_SPACE_ADMIN package provides new tools to troubleshoot space management problems.

See Also: "Diagnosing and Repairing Locally Managed Tablespace Problems" on page 8-22

■ Control files no longer need to be recreated when changing certain configuration parameters
Control files can now dynamically grow in size when you increase the values of the following parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES.

See Also: "When to Create New Control Files" on page 5-4

■ Tablespace low-space alert thresholds by free space remaining
Low-space alert thresholds for locally managed tablespaces can now be by percent
full or by free space remaining (in KB). Free-space-remaining thresholds are more
useful for very large tablespaces.
See Also: "Managing Tablespace Alerts" on page 14-1

■ Fast partition split is now supported for partitioned index-organized tables.
See Also: "Optimizing SPLIT PARTITION and SPLIT SUBPARTITION Operations" on page 17-49

■ Automatically enabled resource manager
The Resource Manager no longer needs to be enabled at startup to quiesce the database or to activate a resource plan when a Scheduler window opens.

See Also: "Quiescing a Database" on page 3-11 and "Using Windows" on page 27-19

■ Enhanced Resource Manager monitoring
The Resource Manager now provides more extensive statistics on sessions, plans, and consumer groups, enabling you to better monitor and tune your Resource Manager settings.

See Also: "Monitoring Results" on page 24-29

■ Automatic Storage Management (ASM) files are now accessible through an XML DB virtual folder
ASM files can now be accessed through the XML DB repository, either
programmatically or with protocols like FTP and HTTP/WebDAV.
See Also: "Accessing Automatic Storage Management Files with the
XML DB Virtual Folder" on page 12-46 and Oracle XML DB Developer's
Guide

■ Automatic Storage Management now has a command-line utility (ASMCMD)
With ASMCMD you can easily view and manipulate files and directories within disk groups. ASMCMD can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more.

See Also: Oracle Database Utilities
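
For example, a brief interactive sketch (the disk group name dgroup1 and the database name orcl are assumptions):

$ asmcmd
ASMCMD> ls +dgroup1/orcl/DATAFILE
ASMCMD> du +dgroup1/orcl
ASMCMD> exit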

■ Automatic Storage Management now supports high-redundancy (3-way mirrored) files in normal redundancy disk groups
Individual files in a normal redundancy disk group can be specified as high
redundancy (with 3-way mirroring) by creating the file with a template that has
the redundancy attribute set to HIGH. This applies when explicitly naming a
template when creating a file, or when modifying a system default template for
the disk group. In addition, the system default template CONTROLFILE in normal
redundancy disk groups now has the redundancy attribute set to HIGH.
See Also: Chapter 12, "Using Automatic Storage Management"

■ Automatic Storage Management (ASM) supports multiple database versions
ASM maintains forward and backward compatibility between most 10.x versions
of Oracle Database and 10.x versions of ASM. That is, any combination of versions
10.1.x.y and 10.2.x.y for either the ASM instance or the database instance works
correctly, with this caveat: For a 10.1.x.y database instance to connect to a 10.2.x.y
ASM instance, the database must be version 10.1.0.3 or later.
"Using Automatic Storage Management in the Database"
on page 12-32

See Also:

■ The DBMS_FILE_TRANSFER package can now copy files between a local file system and an Automatic Storage Management (ASM) disk group.
The DBMS_FILE_TRANSFER package can use a local file system or an ASM disk group as the source or destination for a file transfer. You can now copy files from ASM to the file system or from the file system to ASM.

See Also: "Copying Files Using the Database Server" on page 9-12

■ The ALTER DISKGROUP command has a new REBALANCE WAIT clause.
ALTER DISKGROUP commands that cause a rebalance of an ASM disk
group—commands that add, drop, or resize disks, or the command that starts a
manual rebalance operation—can now wait until the rebalance operation
completes before returning. This is especially useful in scripts.

See Also: "Altering the Disk Membership of a Disk Group" on page 12-20
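
For example, a sketch (the disk group name and disk path are assumptions):

-- The statement does not return until the rebalance operation completes
ALTER DISKGROUP dgroup1
   ADD DISK '/devices/diska5'
   REBALANCE POWER 5 WAIT;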

■ Enhancements to the Scheduler
The Scheduler supports a new type of object called a chain, which is a grouping of
programs that are linked together for a single, combined objective. Scheduler jobs
can now be started when a specified event occurs, and the Scheduler can raise
events when a job state changes (for example, from running to complete).
See Also: "Using Events" on page 27-30 and "Using Chains" on page 27-37

The Scheduler also has an expanded calendaring syntax that enables you to define
more complex schedules, such as "the last work day of each fiscal quarter."
Existing schedules can be combined to create composite schedules.
See Also: "Using Schedules" on page 27-12

■ Resource optimized DROP TABLE...PURGE for partitioned tables
To avoid running into resource constraints, the DROP TABLE...PURGE command
for a partitioned table drops the table in multiple transactions, where each
transaction drops a subset of the partitions or subpartitions and then commits. If
the DROP TABLE...PURGE command fails, you can take corrective action, if any,
and then restart the command.
See Also: "Dropping Partitioned Tables" on page 17-52

Oracle Database 10g Release 1 (10.1) New Features in the Administrator's Guide
■ Server manageability features are introduced in "Server Manageability" on page 1-20.
■ Automatic system task maintenance is discussed in Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window".
■ Bigfile tablespaces
Oracle Database lets you create single-file tablespaces, called bigfile tablespaces,
which can contain up to 2^32 (4G) blocks. The benefits of bigfile tablespaces are the
following:
– They significantly enhance the storage capacity of an Oracle Database.
– They reduce the number of datafiles needed for an ultra large database.
– They simplify database management by providing datafile transparency.

See"Supporting Bigfile Tablespaces During Database Creation" on page 2-16 and
"Bigfile Tablespaces" on page 8-6.
■ Multiple default temporary tablespace support for SQL operations
You can create a temporary tablespace group that can be specifically assigned to
users in the same way that a single temporary tablespace is assigned. A tablespace
group can also be specified as the default temporary tablespace for the database.


See "Multiple Temporary Tablespaces: Using Tablespace Groups" on page 8-11.
■ Rename tablespace
The RENAME TO clause of the ALTER TABLESPACE statement enables you to
rename tablespaces.
See "Renaming Tablespaces" on page 8-19.

■ Cross-platform transportable tablespaces
Tablespaces can be transported from one platform to another. The RMAN
CONVERT command is used to do the conversion.
See "Transporting Tablespaces Between Databases: A Procedure and Example" on
page 8-29.

■ SYSAUX tablespace
Oracle Database creates an auxiliary system tablespace called SYSAUX at database
creation. This tablespace can be used by various Oracle Database features and
products, rather than saving their data in separate tablespaces or in the SYSTEM
tablespace.
See "Creating the SYSAUX Tablespace" on page 2-12 and "Managing the SYSAUX
Tablespace" on page 8-20.

■ Automatic Storage Management
Automatic Storage Management provides a logical volume manager integrated
with Oracle Database, eliminating the need for you to purchase a third-party
product. Oracle Database creates Oracle-managed files within user-defined disk
groups that provide redundancy and striping.
See Chapter 12, "Using Automatic Storage Management".

■ Drop database
The new DROP DATABASE statement lets you delete a database and all of its files
that are listed in the control file.
See "Dropping a Database" on page 2-35.

■ Oracle Flashback Transaction Query
This feature introduces the FLASHBACK_TRANSACTION_QUERY view, which lets
you examine changes to the database at the transaction level. As a result, you can
diagnose problems, perform analysis, and audit transactions.
See "Auditing Table Changes Using Flashback Transaction Query" on page 15-36.

■ Oracle Flashback Version Query
Using undo data stored in the database, you can now view multiple changes to
one or more rows, along with the metadata for the changes.
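
For example, a sketch using the hr.employees sample table (assuming sufficient undo data is retained for the queried period):

SELECT versions_starttime, versions_xid, salary
   FROM hr.employees
   VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR
                AND SYSTIMESTAMP
   WHERE employee_id = 100;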

■ Oracle Flashback Table
A new FLASHBACK TABLE statement lets you quickly recover a table to a point in
time in the past without restoring a backup.
See "Recovering Tables Using the Flashback Table Feature" on page 15-36.

■ Oracle Flashback Drop
Oracle Database now provides a way to restore accidentally dropped tables. When
tables are dropped, they are placed into a recycle bin from which they can later be
recovered.


See "Using Flashback Drop and Managing the Recycle Bin" on page 15-38.
■ Enhanced online redefinition
New procedures have been added to the DBMS_REDEFINITION package that
automate the cloning of dependent objects such as indexes, triggers, privileges,
and constraints. Some restrictions have been lifted, allowing more types of tables
to be redefined.
See "Redefining Tables Online" on page 15-21.

■ Automatic statistics collection
You no longer need to specify the MONITORING keyword in the CREATE TABLE or
ALTER TABLE statement to enable the automatic collecting of statistics for a table.
Statistics are now collected automatically as controlled by the
STATISTICS_LEVEL initialization parameter. Automatic statistics collection is the
default.
See "Automatically Collecting Statistics on Tables" on page 15-16.

■ Scheduler
Oracle Database provides advanced scheduling capabilities through the database
Scheduler.
See Part VI, "Database Resource Management and Task Scheduling".

■ Database Resource Manager enhancement
The following are enhancements to the Database Resource Manager:
– Adaptive consumer group mapping
You can configure the Database Resource Manager to automatically assign consumer groups to sessions by providing mappings between session attributes and consumer groups.
See "Automatically Assigning Resource Consumer Groups to Sessions" on page 24-21.
– New plan directives
New resource plan directives let you set idle timeouts, cancel long-running SQL statements, terminate long-running sessions, and restore sessions to their original consumer group at the end of a top call.
See "Specifying Resource Plan Directives" on page 24-14.
– New policies
Two new resource manager policies have also been added: the RATIO CPU allocation policy and the RUN_TO_COMPLETION scheduling policy.

■ New initialization parameter RESUMABLE_TIMEOUT
The RESUMABLE_TIMEOUT initialization parameter lets you enable resumable
space allocation and set a timeout period across all sessions.
See "Enabling and Disabling Resumable Space Allocation" on page 14-10.

■ Application services
Tuning by "service and SQL" augments tuning by "session and SQL" in the
majority of systems where all sessions are anonymous and shared.
See "Defining Application Services for Oracle Database 10g" and Oracle Database
Performance Tuning Guide for more information.

■ Simplified recovery through resetlogs
The format for archive log file naming, as specified by the LOG_ARCHIVE_FORMAT initialization parameter, now includes the resetlogs ID, which allows for easier recovery of a database from a previous backup.
See "Specifying Archive Destinations" on page 7-6.

■ Automated shared server configuration and simplified shared server configuration parameters
You no longer need to preconfigure initialization parameters for shared server. Parameters can be configured dynamically, and most parameters now act as limits that control resource usage. The recommended method for enabling shared server is now to set the SHARED_SERVERS initialization parameter, rather than the DISPATCHERS initialization parameter.
See "Configuring Oracle Database for Shared Server" on page 4-4.

■ Consolidation of session-specific trace output
For shared server sessions, the trcsess command-line utility consolidates in one
place the trace pertaining to a user session.
See "Reading the Trace File for Shared Server Sessions" on page 4-23.

■ Block remote access to restricted instances
Remote access to a restricted instance through an Oracle Net listener is blocked.
See "Restricting Access to an Instance at Startup" on page 3-5.

■ Dynamic SGA enhancements
The JAVA_POOL_SIZE initialization parameter is now dynamic. There is a new
STREAMS_POOL_SIZE initialization parameter, which is also dynamic. A new
view, V$SGAINFO, provides a consolidated and concise summary of SGA
information.
See "Managing the System Global Area (SGA)" on page 2-24.

■ Irreversible database compatibility
In previous releases you were allowed to lower the compatibility setting for your
database. Now, when you advance the compatibility of the database with the
COMPATIBLE initialization parameter, you can no longer start the database using a
lower compatibility setting, except by doing a point-in-time recovery to a time
before the compatibility was advanced.
See "The COMPATIBLE Initialization Parameter and Irreversible Compatibility" on
page 2-34.

■ Flash recovery area
You can create a flash recovery area in your database where Oracle Database can
store and manage files related to backup and recovery.
See "Specifying a Flash Recovery Area" on page 2-22.

■ Sorted hash clusters
Sorted hash clusters are new data structures that allow faster retrieval of data for
applications where data is consumed in the order in which it was inserted.
See "Creating a Sorted Hash Cluster" on page 19-3.

■ Copying Files Using the Database Server

You do not have to use the operating system to copy database files. You can use
the DBMS_FILE_TRANSFER package to copy files.
See "Copying Files Using the Database Server" on page 9-12
■ Deprecation of MAXTRANS physical attribute parameter
The MAXTRANS physical attribute parameter for database objects has been
deprecated. Oracle Database now automatically allows up to 255 concurrent
update transactions for any data block, depending on the available space in the
block.
See "Specifying the INITRANS Parameter" on page 14-4.

■ Deprecation of use of rollback segments (manual undo management mode)
Manual undo management mode has been deprecated and is no longer
documented in this book. Use an undo tablespace and automatic undo
management instead.
See Chapter 10, "Managing the Undo Tablespace".

■ Deprecation of the UNDO_SUPPRESS_ERRORS initialization parameter
When operating in automatic undo management mode, the database now ignores
any manual undo management mode SQL statements instead of returning error
messages.
See "Overview of Automatic Undo Management" on page 10-2.

■ Deprecation of the PARALLEL_AUTOMATIC_TUNING initialization parameter
Oracle Database provides defaults for the parallel execution initialization
parameters that are adequate and tuned for most situations. The
PARALLEL_AUTOMATIC_TUNING initialization parameter is now redundant and
has been deprecated.

■ Removal of LogMiner chapter
The chapter on LogMiner has been moved to Oracle Database Utilities.

■ Removal of job queues chapter on using the DBMS_JOB package
Using the DBMS_JOB package to submit jobs has been replaced by Scheduler
functionality.
See Part VI, "Database Resource Management and Task Scheduling".


Part I
Basic Database Administration
Part I provides an overview of the responsibilities of a database administrator. This
part describes the creation of a database and how to start up and shut down an
instance of the database. Part I contains the following chapters:
■ Chapter 1, "Overview of Administering an Oracle Database"
■ Chapter 2, "Creating an Oracle Database"
■ Chapter 3, "Starting Up and Shutting Down"
■ Chapter 4, "Managing Oracle Database Processes"

1 Overview of Administering an Oracle Database
This chapter presents an overview of the environment and tasks of an Oracle Database
administrator (DBA). It also discusses DBA security and how you obtain the necessary
administrative privileges.
The following topics are discussed:
■ Types of Oracle Database Users
■ Tasks of a Database Administrator
■ Selecting an Instance with Environment Variables
■ Identifying Your Oracle Database Software Release
■ Database Administrator Security and Privileges
■ Database Administrator Authentication
■ Creating and Maintaining a Password File
■ Server Manageability

Types of Oracle Database Users
The types of users and their roles and responsibilities depend on the database site. A
small site can have one database administrator who administers the database for
application developers and users. A very large site can find it necessary to divide the
duties of a database administrator among several people and among several areas of
specialization.
This section contains the following topics:
■ Database Administrators
■ Security Officers
■ Network Administrators
■ Application Developers
■ Application Administrators
■ Database Users


Database Administrators
Each database requires at least one database administrator (DBA). An Oracle Database
system can be large and can have many users. Therefore, database administration is
sometimes not a one-person job, but a job for a group of DBAs who share
responsibility.
A database administrator's responsibilities can include the following tasks:
■ Installing and upgrading the Oracle Database server and application tools
■ Allocating system storage and planning future storage requirements for the database system
■ Creating primary database storage structures (tablespaces) after application developers have designed an application
■ Creating primary objects (tables, views, indexes) once application developers have designed an application
■ Modifying the database structure, as necessary, from information given by application developers
■ Enrolling users and maintaining system security
■ Ensuring compliance with Oracle license agreements
■ Controlling and monitoring user access to the database
■ Monitoring and optimizing the performance of the database
■ Planning for backup and recovery of database information
■ Maintaining archived data on tape
■ Backing up and restoring the database
■ Contacting Oracle for technical support

Security Officers
In some cases, a site assigns one or more security officers to a database. A security
officer enrolls users, controls and monitors user access to the database, and maintains
system security. As a DBA, you might not be responsible for these duties if your site
has a separate security officer. Please refer to Oracle Database Security Guide for
information about the duties of security officers.

Network Administrators
Some sites have one or more network administrators. A network administrator, for
example, administers Oracle networking products, such as Oracle Net Services. Please
refer to Oracle Database Net Services Administrator's Guide for information about the
duties of network administrators.
See Also: Part VII, "Distributed Database Management", for
information on network administration in a distributed
environment

Application Developers
Application developers design and implement database applications. Their
responsibilities include the following tasks:
■ Designing and developing the database application
■ Designing the database structure for an application
■ Estimating storage requirements for an application
■ Specifying modifications of the database structure for an application
■ Relaying this information to a database administrator
■ Tuning the application during development
■ Establishing security measures for an application during development

Application developers can perform some of these tasks in collaboration with DBAs.
Please refer to Oracle Database Application Developer's Guide - Fundamentals for
information about application development tasks.

Application Administrators
An Oracle Database site can assign one or more application administrators to
administer a particular application. Each application can have its own administrator.

Database Users
Database users interact with the database through applications or utilities. A typical
user's responsibilities include the following tasks:
■ Entering, modifying, and deleting data, where permitted
■ Generating reports from the data

Tasks of a Database Administrator
The following tasks present a prioritized approach for designing, implementing, and
maintaining an Oracle Database:
Task 1: Evaluate the Database Server Hardware
Task 2: Install the Oracle Database Software
Task 3: Plan the Database
Task 4: Create and Open the Database
Task 5: Back Up the Database
Task 6: Enroll System Users
Task 7: Implement the Database Design
Task 8: Back Up the Fully Functional Database
Task 9: Tune Database Performance
Task 10: Download and Install Patches
Task 11: Roll Out to Additional Hosts
These tasks are discussed in the sections that follow.
Note: When upgrading to a new release, back up your existing production environment, both software and database, before installation. For information on preserving your existing production database, see Oracle Database Upgrade Guide.


Task 1: Evaluate the Database Server Hardware
Evaluate how Oracle Database and its applications can best use the available computer
resources. This evaluation should reveal the following information:
■ How many disk drives are available to the Oracle products
■ How many, if any, dedicated tape drives are available to Oracle products
■ How much memory is available to the instances of Oracle Database you will run (see your system configuration documentation)

Task 2: Install the Oracle Database Software
As the database administrator, you install the Oracle Database server software and any
front-end tools and database applications that access the database. In some distributed
processing installations, the database is controlled by a central computer (database
server) and the database tools and applications are executed on remote computers
(clients). In this case, you must also install the Oracle Net components necessary to
connect the remote machines to the computer that executes Oracle Database.
For more information on what software to install, see "Identifying Your Oracle
Database Software Release" on page 1-7.
See Also: For specific requirements and instructions for
installation, refer to the following documentation:
■ The Oracle documentation specific to your operating system
■ The installation guides for your front-end tools and Oracle Net drivers

Task 3: Plan the Database
As the database administrator, you must plan:
■ The logical storage structure of the database
■ The overall database design
■ A backup strategy for the database

It is important to plan how the logical storage structure of the database will affect
system performance and various database management operations. For example,
before creating any tablespaces for your database, you should know how many
datafiles will make up the tablespace, what type of information will be stored in each
tablespace, and on which disk drives the datafiles will be physically stored. When
planning the overall logical storage of the database structure, take into account the
effects that this structure will have when the database is actually created and running.
Consider how the logical storage structure of the database will affect:
■ The performance of the computer running Oracle Database
■ The performance of the database during data access operations
■ The efficiency of backup and recovery procedures for the database

Plan the relational design of the database objects and the storage characteristics for
each of these objects. By planning the relationship between each object and its physical
storage before creating it, you can directly affect the performance of the database as a
unit. Be sure to plan for the growth of the database.


In distributed database environments, this planning stage is extremely important. The
physical location of frequently accessed data dramatically affects application
performance.
During the planning stage, develop a backup strategy for the database. You can alter
the logical storage structure or design of the database to improve backup efficiency.
It is beyond the scope of this book to discuss relational and distributed database
design. If you are not familiar with such design issues, please refer to accepted
industry-standard documentation.
Part II, "Oracle Database Structure and Storage", and Part IV, "Schema Objects",
provide specific information on creating logical storage structures, objects, and
integrity constraints for your database.

Task 4: Create and Open the Database
After you complete the database design, you can create the database and open it for
normal use. You can create a database at installation time, using the Database
Configuration Assistant, or you can supply your own scripts for creating a database.
Please refer to Chapter 2, "Creating an Oracle Database", for information on creating a
database and Chapter 3, "Starting Up and Shutting Down" for guidance in starting up
the database.

Task 5: Back Up the Database
After you create the database structure, carry out the backup strategy you planned for
the database. Create any additional redo log files, take the first full database backup
(online or offline), and schedule future database backups at regular intervals.
See Also:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

Task 6: Enroll System Users
After you back up the database structure, you can enroll the users of the database in
accordance with your Oracle license agreement, and grant appropriate privileges and
roles to these users. Please refer to Chapter 22, "Managing Users and Securing the
Database" for guidance in this task.

Task 7: Implement the Database Design
After you create and start the database, and enroll the system users, you can
implement the planned logical structure of the database by creating all necessary
tablespaces. When you have finished creating tablespaces, you can create the database
objects.
Part II, "Oracle Database Structure and Storage" and Part IV, "Schema Objects" provide
information on creating logical storage structures and objects for your database.

Task 8: Back Up the Fully Functional Database
When the database is fully implemented, again back up the database. In addition to
regularly scheduled backups, you should always back up your database immediately
after implementing changes to the database structure.


Task 9: Tune Database Performance
Optimizing the performance of the database is one of your ongoing responsibilities as
a DBA. Oracle Database provides a database resource management feature that helps
you to control the allocation of resources among various user groups. The database
resource manager is described in Chapter 24, "Using the Database Resource Manager".
See Also: Oracle Database Performance Tuning Guide for
information about tuning your database and applications

Task 10: Download and Install Patches
After installation and on a regular basis, download and install patches. Patches are
available as single interim patches and as patchsets (or patch releases). Interim
patches address individual software bugs and may or may not be needed at your
installation. Patch releases are collections of bug fixes that are applicable for all
customers. Patch releases have release numbers. For example, if you installed Oracle
Database 10.2.0.0, the first patch release will have a release number of 10.2.0.1.
See Also: Oracle Database Installation Guide for your platform for
instructions on downloading and installing patches.

Task 11: Roll Out to Additional Hosts
After you have an Oracle Database installation properly configured, tuned, patched,
and tested, you may want to roll that exact installation out to other hosts. Reasons to
do this include the following:
■ You have multiple production database systems.
■ You want to create development and test systems that are identical to your production system.

Instead of installing, tuning, and patching on each additional host, you can clone your
tested Oracle Database installation to other hosts, saving time and eliminating
inconsistencies. There are two types of cloning available to you:
■ Cloning an Oracle home—Just the configured and patched binaries from the Oracle home directory and subdirectories are copied to the destination host and "fixed" to match the new environment. You can then start an instance with this cloned home and create a database.
You can use the Enterprise Manager Clone Oracle Home tool to clone an Oracle home to one or more destination hosts. You can also manually clone an Oracle home using a set of provided scripts and Oracle Universal Installer.
■ Cloning a database—The tuned database, including database files, initialization parameters, and so on, is cloned to an existing Oracle home (possibly a cloned home).
You can use the Enterprise Manager Clone Database tool to clone an Oracle database instance to an existing Oracle home.
See Also:
■ Oracle Universal Installer and OPatch User's Guide and Enterprise Manager online help for details on how to clone an Oracle home.
■ Enterprise Manager online help for instructions for cloning a database.


Selecting an Instance with Environment Variables
Before you attempt to use SQL*Plus to connect locally to an Oracle instance, you must
ensure that environment variables are set properly. When multiple database instances
exist on one server, or when an Automatic Storage Management (ASM) instance exists
on the same server as one or more database instances, the environment variables
determine which instance SQL*Plus connects to. (This is also true when there is only
one Oracle instance on a server.)
For example, each Oracle instance (database or ASM) has a unique system identifier
(SID). To connect to an instance, you must at a minimum set the ORACLE_SID
environment variable to the SID of that instance. Depending on the operating system,
you may need to set other environment variables to properly change from one instance
to another.
Refer to the Oracle Database Installation Guide or administration guide for your
operating system for details on environment variables and for information on
switching instances.
Note: This discussion applies only when you make a local connection—that is, when you initiate a SQL*Plus connection from the same machine on which the target instance resides, without specifying an Oracle Net Services connect identifier. When you make a connection through Oracle Net Services, either with SQL*Plus on the local or a remote machine, or with Enterprise Manager, the environment is automatically set for you.
For more information on connect identifiers, see Oracle Database Net Services Administrator's Guide.
Solaris Example
The following Solaris example sets the environment variables that are required for
selecting an instance. When switching between instances with different Oracle homes,
the ORACLE_HOME environment variable must be changed.
% setenv ORACLE_SID SAL1
% setenv ORACLE_HOME /u01/app/oracle/product/10.1.0/db_1
% setenv LD_LIBRARY_PATH /usr/lib:/usr/dt/lib:/usr/openwin/lib:/usr/ccs/lib

Most UNIX installations come with two scripts, oraenv and coraenv, that can be
used to easily set these environment variables. For more information, see
Administrator's Reference for UNIX Systems.
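For example, a typical oraenv session on UNIX looks like the following minimal sketch. The SID mynewdb is an illustration only; the script prompts for the SID and then sets ORACLE_SID, ORACLE_HOME, and PATH based on the oratab file:
$ . oraenv
ORACLE_SID = [] ? mynewdb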
Windows Example
On Windows, you must set only the ORACLE_SID environment variable to select an
instance before starting SQL*Plus.
SET ORACLE_SID=SAL1

Identifying Your Oracle Database Software Release
Because Oracle Database continues to evolve and can require maintenance, Oracle
periodically produces new releases. Not all customers initially subscribe to a new
release or require specific maintenance for their existing release. As a result, multiple
releases of the product exist simultaneously.


As many as five numbers may be required to fully identify a release. The significance
of these numbers is discussed in the sections that follow.

Release Number Format
To understand the release nomenclature used by Oracle, examine the following
example of an Oracle Database server labeled "Release 10.1.0.1.0".
Figure 1–1 Example of an Oracle Database Release Number

10.1.0.1.0

[Figure summary: reading the five positions from left to right, the digits are the major database release number, the database maintenance release number, the Application Server release number, the component-specific release number, and the platform-specific release number.]

Note: Starting with release 9.2, maintenance releases of Oracle Database are denoted by a change to the second digit of a release number. In previous releases, the third digit indicated a particular maintenance release.

Major Database Release Number
The first digit is the most general identifier. It represents a major new version of the
software that contains significant new functionality.

Database Maintenance Release Number
The second digit represents a maintenance release level. Some new features may also
be included.

Application Server Release Number
The third digit reflects the release level of the Oracle Application Server (OracleAS).

Component-Specific Release Number
The fourth digit identifies a release level specific to a component. Different
components can have different numbers in this position depending upon, for example,
component patch sets or interim releases.

Platform-Specific Release Number
The fifth digit identifies a platform-specific release. Usually this is a patch set. When
different platforms require the equivalent patch set, this digit will be the same across
the affected platforms.

Checking Your Current Release Number
To identify the release of Oracle Database that is currently installed and to see the
release levels of other database components you are using, query the data dictionary
view PRODUCT_COMPONENT_VERSION. A sample query follows. (You can also query the V$VERSION view to see component-level information.) Other product release levels may increment independent of the database server.
COL PRODUCT FORMAT A35
COL VERSION FORMAT A15
COL STATUS FORMAT A15
SELECT * FROM PRODUCT_COMPONENT_VERSION;
PRODUCT                                 VERSION         STATUS
--------------------------------------- --------------- ---------------
NLSRTL                                  10.2.0.1.0      Production
Oracle Database 10g Enterprise Edition  10.2.0.1.0      Prod
PL/SQL                                  10.2.0.1.0      Production
...

It is important to convey to Oracle the results of this query when you report problems
with the software.
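If you want the component-level information mentioned above, you can also query V$VERSION. The following is an illustrative query; the output shown is only an example of the general form:
SELECT * FROM V$VERSION;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
...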

Database Administrator Security and Privileges
To perform the administrative tasks of an Oracle Database DBA, you need specific
privileges within the database and possibly in the operating system of the server on
which the database runs. Access to a database administrator's account should be
tightly controlled.
This section contains the following topics:
■ The Database Administrator's Operating System Account
■ Database Administrator Usernames

The Database Administrator's Operating System Account
To perform many of the administrative duties for a database, you must be able to
execute operating system commands. Depending on the operating system on which
Oracle Database is running, you might need an operating system account or ID to gain
access to the operating system. If so, your operating system account might require
operating system privileges or access rights that other database users do not require
(for example, to perform Oracle Database software installation). Although you do not
need the Oracle Database files to be stored in your account, you should have access to
them.
See Also: Your operating system specific Oracle documentation.
The method of creating the account of the database administrator is
specific to the operating system.

Database Administrator Usernames
Two user accounts are automatically created when Oracle Database is installed:
■ SYS (default password: CHANGE_ON_INSTALL)
■ SYSTEM (default password: MANAGER)


Note: Both Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) now prompt for SYS and SYSTEM passwords and do not accept the default passwords "change_on_install" or "manager", respectively.
If you create the database manually, Oracle strongly recommends that you specify passwords for SYS and SYSTEM at database creation time, rather than using these default passwords. Please refer to "Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM" on page 2-10 for more information.
Create at least one additional administrative user and grant to that user an appropriate
administrative role to use when performing daily administrative tasks. Do not use SYS
and SYSTEM for these purposes.
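For example, the following statements create such an administrative user and grant the DBA role. The username and password are illustrations only; choose your own values:
CREATE USER dba1 IDENTIFIED BY dba1_password;
GRANT DBA TO dba1;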
Note Regarding Security Enhancements: In this release of Oracle
Database and in subsequent releases, several enhancements are
being made to ensure the security of default database user
accounts. You can find a security checklist for this release in Oracle
Database Security Guide. Oracle recommends that you read this
checklist and configure your database accordingly.

SYS
When you create an Oracle Database, the user SYS is automatically created and
granted the DBA role.
All of the base tables and views for the database data dictionary are stored in the
schema SYS. These base tables and views are critical for the operation of Oracle
Database. To maintain the integrity of the data dictionary, tables in the SYS schema are
manipulated only by the database. They should never be modified by any user or
database administrator, and no one should create any tables in the schema of user SYS.
(However, you can change the storage parameters of the data dictionary settings if
necessary.)
Ensure that most database users are never able to connect to Oracle Database using the
SYS account.

SYSTEM
When you create an Oracle Database, the user SYSTEM is also automatically created
and granted the DBA role.
The SYSTEM username is used to create additional tables and views that display
administrative information, and internal tables and views used by various Oracle
Database options and tools. Never use the SYSTEM schema to store tables of interest to
non-administrative users.

The DBA Role
A predefined DBA role is automatically created with every Oracle Database
installation. This role contains most database system privileges. Therefore, the DBA
role should be granted only to actual database administrators.


Note: The DBA role does not include the SYSDBA or SYSOPER system privileges. These are special administrative privileges that allow an administrator to perform basic database administration tasks, such as creating the database and instance startup and shutdown. These system privileges are discussed in "Administrative Privileges" on page 1-11.

Database Administrator Authentication
As a DBA, you often perform special operations such as shutting down or starting up
a database. Because only a DBA should perform these operations, the database
administrator usernames require a secure authentication scheme.
This section contains the following topics:
■ Administrative Privileges
■ Selecting an Authentication Method
■ Using Operating System Authentication
■ Using Password File Authentication

Administrative Privileges
Administrative privileges that are required for an administrator to perform basic
database operations are granted through two special system privileges, SYSDBA and
SYSOPER. You must have one of these privileges granted to you, depending upon the
level of authorization you require.
Note: The SYSDBA and SYSOPER system privileges allow access
to a database instance even when the database is not open. Control
of these privileges is totally outside of the database itself.

The SYSDBA and SYSOPER privileges can also be thought of as
types of connections that enable you to perform certain database
operations for which privileges cannot be granted in any other
fashion. For example, if you have the SYSDBA privilege, you
can connect to the database by specifying CONNECT AS SYSDBA.

SYSDBA and SYSOPER
The following operations are authorized by the SYSDBA and SYSOPER system
privileges:

System Privilege    Operations Authorized

SYSDBA              ■ Perform STARTUP and SHUTDOWN operations
                    ■ ALTER DATABASE: open, mount, back up, or change character set
                    ■ CREATE DATABASE
                    ■ DROP DATABASE
                    ■ CREATE SPFILE
                    ■ ALTER DATABASE ARCHIVELOG
                    ■ ALTER DATABASE RECOVER
                    ■ Includes the RESTRICTED SESSION privilege
                    Effectively, this system privilege allows a user to connect as user SYS.

SYSOPER             ■ Perform STARTUP and SHUTDOWN operations
                    ■ CREATE SPFILE
                    ■ ALTER DATABASE OPEN/MOUNT/BACKUP
                    ■ ALTER DATABASE ARCHIVELOG
                    ■ ALTER DATABASE RECOVER (Complete recovery only. Any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA.)
                    ■ Includes the RESTRICTED SESSION privilege
                    This privilege allows a user to perform basic operational tasks, but without the ability to look at user data.

The manner in which you are authorized to use these privileges depends upon the
method of authentication that you use.
When you connect with SYSDBA or SYSOPER privileges, you connect with a default
schema, not with the schema that is generally associated with your username. For
SYSDBA this schema is SYS; for SYSOPER the schema is PUBLIC.

Connecting with Administrative Privileges: Example
This example illustrates that a user is assigned another schema (SYS) when connecting
with the SYSDBA system privilege. Assume that the sample user oe has been granted
the SYSDBA system privilege and has issued the following statements:
CONNECT oe/oe
CREATE TABLE admin_test(name VARCHAR2(20));

Later, user oe issues these statements:
CONNECT oe/oe AS SYSDBA
SELECT * FROM admin_test;

User oe now receives the following error:
ORA-00942: table or view does not exist

Having connected as SYSDBA, user oe now references the SYS schema, but the table
was created in the oe schema.
See Also:
■ "Using Operating System Authentication" on page 1-14
■ "Using Password File Authentication" on page 1-15


Selecting an Authentication Method
The following methods are available for authenticating database administrators:
■ Operating system (OS) authentication
■ A password file
Notes:
■ These methods replace the CONNECT INTERNAL syntax provided with earlier versions of Oracle Database. CONNECT INTERNAL is no longer supported.
■ Operating system authentication takes precedence over password file authentication. If you meet the requirements for operating system authentication, then even if you use a password file, you will be authenticated by operating system authentication.

Your choice will be influenced by whether you intend to administer your database
locally on the same machine where the database resides, or whether you intend to
administer many different databases from a single remote client. Figure 1–2 illustrates
the choices you have for database administrator authentication schemes.
Figure 1–2 Database Administrator Authentication Methods

[Figure summary: For remote database administration, first determine whether you have a secure connection. If you do not, use a password file. If the connection is secure, or if you are performing local database administration, use OS authentication if you want to; otherwise, use a password file.]

If you are performing remote database administration, consult your Oracle Net
documentation to determine whether you are using a secure connection. Most popular
connection protocols, such as TCP/IP and DECnet, are not secure.
See Also: Oracle Database Net Services Administrator's Guide

Nonsecure Remote Connections
To connect to Oracle Database as a privileged user over a nonsecure connection, you
must be authenticated by a password file. When using password file authentication,
the database uses a password file to keep track of database usernames that have been
granted the SYSDBA or SYSOPER system privilege. This form of authentication is
discussed in "Using Password File Authentication" on page 1-15.


Local Connections and Secure Remote Connections
You can connect to Oracle Database as a privileged user over a local connection or a
secure remote connection in two ways:
■ If the database has a password file and you have been granted the SYSDBA or SYSOPER system privilege, then you can connect and be authenticated by a password file.
■ If the server is not using a password file, or if you have not been granted SYSDBA or SYSOPER privileges and are therefore not in the password file, you can use operating system authentication. On most operating systems, authentication for database administrators involves placing the operating system username of the database administrator in a special group, generically referred to as OSDBA. Users in that group are granted SYSDBA privileges. A similar group, OSOPER, is used to grant SYSOPER privileges to users.

Using Operating System Authentication
This section describes how to authenticate an administrator using the operating
system.

OSDBA and OSOPER
Two special operating system groups control database administrator connections
when using operating system authentication. These groups are generically referred to
as OSDBA and OSOPER. The groups are created and assigned specific names as part
of the database installation process. The specific names vary depending upon your
operating system and are listed in the following table:
Operating System Group    UNIX User Group    Windows User Group

OSDBA                     dba                ORA_DBA
OSOPER                    oper               ORA_OPER
The default names assumed by the Oracle Universal Installer can be overridden. How
you create the OSDBA and OSOPER groups is operating system specific.
Membership in the OSDBA or OSOPER group affects your connection to the database
in the following ways:
■ If you are a member of the OSDBA group and you specify AS SYSDBA when you connect to the database, then you connect to the database with the SYSDBA system privilege.
■ If you are a member of the OSOPER group and you specify AS SYSOPER when you connect to the database, then you connect to the database with the SYSOPER system privilege.
■ If you are not a member of either of these operating system groups and you attempt to connect as SYSDBA or SYSOPER, the CONNECT command fails.
See Also: Your operating system specific Oracle documentation
for information about creating the OSDBA and OSOPER groups

Preparing to Use Operating System Authentication
To enable operating system authentication of an administrative user:
1. Create an operating system account for the user.
2. Add the account to the OSDBA or OSOPER operating system defined groups.

Connecting Using Operating System Authentication
A user can be authenticated, enabled as an administrative user, and connected to a
local database by typing one of the following SQL*Plus commands:
CONNECT / AS SYSDBA
CONNECT / AS SYSOPER

For a remote database connection over a secure connection, the user must also specify
the net service name of the remote database:
CONNECT /@net_service_name AS SYSDBA
CONNECT /@net_service_name AS SYSOPER

See Also: SQL*Plus User's Guide and Reference for syntax of the CONNECT command

Using Password File Authentication
This section describes how to authenticate an administrative user using password file
authentication.

Preparing to Use Password File Authentication
To enable authentication of an administrative user using password file authentication
you must do the following:
1. If not already created, create the password file using the ORAPWD utility:
   ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users
2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE. (This is the default.)
   Note: REMOTE_LOGIN_PASSWORDFILE is a static initialization parameter and therefore cannot be changed without restarting the database.
3. Connect to the database as user SYS (or as another user with the administrative privileges).
4. If the user does not already exist in the database, create the user.
5. Grant the SYSDBA or SYSOPER system privilege to the user:
   GRANT SYSDBA to oe;
This statement adds the user to the password file, thereby enabling connection AS SYSDBA.
See Also: "Creating and Maintaining a Password File" on
page 1-16 for instructions for creating and maintaining a password
file.

Connecting Using Password File Authentication
Administrative users can be connected and authenticated to a local or remote database
by using the SQL*Plus CONNECT command. They must connect using their username and password and the AS SYSDBA or AS SYSOPER clause. For example, user oe has
been granted the SYSDBA privilege, so oe can connect as follows:
CONNECT oe/oe AS SYSDBA

However, user oe has not been granted the SYSOPER privilege, so the following
command will fail:
CONNECT oe/oe AS SYSOPER

Note: Operating system authentication takes precedence over password file authentication. Specifically, if you are a member of the OSDBA or OSOPER group for the operating system, and you connect as SYSDBA or SYSOPER, you will be connected with associated administrative privileges regardless of the username/password that you specify.
If you are not in the OSDBA or OSOPER groups, and you are not in the password file, then attempting to connect as SYSDBA or as SYSOPER fails.
See Also: SQL*Plus User's Guide and Reference for syntax of the CONNECT command

Creating and Maintaining a Password File
You can create a password file using the password file creation utility, ORAPWD. For
some operating systems, you can create this file as part of your standard installation.
This section contains the following topics:
■ Using ORAPWD
■ Setting REMOTE_LOGIN_PASSWORDFILE
■ Adding Users to a Password File
■ Maintaining a Password File

Using ORAPWD
When you invoke this password file creation utility without supplying any
parameters, you receive a message indicating the proper use of the command as
shown in the following sample output:
> orapwd
Usage: orapwd file= password= entries= force=
where
file - name of password file (mand),
password - password for SYS (mand),
entries - maximum number of distinct DBAs and OPERs (opt),
force - whether to overwrite existing file (opt)
There are no spaces around the equal-to (=) character.

The following command creates a password file named acct.pwd that allows up to
30 privileged users with different passwords. In this example, the file is initially
created with the password secret for users connecting as SYS.
orapwd FILE=acct.pwd PASSWORD=secret ENTRIES=30


The parameters in the ORAPWD utility are described in the sections that follow.
FILE

This parameter sets the name of the password file being created. You must specify the
full path name for the file. The contents of this file are encrypted, and the file cannot be
read directly. This parameter is mandatory.
The types of filenames allowed for the password file are operating system specific.
Some operating systems require the password file to adhere to a specific format and be
located in a specific directory. Other operating systems allow the use of environment
variables to specify the name and location of the password file. For name and location
information for the Unix and Linux operating systems, see Administrator's Reference for
UNIX-Based Operating Systems. For Windows, see Platform Guide for Microsoft Windows.
For other operating systems, see your operating system documentation.
If you are running multiple instances of Oracle Database using Oracle Real
Application Clusters, the environment variable for each instance should point to the
same password file.
Caution: It is critically important to the security of your system
that you protect your password file and the environment variables
that identify the location of the password file. Any user with access
to these could potentially compromise the security of the
connection.
PASSWORD

This parameter sets the password for user SYS. If you issue the ALTER USER
statement to change the password for SYS after connecting to the database, both the
password stored in the data dictionary and the password stored in the password file
are updated. This parameter is mandatory.
Note: You cannot change the password for SYS if
REMOTE_LOGIN_PASSWORDFILE is set to SHARED. An error
message is issued if you attempt to do so.
ENTRIES

This parameter specifies the number of entries that you require the password file to
accept. This number corresponds to the number of distinct users allowed to connect to
the database as SYSDBA or SYSOPER. The actual number of allowable entries can be
higher than the number of users, because the ORAPWD utility continues to assign
password entries until an operating system block is filled. For example, if your
operating system block size is 512 bytes, it holds four password entries. The number of
password entries allocated is always a multiple of four.
Entries can be reused as users are added to and removed from the password file. If
you intend to specify REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE, and to allow the
granting of SYSDBA and SYSOPER privileges to users, this parameter is required.
Caution: When you exceed the allocated number of password
entries, you must create a new password file. To avoid this
necessity, allocate a number of entries that is larger than you think
you will ever need.


FORCE

This parameter, if set to Y, enables you to overwrite an existing password file. An error
is returned if a password file of the same name already exists and this parameter is
omitted or set to N.
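For example, the following command overwrites an existing password file of the same name. The file name, password, and entry count repeat the earlier illustration:
orapwd FILE=acct.pwd PASSWORD=secret ENTRIES=30 FORCE=Y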

Setting REMOTE_LOGIN_PASSWORDFILE
In addition to creating the password file, you must also set the initialization parameter REMOTE_LOGIN_PASSWORDFILE to the appropriate value. The values recognized are:
■ NONE: Setting this parameter to NONE causes Oracle Database to behave as if the password file does not exist. That is, no privileged connections are allowed over nonsecure connections.
■ EXCLUSIVE: (The default) An EXCLUSIVE password file can be used with only one instance of one database. Only an EXCLUSIVE file can be modified. Using an EXCLUSIVE password file enables you to add, modify, and delete users. It also enables you to change the SYS password with the ALTER USER command.
■ SHARED: A SHARED password file can be used by multiple databases running on the same server, or multiple instances of a Real Application Clusters (RAC) database. A SHARED password file cannot be modified. This means that you cannot add users to a SHARED password file. Any attempt to do so or to change the password of SYS or other users with the SYSDBA or SYSOPER privileges generates an error. All users needing SYSDBA or SYSOPER system privileges must be added to the password file when REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE. After all users are added, you can change REMOTE_LOGIN_PASSWORDFILE to SHARED, and then share the file. This option is useful if you are administering multiple databases or a RAC database.

If REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE or SHARED and the password
file is missing, this is equivalent to setting REMOTE_LOGIN_PASSWORDFILE to NONE.
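For example, you can set this parameter in your text initialization parameter file, or, if you use a server parameter file, with an ALTER SYSTEM statement. Because the parameter is static, you must specify SCOPE=SPFILE and restart the database for the change to take effect. Both forms below are illustrative:
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;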

Adding Users to a Password File
When you grant SYSDBA or SYSOPER privileges to a user, that user's name and
privilege information are added to the password file. If the server does not have an
EXCLUSIVE password file (that is, if the initialization parameter
REMOTE_LOGIN_PASSWORDFILE is NONE or SHARED, or the password file is missing),
Oracle Database issues an error if you attempt to grant these privileges.
A user's name remains in the password file only as long as that user has at least one of
these two privileges. If you revoke both of these privileges, Oracle Database removes
the user from the password file.
Creating a Password File and Adding New Users to It
Use the following procedure to create a password file and add new users to it:
1. Follow the instructions for creating a password file as explained in "Using ORAPWD" on page 1-16.
2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE. (This is the default.)
   Note: REMOTE_LOGIN_PASSWORDFILE is a static initialization parameter and therefore cannot be changed without restarting the database.
3. Connect with SYSDBA privileges as shown in the following example:
   CONNECT SYS/password AS SYSDBA
4. Start up the instance and create the database if necessary, or mount and open an existing database.
5. Create users as necessary. Grant SYSDBA or SYSOPER privileges to yourself and other users as appropriate. See "Granting and Revoking SYSDBA and SYSOPER Privileges", later in this section.

Granting and Revoking SYSDBA and SYSOPER Privileges
If your server is using an EXCLUSIVE password file, use the GRANT statement to grant
the SYSDBA or SYSOPER system privilege to a user, as shown in the following
example:
GRANT SYSDBA TO oe;

Use the REVOKE statement to revoke the SYSDBA or SYSOPER system privilege from a
user, as shown in the following example:
REVOKE SYSDBA FROM oe;

Because SYSDBA and SYSOPER are the most powerful database privileges, the WITH
ADMIN OPTION is not used in the GRANT statement. That is, the grantee cannot in turn
grant the SYSDBA or SYSOPER privilege to another user. Only a user currently
connected as SYSDBA can grant or revoke another user's SYSDBA or SYSOPER system
privileges. These privileges cannot be granted to roles, because roles are available only
after database startup. Do not confuse the SYSDBA and SYSOPER database privileges
with operating system roles.
See Also: Oracle Database Security Guide for more information on system privileges
Viewing Password File Members
Use the V$PWFILE_USERS view to see the users who have been granted SYSDBA or
SYSOPER system privileges for a database. The columns displayed by this view are as
follows:
Column     Description

USERNAME   This column contains the name of the user that is recognized by the password file.
SYSDBA     If the value of this column is TRUE, then the user can log on with SYSDBA system privileges.
SYSOPER    If the value of this column is TRUE, then the user can log on with SYSOPER system privileges.
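For example, the following query lists the members of the password file. The users and privilege settings shown are illustrations only:
SELECT * FROM V$PWFILE_USERS;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
OE                             TRUE   FALSE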

Maintaining a Password File
This section describes how to:
■ Expand the number of password file users if the password file becomes full
■ Remove the password file

Expanding the Number of Password File Users
If you receive the file full error (ORA-1996) when you try to grant SYSDBA or
SYSOPER system privileges to a user, you must create a larger password file and
regrant the privileges to the users.
Replacing a Password File
Use the following procedure to replace a password file:
1. Identify the users who have SYSDBA or SYSOPER privileges by querying the V$PWFILE_USERS view.
2. Delete the existing password file.
3. Follow the instructions for creating a new password file using the ORAPWD utility in "Using ORAPWD" on page 1-16. Ensure that the ENTRIES parameter is set to a number larger than you think you will ever need.
4. Follow the instructions in "Adding Users to a Password File" on page 1-18.

Removing a Password File
If you determine that you no longer require a password file to authenticate users, you
can delete the password file and then optionally reset the
REMOTE_LOGIN_PASSWORDFILE initialization parameter to NONE. After you remove
this file, only those users who can be authenticated by the operating system can
perform SYSDBA or SYSOPER database administration operations.

Server Manageability
Oracle Database is a sophisticated self-managing database that automatically
monitors, adapts, and repairs itself. It automates routine DBA tasks and reduces the
complexity of space, memory, and resource administration. Several advisors are
provided to help you analyze specific objects. The advisors report on a variety of
aspects of the object and describe recommended actions. Oracle Database proactively
sends alerts when a problem is anticipated or when any of the user-selected metrics exceeds its threshold value.
In addition to its self-managing features, Oracle Database provides utilities to help you
move data in and out of the database.
This section describes these server manageability topics:
■ Automatic Manageability Features
■ Data Utilities

Automatic Manageability Features
Oracle Database has a self-management infrastructure that allows the database to
learn about itself and use this information to adapt to workload variations and to
automatically remedy any potential problem. This section discusses the automatic
manageability features of Oracle Database.


Automatic Workload Repository
Automatic Workload Repository (AWR) is a built-in repository in every Oracle
Database. At regular intervals, the database makes a snapshot of all its vital statistics
and workload information and stores them in AWR. By default, the snapshots are
made every 60 minutes, but you can change this frequency. The snapshots are stored in
the AWR for a period of time (seven days by default) after which they are
automatically purged.
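For example, you can change the snapshot interval and retention period with the DBMS_WORKLOAD_REPOSITORY package. The following sketch sets a 30-minute interval and a seven-day (10080-minute) retention; the values are illustrations only:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,      -- minutes between snapshots
    retention => 10080);  -- minutes to retain snapshots (seven days)
END;
/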
See Also:
■ Oracle Database Concepts and Oracle Database 2 Day DBA for overviews of the Automatic Workload Repository
■ Oracle Database Performance Tuning Guide for details on using Automatic Workload Repository for statistics collection

Automatic Maintenance Tasks
Oracle Database uses the information stored in AWR to identify the need to perform
routine maintenance tasks, such as optimizer statistics refresh and rebuilding indexes.
Then the database uses the Scheduler to run such tasks in a predefined maintenance
window.
See Also:
■ Oracle Database Concepts for an overview of the maintenance window
■ Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window" for detailed information on using the predefined maintenance window

Server-Generated Alerts
Some problems cannot be resolved automatically and require the database
administrator's attention. For these problems, such as space shortage, Oracle Database
provides server-generated alerts to notify you when the problem arises. The alerts
also provide recommendations on how to resolve the problem.
See Also: "Server-Generated Alerts" on page 4-18 in this book for
detailed information on using APIs to administer server-generated
alerts

Advisors
Oracle Database provides advisors to help you optimize a number of subsystems in
the database. An advisory framework ensures consistency in the way in which
advisors are invoked and results are reported. The advisors are used primarily by the
database to optimize its own performance. However, they can also be invoked by
administrators to get more insight into the functioning of a particular subcomponent.
See Also:
■ Oracle Database Concepts for an overview of the advisors
■ Oracle Database 2 Day DBA for more detailed information on using the advisors


Data Utilities
Several utilities are available to help you maintain the data in your Oracle Database.
This section introduces two of these utilities:
■ SQL*Loader
■ Export and Import Utilities
See Also: Oracle Database Utilities for detailed information about these utilities

SQL*Loader
SQL*Loader is used both by database administrators and by other users of Oracle Database. It loads data from standard operating system files (such as files in text or C data format) into database tables.
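For example, SQL*Loader reads a control file that describes the data to be loaded. The following minimal sketch loads comma-separated rows into a table; the table name, column names, and file names are hypothetical:
-- Contents of emp.ctl
LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, job)

You would then invoke the utility from the operating system prompt, for example:
sqlldr scott/tiger CONTROL=emp.ctl LOG=emp.log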

Export and Import Utilities
Oracle export and import utilities enable you to move existing data in Oracle format
between one Oracle Database and another. For example, export files can archive
database data or move data among different databases that run on the same or
different operating systems.
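For example, the original Export and Import utilities can be invoked from the operating system prompt as follows. The username, password, table, and file names are illustrations only:
exp scott/tiger FILE=emp.dmp TABLES=emp
imp scott/tiger FILE=emp.dmp TABLES=emp

The Data Pump versions of these utilities, expdp and impdp, are also described in Oracle Database Utilities.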


2
Creating an Oracle Database
This chapter discusses the process of creating an Oracle Database, and contains the
following topics:
■ Deciding How to Create an Oracle Database
■ Manually Creating an Oracle Database
■ Understanding the CREATE DATABASE Statement
■ Understanding Initialization Parameters
■ Troubleshooting Database Creation
■ Dropping a Database
■ Managing Initialization Parameters Using a Server Parameter File
■ Defining Application Services for Oracle Database 10g
■ Considerations After Creating a Database
■ Viewing Information About the Database
See Also:
■ Part III, "Automated File and Storage Management", for information about creating a database whose underlying operating system files are automatically created and managed by the Oracle Database server
■ Oracle Real Application Clusters Installation and Configuration Guide for additional information specific to an Oracle Real Application Clusters environment

Deciding How to Create an Oracle Database
You can create an Oracle Database in three ways:
■ Use the Database Configuration Assistant (DBCA).
DBCA can be launched by the Oracle Universal Installer, depending upon the type of install that you select, and provides a graphical user interface (GUI) that guides you through the creation of a database. You can also launch DBCA as a standalone tool at any time after Oracle Database installation to create or make a copy (clone) of a database. Refer to Oracle Database 2 Day DBA for detailed information on creating a database using DBCA.
■ Use the CREATE DATABASE statement.
You can use the CREATE DATABASE SQL statement to create a database. If you do so, you must complete additional actions before you have an operational database. These actions include creating users and temporary tablespaces, building views of the data dictionary tables, and installing Oracle built-in packages. These actions can be performed by executing prepared scripts, many of which are supplied for you.
If you have existing scripts for creating your database, consider editing those scripts to take advantage of new Oracle Database features. Oracle provides a sample database creation script and a sample initialization parameter file with the Oracle Database software files. Both the script and the file can be edited to suit your needs. See "Manually Creating an Oracle Database" on page 2-2.
■ Upgrade an existing database.
If you are already using an earlier release of Oracle Database, database creation is required only if you want an entirely new database. You can upgrade your existing Oracle Database and use it with the new release of the database software. The Oracle Database Upgrade Guide manual contains information about upgrading an existing Oracle Database.

The remainder of this chapter discusses creating a database manually.

Manually Creating an Oracle Database
This section takes you through the planning stage and the actual creation of the
database.

Considerations Before Creating the Database
Database creation prepares several operating system files to work together as an
Oracle Database. You need only create a database once, regardless of how many
datafiles it has or how many instances access it. You can create a database to erase
information in an existing database and create a new database with the same name
and physical structure.
The following topics can help prepare you for database creation.
■ Planning for Database Creation
■ Meeting Creation Prerequisites

Planning for Database Creation
Prepare to create the database by research and careful planning. Table 2–1 lists some
recommended actions:
Table 2–1 Database Planning Tasks

Action: Plan the database tables and indexes and estimate the amount of space they will require.
Additional Information: Part II, "Oracle Database Structure and Storage"; Part IV, "Schema Objects"

Action: Plan the layout of the underlying operating system files your database will comprise. Proper distribution of files can improve database performance dramatically by distributing the I/O during file access. You can distribute I/O in several ways when you install Oracle software and create your database. For example, you can place redo log files on separate disks or use striping. You can situate datafiles to reduce contention. And you can control data density (number of rows to a data block).
Additional Information: Oracle Database Performance Tuning Guide; your Oracle operating system specific documentation

Action: Consider using Oracle-managed files and Automatic Storage Management to create and manage the operating system files that make up your database storage.
Additional Information: Part III, "Automated File and Storage Management"

Action: Select the global database name, which is the name and location of the database within the network structure. Create the global database name by setting both the DB_NAME and DB_DOMAIN initialization parameters.
Additional Information: "Determining the Global Database Name" on page 2-21

Action: Familiarize yourself with the initialization parameters contained in the initialization parameter file. Become familiar with the concept and operation of a server parameter file. A server parameter file lets you store and manage your initialization parameters persistently in a server-side disk file.
Additional Information: "Understanding Initialization Parameters" on page 2-19; "What Is a Server Parameter File?" on page 2-36; Oracle Database Reference

Action: Select the database character set. All character data, including data in the data dictionary, is stored in the database character set. You must specify the database character set when you create the database.
If clients using different character sets will access the database, then choose a superset that includes all client character sets. Otherwise, character conversions may be necessary at the cost of increased overhead and potential data loss.
You can also specify an alternate character set.
Caution: AL32UTF8 is the Oracle Database character set that is appropriate for XMLType data. It is equivalent to the IANA registered standard UTF-8 encoding, which supports all valid XML characters.
Do not confuse the Oracle Database character set UTF8 (no hyphen) with database character set AL32UTF8 or with character encoding UTF-8. Database character set UTF8 has been superseded by AL32UTF8. Do not use UTF8 for XML data. UTF8 supports only Unicode version 3.1 and earlier; it does not support all valid XML characters. AL32UTF8 has no such limitation.
Using database character set UTF8 for XML data could potentially cause a fatal error or affect security negatively. If a character that is not supported by the database character set appears in an input-document element name, a replacement character (usually "?") is substituted for it. This will terminate parsing and raise an exception.
Additional Information: Oracle Database Globalization Support Guide

Action: Consider what time zones your database must support. Oracle Database uses one of two time zone files as the source of valid time zones. The default time zone file is timezlrg.dat. It contains more time zones than the other time zone file, timezone.dat.
Additional Information: "Specifying the Database Time Zone File" on page 2-17

Action: Select the standard database block size. This is specified at database creation by the DB_BLOCK_SIZE initialization parameter and cannot be changed after the database is created. The SYSTEM tablespace and most other tablespaces use the standard block size. Additionally, you can specify up to four nonstandard block sizes when creating tablespaces.
Additional Information: "Specifying Database Block Sizes" on page 2-23

Action: Determine the appropriate initial sizing for the SYSAUX tablespace.
Additional Information: "Creating the SYSAUX Tablespace" on page 2-12

Action: Plan to use a default tablespace for non-SYSTEM users to prevent inadvertent saving of database objects in the SYSTEM tablespace.
Additional Information: "Creating a Default Permanent Tablespace" on page 2-14

Action: Plan to use an undo tablespace to manage your undo data.
Additional Information: Chapter 10, "Managing the Undo Tablespace"

Action: Develop a backup and recovery strategy to protect the database from failure. It is important to protect the control file by multiplexing, to choose the appropriate backup mode, and to manage the online and archived redo logs.
Additional Information: Chapter 6, "Managing the Redo Log"; Chapter 7, "Managing Archived Redo Logs"; Chapter 5, "Managing Control Files"; Oracle Database Backup and Recovery Basics

Action: Familiarize yourself with the principles and options of starting up and shutting down an instance and mounting and opening a database.
Additional Information: Chapter 3, "Starting Up and Shutting Down"

Meeting Creation Prerequisites
Before you can create a new database, the following prerequisites must be met:
■ The desired Oracle software must be installed. This includes setting various environment variables unique to your operating system and establishing the directory structure for software and database files.
■ You must have the operating system privileges associated with a fully operational database administrator. You must be specially authenticated by your operating system or through a password file, allowing you to start up and shut down an instance before the database is created or opened. This authentication is discussed in "Database Administrator Authentication" on page 1-11.
■ Sufficient memory must be available to start the Oracle Database instance.
■ Sufficient disk storage space must be available for the planned database on the computer that runs Oracle Database.

All of these are discussed in the Oracle Database Installation Guide specific to your
operating system. If you use the Oracle Universal Installer, it will guide you through
your installation and provide help in setting environment variables and establishing
directory structure and authorizations.

Creating the Database
This section presents the steps involved when you create a database manually. These
steps should be followed in the order presented. The prerequisites described in the
preceding section must already have been completed. That is, you have established the environment for creating your Oracle Database, including most operating system dependent environmental variables, as part of the Oracle software installation process.
Step 1: Decide on Your Instance Identifier (SID)
Step 2: Establish the Database Administrator Authentication Method
Step 3: Create the Initialization Parameter File
Step 4: Connect to the Instance
Step 5: Create a Server Parameter File (Recommended)
Step 6: Start the Instance
Step 7: Issue the CREATE DATABASE Statement
Step 8: Create Additional Tablespaces
Step 9: Run Scripts to Build Data Dictionary Views
Step 10: Run Scripts to Install Additional Options (Optional)
Step 11: Back Up the Database
The examples shown in these steps create an example database mynewdb.
Notes:
■ The steps in this section contain cross-references to other parts of this book and to other books. These cross-references take you to material that will help you to learn about and understand the initialization parameters and database structures with which you are not yet familiar.
■ If you are using Oracle Automatic Storage Management to manage your disk storage, you must start the ASM instance and configure your disk groups before performing the following steps. For information about Automatic Storage Management, see Chapter 12, "Using Automatic Storage Management".

Step 1: Decide on Your Instance Identifier (SID)
An instance is made up of the system global area (SGA) and the background processes
of an Oracle Database. Decide on a unique Oracle system identifier (SID) for your
instance and set the ORACLE_SID environment variable accordingly. This identifier is
used to distinguish this instance from other Oracle Database instances that you may
create later and run concurrently on your system.
The following example for UNIX operating systems sets the SID for the instance that
you will connect to in Step 4: Connect to the Instance:
% setenv ORACLE_SID mynewdb

Step 2: Establish the Database Administrator Authentication Method
You must be authenticated and granted appropriate system privileges in order to
create a database. You can use the password file or operating system authentication
method. Database administrator authentication and authorization is discussed in the
following sections of this book:
■ "Database Administrator Security and Privileges" on page 1-9
■ "Database Administrator Authentication" on page 1-11
■ "Creating and Maintaining a Password File" on page 1-16

Step 3: Create the Initialization Parameter File
When an Oracle instance starts, it reads an initialization parameter file. This file can be
a read-only text file, which must be modified with a text editor, or a read/write binary
file, which can be modified dynamically by the database (for tuning) or with SQL
commands that you submit. The binary file, which is preferred, is called a server
parameter file. In this step, you create a text initialization parameter file. In a later
step, you can optionally create a server parameter file from the text file.
One way to create the text initialization parameter file is to edit a copy of the sample
initialization parameter file that Oracle provides on the distribution media, or the
sample presented in this book.
Note: On UNIX operating systems, the Oracle Universal Installer installs a sample text initialization parameter file in the following location:
$ORACLE_HOME/dbs/init.ora

For convenience, store your initialization parameter file in the Oracle Database default
location, using the default name. Then when you start your database, it will not be
necessary to specify the PFILE clause of the STARTUP command, because Oracle
Database automatically looks in the default location for the initialization parameter
file.
For name, location, and sample content for the initialization parameter file, and for a
discussion of how to set initialization parameters, see "Understanding Initialization
Parameters" on page 2-19.

Step 4: Connect to the Instance
Start SQL*Plus and connect to your Oracle Database instance AS SYSDBA.
$ SQLPLUS /nolog
CONNECT SYS/password AS SYSDBA

Step 5: Create a Server Parameter File (Recommended)
Oracle recommends that you create a server parameter file. The server parameter file
enables you to change initialization parameters with database commands and persist
the changes across a shutdown and startup. You create the server parameter file from
your edited text initialization file. For more information, see "Managing Initialization
Parameters Using a Server Parameter File" on page 2-35.
The following script creates a server parameter file from the text initialization
parameter file and writes it to the default location. The script can be executed before or
after instance startup, but after you connect as SYSDBA. The database must be
restarted before the server parameter file takes effect.
-- create the server parameter file
CREATE SPFILE='/u01/oracle/dbs/spfilemynewdb.ora' FROM
PFILE='/u01/oracle/admin/initmynewdb/scripts/init.ora';
SHUTDOWN
-- the next startup will use the server parameter file
EXIT
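After the restart, a quick way to confirm that the instance came up with the server parameter file (a sketch, not part of the original procedure) is the SQL*Plus SHOW PARAMETER command; a nonempty VALUE column shows the path of the spfile in use:
SHOW PARAMETER spfile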


Step 6: Start the Instance
Start an instance without mounting a database. Typically, you do this only during
database creation or while performing maintenance on the database. Use the STARTUP
command with the NOMOUNT clause. In this example, because the server parameter file
is stored in the default location, you are not required to specify the PFILE clause:
STARTUP NOMOUNT

At this point, the SGA is created and background processes are started in preparation
for the creation of a new database. The database itself does not yet exist.
See Also:
■ "Managing Initialization Parameters Using a Server Parameter File" on page 2-35
■ Chapter 3, "Starting Up and Shutting Down", to learn how to use the STARTUP command

Step 7: Issue the CREATE DATABASE Statement
To create the new database, use the CREATE DATABASE statement. The following
statement creates database mynewdb:
CREATE DATABASE mynewdb
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
LOGFILE GROUP 1 ('/u01/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
GROUP 2 ('/u01/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
GROUP 3 ('/u01/oracle/oradata/mynewdb/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/u01/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/u01/oracle/oradata/mynewdb/sysaux01.dbf' SIZE 325M REUSE
DEFAULT TABLESPACE tbs_1
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/u01/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/u01/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

A database is created with the following characteristics:
■ The database is named mynewdb. Its global database name is mynewdb.us.oracle.com. See "DB_NAME Initialization Parameter" and "DB_DOMAIN Initialization Parameter" on page 2-22.
■ Three control files are created as specified by the CONTROL_FILES initialization parameter, which was set before database creation in the initialization parameter file. See "Sample Initialization Parameter File" on page 2-20 and "Specifying Control Files" on page 2-22.
■ The password for user SYS is pz6r58 and the password for SYSTEM is y1tz5p. The two clauses that specify the passwords for SYS and SYSTEM are not mandatory in this release of Oracle Database. However, if you specify either clause, you must specify both clauses. For further information about the use of these clauses, see "Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM" on page 2-10.
■ The new database has three redo log files as specified in the LOGFILE clause. MAXLOGFILES, MAXLOGMEMBERS, and MAXLOGHISTORY define limits for the redo log. See Chapter 6, "Managing the Redo Log".
■ MAXDATAFILES specifies the maximum number of datafiles that can be open in the database. This number affects the initial sizing of the control file.
Note: You can set several limits during database creation. Some of these limits are constrained by operating system limits. For example, if you set MAXDATAFILES, Oracle Database allocates enough space in the control file to store MAXDATAFILES filenames, even if the database has only one datafile initially. However, because the maximum control file size is limited and operating system dependent, you might not be able to set all CREATE DATABASE parameters at their theoretical maximums.
For more information about setting limits during database creation, see the Oracle Database SQL Reference and your operating system specific Oracle documentation.
■ MAXINSTANCES specifies that only one instance can have this database mounted and open.
■ The US7ASCII character set is used to store data in this database.
■ The AL16UTF16 character set is specified as the NATIONAL CHARACTER SET, used to store data in columns specifically defined as NCHAR, NCLOB, or NVARCHAR2.
■ The SYSTEM tablespace, consisting of the operating system file /u01/oracle/oradata/mynewdb/system01.dbf, is created as specified by the DATAFILE clause. If a file with that name already exists, it is overwritten. The SYSTEM tablespace is a locally managed tablespace. See "Creating a Locally Managed SYSTEM Tablespace" on page 2-11.
■ A SYSAUX tablespace is created, consisting of the operating system file /u01/oracle/oradata/mynewdb/sysaux01.dbf as specified in the SYSAUX DATAFILE clause. See "Creating the SYSAUX Tablespace" on page 2-12.
■ The DEFAULT TABLESPACE clause creates and names a default permanent tablespace for this database.
■ The DEFAULT TEMPORARY TABLESPACE clause creates and names a default temporary tablespace for this database. See "Creating a Default Temporary Tablespace" on page 2-14.
■ The UNDO TABLESPACE clause creates and names an undo tablespace that is used to store undo data for this database if you have specified UNDO_MANAGEMENT=AUTO in the initialization parameter file. See "Using Automatic Undo Management: Creating an Undo Tablespace" on page 2-13.
■ Redo log files will not initially be archived, because the ARCHIVELOG clause is not specified in this CREATE DATABASE statement. This is customary during database creation. You can later use an ALTER DATABASE statement to switch to ARCHIVELOG mode, as shown in the sketch after this list. The initialization parameters in the initialization parameter file for mynewdb relating to archiving are LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT. See Chapter 7, "Managing Archived Redo Logs".
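The switch to ARCHIVELOG mode must be made while the database is mounted but not open. A minimal sketch, run from SQL*Plus while connected AS SYSDBA:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;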
See Also:
■ "Understanding the CREATE DATABASE Statement" on page 2-10
■ Oracle Database SQL Reference for more information about specifying the clauses and parameter values for the CREATE DATABASE statement

Step 8: Create Additional Tablespaces
To make the database functional, you need to create additional files and tablespaces
for users. The following sample script creates some additional tablespaces:
CONNECT SYS/password AS SYSDBA
-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
DATAFILE '/u01/oracle/oradata/mynewdb/users01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
DATAFILE '/u01/oracle/oradata/mynewdb/indx01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;

For information about creating tablespaces, see Chapter 8, "Managing Tablespaces".

Step 9: Run Scripts to Build Data Dictionary Views
Run the scripts necessary to build views, synonyms, and PL/SQL packages:
CONNECT SYS/password AS SYSDBA
@/u01/oracle/rdbms/admin/catalog.sql
@/u01/oracle/rdbms/admin/catproc.sql
EXIT

The following table contains descriptions of the scripts:

Script          Description
CATALOG.SQL     Creates the views of the data dictionary tables, the dynamic
                performance views, and public synonyms for many of the views.
                Grants PUBLIC access to the synonyms.
CATPROC.SQL     Runs all scripts required for or used with PL/SQL.

Step 10: Run Scripts to Install Additional Options (Optional)
You may want to run other scripts. The scripts that you run are determined by the
features and options you choose to use or install. Many of the scripts available to you
are described in the Oracle Database Reference.
If you plan to install other Oracle products to work with this database, see the
installation instructions for those products. Some products require you to create
additional data dictionary tables. Usually, command files are provided to create and
load these tables into the database data dictionary.

See the installation and administration documentation for the specific Oracle products
that you plan to install.

Step 11: Back Up the Database
Take a full backup of the database to ensure that you have a complete set of files from
which to recover if a media failure occurs. For information on backing up a database,
see Oracle Database Backup and Recovery Basics.

Understanding the CREATE DATABASE Statement
When you execute a CREATE DATABASE statement, Oracle Database performs a
number of operations. The actual operations performed depend on the clauses that
you specify in the CREATE DATABASE statement and the initialization parameters that
you have set. Oracle Database performs at least these operations:
■ Creates the datafiles for the database
■ Creates the control files for the database
■ Creates the redo log files for the database and establishes the ARCHIVELOG mode
■ Creates the SYSTEM tablespace
■ Creates the SYSAUX tablespace
■ Creates the data dictionary
■ Sets the character set that stores data in the database
■ Sets the database time zone
■ Mounts and opens the database for use

This section discusses several of the clauses of the CREATE DATABASE statement. It
expands upon some of the clauses discussed in "Step 7: Issue the CREATE DATABASE
Statement" on page 2-7 and introduces additional ones. Many of the CREATE
DATABASE clauses discussed here can be used to simplify the creation and
management of your database.
The following topics are contained in this section:
■ Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM
■ Creating a Locally Managed SYSTEM Tablespace
■ Creating the SYSAUX Tablespace
■ Using Automatic Undo Management: Creating an Undo Tablespace
■ Creating a Default Permanent Tablespace
■ Creating a Default Temporary Tablespace
■ Specifying Oracle-Managed Files at Database Creation
■ Supporting Bigfile Tablespaces During Database Creation
■ Specifying the Database Time Zone and Time Zone File
■ Specifying FORCE LOGGING Mode

Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM
The clauses of the CREATE DATABASE statement used for specifying the passwords for
users SYS and SYSTEM are:
■ USER SYS IDENTIFIED BY password
■ USER SYSTEM IDENTIFIED BY password
If you omit these clauses, these users are assigned the default passwords
change_on_install and manager, respectively. A record is written to the alert log
indicating that the default passwords were used. To protect your database, you should
change these passwords using the ALTER USER statement immediately after database
creation.
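A minimal sketch (the new password values are illustrative):
-- change the default passwords immediately after database creation
ALTER USER SYS IDENTIFIED BY new_sys_password;
ALTER USER SYSTEM IDENTIFIED BY new_system_password;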
Oracle strongly recommends that you specify these clauses, even though they are
optional in this release of Oracle Database. The default passwords are commonly
known, and if you neglect to change them later, you leave the database vulnerable to
attack by malicious users.
See Also: "Some Security Considerations" on page 2-44

Creating a Locally Managed SYSTEM Tablespace
Specify the EXTENT MANAGEMENT LOCAL clause in the CREATE DATABASE statement
to create a locally managed SYSTEM tablespace. The COMPATIBLE initialization
parameter must be set to 9.2 or higher for this statement to be successful. If you do not
specify the EXTENT MANAGEMENT LOCAL clause, by default the database creates a
dictionary-managed SYSTEM tablespace. Dictionary-managed tablespaces are
deprecated.
A locally managed SYSTEM tablespace has AUTOALLOCATE enabled by default, which
means that the system determines and controls the number and size of extents. You
may notice an increase in the initial size of objects created in a locally managed
SYSTEM tablespace because of the autoallocate policy. It is not possible to create a
locally managed SYSTEM tablespace and specify UNIFORM extent size.
When you create your database with a locally managed SYSTEM tablespace, ensure
that the following conditions are met:
■ A default temporary tablespace must exist, and that tablespace cannot be the SYSTEM tablespace. To meet this condition, you can specify the DEFAULT TEMPORARY TABLESPACE clause in the CREATE DATABASE statement, or you can omit the clause and let Oracle Database create the tablespace for you using a default name and in a default location.
■ You can include the UNDO TABLESPACE clause in the CREATE DATABASE statement to create a specific undo tablespace. If you omit that clause, Oracle Database creates a locally managed undo tablespace for you using the default name and in a default location.
See Also:
■ Oracle Database SQL Reference for more specific information about the use of the DEFAULT TEMPORARY TABLESPACE and UNDO TABLESPACE clauses when EXTENT MANAGEMENT LOCAL is specified for the SYSTEM tablespace
■ "Locally Managed Tablespaces" on page 8-3
■ "Migrating the SYSTEM Tablespace to a Locally Managed Tablespace" on page 8-24


Creating the SYSAUX Tablespace
The SYSAUX tablespace is always created at database creation. The SYSAUX tablespace
serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default
tablespace for many Oracle Database features and products that previously required
their own tablespaces, it reduces the number of tablespaces required by the database
and that you must maintain. Other functionality or features that previously used the
SYSTEM tablespace can now use the SYSAUX tablespace, thus reducing the load on the
SYSTEM tablespace.
You can specify only datafile attributes for the SYSAUX tablespace, using the SYSAUX
DATAFILE clause in the CREATE DATABASE statement. Mandatory attributes of the
SYSAUX tablespace are set by Oracle Database and include:
■ PERMANENT
■ READ WRITE
■ EXTENT MANAGEMENT LOCAL
■ SEGMENT SPACE MANAGEMENT AUTO

You cannot alter these attributes with an ALTER TABLESPACE statement, and any
attempt to do so will result in an error. You cannot drop or rename the SYSAUX
tablespace.
The size of the SYSAUX tablespace is determined by the size of the database
components that occupy SYSAUX. See Table 2–2 for a list of all SYSAUX occupants.
Based on the initial sizes of these components, the SYSAUX tablespace needs to be at
least 240 MB at the time of database creation. The space requirements of the SYSAUX
tablespace will increase after the database is fully deployed, depending on the nature
of its use and workload. For more information on how to manage the space
consumption of the SYSAUX tablespace on an ongoing basis, see "Managing the
SYSAUX Tablespace" on page 8-20.
If you include a DATAFILE clause for the SYSTEM tablespace, then you must specify
the SYSAUX DATAFILE clause as well, or the CREATE DATABASE statement will fail.
This requirement does not exist if the Oracle-managed files feature is enabled (see
"Specifying Oracle-Managed Files at Database Creation" on page 2-15).
If you issue the CREATE DATABASE statement with no other clauses, then the software
creates a default database with datafiles for the SYSTEM and SYSAUX tablespaces
stored in system-determined default locations, or where specified by an
Oracle-managed files initialization parameter.
The SYSAUX tablespace has the same security attributes as the SYSTEM tablespace.
Note: This book discusses the creation of the SYSAUX tablespace at
database creation. When upgrading from a release of Oracle
Database that did not require the SYSAUX tablespace, you must
create the SYSAUX tablespace as part of the upgrade process. This is
discussed in Oracle Database Upgrade Guide.

Table 2–2 lists the components that use the SYSAUX tablespace as their default
tablespace during installation, and the tablespace in which they were stored in earlier
releases:


Table 2–2    Database Components and the SYSAUX Tablespace

Component Using SYSAUX                               Tablespace in Earlier Releases
Analytical Workspace Object Table                    SYSTEM
Enterprise Manager Repository                        OEM_REPOSITORY
LogMiner                                             SYSTEM
Logical Standby                                      SYSTEM
OLAP API History Tables                              CWMLITE
Oracle Data Mining                                   ODM
Oracle Spatial                                       SYSTEM
Oracle Streams                                       SYSTEM
Oracle Text                                          DRSYS
Oracle Ultra Search                                  DRSYS
Oracle interMedia ORDPLUGINS Components              SYSTEM
Oracle interMedia ORDSYS Components                  SYSTEM
Oracle interMedia SI_INFORMTN_SCHEMA Components      SYSTEM
Server Manageability Components                      New in Oracle Database 10g
Statspack Repository                                 User-defined
Oracle Scheduler                                     New in Oracle Database 10g
Workspace Manager                                    SYSTEM

The installation procedures for these components provide the means of establishing
their occupancy of the SYSAUX tablespace.
See Also: "Managing the SYSAUX Tablespace" on page 8-20 for
information about managing the SYSAUX tablespace

Using Automatic Undo Management: Creating an Undo Tablespace
Automatic undo management uses an undo tablespace. To enable automatic undo
management, set the UNDO_MANAGEMENT initialization parameter to AUTO in your
initialization parameter file. In this mode, undo data is stored in an undo tablespace
and is managed by Oracle Database. If you want to define and name the undo
tablespace yourself, you must also include the UNDO TABLESPACE clause in the
CREATE DATABASE statement at database creation time. If you omit this clause, and
automatic undo management is enabled (by setting the UNDO_MANAGEMENT
initialization parameter to AUTO), the database creates a default undo tablespace
named SYS_UNDOTBS.
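As a quick check (a sketch, not part of the original text), you can confirm the undo configuration from SQL*Plus after creation:
SHOW PARAMETER UNDO_MANAGEMENT
-- lists undo tablespaces, whether named by you or created by default
SELECT TABLESPACE_NAME FROM DBA_TABLESPACES WHERE CONTENTS = 'UNDO';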
See Also:
■ "Specifying the Method of Undo Space Management" on page 2-33
■ Chapter 10, "Managing the Undo Tablespace", for information about the creation and use of undo tablespaces


Creating a Default Permanent Tablespace
The DEFAULT TABLESPACE clause of the CREATE DATABASE statement specifies a
default permanent tablespace for the database. Oracle Database assigns to this
tablespace any non-SYSTEM users for whom you do not explicitly specify a different
permanent tablespace. If you do not specify this clause, then the SYSTEM tablespace is
the default permanent tablespace for non-SYSTEM users. Oracle strongly recommends
that you create a default permanent tablespace.
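For example, a minimal sketch that makes the users tablespace created in Step 8 the database default (the tablespace name is illustrative):
ALTER DATABASE DEFAULT TABLESPACE users;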
See Also: Oracle Database SQL Reference for the syntax of the DEFAULT TABLESPACE clause of CREATE DATABASE and ALTER DATABASE

Creating a Default Temporary Tablespace
The DEFAULT TEMPORARY TABLESPACE clause of the CREATE DATABASE statement
creates a default temporary tablespace for the database. Oracle Database assigns this
tablespace as the temporary tablespace for users who are not explicitly assigned a
temporary tablespace.
You can explicitly assign a temporary tablespace or tablespace group to a user in the
CREATE USER statement. However, if you do not do so, and if no default temporary
tablespace has been specified for the database, then by default these users are assigned
the SYSTEM tablespace as their temporary tablespace. It is not good practice to store
temporary data in the SYSTEM tablespace, and it is cumbersome to assign every user a
temporary tablespace individually. Therefore, Oracle recommends that you use the
DEFAULT TEMPORARY TABLESPACE clause of CREATE DATABASE.
Note: When you specify a locally managed SYSTEM tablespace, the SYSTEM tablespace cannot be used as a temporary tablespace. In this case the database creates a default temporary tablespace. This behavior is explained in "Creating a Locally Managed SYSTEM Tablespace" on page 2-11.

You can add or change the default temporary tablespace after database creation. You
do this by creating a new temporary tablespace or tablespace group with a CREATE
TEMPORARY TABLESPACE statement, and then assigning it as the temporary tablespace
using the ALTER DATABASE DEFAULT TEMPORARY TABLESPACE statement. Users
will automatically be switched (or assigned) to the new default temporary tablespace.
The following statement assigns a new default temporary tablespace:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tempts2;

The new default temporary tablespace must already exist. When using a locally
managed SYSTEM tablespace, the new default temporary tablespace must also be
locally managed.
You cannot drop or take offline a default temporary tablespace, but you can assign a
new default temporary tablespace and then drop or take offline the former one. You
cannot change a default temporary tablespace to a permanent tablespace.
Users can obtain the name of the current default temporary tablespace by querying the
PROPERTY_NAME and PROPERTY_VALUE columns of the DATABASE_PROPERTIES
view. These columns contain the values "DEFAULT_TEMP_TABLESPACE" and the
default temporary tablespace name, respectively.
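For example:
SELECT PROPERTY_VALUE
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';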


See Also:
■ Oracle Database SQL Reference for the syntax of the DEFAULT TEMPORARY TABLESPACE clause of CREATE DATABASE and ALTER DATABASE
■ "Temporary Tablespaces" on page 8-8 for information about creating and using temporary tablespaces
■ "Multiple Temporary Tablespaces: Using Tablespace Groups" on page 8-11 for information about creating and using temporary tablespace groups

Specifying Oracle-Managed Files at Database Creation
You can minimize the number of clauses and parameters that you specify in your
CREATE DATABASE statement by using the Oracle-managed files feature. You do this
either by specifying a directory in which your files are created and managed by Oracle
Database, or by using Automatic Storage Management. When you use Automatic
Storage Management, you specify a disk group in which the database creates and
manages your files, including file redundancy and striping.
By including any of the initialization parameters DB_CREATE_FILE_DEST,
DB_CREATE_ONLINE_LOG_DEST_n, or DB_RECOVERY_FILE_DEST in your
initialization parameter file, you instruct Oracle Database to create and manage the
underlying operating system files of your database. Oracle Database will
automatically create and manage the operating system files for the following database
structures, depending on which initialization parameters you specify and how you
specify clauses in your CREATE DATABASE statement:
■ Tablespaces
■ Temporary tablespaces
■ Control files
■ Redo log files
■ Archive log files
■ Flashback logs
■ Block change tracking files
■ RMAN backups
See Also: "Specifying a Flash Recovery Area" on page 2-22 for
information about setting initialization parameters that create a
flash recovery area
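As an illustration (the directory path is hypothetical), you could set the datafile destination in the initialization parameter file before creating the database, or dynamically:
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oracle/oradata';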

The following CREATE DATABASE statement shows briefly how the Oracle-managed
files feature works, assuming you have specified required initialization parameters:
CREATE DATABASE rbdb1
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
UNDO TABLESPACE undotbs
DEFAULT TEMPORARY TABLESPACE tempts1;
■ No DATAFILE clause is specified, so the database creates an Oracle-managed datafile for the SYSTEM tablespace.
■ No LOGFILE clauses are included, so the database creates two Oracle-managed redo log file groups.
■ No SYSAUX DATAFILE is included, so the database creates an Oracle-managed datafile for the SYSAUX tablespace.
■ No DATAFILE subclause is specified for the UNDO TABLESPACE clause, so the database creates an Oracle-managed datafile for the undo tablespace.
■ No TEMPFILE subclause is specified for the DEFAULT TEMPORARY TABLESPACE clause, so the database creates an Oracle-managed tempfile.
■ If no CONTROL_FILES initialization parameter is specified in the initialization parameter file, then the database also creates an Oracle-managed control file.
■ If you are using a server parameter file (see "Managing Initialization Parameters Using a Server Parameter File" on page 2-35), the database automatically sets the appropriate initialization parameters.
See Also:
■ Chapter 11, "Using Oracle-Managed Files", for information about the Oracle-managed files feature and how to use it
■ Chapter 12, "Using Automatic Storage Management", for information about Automatic Storage Management

Supporting Bigfile Tablespaces During Database Creation
Oracle Database simplifies management of tablespaces and enables support for
ultra-large databases by letting you create bigfile tablespaces. Bigfile tablespaces can
contain only one file, but that file can have up to 4G blocks. The maximum number of
datafiles in an Oracle Database is limited (usually to 64K files). Therefore, bigfile
tablespaces can significantly enhance the storage capacity of an Oracle Database.
This section discusses the clauses of the CREATE DATABASE statement that let you
include support for bigfile tablespaces.
"Bigfile Tablespaces" on page 8-6 for more information
about bigfile tablespaces

See Also:

Specifying the Default Tablespace Type
The SET DEFAULT...TABLESPACE clause of the CREATE DATABASE statement
determines the default type of tablespace for this database in subsequent CREATE
TABLESPACE statements. Specify either SET DEFAULT BIGFILE TABLESPACE or
SET DEFAULT SMALLFILE TABLESPACE. If you omit this clause, the default is a
smallfile tablespace, which is the traditional type of Oracle Database tablespace. A
smallfile tablespace can contain up to 1022 files with up to 4M blocks each.
The use of bigfile tablespaces further enhances the Oracle-managed files feature,
because bigfile tablespaces make datafiles completely transparent for users. SQL
syntax for the ALTER TABLESPACE statement has been extended to allow you to
perform operations on tablespaces, rather than the underlying datafiles.
The CREATE DATABASE statement shown in "Specifying Oracle-Managed Files at
Database Creation" on page 2-15 can be modified as follows to specify that the default
type of tablespace is a bigfile tablespace:
CREATE DATABASE rbdb1
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p

2-16 Oracle Database Administrator’s Guide

Understanding the CREATE DATABASE Statement

SET DEFAULT BIGFILE TABLESPACE
UNDO TABLESPACE undotbs
DEFAULT TEMPORARY TABLESPACE tempts1;

To dynamically change the default tablespace type after database creation, use the SET
DEFAULT TABLESPACE clause of the ALTER DATABASE statement:
ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE;

You can determine the current default tablespace type for the database by querying the
DATABASE_PROPERTIES data dictionary view as follows:
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE';

Overriding the Default Tablespace Type
The SYSTEM and SYSAUX tablespaces are always created with the default tablespace
type. However, you can explicitly override the default tablespace type for the UNDO
and DEFAULT TEMPORARY tablespace during the CREATE DATABASE operation.
For example, you can create a bigfile UNDO tablespace in a database with the default
tablespace type of smallfile as follows:
CREATE DATABASE rbdb1
...
BIGFILE UNDO TABLESPACE undotbs
DATAFILE '/u01/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

You can create a smallfile DEFAULT TEMPORARY tablespace in a database with the
default tablespace type of bigfile as follows:
CREATE DATABASE rbdb1
SET DEFAULT BIGFILE TABLESPACE
...
SMALLFILE DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/u01/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
...

Specifying the Database Time Zone and Time Zone File
You can specify the database time zone and the supporting time zone file.

Setting the Database Time Zone
Set the database time zone when the database is created by using the SET TIME_ZONE
clause of the CREATE DATABASE statement. If you do not set the database time zone,
then it defaults to the time zone of the server's operating system.
You can change the time zone for a session by using the SET TIME_ZONE
clause of the ALTER SESSION statement.
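A minimal sketch of each form (the offset value is illustrative):
CREATE DATABASE mynewdb
...
SET TIME_ZONE = '+00:00';

ALTER SESSION SET TIME_ZONE = '+00:00';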
See Also: Oracle Database Globalization Support Guide for more
information about setting the database time zone

Specifying the Database Time Zone File
Two time zone files are included in the Oracle home directory. The default time zone
file is $ORACLE_HOME/oracore/zoneinfo/timezonelrg.dat. A smaller time
zone file can be found in $ORACLE_HOME/oracore/zoneinfo/timezone.dat.
Creating an Oracle Database

2-17

Understanding the CREATE DATABASE Statement

If you are already using the smaller time zone file and you want to continue to use it in
an Oracle Database 10g environment or if you want to use the smaller time zone file
instead of the default time zone file, then complete the following tasks:
1. Shut down the database.
2. Set the ORA_TZFILE environment variable to the full path name of the timezone.dat file.
3. Restart the database.

If you are already using the default time zone file, then it is not practical to change to
the smaller time zone file, because the database may contain data with time zones that
are not part of the smaller time zone file.
All databases that share information must use the same time zone file.
The time zone files contain the valid time zone names. The following information is
also included for each time zone:
■ Offset from Coordinated Universal Time (UTC)
■ Transition times for Daylight Saving Time
■ Abbreviations for standard time and Daylight Saving Time

To view the time zone names in the file being used by your database, use the following
query:
SELECT * FROM V$TIMEZONE_NAMES;

Specifying FORCE LOGGING Mode
Some data definition language statements (such as CREATE TABLE) allow the
NOLOGGING clause, which causes some database operations not to generate redo
records in the database redo log. The NOLOGGING setting can speed up operations that
can be easily recovered outside of the database recovery mechanisms, but it can
negatively affect media recovery and standby databases.
Oracle Database lets you force the writing of redo records even when NOLOGGING has
been specified in DDL statements. The database never generates redo records for
temporary tablespaces and temporary segments, so forced logging has no effect on
these objects.
See Also: Oracle Database SQL Reference for information about operations that can be done in NOLOGGING mode

Using the FORCE LOGGING Clause
To put the database into FORCE LOGGING mode, use the FORCE LOGGING clause in
the CREATE DATABASE statement. If you do not specify this clause, the database is not
placed into FORCE LOGGING mode.
Use the ALTER DATABASE statement to place the database into FORCE LOGGING
mode after database creation. This statement can take considerable time to complete,
because it waits for all unlogged direct writes to finish.
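For example:
ALTER DATABASE FORCE LOGGING;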
You can cancel FORCE LOGGING mode using the following SQL statement:
ALTER DATABASE NO FORCE LOGGING;

Independent of specifying FORCE LOGGING for the database, you can selectively
specify FORCE LOGGING or NO FORCE LOGGING at the tablespace level. However, if
FORCE LOGGING mode is in effect for the database, it takes precedence over the
tablespace setting. If it is not in effect for the database, then the individual tablespace
settings are enforced. Oracle recommends that either the entire database be placed into
FORCE LOGGING mode or individual tablespaces be placed into FORCE LOGGING
mode, but not both.
The FORCE LOGGING mode is a persistent attribute of the database. That is, if the
database is shut down and restarted, it remains in the same logging mode. However, if
you re-create the control file, the database is not restarted in the FORCE LOGGING
mode unless you specify the FORCE LOGGING clause in the CREATE CONTROLFILE
statement.
"Controlling the Writing of Redo Records" on page 8-13
for information about using the FORCE LOGGING clause for
tablespace creation.

See Also:

Performance Considerations of FORCE LOGGING Mode
FORCE LOGGING mode results in some performance degradation. If the primary
reason for specifying FORCE LOGGING is to ensure complete media recovery, and
there is no standby database active, then consider the following:
■ How many media failures are likely to happen?
■ How serious is the damage if unlogged direct writes cannot be recovered?
■ Is the performance degradation caused by forced logging tolerable?

If the database is running in NOARCHIVELOG mode, then generally there is no benefit
to placing the database in FORCE LOGGING mode. Media recovery is not possible in
NOARCHIVELOG mode, so if you combine it with FORCE LOGGING, the result may be
performance degradation with little benefit.

Understanding Initialization Parameters
When an Oracle instance starts, it reads initialization parameters from an initialization
parameter file. This file can be either a read-only text file, or a read/write binary file.
The binary file is called a server parameter file, and it always resides on the server. A
server parameter file enables you to change initialization parameters with ALTER
SYSTEM commands and to persist the changes across a shutdown and startup. It also
provides a basis for self-tuning by the Oracle Database server. For these reasons, it is
recommended that you use a server parameter file. You can create one from your
edited text initialization file or by using the Database Configuration Assistant.
Before you create a server parameter file, you can start an instance with a text
initialization parameter file. Upon startup, the Oracle instance first searches for a
server parameter file in a default location, and if it does not find one, searches for a
text initialization parameter file. You can also override an existing server parameter
file by naming a text initialization parameter file as an argument of the STARTUP
command.
For more information on server parameter files, see "Managing Initialization
Parameters Using a Server Parameter File" on page 2-35. For more information on the
STARTUP command, see "Understanding Initialization Parameter Files" on page 3-2.
Default file names and locations for the text initialization parameter file are shown in
the following table:

Platform         Default Name                                 Default Location
UNIX and Linux   init$ORACLE_SID.ora                          $ORACLE_HOME/dbs
                 For example, the initialization parameter    For example, the initialization parameter
                 file for the mynewdb database is named:      file for the mynewdb database is stored in
                 initmynewdb.ora                              the following location:
                                                              /u01/oracle/dbs/initmynewdb.ora
Windows          init%ORACLE_SID%.ora                         %ORACLE_HOME%\database

Sample Initialization Parameter File
The following is an example of an initialization parameter file used to create a
database on a UNIX system.
control_files              = (/u0d/lcg03/control.001.dbf,
                              /u0d/lcg03/control.002.dbf,
                              /u0d/lcg03/control.003.dbf)
db_name                    = lcg03
db_domain                  = us.oracle.com
log_archive_dest_1         = "LOCATION=/net/fstlcg03/private/yaliu/testlog/log.lcg03.fstlcg03/lcg03/arch"
log_archive_dest_state_1   = enable
db_block_size              = 8192
pga_aggregate_target       = 2500M
processes                  = 1000
sessions                   = 1200
open_cursors               = 1024
undo_management            = AUTO
shared_servers             = 3
remote_listener            = tnsfstlcg03
undo_tablespace            = smu_nd1
compatible                 = 10.2.0
sga_target                 = 1500M
nls_language               = AMERICAN
nls_territory              = AMERICA
db_recovery_file_dest      = /net/fstlcg03/private/yaliu/testlog/log.lcg03.fstlcg03/lcg03/arch
db_recovery_file_dest_size = 100G

Oracle Database provides generally appropriate values in the sample initialization
parameter file provided with your database software or created for you by the
Database Configuration Assistant. You can edit these Oracle-supplied initialization
parameters and add others, depending upon your configuration and options and how
you plan to tune the database. For any relevant initialization parameters not
specifically included in the initialization parameter file, the database supplies defaults.
If you are creating an Oracle Database for the first time, Oracle suggests that you
minimize the number of parameter values that you alter. As you become more familiar
with your database and environment, you can dynamically tune many initialization
2-20 Oracle Database Administrator’s Guide

Understanding Initialization Parameters

parameters using the ALTER SYSTEM statement. If you are using a text initialization
parameter file, your changes are effective only for the current instance. To make them
permanent, you must update them manually in the initialization parameter file, or
they will be lost over the next shutdown and startup of the database. If you are using a
server parameter file, initialization parameter file changes made by the ALTER
SYSTEM statement can persist across shutdown and startup. This is discussed in
"Managing Initialization Parameters Using a Server Parameter File".
This section introduces you to some of the basic initialization parameters you can add
or edit before you create your new database.
The following topics are contained in this section:
■ Determining the Global Database Name
■ Specifying a Flash Recovery Area
■ Specifying Control Files
■ Specifying Database Block Sizes
■ Managing the System Global Area (SGA)
■ Specifying the Maximum Number of Processes
■ Specifying the Method of Undo Space Management
■ The COMPATIBLE Initialization Parameter and Irreversible Compatibility
■ Setting the License Parameter
See Also: Oracle Database Reference for descriptions of all initialization parameters including their default settings

Determining the Global Database Name
The global database name consists of the user-specified local database name and the
location of the database within a network structure. The DB_NAME initialization
parameter determines the local name component of the database name, and the
DB_DOMAIN parameter indicates the domain (logical location) within a network
structure. The combination of the settings for these two parameters must form a
database name that is unique within a network.
For example, to create a database with a global database name of
test.us.acme.com, edit the parameters of the new parameter file as follows:
DB_NAME = test
DB_DOMAIN = us.acme.com

You can rename the GLOBAL_NAME of your database using the ALTER DATABASE
RENAME GLOBAL_NAME statement. However, you must also shut down and restart the
database after first changing the DB_NAME and DB_DOMAIN initialization parameters
and re-creating the control file.
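For example (the new global name is illustrative):
ALTER DATABASE RENAME GLOBAL_NAME TO test.us.acme.com;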
See Also: Oracle Database Utilities for information about using the DBNEWID utility, which is another means of changing a database name

DB_NAME Initialization Parameter
DB_NAME must be set to a text string of no more than eight characters. During database
creation, the name provided for DB_NAME is recorded in the datafiles, redo log files,
and control file of the database. If during database instance startup the value of the


DB_NAME parameter (in the parameter file) and the database name in the control file
are not the same, the database does not start.

DB_DOMAIN Initialization Parameter
DB_DOMAIN is a text string that specifies the network domain where the database is
created. This is typically the name of the organization that owns the database. If the
database you are about to create will ever be part of a distributed database system,
give special attention to this initialization parameter before database creation.
See Also: Part VII, "Distributed Database Management" for more
information about distributed databases

Specifying a Flash Recovery Area
A flash recovery area is a location in which Oracle Database can store and manage files
related to backup and recovery. It is distinct from the database area, which is a location
for the Oracle-managed current database files (datafiles, control files, and online redo
logs).
You specify a flash recovery area with the following initialization parameters:
■ DB_RECOVERY_FILE_DEST: Location of the flash recovery area. This can be a directory, file system, or Automatic Storage Management (ASM) disk group. It cannot be a raw file system. In a RAC environment, this location must be on a cluster file system, ASM disk group, or a shared directory configured through NFS.
■ DB_RECOVERY_FILE_DEST_SIZE: Specifies the maximum total bytes to be used by the flash recovery area. This initialization parameter must be specified before DB_RECOVERY_FILE_DEST is enabled.

In a RAC environment, the settings for these two parameters must be the same on all
instances.
You cannot enable these parameters if you have set values for the
LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters. You must
disable those parameters before setting up the flash recovery area. You can instead set
values for the LOG_ARCHIVE_DEST_n parameters. If you do not set values for local
LOG_ARCHIVE_DEST_n, then setting up the flash recovery area will implicitly set
LOG_ARCHIVE_DEST_10 to the flash recovery area.
Oracle recommends using a flash recovery area, because it can simplify backup and
recovery operations for your database.
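As an illustration (the size and path are hypothetical), set the size limit before enabling the destination:
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/oracle/flash_recovery_area';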
See Also: Oracle Database Backup and Recovery Basics to learn how
to create and use a flash recovery area

Specifying Control Files
The CONTROL_FILES initialization parameter specifies one or more control filenames
for the database. When you execute the CREATE DATABASE statement, the control
files listed in the CONTROL_FILES parameter are created.
If you do not include CONTROL_FILES in the initialization parameter file, then Oracle
Database creates a control file using a default operating system dependent filename or,
if you have enabled Oracle-managed files, creates Oracle-managed control files.
If you want the database to create new operating system files when creating database
control files, the filenames listed in the CONTROL_FILES parameter must not match
2-22 Oracle Database Administrator’s Guide

Understanding Initialization Parameters

any filenames that currently exist on your system. If you want the database to reuse or
overwrite existing files when creating database control files, ensure that the filenames
listed in the CONTROL_FILES parameter match the filenames that are to be reused.
Caution: Use extreme caution when specifying CONTROL_FILES
filenames. If you inadvertently specify a file that already exists and
execute the CREATE DATABASE statement, the previous contents of
that file will be overwritten.

Oracle strongly recommends you use at least two control files stored on separate
physical disk drives for each database.
See Also:
■ Chapter 5, "Managing Control Files"
■ "Specifying Oracle-Managed Files at Database Creation" on page 2-15

Specifying Database Block Sizes
The DB_BLOCK_SIZE initialization parameter specifies the standard block size for the
database. This block size is used for the SYSTEM tablespace and by default in other
tablespaces. Oracle Database can support up to four additional nonstandard block
sizes.

DB_BLOCK_SIZE Initialization Parameter
The most commonly used block size should be picked as the standard block size. In
many cases, this is the only block size that you need to specify. Typically,
DB_BLOCK_SIZE is set to either 4K or 8K. If you do not set a value for this parameter,
the default data block size is operating system specific, which is generally adequate.
You cannot change the block size after database creation except by re-creating the
database. If the database block size is different from the operating system block size,
ensure that the database block size is a multiple of the operating system block size. For
example, if your operating system block size is 2K (2048 bytes), the following setting
for the DB_BLOCK_SIZE initialization parameter is valid:
DB_BLOCK_SIZE=4096

A larger data block size provides greater efficiency in disk and memory I/O (access
and storage of data). Therefore, consider specifying a block size larger than your
operating system block size if the following conditions exist:
■ Oracle Database is on a large computer system with a large amount of memory and fast disk drives. For example, databases controlled by mainframe computers with vast hardware resources typically use a data block size of 4K or greater.
■ The operating system that runs Oracle Database uses a small operating system block size. For example, if the operating system block size is 1K and the default data block size matches this, the database may be performing an excessive amount of disk I/O during normal operation. For best performance in this case, a database block should consist of multiple operating system blocks.
See Also: Your operating system specific Oracle documentation
for details about the default block size.


Nonstandard Block Sizes
Tablespaces of nonstandard block sizes can be created using the CREATE
TABLESPACE statement and specifying the BLOCKSIZE clause. These nonstandard
block sizes can have any of the following power-of-two values: 2K, 4K, 8K, 16K or 32K.
Platform-specific restrictions regarding the maximum block size apply, so some of
these sizes may not be allowed on some platforms.
To use nonstandard block sizes, you must configure subcaches within the buffer cache
area of the SGA memory for all of the nonstandard block sizes that you intend to use.
The initialization parameters used for configuring these subcaches are described in the
next section, "Managing the System Global Area (SGA)".
The ability to specify multiple block sizes for your database is especially useful if you
are transporting tablespaces between databases. You can, for example, transport a
tablespace that uses a 4K block size from an OLTP environment to a data warehouse
environment that uses a standard block size of 8K.
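A minimal sketch (names and sizes are illustrative; it assumes the 4K subcache has been configured with DB_4K_CACHE_SIZE and that 4K is not the standard block size):
CREATE TABLESPACE tbs_4k
DATAFILE '/u01/oracle/oradata/mynewdb/tbs_4k01.dbf' SIZE 50M
BLOCKSIZE 4K;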
See Also:
■ "Creating Tablespaces" on page 8-2
■ "Transporting Tablespaces Between Databases" on page 8-25

Managing the System Global Area (SGA)
This section discusses the initialization parameters that affect the amount of memory
allocated to the System Global Area (SGA). Except for the SGA_MAX_SIZE
initialization parameter, they are dynamic parameters whose values can be changed by
the ALTER SYSTEM statement. The size of the SGA is dynamic, and can grow or
shrink by dynamically altering these parameters.
This section contains the following topics:
■ Components and Granules in the SGA
■ Limiting the Size of the SGA
■ Using Automatic Shared Memory Management
■ Setting the Buffer Cache Initialization Parameters
■ Using Manual Shared Memory Management
■ Viewing Information About the SGA
See Also:
■ Oracle Database Performance Tuning Guide for information about tuning the components of the SGA
■ Oracle Database Concepts for a conceptual discussion of automatic shared memory management

Components and Granules in the SGA
The SGA comprises a number of memory components, which are pools of memory
used to satisfy a particular class of memory allocation requests. Examples of memory
components include the shared pool (used to allocate memory for SQL and PL/SQL
execution), the java pool (used for java objects and other java execution memory), and
the buffer cache (used for caching disk blocks). All SGA components allocate and
deallocate space in units of granules. Oracle Database tracks SGA memory use in
internal numbers of granules for each SGA component.


The memory for dynamic components in the SGA is allocated in units of granules.
Granule size is determined by total SGA size. Generally speaking, on most platforms,
if the total SGA size is equal to or less than 1 GB, then granule size is 4 MB. For SGAs
larger than 1 GB, granule size is 16 MB. Some platform dependencies may arise. For
example, on 32-bit Windows NT, the granule size is 8 MB for SGAs larger than 1 GB.
Consult your operating system specific documentation for more details.
You can query the V$SGAINFO view to see the granule size that is being used by an
instance. The same granule size is used for all dynamic components in the SGA.
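For example, a sketch of one way to check (V$SGAINFO reports the granule size as a row named Granule Size):
SELECT NAME, BYTES FROM V$SGAINFO WHERE NAME = 'Granule Size';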
If you specify a size for a component that is not a multiple of granule size, Oracle
Database rounds the specified size up to the nearest multiple. For example, if the
granule size is 4 MB and you specify DB_CACHE_SIZE as 10 MB, the database actually
allocates 12 MB.

Limiting the Size of the SGA
The SGA_MAX_SIZE initialization parameter specifies the maximum size of the System
Global Area for the lifetime of the instance. You can dynamically alter the initialization
parameters affecting the size of the buffer caches, shared pool, large pool, Java pool,
and streams pool but only to the extent that the sum of these sizes and the sizes of the
other components of the SGA (fixed SGA, variable SGA, and redo log buffers) does not
exceed the value specified by SGA_MAX_SIZE.
If you do not specify SGA_MAX_SIZE, then Oracle Database selects a default value that
is the sum of all components specified or defaulted at initialization time. If you do
specify SGA_MAX_SIZE, and at the time the database is initialized the value is less
than the sum of the memory allocated for all components, either explicitly in the
parameter file or by default, then the database ignores the setting for SGA_MAX_SIZE.

Using Automatic Shared Memory Management
You enable the automatic shared memory management feature by setting the
SGA_TARGET parameter to a non-zero value. This parameter in effect replaces the
parameters that control the memory allocated for a specific set of individual
components, which are now automatically and dynamically resized (tuned) as needed.
Note: The STATISTICS_LEVEL initialization parameter must be set
to TYPICAL (the default) or ALL for automatic shared memory
management to function.

The SGA_TARGET initialization parameter reflects the total size of the SGA. Table 2–3
lists the SGA components for which SGA_TARGET includes memory—the
automatically sized SGA components—and the initialization parameters
corresponding to those components.
Table 2–3    Automatically Sized SGA Components and Corresponding Parameters

SGA Component                                                  Initialization Parameter
Fixed SGA and other internal allocations needed by the         N/A
Oracle Database instance
The shared pool                                                SHARED_POOL_SIZE
The large pool                                                 LARGE_POOL_SIZE
The Java pool                                                  JAVA_POOL_SIZE
The buffer cache                                               DB_CACHE_SIZE
The Streams pool                                               STREAMS_POOL_SIZE

The parameters listed in Table 2–4, if they are set, take their memory from
SGA_TARGET, reducing the amount available for the components listed in Table 2–3.
Table 2–4    Manually Sized SGA Components that Use SGA_TARGET Space

SGA Component                           Initialization Parameter
The log buffer                          LOG_BUFFER
The keep and recycle buffer caches      DB_KEEP_CACHE_SIZE
                                        DB_RECYCLE_CACHE_SIZE
Nonstandard block size buffer caches    DB_nK_CACHE_SIZE

In addition to setting SGA_TARGET to a non-zero value, you must set the value of all
automatically sized SGA components to zero to enable full automatic tuning of these
components.
Alternatively, you can set one or more of the automatically sized SGA components to a
non-zero value, which is then used as the minimum setting for that component during
SGA tuning. This is discussed in detail later in this section.
Note: An easier way to enable automatic shared memory
management is to use Oracle Enterprise Manager (EM). When you
enable automatic shared memory management and set the Total SGA
Size, EM automatically generates the ALTER SYSTEM statements to
set SGA_TARGET to the specified size and to set all automatically sized
SGA components to zero.
If you use SQL*Plus to set SGA_TARGET, you must then set the
automatically sized SGA components to zero or to a minimum value.
The V$SGA_TARGET_ADVICE view provides information that helps you decide on a
value for SGA_TARGET. For more information, see Oracle Database Performance Tuning
Guide.
Enabling Automatic Shared Memory Management To enable automatic shared memory
management:
1. If you are migrating from a manual management scheme, execute the following query on the instance running in manual mode to get a value for SGA_TARGET:
SELECT (
  (SELECT SUM(value) FROM V$SGA) -
  (SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY)
) "SGA_TARGET"
FROM DUAL;

2. Set the value of SGA_TARGET, either by editing the text initialization parameter file and restarting the database, or by issuing the following statement:
ALTER SYSTEM SET SGA_TARGET=value [SCOPE={SPFILE|MEMORY|BOTH}]
where value is the value computed in step 1 or is some value between the sum of all SGA component sizes and SGA_MAX_SIZE. For more information on the SCOPE clause, see "Using ALTER SYSTEM to Change Initialization Parameter Values" on page 2-38.
3. Do one of the following:
■ For more complete automatic tuning, set the values of the automatically sized SGA components listed in Table 2–3 to zero. Do this by editing the text initialization parameter file, or by issuing ALTER SYSTEM statements similar to the one in step 2.
■ To control the minimum size of one or more automatically sized SGA components, set those component sizes to the desired value. (See the next section for details.) Set the values of the other automatically sized SGA components to zero. Do this by editing the text initialization parameter file, or by issuing ALTER SYSTEM statements similar to the one in step 2.

For example, suppose you currently have the following configuration of parameters
on a manual mode instance with SGA_MAX_SIZE set to 1200M:
■ SHARED_POOL_SIZE = 200M
■ DB_CACHE_SIZE = 500M
■ LARGE_POOL_SIZE = 200M

Also assume that the result of the queries is as follows:
SELECT SUM(value) FROM V$SGA = 1200M
SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY = 208M

You can take advantage of automatic shared memory management by setting Total
SGA Size to 992M in Oracle Enterprise Manager, or by issuing the following
statements:
ALTER SYSTEM SET SGA_TARGET = 992M;
ALTER SYSTEM SET SHARED_POOL_SIZE = 0;
ALTER SYSTEM SET LARGE_POOL_SIZE = 0;
ALTER SYSTEM SET JAVA_POOL_SIZE = 0;
ALTER SYSTEM SET DB_CACHE_SIZE = 0;
ALTER SYSTEM SET STREAMS_POOL_SIZE = 0;

where 992M = 1200M minus 208M.
Setting Minimums for Automatically Sized SGA Components You can exercise some control
over the size of the automatically sized SGA components by specifying minimum
values for the parameters corresponding to these components. Doing so can be useful
if you know that an application cannot function properly without a minimum amount
of memory in specific components. You specify the minimum amount of SGA space
for a component by setting a value for its corresponding initialization parameter. Here
is an example configuration:
■ SGA_TARGET = 256M
■ SHARED_POOL_SIZE = 32M
■ DB_CACHE_SIZE = 100M

In this example, the shared pool and the default buffer pool will not be sized smaller
than the specified values (32M and 100M, respectively). The remaining 124M (256M
minus 132M) is available for use by all the manually and automatically sized
components.


The actual distribution of values among the SGA components might look like this:
■ Actual shared pool size = 64M
■ Actual buffer cache size = 128M
■ Actual Java pool size = 60M
■ Actual large pool size = 4M
■ Actual Streams pool size = 0

The parameter values determine the minimum amount of SGA space allocated. The
fixed views V$SGA_DYNAMIC_COMPONENTS and V$SGAINFO display the current
actual size of each SGA component. You can also see the current actual values of the
SGA components in the Oracle Enterprise Manager memory configuration page.
Manually limiting the minimum size of one or more automatically sized components
reduces the total amount of memory available for dynamic adjustment. This reduction
in turn limits the ability of the system to adapt to workload changes. Therefore, this
practice is not recommended except in exceptional cases. The default automatic
management behavior maximizes both system performance and the use of available
resources.
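For example, the following query is one way to check the current and minimum sizes of the dynamically sized components (a sketch; it assumes the COMPONENT, CURRENT_SIZE, and MIN_SIZE columns of V$SGA_DYNAMIC_COMPONENTS):

SELECT component,
       current_size/1024/1024 AS current_mb,
       min_size/1024/1024     AS min_mb
  FROM V$SGA_DYNAMIC_COMPONENTS;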
Automatic Tuning and the Shared Pool When the automatic shared memory management
feature is enabled, the internal tuning algorithm tries to determine an optimal size for
the shared pool based on the workload. It usually converges on this value by
increasing in small increments over time. However, the internal tuning algorithm
typically does not attempt to shrink the shared pool, because the presence of open
cursors, pinned PL/SQL packages, and other SQL execution state in the shared pool
makes it impossible to find granules that can be freed. Therefore, the tuning algorithm
only tries to increase the shared pool in conservative increments, starting from a
conservative size and stabilizing the shared pool at a size that produces the optimal
performance benefit.
Dynamic Modification of SGA Parameters You can modify the value of SGA_TARGET and
the parameters controlling individual components dynamically using the ALTER
SYSTEM statement, as described in "Using ALTER SYSTEM to Change Initialization
Parameter Values" on page 2-38.
Dynamic Modification of SGA_TARGET The SGA_TARGET parameter can be increased up
to the value specified for the SGA_MAX_SIZE parameter, and it can also be reduced. If
you reduce the value of SGA_TARGET, the system identifies one or more automatically
tuned components for which to release memory. You can reduce SGA_TARGET until
one or more automatically tuned components reach their minimum size. Oracle
Database determines the minimum allowable value for SGA_TARGET taking into
account several factors, including values set for the automatically sized components,
manually sized components that use SGA_TARGET space, and number of CPUs.
The change in the amount of physical memory consumed when SGA_TARGET is
modified depends on the operating system. On some UNIX platforms that do not
support dynamic shared memory, the physical memory in use by the SGA is equal to
the value of the SGA_MAX_SIZE parameter. On such platforms, there is no real benefit
in setting SGA_TARGET to a value smaller than SGA_MAX_SIZE. Therefore, setting
SGA_MAX_SIZE on those platforms is not recommended.
On other platforms, such as Solaris and Windows, the physical memory consumed by
the SGA is equal to the value of SGA_TARGET.

2-28 Oracle Database Administrator’s Guide

Understanding Initialization Parameters

When SGA_TARGET is resized, the only components affected are the automatically
tuned components for which you have not set a minimum value in their
corresponding initialization parameter. Any manually configured components remain
unaffected.
For example, suppose you have an environment with the following configuration:
■ SGA_MAX_SIZE = 1024M
■ SGA_TARGET = 512M
■ DB_8K_CACHE_SIZE = 128M

In this example, the value of SGA_TARGET can be resized up to 1024M and can also be
reduced until one or more of the automatically sized components reaches its minimum
size. The exact value depends on environmental factors such as the number of CPUs
on the system. However, the value of DB_8K_CACHE_SIZE remains fixed at all times
at 128M.
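For example, within this configuration SGA_TARGET could be grown dynamically with a statement such as the following (a sketch; 768M is an arbitrary value below the 1024M maximum):

ALTER SYSTEM SET SGA_TARGET = 768M;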
When SGA_TARGET is reduced, if minimum values for any automatically tuned
components are specified, those components are not reduced below that minimum.
Consider the following combination of parameters:
■ SGA_MAX_SIZE = 1024M
■ SGA_TARGET = 512M
■ DB_CACHE_SIZE = 96M
■ DB_8K_CACHE_SIZE = 128M

As in the last example, if SGA_TARGET is reduced, the DB_8K_CACHE_SIZE
parameter remains fixed at 128M. In addition, the primary buffer cache (determined
by the DB_CACHE_SIZE parameter) is not reduced below 96M. Thus the amount by
which SGA_TARGET can be reduced is restricted.
Note: When enabling automatic shared memory management, it is best to set
SGA_TARGET to the desired non-zero value before starting the database. Dynamically
modifying SGA_TARGET from zero to a non-zero value may not achieve the desired
results because the shared pool may not be able to shrink. After startup, you can
dynamically tune SGA_TARGET up or down as required.

Modifying Parameters for Automatically Managed Components When SGA_TARGET is not set,
the automatic shared memory management feature is not enabled. Therefore the rules
governing resize for all component parameters are the same as in earlier releases.
However, when automatic shared memory management is enabled, the manually
specified sizes of automatically sized components serve as a lower bound for the size
of the components. You can modify this limit dynamically by changing the values of
the corresponding parameters.
If the specified lower limit for the size of a given SGA component is less than its
current size, there is no immediate change in the size of that component. The new
setting only limits the automatic tuning algorithm to that reduced minimum size in the
future. For example, consider the following configuration:
■ SGA_TARGET = 512M
■ LARGE_POOL_SIZE = 256M
■ Current actual large pool size = 284M


In this example, if you increase the value of LARGE_POOL_SIZE to a value greater
than the actual current size of the component, the system expands the component to
accommodate the increased minimum size. For example, if you increase the value of
LARGE_POOL_SIZE to 300M, then the system increases the large pool incrementally
until it reaches 300M. This resizing occurs at the expense of one or more automatically
tuned components.
If you decrease the value of LARGE_POOL_SIZE to 200M, there is no immediate change
in the size of that component. The new setting only limits the reduction of the large
pool size to 200M in the future.
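For example, the minimum large pool size discussed above could be adjusted with statements such as the following (a sketch using the values from this section):

ALTER SYSTEM SET LARGE_POOL_SIZE = 300M;  -- the system grows the large pool to 300M
ALTER SYSTEM SET LARGE_POOL_SIZE = 200M;  -- no immediate shrink; only the minimum changes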
Modifying Parameters for Manually Sized Components Parameters for manually sized
components can be dynamically altered as well. However, rather than setting a
minimum size, the value of the parameter specifies the precise size of the
corresponding component. When you increase the size of a manually sized
component, extra memory is taken away from one or more automatically sized
components. When you decrease the size of a manually sized component, the memory
that is released is given to the automatically sized components.
For example, consider this configuration:
■ SGA_TARGET = 512M
■ DB_8K_CACHE_SIZE = 128M

In this example, increasing DB_8K_CACHE_SIZE by 16M to 144M means that the 16M
is taken away from the automatically sized components. Likewise, reducing
DB_8K_CACHE_SIZE by 16M to 112M means that the 16M is given to the
automatically sized components.
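For example (a sketch using the values from this section):

ALTER SYSTEM SET DB_8K_CACHE_SIZE = 144M;  -- 16M is taken from the automatically sized components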

Using Manual Shared Memory Management
If you decide not to use automatic shared memory management by not setting the
SGA_TARGET parameter, you must manually configure each component of the SGA.
This section provides guidelines on setting the parameters that control the size of each
SGA component.
Setting the Buffer Cache Initialization Parameters The buffer cache initialization parameters
determine the size of the buffer cache component of the SGA. You use them to specify
the sizes of caches for the various block sizes used by the database. These initialization
parameters are all dynamic.
If you intend to use multiple block sizes in your database, you must have the
DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE parameter set. Oracle
Database assigns an appropriate default value to the DB_CACHE_SIZE parameter, but
the DB_nK_CACHE_SIZE parameters default to 0, and no additional block size caches
are configured.
The size of a buffer cache affects performance. Larger cache sizes generally reduce the
number of disk reads and writes. However, a large cache may take up too much
memory and induce memory paging or swapping.
DB_CACHE_SIZE Initialization Parameter The DB_CACHE_SIZE initialization
parameter has replaced the DB_BLOCK_BUFFERS initialization parameter, which was
used in earlier releases. The DB_CACHE_SIZE parameter specifies the size in bytes of
the cache of standard block size buffers. Thus, to specify a value for DB_CACHE_SIZE,
determine the number of buffers that you need and multiply that value by the block
size specified in DB_BLOCK_SIZE.


For backward compatibility, the DB_BLOCK_BUFFERS parameter still functions, but it
remains a static parameter and cannot be combined with any of the dynamic sizing
parameters.
The DB_nK_CACHE_SIZE Initialization Parameters  The sizes and numbers of
nonstandard block size buffers are specified by the following initialization parameters:
■ DB_2K_CACHE_SIZE
■ DB_4K_CACHE_SIZE
■ DB_8K_CACHE_SIZE
■ DB_16K_CACHE_SIZE
■ DB_32K_CACHE_SIZE

Each parameter specifies the size of the buffer cache for the corresponding block size.
For example:
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=8M
DB_8K_CACHE_SIZE=4M

In this example, the parameters specify that the standard block size of the database is
4K. The size of the cache of standard block size buffers is 12M. Additionally, 2K and
8K caches will be configured with sizes of 8M and 4M respectively.
Note: You cannot use a DB_nK_CACHE_SIZE parameter to size the cache for the
standard block size. For example, if the value of DB_BLOCK_SIZE is 2K, it is invalid
to set DB_2K_CACHE_SIZE. The size of the cache for the standard block size is always
determined from the value of DB_CACHE_SIZE.

Specifying the Shared Pool Size The SHARED_POOL_SIZE initialization parameter is a
dynamic parameter that lets you specify or adjust the size of the shared pool
component of the SGA. Oracle Database selects an appropriate default value.
In releases before Oracle Database 10g Release 1, the amount of shared pool memory
that was allocated was equal to the value of the SHARED_POOL_SIZE initialization
parameter plus the amount of internal SGA overhead computed during instance
startup. The internal SGA overhead refers to memory that is allocated by Oracle
during startup, based on the values of several other initialization parameters. This
memory is used to maintain state for different server components in the SGA. For
example, if the SHARED_POOL_SIZE parameter is set to 64MB and the internal SGA
overhead is computed to be 12MB, the real size of shared pool in the SGA is
64+12=76MB, although the value of the SHARED_POOL_SIZE parameter is still
displayed as 64MB.
Starting with Oracle Database 10g Release 1, the size of internal SGA overhead is
included in the user-specified value of SHARED_POOL_SIZE. In other words, if you
are not using the automatic shared memory management feature, then the amount of
shared pool memory that is allocated at startup is exactly equal to the value of the
SHARED_POOL_SIZE initialization parameter. In manual SGA mode, this parameter
must be set so that it includes the internal SGA overhead in addition to the desired
shared pool size. In the previous example, if the SHARED_POOL_SIZE parameter is
set to 64MB at startup, then the available shared pool after startup is 64-12=52MB,
assuming the value of internal SGA overhead remains unchanged. To maintain an
effective value of 64MB for shared pool memory after startup, you must set the
SHARED_POOL_SIZE parameter to 64+12=76MB.
The Oracle Database 10g migration utilities recommend a new value for this parameter
based on the value of internal SGA overhead in the pre-upgrade environment and
based on the old value of this parameter. In Oracle Database 10g, the exact value of
internal SGA overhead, also known as startup overhead in the shared pool, can be
queried from the V$SGAINFO view. Also, in manual SGA mode, if the user-specified
value of SHARED_POOL_SIZE is too small to accommodate even the requirements of
internal SGA overhead, then Oracle generates an ORA-371 error during startup, along
with a suggested value to use for the SHARED_POOL_SIZE parameter.
When you use automatic shared memory management in Oracle Database 10g, the
shared pool is automatically tuned, and an ORA-371 error would not be generated by
Oracle.
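For example, the overhead can be checked with a query along these lines (a sketch; it assumes that V$SGAINFO exposes a row named 'Startup overhead in Shared Pool', which may differ by release):

SELECT name, bytes
  FROM V$SGAINFO
 WHERE name = 'Startup overhead in Shared Pool';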
Specifying the Large Pool Size The LARGE_POOL_SIZE initialization parameter is a
dynamic parameter that lets you specify or adjust the size of the large pool component
of the SGA. The large pool is an optional component of the SGA. You must specifically
set the LARGE_POOL_SIZE parameter if you want to create a large pool. Configuring
the large pool is discussed in Oracle Database Performance Tuning Guide.
Specifying the Java Pool Size  The JAVA_POOL_SIZE initialization parameter is a
dynamic parameter that lets you specify or adjust the size of the Java pool component
of the SGA. Oracle Database selects an appropriate default value. Configuration of the
Java pool is discussed in Oracle Database Java Developer's Guide.
Specifying the Streams Pool Size The STREAMS_POOL_SIZE initialization parameter is a
dynamic parameter that lets you specify or adjust the size of the Streams Pool
component of the SGA. If STREAMS_POOL_SIZE is set to 0, then the Oracle Streams
product transfers memory from the buffer cache to the Streams Pool when it is needed.
For details, see the discussion of the Streams Pool in Oracle Streams Concepts and
Administration.

Viewing Information About the SGA
The following views provide information about the SGA components and their
dynamic resizing:
V$SGA
    Displays summary information about the system global area (SGA).

V$SGAINFO
    Displays size information about the SGA, including the sizes of different SGA
    components, the granule size, and free memory.

V$SGASTAT
    Displays detailed information about the SGA.

V$SGA_DYNAMIC_COMPONENTS
    Displays information about the dynamic SGA components. This view summarizes
    information based on all completed SGA resize operations since instance startup.

V$SGA_DYNAMIC_FREE_MEMORY
    Displays information about the amount of SGA memory available for future
    dynamic SGA resize operations.

V$SGA_RESIZE_OPS
    Displays information about the last 400 completed SGA resize operations.

V$SGA_CURRENT_RESIZE_OPS
    Displays information about SGA resize operations that are currently in progress.
    A resize operation is an enlargement or reduction of a dynamic SGA component.

V$SGA_TARGET_ADVICE
    Displays information that helps you tune SGA_TARGET. For more information, see
    Oracle Database Performance Tuning Guide.

Specifying the Maximum Number of Processes
The PROCESSES initialization parameter determines the maximum number of
operating system processes that can be connected to Oracle Database concurrently. The
value of this parameter must be a minimum of one for each background process plus
one for each user process. The number of background processes varies according to
the database features that you are using. For example, if you are using Advanced
Queuing or the file mapping feature, you will have additional background processes.
If you are using Automatic Storage Management, then add three additional processes.
If you plan on running 50 user processes, a good estimate would be to set the
PROCESSES initialization parameter to 70.
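For example, because PROCESSES is a static parameter, the new value can be recorded in the server parameter file and made effective at the next restart (a sketch):

ALTER SYSTEM SET PROCESSES = 70 SCOPE=SPFILE;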

Specifying the Method of Undo Space Management
Every Oracle Database must have a method of maintaining information that is used to
undo changes to the database. Such information consists of records of the actions of
transactions, primarily before they are committed. Collectively these records are called
undo data. This section provides instructions for setting up an environment for
automatic undo management using an undo tablespace.
See Also: Chapter 10, "Managing the Undo Tablespace"

UNDO_MANAGEMENT Initialization Parameter
The UNDO_MANAGEMENT initialization parameter determines whether an instance
starts in automatic undo management mode, which stores undo in an undo tablespace.
By default, this parameter is set to MANUAL. Set this parameter to AUTO to enable
automatic undo management mode.

UNDO_TABLESPACE Initialization Parameter
When an instance starts up in automatic undo management mode, it attempts to select
an undo tablespace for storage of undo data. If the database was created in undo
management mode, then the default undo tablespace (either the system-created
SYS_UNDOTS tablespace or the user-specified undo tablespace) is the undo tablespace
used at instance startup. You can override this default for the instance by specifying a
value for the UNDO_TABLESPACE initialization parameter. This parameter is especially
useful for assigning a particular undo tablespace to an instance in an Oracle Real
Application Clusters environment.
If no undo tablespace has been specified during database creation or by the
UNDO_TABLESPACE initialization parameter, then the first available undo tablespace
in the database is chosen. If no undo tablespace is available, then the instance starts
without an undo tablespace. You should avoid running in this mode.
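For example, a text initialization parameter file might enable automatic undo management with entries such as the following (a sketch; the tablespace name undotbs1 is an illustrative assumption):

UNDO_MANAGEMENT = AUTO
UNDO_TABLESPACE = undotbs1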


The COMPATIBLE Initialization Parameter and Irreversible Compatibility
The COMPATIBLE initialization parameter enables or disables the use of features in the
database that affect file format on disk. For example, if you create an Oracle Database
10g database, but specify COMPATIBLE = 9.2.0.2 in the initialization parameter
file, then features that require 10.0 compatibility generate an error if you try to
use them. Such a database is said to be at the 9.2.0.2 compatibility level.
You can advance the compatibility level of your database. If you do advance the
compatibility of your database with the COMPATIBLE initialization parameter, there is
no way to start the database using a lower compatibility level setting, except by doing
a point-in-time recovery to a time before the compatibility was advanced.
The default value for the COMPATIBLE parameter is the release number of the most
recent major release.
Note: For Oracle Database 10g Release 2 (10.2), the default value
of the COMPATIBLE parameter is 10.2.0. The minimum value is
9.2.0. If you create an Oracle Database using the default value, you
can immediately use all the new features in this release, and you
can never downgrade the database.
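For example, the current compatibility level can be checked, and then advanced, with statements such as the following (a sketch; because COMPATIBLE is a static parameter, the change requires a server parameter file and an instance restart):

SELECT name, value FROM V$PARAMETER WHERE name = 'compatible';

ALTER SYSTEM SET COMPATIBLE = '10.2.0' SCOPE=SPFILE;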

See Also:
■ Oracle Database Upgrade Guide for a detailed discussion of database compatibility
  and the COMPATIBLE initialization parameter
■ Oracle Database Backup and Recovery Advanced User's Guide for information about
  point-in-time recovery of your database

Setting the License Parameter
Note: Oracle no longer offers licensing by the number of concurrent sessions.
Therefore the LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING
initialization parameters are no longer needed and have been deprecated.

If you use named user licensing, Oracle Database can help you enforce this form of
licensing. You can set a limit on the number of users created in the database. Once this
limit is reached, you cannot create more users.
Note: This mechanism assumes that each person accessing the database has a unique
user name and that no people share a user name. Therefore, so that named user
licensing can help you ensure compliance with your Oracle license agreement, do not
allow multiple users to log in using the same user name.

To limit the number of users created in a database, set the LICENSE_MAX_USERS
initialization parameter in the database initialization parameter file, as shown in the
following example:
LICENSE_MAX_USERS = 200


Troubleshooting Database Creation
If database creation fails, you can look at the alert log to determine the reason for the
failure and to determine corrective action. The alert log is discussed in "Monitoring the
Operation of Your Database" on page 4-18.
You should shut down the instance and delete any files created by the CREATE
DATABASE statement before you attempt to create it again. After correcting the error
that caused the failure of the database creation, try re-creating the database.

Dropping a Database
Dropping a database involves removing its datafiles, redo log files, control files, and
initialization parameter files. The DROP DATABASE statement deletes all control files
and all other database files listed in the control file. To use the DROP DATABASE
statement successfully, all of the following conditions must apply:
■ The database must be mounted and closed.
■ The database must be mounted exclusively--not in shared mode.
■ The database must be mounted as RESTRICTED.

An example of this statement is:

DROP DATABASE;

The DROP DATABASE statement has no effect on archived log files, nor does it have
any effect on copies or backups of the database. It is best to use RMAN to delete such
files. If the database is on raw disks, the actual raw disk special files are not deleted.
If you used the Database Configuration Assistant to create your database, you can use
that tool to delete (drop) your database and remove the files.

Managing Initialization Parameters Using a Server Parameter File
Initialization parameters for the Oracle Database have traditionally been stored in a
text initialization parameter file. For better manageability, you can choose to maintain
initialization parameters in a binary server parameter file that is persistent across
database startup and shutdown. This section introduces the server parameter file, and
explains how to manage initialization parameters using either method of storing the
parameters. The following topics are contained in this section.
■ What Is a Server Parameter File?
■ Migrating to a Server Parameter File
■ Creating a Server Parameter File
■ The SPFILE Initialization Parameter
■ Using ALTER SYSTEM to Change Initialization Parameter Values
■ Exporting the Server Parameter File
■ Backing Up the Server Parameter File
■ Errors and Recovery for the Server Parameter File
■ Viewing Parameter Settings

What Is a Server Parameter File?
A server parameter file can be thought of as a repository for initialization parameters
that is maintained on the machine running the Oracle Database server. It is, by design,
a server-side initialization parameter file. Initialization parameters stored in a server
parameter file are persistent, in that any changes made to the parameters while an
instance is running can persist across instance shutdown and startup. This
arrangement eliminates the need to manually update initialization parameters to make
persistent any changes effected by ALTER SYSTEM statements. It also provides a basis
for self-tuning by the Oracle Database server.
A server parameter file is initially built from a text initialization parameter file using
the CREATE SPFILE statement. (It can also be created directly by the Database
Configuration Assistant.) The server parameter file is a binary file that cannot be
edited using a text editor. Oracle Database provides other interfaces for viewing and
modifying parameter settings in a server parameter file.
Caution: Although you can open the binary server parameter file
with a text editor and view its text, do not manually edit it. Doing so
will corrupt the file. You will not be able to start your instance, and
if the instance is running, it could fail.

When you issue a STARTUP command with no PFILE clause, the Oracle instance
searches an operating system–specific default location for a server parameter file from
which to read initialization parameter settings. If no server parameter file is found, the
instance searches for a text initialization parameter file. If a server parameter file exists
but you want to override it with settings in a text initialization parameter file, you
must specify the PFILE clause when issuing the STARTUP command. Instructions for
starting an instance using a server parameter file are contained in "Starting Up a
Database" on page 3-1.

Migrating to a Server Parameter File
If you are currently using a text initialization parameter file, use the following steps to
migrate to a server parameter file.
1. If the initialization parameter file is located on a client machine, transfer the file
   (for example, by FTP) from the client machine to the server machine.

   Note: If you are migrating to a server parameter file in an Oracle Real Application
   Clusters environment, you must combine all of your instance-specific initialization
   parameter files into a single initialization parameter file. Instructions for doing
   this, and other actions unique to using a server parameter file for instances that
   are part of an Oracle Real Application Clusters installation, are discussed in:
   ■ Oracle Real Application Clusters Installation and Configuration Guide
   ■ Oracle Database Oracle Clusterware and Oracle Real Application Clusters
     Administration and Deployment Guide

2. Create a server parameter file using the CREATE SPFILE statement. This
   statement reads the initialization parameter file to create a server parameter file.
   The database does not have to be started to issue a CREATE SPFILE statement.

3. Start up the instance using the newly created server parameter file.

Creating a Server Parameter File
You use the CREATE SPFILE statement to create a server parameter file from a text
initialization parameter file. You must have the SYSDBA or the SYSOPER system
privilege to execute this statement.
Note: When you use the Database Configuration Assistant to create a database, it can
automatically create a server parameter file for you.

The CREATE SPFILE statement can be executed before or after instance startup.
However, if the instance has been started using a server parameter file, an error is
raised if you attempt to re-create the same server parameter file that is currently being
used by the instance.
The following example creates a server parameter file from initialization parameter file
/u01/oracle/dbs/init.ora. In this example no SPFILE name is specified, so the
file is created with the platform-specific default name and location shown in Table 2–5
on page 2-37.
CREATE SPFILE FROM PFILE='/u01/oracle/dbs/init.ora';

Another example, which follows, illustrates creating a server parameter file and
supplying a name.
CREATE SPFILE='/u01/oracle/dbs/test_spfile.ora'
FROM PFILE='/u01/oracle/dbs/test_init.ora';

When you create a server parameter file from an initialization parameter file,
comments specified on the same lines as a parameter setting in the initialization
parameter file are maintained in the server parameter file. All other comments are
ignored.
The server parameter file is always created on the machine running the database
server. If a server parameter file of the same name already exists on the server, it is
overwritten with the new information.
Oracle recommends that you allow the database server to default the name and
location of the server parameter file. This will ease administration of your database.
For example, the STARTUP command assumes this default location to read the
parameter file. The table that follows shows the default name and location of the
server parameter file. The table assumes that the SPFILE is a file. If it is a raw device,
the default name could be a logical volume name or partition device name, and the
default location could differ.
Table 2–5  Server Parameter File Default Name and Location on UNIX and Windows

Platform          Default Name              Default Location
UNIX and Linux    spfile$ORACLE_SID.ora     $ORACLE_HOME/dbs or the same location as the datafiles
Windows           spfile%ORACLE_SID%.ora    %ORACLE_HOME%\database


Note: Upon startup, the instance first searches for the server parameter file named
spfile$ORACLE_SID.ora, and if not found, searches for spfile.ora. Using
spfile.ora enables all Real Application Clusters (RAC) instances to use the same
server parameter file. If neither server parameter file is found, the instance searches
for the text initialization parameter file init$ORACLE_SID.ora.
If you create a server parameter file in a location other than the default location, you
must create a text initialization parameter file that points to the server parameter file.
For more information, see "Starting Up a Database" on page 3-1.

The SPFILE Initialization Parameter
The SPFILE initialization parameter contains the name of the current server
parameter file. When the default server parameter file is used by the server (that is,
you issue a STARTUP command and do not specify a PFILE), the value of SPFILE is
internally set by the server. The SQL*Plus command SHOW PARAMETERS SPFILE (or
any other method of querying the value of a parameter) displays the name of the
server parameter file that is currently in use.
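The following SQL*Plus command is one way to display it (a sketch):

SHOW PARAMETERS SPFILE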

Using ALTER SYSTEM to Change Initialization Parameter Values
The ALTER SYSTEM statement lets you set, change, or restore to default the values of
initialization parameters. If you are using a text initialization parameter file, the ALTER
SYSTEM statement changes the value of a parameter only for the current instance,
because there is no mechanism for automatically updating initialization parameters on
disk. You must update them manually to be passed to a future instance. Using a server
parameter file overcomes this limitation.
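For example, when the instance was started with a server parameter file, a parameter that you have set can be returned to its default with the RESET clause (a sketch; UNDO_RETENTION is an arbitrary illustrative parameter, and the SID clause is required with this form, with '*' meaning all instances):

ALTER SYSTEM RESET UNDO_RETENTION SCOPE=SPFILE SID='*';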

Setting or Changing Initialization Parameter Values
Use the SET clause of the ALTER SYSTEM statement to set or change initialization
parameter values. The optional SCOPE clause specifies the scope of a change as
described in the following table:
SCOPE = SPFILE
    The change is applied in the server parameter file only. The effect is as follows:
    ■ For dynamic parameters, the change is effective at the next startup and is
      persistent.
    ■ For static parameters, the behavior is the same as for dynamic parameters. This
      is the only SCOPE specification allowed for static parameters.

SCOPE = MEMORY
    The change is applied in memory only. The effect is as follows:
    ■ For dynamic parameters, the effect is immediate, but it is not persistent because
      the server parameter file is not updated.
    ■ For static parameters, this specification is not allowed.

SCOPE = BOTH
    The change is applied in both the server parameter file and memory. The effect is
    as follows:
    ■ For dynamic parameters, the effect is immediate and persistent.
    ■ For static parameters, this specification is not allowed.

It is an error to specify SCOPE=SPFILE or SCOPE=BOTH if the server is not using a
server parameter file. The default is SCOPE=BOTH if a server parameter file was used
to start up the instance, and MEMORY if a text initialization parameter file was used to
start up the instance.
For dynamic parameters, you can also specify the DEFERRED keyword. When
specified, the change is effective only for future sessions.
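For example (a sketch; SORT_AREA_SIZE is one parameter whose system-wide changes are allowed only with DEFERRED):

ALTER SYSTEM SET SORT_AREA_SIZE = 1048576 DEFERRED;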
An optional COMMENT clause lets you associate a text string with the parameter
update. When you specify SCOPE as SPFILE or BOTH, the comment is written to the
server parameter file.
The following statement changes the maximum number of job queue processes
allowed for the instance. It includes a comment, and explicitly states that the change is
to be made only in memory (that is, it is not persistent across instance shutdown and
startup).
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=50
COMMENT='temporary change on Nov 29'
SCOPE=MEMORY;

The next example sets a complex initialization parameter that takes a list of attributes.
Specifically, the parameter value being set is the LOG_ARCHIVE_DEST_n initialization
parameter. This statement could change an existing setting for this parameter or create
a new archive destination.

ALTER SYSTEM
  SET LOG_ARCHIVE_DEST_4='LOCATION=/u02/oracle/rbdb1/ MANDATORY REOPEN=2'
  COMMENT='Add new destination on Nov 29'
  SCOPE=SPFILE;

When a value consists of a list of parameters, you cannot edit individual attributes by
the position or ordinal number. You must specify the complete list of values each time
the parameter is updated, and the new list completely replaces the old list.

Exporting the Server Parameter File
You can use the CREATE PFILE statement to export a server parameter file to a text
initialization parameter file. Doing so might be necessary for several reasons:
■ For diagnostic purposes, to list all of the parameter values currently used by an
  instance. This is analogous to the SQL*Plus SHOW PARAMETERS command or
  selecting from the V$PARAMETER or V$PARAMETER2 views.
■ To modify the server parameter file by first exporting it, editing the resulting text
  file, and then re-creating it using the CREATE SPFILE statement.

The exported file can also be used to start up an instance using the PFILE clause.
You must have the SYSDBA or the SYSOPER system privilege to execute the CREATE
PFILE statement. The exported file is created on the database server machine. It
contains any comments associated with the parameter in the same line as the
parameter setting.
The following example creates a text initialization parameter file from the server
parameter file:
CREATE PFILE FROM SPFILE;

Because no names were specified for the files, the database creates an initialization
parameter file with a platform-specific name, and it is created from the
platform-specific default server parameter file.
The following example creates a text initialization parameter file from a server
parameter file, but in this example the names of the files are specified:
CREATE PFILE='/u01/oracle/dbs/test_init.ora'
FROM SPFILE='/u01/oracle/dbs/test_spfile.ora';

Backing Up the Server Parameter File
You can create a backup of your server parameter file by exporting it, as described in
"Exporting the Server Parameter File". If the backup and recovery strategy for your
database is implemented using Recovery Manager (RMAN), then you can use RMAN
to create a backup. The server parameter file is backed up automatically by RMAN
when you back up your database, but RMAN also lets you specifically create a backup
of the currently active server parameter file.
See Also: Oracle Database Backup and Recovery Basics
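For example, from within RMAN, the current server parameter file can be backed up explicitly (a sketch; it assumes RMAN is connected to the target database):

BACKUP SPFILE;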

Errors and Recovery for the Server Parameter File
If an error occurs while reading the server parameter file (during startup or an export
operation), or while writing the server parameter file during its creation, the operation
terminates with an error reported to the user.
If an error occurs while reading or writing the server parameter file during a
parameter update, the error is reported in the alert log and all subsequent parameter
updates to the server parameter file are ignored. At this point, you can take one of the
following actions:
■ Shut down the instance, recover the server parameter file, and then restart the
  instance.
■ Continue to run the database if you do not care that subsequent parameter
  updates will not be persistent.

Viewing Parameter Settings
You can view parameter settings in several ways, as shown in the following table.
SHOW PARAMETERS
    This SQL*Plus command displays the values of parameters currently in use.

CREATE PFILE
    This SQL statement creates a text initialization parameter file from the binary
    server parameter file.

V$PARAMETER
    This view displays the values of parameters currently in effect.

V$PARAMETER2
    This view displays the values of parameters currently in effect. It is easier to
    distinguish list parameter values in this view because each list parameter value
    appears as a row.

V$SPPARAMETER
    This view displays the current contents of the server parameter file. The view
    returns FALSE values in the ISSPECIFIED column if a server parameter file is not
    being used by the instance.
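For example, the following query is one way to list only the parameters explicitly set in the server parameter file (a sketch; it assumes the NAME, VALUE, and ISSPECIFIED columns of V$SPPARAMETER):

SELECT name, value
  FROM V$SPPARAMETER
 WHERE isspecified = 'TRUE';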

See Also: Oracle Database Reference for a complete description of these views

Defining Application Services for Oracle Database 10g
This section describes Oracle Database 10g services and includes the following topics:
■ Deploying Services
■ Configuring Services
■ Using Services

Services are logical abstractions for managing workloads in Oracle Database 10g.
Services divide workloads into mutually disjoint groupings. Each service represents a
workload with common attributes, service-level thresholds, and priorities. The
grouping is based on attributes of work that might include the application function to
be used, the priority of execution for the application function, the job class to be
managed, or the data range used in the application function or job class. For example,
the Oracle E-Business suite defines a service for each responsibility, such as general
ledger, accounts receivable, order entry, and so on.
In Oracle Database 10g, services are built into the database, providing a single
system image for workloads, prioritization of workloads, performance measures for
real transactions, and alerts and actions when performance goals are violated. Services
enable you to configure a workload, administer it, enable and disable it, and measure
the workload as a single entity. You can do this using standard tools such as the
Database Configuration Assistant (DBCA), Net Configuration Assistant (NetCA), and
Enterprise Manager (EM). Enterprise Manager supports viewing and operating
services as a whole, with drill down to the instance-level when needed.
In Real Application Clusters (RAC), a service can span one or more instances and
facilitate real workload balancing based on real transaction performance. This
provides end-to-end unattended recovery, rolling changes by workload, and full
location transparency. RAC also enables you to manage a number of service features
with Enterprise Manager, the DBCA, and the Server Control utility (SRVCTL).
Services also offer an extra dimension in performance tuning. Tuning by "service and
SQL" can replace tuning by "session and SQL" in the majority of systems where all
sessions are anonymous and shared. With services, workloads are visible and
measurable. Resource consumption and waits are attributable by application.
Additionally, resources assigned to services can be augmented when loads increase or
decrease. This dynamic resource allocation enables a cost-effective solution for
meeting demands as they occur. For example, services are measured automatically and
the performance is compared to service-level thresholds. Performance violations are
reported to Enterprise Manager, enabling the execution of automatic or scheduled
solutions.


See Also: Oracle Real Application Clusters Deployment and
Performance Guide for more information about services in RAC

Deploying Services
Installations configure Oracle Database 10g services in the database giving each service
a unique global name, associated performance goals, and associated importance. The
services are tightly integrated with the Oracle Database and are maintained in the data
dictionary. You can find service information in the following service-specific views:
■ DBA_SERVICES
■ ALL_SERVICES or V$SERVICES
■ V$ACTIVE_SERVICES
■ V$SERVICE_STATS
■ V$SERVICE_EVENTS
■ V$SERVICE_WAIT_CLASSES
■ V$SERV_MOD_ACT_STATS
■ V$SERVICE_METRICS
■ V$SERVICE_METRICS_HISTORY

The following additional views also contain some information about services:
■ V$SESSION
■ V$ACTIVE_SESSION_HISTORY
■ DBA_RSRC_GROUP_MAPPINGS
■ DBA_SCHEDULER_JOB_CLASSES
■ DBA_THRESHOLDS

See Also: Oracle Database Reference for detailed information about these views
Several Oracle Database features support services. The Automatic Workload
Repository (AWR) manages the performance of services. AWR records service
performance, including execution times, wait classes, and resources consumed by
service. AWR alerts warn when service response time thresholds are exceeded. The
dynamic views report current service performance metrics with one hour of history.
Each service has quality-of-service thresholds for response time and CPU
consumption.
In addition, the Database Resource Manager maps services to consumer groups. This
enables you to automatically manage the priority of one service relative to others. You
can use consumer groups to define relative priority in terms of either ratios or resource
consumption. This is described in more detail, for example, in Oracle Real Application
Clusters Deployment and Performance Guide.

Configuring Services
Services describe applications, application functions, and data ranges as either
functional services or data-dependent services. Functional services are the most
common mapping of workloads. Sessions using a particular function are grouped
together. For Oracle*Applications, the ERP, CRM, and iSupport functions create a
functional division of the work. For SAP, the dialog and update functions create a
functional division of the work.
In contrast, data-dependent routing routes sessions to services based on data keys. The
mapping of work requests to services occurs in the object relational mapping layer for
application servers and TP monitors. For example, in RAC, these ranges can be
completely dynamic and based on demand because the database is shared.
You can also define preconnect application services in RAC databases. Preconnect
services span instances to support a service in the event of a failure. The preconnect
service supports TAF preconnect mode and is managed transparently when using
RAC.
In addition to application services, Oracle Database also supports two internal
services: SYS$BACKGROUND is used by the background processes only and
SYS$USERS is the default service for user sessions that are not associated with
services.
Use the DBMS_SERVICE package or set the SERVICE_NAMES parameter to create
application services on a single-instance Oracle Database. You can later define the
response time goal or importance of each service through EM, either individually or
by using the Enterprise Manager feature "Copy Thresholds From a Baseline" on the
Manage Metrics/Edit Threshold pages. You can also do this using PL/SQL.
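For example, a service could be created and started on a single-instance database as follows (a sketch; the service name sales_entry and its network name are illustrative assumptions):

BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name => 'sales_entry',
    network_name => 'sales_entry.example.com');
  DBMS_SERVICE.START_SERVICE(service_name => 'sales_entry');
END;
/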

Using Services
Using services requires no changes to your application code. Client-side work connects
to a service. Server-side work specifies the service when creating the job class for the
Job Scheduler and the database links for distributed databases. Work requests
executing under a service inherit the performance thresholds for the service and are
measured as part of the service.

Client-Side Use
Middle-tier applications and client-server applications use a service by specifying the
service as part of the connection in TNS connect data. This connect data may be in the
TNSnames file for thick Net drivers, in the URL specification for thin drivers, or may
be maintained in the Oracle Internet Directory. For example, data sources for the
Oracle Application Server 10g are set to route to a service. Using Easy Connect
Naming, this connection needs only the host name and service name (for example,
hr/hr@myDBhost/myservice). For Oracle E-Business Suite, the service is also
maintained in the application database identifier and in the cookie for the ICX
parameters.
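For example, a tnsnames.ora entry that routes connections to a service might look like the following (a sketch; the host, port, and service name are illustrative assumptions):

myservice =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myDBhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = myservice)))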

Server-Side Use
Server-side work, such as the Oracle Scheduler, parallel execution, and Oracle Streams
Advanced Queuing, sets the service name as part of the workload definition.
For the Oracle Scheduler, the service that the job class uses is defined when the job
class is created. During execution, jobs are assigned to job classes, and job classes run
within services. Using services with job classes ensures that the work executed by the
job scheduler is identified for workload management and performance tuning.
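For example, a job class could be associated with a service when it is created (a sketch; the job class and service names are illustrative assumptions, and the service parameter of DBMS_SCHEDULER.CREATE_JOB_CLASS is assumed to be available in this release):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'batch_class',
    service        => 'myservice',
    comments       => 'Batch jobs that run under the myservice service');
END;
/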
For parallel query and parallel DML, the query coordinator connects to a service just
like any other client. The parallel query processes inherit the service for the duration of
the execution. At the end of query execution, the parallel execution processes revert to
the default service.

See Also: Chapter 27, "Using the Scheduler" for more information about the Oracle
Scheduler

Considerations After Creating a Database
After you create a database, the instance is left running, and the database is open and
available for normal database use. You may want to perform other actions, some of
which are discussed in this section.

Some Security Considerations
Note Regarding Security Enhancements: In this release of Oracle
Database and in subsequent releases, several enhancements are
being made to ensure the security of default database user
accounts. You can find a security checklist for this release in Oracle
Database Security Guide. Oracle recommends that you read this
checklist and configure your database accordingly.

After the database is created, you can configure it to take advantage of Oracle Identity
Management. For information on how to do this, please refer to Oracle Database
Enterprise User Security Administrator's Guide.
A newly created database has at least three user accounts that are important for
administering your database: SYS, SYSTEM, and SYSMAN.
Caution: To prevent unauthorized access and protect the integrity of your database, it
is important that new passwords for user accounts SYS and SYSTEM be specified when
the database is created. This is accomplished by specifying the following CREATE
DATABASE clauses when manually creating your database, or by using DBCA to create
the database:
■ USER SYS IDENTIFIED BY
■ USER SYSTEM IDENTIFIED BY

Additional administrative accounts are provided by Oracle Database that should be
used only by authorized users. To protect these accounts from being used by
unauthorized users familiar with their Oracle-supplied passwords, these accounts are
initially locked with their passwords expired. As the database administrator, you are
responsible for the unlocking and resetting of these accounts.
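For example, after database creation you might unlock and reset one of these accounts with statements such as the following (a sketch; new_password is a placeholder):

ALTER USER dbsnmp ACCOUNT UNLOCK;
ALTER USER dbsnmp IDENTIFIED BY new_password;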
Table 2–6 lists the administrative accounts that are provided by Oracle Database. Not
all accounts may be present on your system, depending upon the options that you
selected for your database.
Table 2–6  Administrative User Accounts Provided by Oracle Database

CTXSYS (default password: CTXSYS)
    The Oracle Text account. See Oracle Text Reference.

DBSNMP (default password: DBSNMP)
    The account used by the Management Agent component of Oracle Enterprise
    Manager to monitor and manage the database. See Oracle Enterprise Manager Grid
    Control Installation and Basic Configuration.

LBACSYS (default password: LBACSYS)
    The Oracle Label Security administrator account. See Oracle Label Security
    Administrator's Guide.

MDDATA (default password: MDDATA)
    The schema used by Oracle Spatial for storing Geocoder and router data. See
    Oracle Spatial User's Guide and Reference.

MDSYS (default password: MDSYS)
    The Oracle Spatial and Oracle interMedia Locator administrator account. See
    Oracle Spatial User's Guide and Reference.

DMSYS (default password: DMSYS)
    The Oracle Data Mining account. See Oracle Data Mining Administrator's Guide and
    Oracle Data Mining Concepts.

OLAPSYS (default password: MANAGER)
    The account used to create OLAP metadata structures. It owns the OLAP Catalog
    (CWMLite). See Oracle OLAP Application Developer's Guide.

ORDPLUGINS (default password: ORDPLUGINS)
    The Oracle interMedia user. Plug-ins supplied by Oracle and third-party format
    plug-ins are installed in this schema. See Oracle interMedia User's Guide.

ORDSYS (default password: ORDSYS)
    The Oracle interMedia administrator account. See Oracle interMedia User's Guide.

OUTLN (default password: OUTLN)
    The account that supports plan stability. Plan stability enables you to maintain
    the same execution plans for the same SQL statements. OUTLN acts as a role to
    centrally manage metadata associated with stored outlines. See Oracle Database
    Performance Tuning Guide.

SI_INFORMTN_SCHEMA (default password: SI_INFORMTN_SCHEMA)
    The account that stores the information views for the SQL/MM Still Image
    Standard. See Oracle interMedia User's Guide.

SYS (default password: CHANGE_ON_INSTALL)
    The account used to perform database administration tasks. See Oracle Database
    Administrator's Guide.

SYSMAN (default password: CHANGE_ON_INSTALL)
    The account used to perform Oracle Enterprise Manager database administration
    tasks. Note that SYS and SYSTEM can also perform these tasks. See Oracle
    Enterprise Manager Grid Control Installation and Basic Configuration.

SYSTEM (default password: MANAGER)
    Another account used to perform database administration tasks. See Oracle
    Database Administrator's Guide.

See Also:
■ "Database Administrator Usernames" on page 1-9 for more information about the
  users SYS and SYSTEM
■ Oracle Database Security Guide to learn how to add new users and change
  passwords
■ Oracle Database SQL Reference for the syntax of the ALTER USER statement used
  for unlocking user accounts

Enabling Transparent Data Encryption
Transparent data encryption is a feature that enables encryption of database columns
before storing them in the datafile. If users attempt to circumvent the database access
control mechanisms by looking inside datafiles directly with operating system tools,
transparent data encryption prevents such users from viewing sensitive information.

Users who have the CREATE TABLE privilege can choose one or more columns in a
table to be encrypted. The data is encrypted in the data files and in the audit logs (if
audit is turned on). Database users with appropriate privileges can view the data in
unencrypted format. For information on enabling and disabling transparent data
encryption, see Oracle Database Security Guide.
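For example, a sensitive column can be declared encrypted when the table is created (a sketch; it assumes an encryption wallet has already been configured and opened and the database master key has been set, as described in Oracle Database Security Guide):

CREATE TABLE employee_data (
  employee_id NUMBER,
  ssn         VARCHAR2(11) ENCRYPT
);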

Creating a Secure External Password Store
For large-scale deployments where applications use password credentials to connect to
databases, it is possible to store such credentials in a client-side Oracle wallet. An
Oracle wallet is a secure software container that is used to store authentication and
signing credentials.
Storing database password credentials in a client-side Oracle wallet eliminates the
need to embed usernames and passwords in application code, batch jobs, or scripts.
This reduces the risk of exposing passwords in the clear in scripts and application
code, and simplifies maintenance because you need not change your code each time
usernames and passwords change. In addition, not having to change application code
also makes it easier to enforce password management policies for these user accounts.
When you configure a client to use the external password store, applications can use
the following syntax to connect to databases that use password authentication:
CONNECT /@database_alias

Note that you need not specify database login credentials in this CONNECT statement.
Instead your system looks for database login credentials in the client wallet.
See Also: Oracle Database Advanced Security Administrator's Guide for
information about configuring your client to use a secure external
password store and for information about managing credentials in it.

Installing the Oracle Database Sample Schemas
The Oracle Database distribution media includes various SQL files that let you
experiment with the system, learn SQL, or create additional tables, views, or
synonyms.
Oracle Database includes sample schemas that help you to become familiar with
Oracle Database functionality. All Oracle Database documentation and training
materials are being converted to the Sample Schemas environment as those materials
are updated.
The Sample Schemas can be installed automatically by the Database Configuration
Assistant, or you can install them manually. The schemas and installation instructions
are described in detail in Oracle Database Sample Schemas.

Viewing Information About the Database
In addition to the views listed previously in "Viewing Parameter Settings", you can
view information about your database content and structure using the following
views:
DATABASE_PROPERTIES
    Displays permanent database properties.

GLOBAL_NAME
    Displays the global database name.

V$DATABASE
    Contains database information from the control file.


3
Starting Up and Shutting Down
This chapter describes the procedures for starting up and shutting down an Oracle
Database instance and contains the following topics:
■ Starting Up a Database
■ Altering Database Availability
■ Shutting Down a Database
■ Quiescing a Database
■ Suspending and Resuming a Database

See Also: Oracle Database Oracle Clusterware and Oracle Real Application Clusters
Administration and Deployment Guide for additional information specific to an Oracle
Real Application Clusters environment

Starting Up a Database
When you start up a database, you create an instance of that database and you
determine the state of the database. Normally, you start up an instance by mounting
and opening the database. Doing so makes the database available for any valid user to
connect to and perform typical data access operations. Other options exist, and these
are also discussed in this section.
This section contains the following topics relating to starting up an instance of a
database:
■ Options for Starting Up a Database
■ Understanding Initialization Parameter Files
■ Preparing to Start Up an Instance
■ Starting Up an Instance

Options for Starting Up a Database
You can start up a database instance with SQL*Plus, Recovery Manager, or Enterprise
Manager.

Starting Up a Database Using SQL*Plus
You can start a SQL*Plus session, connect to Oracle Database with administrator
privileges, and then issue the STARTUP command. Using SQL*Plus in this way is the
only method described in detail in this book.


Starting Up a Database Using Recovery Manager
You can also use Recovery Manager (RMAN) to execute STARTUP and SHUTDOWN
commands. You may prefer to do this if you are within the RMAN environment and
do not want to invoke SQL*Plus.
See Also: Oracle Database Backup and Recovery Basics for
information on starting up the database using RMAN

Starting Up a Database Using Oracle Enterprise Manager
You can use Oracle Enterprise Manager (EM) to administer your database, including
starting it up and shutting it down. EM combines a GUI console, agents, common
services, and tools to provide an integrated and comprehensive systems management
platform for managing Oracle products. EM Database Control, which is the portion of
EM that is dedicated to administering an Oracle database, enables you to perform the
functions discussed in this book through a graphical interface rather than
command-line operations.
See Also:
■ Oracle Enterprise Manager Concepts
■ Oracle Enterprise Manager Grid Control Installation and Basic Configuration
■ Oracle Database 2 Day DBA

The remainder of this section describes using SQL*Plus to start up a database instance.

Understanding Initialization Parameter Files
To start an instance, the database must read instance configuration parameters (the
initialization parameters) from either a server parameter file (SPFILE) or a text
initialization parameter file.
When you issue the SQL*Plus STARTUP command, the database attempts to read the
initialization parameters from an SPFILE in a platform-specific default location. If it
finds no SPFILE, it searches for a text initialization parameter file.
Note: For UNIX or Linux, the platform-specific default location
(directory) for the SPFILE and text initialization parameter file is:

$ORACLE_HOME/dbs

For Windows NT and Windows 2000 the location is:

%ORACLE_HOME%\database

In the platform-specific default location, Oracle Database locates your initialization
parameter file by examining filenames in the following order:
1. spfile$ORACLE_SID.ora
2. spfile.ora
3. init$ORACLE_SID.ora

The first two filenames represent SPFILEs and the third represents a text initialization
parameter file.


Note: The spfile.ora file is included in this search path because in
a Real Application Clusters environment one server parameter file is
used to store the initialization parameter settings for all instances.
There is no instance-specific location for storing a server parameter
file.

For more information about the server parameter file for a Real
Application Clusters environment, see Oracle Database Oracle
Clusterware and Oracle Real Application Clusters Administration and
Deployment Guide.
If you (or the Database Configuration Assistant) created a server parameter file, but
you want to override it with a text initialization parameter file, you can specify the
PFILE clause of the STARTUP command to identify the initialization parameter file.
STARTUP PFILE = /u01/oracle/dbs/init.ora

Starting Up with a Non-Default Server Parameter File
A non-default server parameter file (SPFILE) is an SPFILE that is in a location other
than the default location. It is not usually necessary to start an instance with a
non-default SPFILE. However, should such a need arise, you can use the PFILE
clause to start an instance with a non-default server parameter file as follows:
1. Create a one-line text initialization parameter file that contains only the SPFILE
   parameter. The value of the parameter is the non-default server parameter file
   location.
   For example, create a text initialization parameter file
   /u01/oracle/dbs/spf_init.ora that contains only the following parameter:
   SPFILE = /u01/oracle/dbs/test_spfile.ora

Note: You cannot use the IFILE initialization parameter within a
text initialization parameter file to point to a server parameter file.
In this context, you must use the SPFILE initialization parameter.
2. Start up the instance pointing to this initialization parameter file.
   STARTUP PFILE = /u01/oracle/dbs/spf_init.ora

The SPFILE must reside on the machine running the database server. Therefore, the
preceding method also provides a means for a client machine to start a database that
uses an SPFILE. It also eliminates the need for a client machine to maintain a
client-side initialization parameter file. When the client machine reads the
initialization parameter file containing the SPFILE parameter, it passes the value to
the server where the specified SPFILE is read.
Initialization Files and Automatic Storage Management
A database that uses Automatic Storage Management (ASM) usually has a non-default
SPFILE. If you use the Database Configuration Assistant (DBCA) to configure a
database to use ASM, DBCA creates an SPFILE for the database instance in an ASM
disk group, and then creates a text initialization parameter file in the default location
in the local file system to point to the SPFILE.


See Also: Chapter 2, "Creating an Oracle Database", for more
information about initialization parameters, initialization
parameter files, and server parameter files

Preparing to Start Up an Instance
You must perform some preliminary steps before attempting to start an instance of
your database using SQL*Plus.
1. Ensure that environment variables are set so that you connect to the desired Oracle
   instance. For details, see "Selecting an Instance with Environment Variables" on
   page 1-7.
2. Start SQL*Plus without connecting to the database:
   SQLPLUS /NOLOG
3. Connect to Oracle Database as SYSDBA:
   CONNECT username/password AS SYSDBA

Now you are connected to the database and ready to start up an instance of your
database.
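For example, on a UNIX or Linux system you might connect to a hypothetical
instance named orcl as follows (the SID is an illustration, and the slash in the
CONNECT command assumes that operating system authentication is configured for
your account):

$ export ORACLE_SID=orcl
$ sqlplus /nolog
SQL> CONNECT / AS SYSDBA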
See Also: SQL*Plus User's Guide and Reference for descriptions and
syntax for the CONNECT, STARTUP, and SHUTDOWN commands.

Starting Up an Instance
You use the SQL*Plus STARTUP command to start up an Oracle Database instance.
You can start an instance in various modes:
■ Start the instance without mounting a database. This does not allow access to the
  database and usually would be done only for database creation or the re-creation
  of control files.
■ Start the instance and mount the database, but leave it closed. This state allows for
  certain DBA activities, but does not allow general access to the database.
■ Start the instance, and mount and open the database. This can be done in
  unrestricted mode, allowing access to all users, or in restricted mode, allowing
  access for database administrators only.
■ Force the instance to start after a startup or shutdown problem, or start the
  instance and have complete media recovery begin immediately.
Note: You cannot start a database instance if you are connected to
the database through a shared server process.

The following scenarios describe and illustrate the various states in which you can
start up an instance. Some restrictions apply when combining clauses of the STARTUP
command.


Note: It is possible to encounter problems starting up an instance
if control files, database files, or redo log files are not available. If
one or more of the files specified by the CONTROL_FILES
initialization parameter does not exist or cannot be opened when
you attempt to mount a database, Oracle Database returns a
warning message and does not mount the database. If one or more
of the datafiles or redo log files is not available or cannot be opened
when attempting to open a database, the database returns a
warning message and does not open the database.

See Also: SQL*Plus User's Guide and Reference for information
about the restrictions that apply when combining clauses of the
STARTUP command

Starting an Instance, and Mounting and Opening a Database
Normal database operation means that an instance is started and the database is
mounted and open. This mode allows any valid user to connect to the database and
perform data access operations.
The following command starts an instance, reads the initialization parameters from the
default location, and then mounts and opens the database. (You can optionally specify
a PFILE clause.)
STARTUP

Starting an Instance Without Mounting a Database
You can start an instance without mounting a database. Typically, you do so only
during database creation. Use the STARTUP command with the NOMOUNT clause:
STARTUP NOMOUNT

Starting an Instance and Mounting a Database
You can start an instance and mount a database without opening it, allowing you to
perform specific maintenance operations. For example, the database must be mounted
but not open during the following tasks:
■ Enabling and disabling redo log archiving options. For more information, please
  refer to Chapter 7, "Managing Archived Redo Logs".
■ Performing full database recovery. For more information, please refer to Oracle
  Database Backup and Recovery Basics.

The following command starts an instance and mounts the database, but leaves the
database closed:
STARTUP MOUNT

Restricting Access to an Instance at Startup
You can start an instance, and optionally mount and open a database, in restricted
mode so that the instance is available only to administrative personnel (not general
database users). Use this mode of instance startup when you need to accomplish one
of the following tasks:
■ Perform an export or import of data
■ Perform a data load (with SQL*Loader)
■ Temporarily prevent typical users from using data
■ Perform certain migration or upgrade operations

Typically, all users with the CREATE SESSION system privilege can connect to an
open database. Opening a database in restricted mode allows database access only to
users with both the CREATE SESSION and RESTRICTED SESSION system privileges.
Only database administrators should have the RESTRICTED SESSION system
privilege. Further, when the instance is in restricted mode, a database administrator
cannot access the instance remotely through an Oracle Net listener, but can only access
the instance locally from the machine that the instance is running on.
The following command starts an instance (and mounts and opens the database) in
restricted mode:
STARTUP RESTRICT

You can use the RESTRICT clause in combination with the MOUNT, NOMOUNT, and
OPEN clauses.
Later, use the ALTER SYSTEM statement to disable the RESTRICTED SESSION
feature:
ALTER SYSTEM DISABLE RESTRICTED SESSION;

If you open the database in nonrestricted mode and later find that you need to restrict
access, you can use the ALTER SYSTEM statement to do so, as described in "Restricting
Access to an Open Database" on page 3-8.
See Also: Oracle Database SQL Reference for more information on
the ALTER SYSTEM statement

Forcing an Instance to Start
In unusual circumstances, you might experience problems when attempting to start a
database instance. You should not force a database to start unless you are faced with
the following:
■ You cannot shut down the current instance with the SHUTDOWN NORMAL,
  SHUTDOWN IMMEDIATE, or SHUTDOWN TRANSACTIONAL commands.
■ You experience problems when starting an instance.

If one of these situations arises, you can usually solve the problem by starting a new
instance (and optionally mounting and opening the database) using the STARTUP
command with the FORCE clause:
STARTUP FORCE

If an instance is running, STARTUP FORCE shuts it down with mode ABORT before
restarting it. In this case, beginning with Oracle Database 10g Release 2, the alert log
shows the message "Shutting down instance (abort)" followed by "Starting
ORACLE instance (normal)." (Earlier versions of the database showed only
"Starting ORACLE instance (force)" in the alert log.)
See Also: "Shutting Down with the ABORT Clause" on page 3-10
to understand the side effects of aborting the current instance


Starting an Instance, Mounting a Database, and Starting Complete Media Recovery
If you know that media recovery is required, you can start an instance, mount a
database to the instance, and have the recovery process automatically start by using
the STARTUP command with the RECOVER clause:
STARTUP OPEN RECOVER

If you attempt to perform recovery when no recovery is required, Oracle Database
issues an error message.

Automatic Database Startup at Operating System Start
Many sites use procedures to enable automatic startup of one or more Oracle Database
instances and databases immediately following a system start. The procedures for
performing this task are specific to each operating system. For information about
automatic startup, see your operating system specific Oracle documentation.

Starting Remote Instances
If your local Oracle Database server is part of a distributed database, you might want
to start a remote instance and database. Procedures for starting and stopping remote
instances vary widely depending on communication protocol and operating system.

Altering Database Availability
You can alter the availability of a database. You may want to do this in order to restrict
access for maintenance reasons or to make the database read only. The following
sections explain how to alter the availability of a database:
■ Mounting a Database to an Instance
■ Opening a Closed Database
■ Opening a Database in Read-Only Mode
■ Restricting Access to an Open Database

Mounting a Database to an Instance
When you need to perform specific administrative operations, the database must be
started and mounted to an instance, but closed. You can achieve this scenario by
starting the instance and mounting the database.
To mount a database to a previously started, but not opened instance, use the SQL
statement ALTER DATABASE with the MOUNT clause as follows:
ALTER DATABASE MOUNT;

See Also: "Starting an Instance and Mounting a Database" on
page 3-5 for a list of operations that require the database to be
mounted and closed (and procedures to start an instance and
mount a database in one step)

Opening a Closed Database
You can make a mounted but closed database available for general use by opening the
database. To open a mounted database, use the ALTER DATABASE statement with the
OPEN clause:
ALTER DATABASE OPEN;


After executing this statement, any valid Oracle Database user with the CREATE
SESSION system privilege can connect to the database.

Opening a Database in Read-Only Mode
Opening a database in read-only mode enables you to query an open database while
eliminating any potential for online data content changes. While opening a database in
read-only mode guarantees that datafile and redo log files are not written to, it does
not restrict database recovery or operations that change the state of the database
without generating redo. For example, you can take datafiles offline or bring them
online since these operations do not affect data content.
If a query against a database in read-only mode uses temporary tablespace, for
example to do disk sorts, then the issuer of the query must have a locally managed
tablespace assigned as the default temporary tablespace. Otherwise, the query will fail.
This is explained in "Creating a Locally Managed Temporary Tablespace" on page 8-9.
Ideally, you open a database in read-only mode when you alternate a standby
database between read-only and recovery mode. Be aware that these are mutually
exclusive modes.
The following statement opens a database in read-only mode:
ALTER DATABASE OPEN READ ONLY;

You can also open a database in read/write mode as follows:
ALTER DATABASE OPEN READ WRITE;

However, read/write is the default mode.
Note: You cannot use the RESETLOGS clause with a READ ONLY
clause.

See Also: Oracle Database SQL Reference for more information
about the ALTER DATABASE statement

Restricting Access to an Open Database
To place an instance in restricted mode, where only users with administrative
privileges can access it, use the SQL statement ALTER SYSTEM with the ENABLE
RESTRICTED SESSION clause. After placing an instance in restricted mode, you
should consider killing all current user sessions before performing any administrative
tasks.
To lift an instance from restricted mode, use ALTER SYSTEM with the DISABLE
RESTRICTED SESSION clause.
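For example, the following statements place the instance in restricted mode and later
return it to normal operation:

ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM DISABLE RESTRICTED SESSION;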
See Also:
■ "Terminating Sessions" on page 4-16 for directions for killing user sessions
■ "Restricting Access to an Instance at Startup" on page 3-5 to learn some reasons
  for placing an instance in restricted mode


Shutting Down a Database
To initiate database shutdown, use the SQL*Plus SHUTDOWN command. Control is not
returned to the session that initiates a database shutdown until shutdown is complete.
Users who attempt connections while a shutdown is in progress receive a message like
the following:
ORA-01090: shutdown in progress - connection is not permitted

Note: You cannot shut down a database if you are connected to
the database through a shared server process.

To shut down a database and instance, you must first connect as SYSOPER or SYSDBA.
There are several modes for shutting down a database. These are discussed in the
following sections:
■ Shutting Down with the NORMAL Clause
■ Shutting Down with the IMMEDIATE Clause
■ Shutting Down with the TRANSACTIONAL Clause
■ Shutting Down with the ABORT Clause

Some shutdown modes wait for certain events to occur (such as transactions
completing or users disconnecting) before actually bringing down the database. There
is a one-hour timeout period for these events. This timeout behavior is discussed in
this additional section:
■ Shutdown Timeout

Shutting Down with the NORMAL Clause
To shut down a database in normal situations, use the SHUTDOWN command with the
NORMAL clause:
SHUTDOWN NORMAL

The NORMAL clause is optional, because this is the default shutdown method if no
clause is provided.
Normal database shutdown proceeds with the following conditions:
■ No new connections are allowed after the statement is issued.
■ Before the database is shut down, the database waits for all currently connected
  users to disconnect from the database.

The next startup of the database will not require any instance recovery procedures.

Shutting Down with the IMMEDIATE Clause
Use immediate database shutdown only in the following situations:
■ To initiate an automated and unattended backup
■ When a power shutdown is going to occur soon
■ When the database or one of its applications is functioning irregularly and you
  cannot contact users to ask them to log off or they are unable to log off

To shut down a database immediately, use the SHUTDOWN command with the
IMMEDIATE clause:

SHUTDOWN IMMEDIATE

Immediate database shutdown proceeds with the following conditions:
■ No new connections are allowed, nor are new transactions allowed to be started,
  after the statement is issued.
■ Any uncommitted transactions are rolled back. (If long uncommitted transactions
  exist, this method of shutdown might not complete quickly, despite its name.)
■ Oracle Database does not wait for users currently connected to the database to
  disconnect. The database implicitly rolls back active transactions and disconnects
  all connected users.

The next startup of the database will not require any instance recovery procedures.

Shutting Down with the TRANSACTIONAL Clause
When you want to perform a planned shutdown of an instance while allowing active
transactions to complete first, use the SHUTDOWN command with the TRANSACTIONAL
clause:
SHUTDOWN TRANSACTIONAL

Transactional database shutdown proceeds with the following conditions:
■ No new connections are allowed, nor are new transactions allowed to be started,
  after the statement is issued.
■ After all transactions have completed, any client still connected to the instance is
  disconnected.
■ At this point, the instance shuts down just as it would when a SHUTDOWN
  IMMEDIATE statement is submitted.

The next startup of the database will not require any instance recovery procedures.
A transactional shutdown prevents clients from losing work, and at the same time,
does not require all users to log off.

Shutting Down with the ABORT Clause
You can shut down a database instantaneously by aborting the database instance. If
possible, perform this type of shutdown only in the following situations:
■ The database or one of its applications is functioning irregularly and none of the
  other types of shutdown works.
■ You need to shut down the database instantaneously (for example, if you know a
  power shutdown is going to occur in one minute).
■ You experience problems when starting a database instance.

When you must do a database shutdown by aborting transactions and user
connections, issue the SHUTDOWN command with the ABORT clause:
SHUTDOWN ABORT

An aborted database shutdown proceeds with the following conditions:
■ No new connections are allowed, nor are new transactions allowed to be started,
  after the statement is issued.


■ Current client SQL statements being processed by Oracle Database are
  immediately terminated.
■ Uncommitted transactions are not rolled back.
■ Oracle Database does not wait for users currently connected to the database to
  disconnect. The database implicitly disconnects all connected users.

The next startup of the database will require instance recovery procedures.

Shutdown Timeout
Shutdown modes that wait for users to disconnect or for transactions to complete have
a limit on the amount of time that they wait. If all events blocking the shutdown do
not occur within one hour, the shutdown command cancels with the following
message: ORA-01013: user requested cancel of current operation.

Quiescing a Database
Occasionally you might want to put a database in a state that allows only DBA
transactions, queries, fetches, or PL/SQL statements. Such a state is referred to as a
quiesced state, in the sense that no ongoing non-DBA transactions, queries, fetches, or
PL/SQL statements are running in the system.
Note: In this discussion of quiesce database, a DBA is defined as
user SYS or SYSTEM. Other users, including those with the DBA
role, are not allowed to issue the ALTER SYSTEM QUIESCE
DATABASE statement or proceed after the database is quiesced.

The quiesced state lets administrators perform actions that cannot safely be done
otherwise. These actions include:
■ Actions that fail if concurrent user transactions access the same object--for
  example, changing the schema of a database table or adding a column to an
  existing table where a no-wait lock is required.
■ Actions whose undesirable intermediate effect can be seen by concurrent user
  transactions--for example, a multistep procedure for reorganizing a table when the
  table is first exported, then dropped, and finally imported. A concurrent user who
  attempts to access the table after it was dropped, but before import, would not
  have an accurate view of the situation.

Without the ability to quiesce the database, you would need to shut down the database
and reopen it in restricted mode. This is a serious restriction, especially for systems
requiring 24 x 7 availability. Quiescing a database is a much smaller restriction,
because it eliminates the disruption to users and the downtime associated with
shutting down and restarting the database.
When the database is in the quiesced state, it is through the facilities of the Database
Resource Manager that non-DBA sessions are prevented from becoming active.
Therefore, while this statement is in effect, any attempt to change the current resource
plan will be queued until after the system is unquiesced. See Chapter 24, "Using the
Database Resource Manager" for more information about the Database Resource
Manager.


Placing a Database into a Quiesced State
To place a database into a quiesced state, issue the following statement:
ALTER SYSTEM QUIESCE RESTRICTED;

Non-DBA active sessions will continue until they become inactive. An active session is
one that is currently inside of a transaction, a query, a fetch, or a PL/SQL statement; or
a session that is currently holding any shared resources (for example, enqueues). No
inactive sessions are allowed to become active. For example, if a user issues a SQL
query in an attempt to force an inactive session to become active, the query will appear
to be hung. When the database is later unquiesced, the session is resumed, and the
blocked action is processed.
Once all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE
RESTRICTED statement completes, and the database is in a quiesced state. In an
Oracle Real Application Clusters environment, this statement affects all instances, not
just the one that issues the statement.
The ALTER SYSTEM QUIESCE RESTRICTED statement may wait a long time for
active sessions to become inactive. You can determine the sessions that are blocking
the quiesce operation by querying the V$BLOCKING_QUIESCE view. This view returns
only a single column: SID (Session ID). You can join it with V$SESSION to get more
information about the session, as shown in the following example:
select bl.sid, se.username, se.osuser, se.type, se.program
from v$blocking_quiesce bl, v$session se
where bl.sid = se.sid;

See Oracle Database Reference for details on these views.
If you interrupt the request to quiesce the database, or if your session terminates
abnormally before all active sessions are quiesced, then Oracle Database automatically
reverses any partial effects of the statement.
For queries that are carried out by successive multiple Oracle Call Interface (OCI)
fetches, the ALTER SYSTEM QUIESCE RESTRICTED statement does not wait for all
fetches to finish. It only waits for the current fetch to finish.
For both dedicated and shared server connections, all non-DBA logins after this
statement is issued are queued by the Database Resource Manager, and are not
allowed to proceed. To the user, it appears as if the login is hung. The login will
resume when the database is unquiesced.
The database remains in the quiesced state even if the session that issued the statement
exits. A DBA must log in to the database to issue the statement that specifically
unquiesces the database.
Note: You cannot perform a cold backup when the database is in
the quiesced state, because Oracle Database background processes
may still perform updates for internal purposes even while the
database is quiesced. In addition, the file headers of online datafiles
continue to appear to be accessible. They do not look the same as if
a clean shutdown had been performed. However, you can still take
online backups while the database is in a quiesced state.

Restoring the System to Normal Operation
The following statement restores the database to normal operation:


ALTER SYSTEM UNQUIESCE;

All non-DBA activity is allowed to proceed. In an Oracle Real Application Clusters
environment, this statement is not required to be issued from the same session, or even
the same instance, as that which quiesced the database. If the session issuing the
ALTER SYSTEM UNQUIESCE statement terminates abnormally, then the Oracle
Database server ensures that the unquiesce operation completes.

Viewing the Quiesce State of an Instance
You can query the ACTIVE_STATE column of the V$INSTANCE view to see the current
state of an instance. The column has one of these values:
■ NORMAL: Normal unquiesced state.
■ QUIESCING: Being quiesced, but some non-DBA sessions are still active.
■ QUIESCED: Quiesced; no non-DBA sessions are active or allowed.
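For example:

SELECT ACTIVE_STATE FROM V$INSTANCE;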

Suspending and Resuming a Database
The ALTER SYSTEM SUSPEND statement halts all input and output (I/O) to datafiles
(file header and file data) and control files. The suspended state lets you back up a
database without I/O interference. When the database is suspended all preexisting
I/O operations are allowed to complete and any new database accesses are placed in a
queued state.
The suspend command is not specific to an instance. In an Oracle Real Application
Clusters environment, when you issue the suspend command on one system, internal
locking mechanisms propagate the halt request across instances, thereby quiescing all
active instances in a given cluster. However, if someone starts a new instance while
another instance is being suspended, the new instance will not be suspended.
Use the ALTER SYSTEM RESUME statement to resume normal database operations.
The SUSPEND and RESUME commands can be issued from different instances. For
example, if instances 1, 2, and 3 are running, and you issue an ALTER SYSTEM
SUSPEND statement from instance 1, then you can issue a RESUME statement from
instance 1, 2, or 3 with the same effect.
The suspend/resume feature is useful in systems that allow you to mirror a disk or file
and then split the mirror, providing an alternative backup and restore solution. If you
use a system that is unable to split a mirrored disk from an existing database while
writes are occurring, then you can use the suspend/resume feature to facilitate the
split.
The suspend/resume feature is not a suitable substitute for normal shutdown
operations, because copies of a suspended database can contain uncommitted updates.
Caution: Do not use the ALTER SYSTEM SUSPEND statement as a
substitute for placing a tablespace in hot backup mode. Precede any
database suspend operation by an ALTER TABLESPACE BEGIN
BACKUP statement.

The following statements illustrate ALTER SYSTEM SUSPEND/RESUME usage. The
V$INSTANCE view is queried to confirm database status.
SQL> ALTER SYSTEM SUSPEND;
System altered.

SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;

DATABASE_STATUS
-----------------
SUSPENDED

SQL> ALTER SYSTEM RESUME;
System altered.

SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;

DATABASE_STATUS
-----------------
ACTIVE

See Also: Oracle Database Backup and Recovery Advanced User's
Guide for details about backing up a database using the database
suspend/resume feature


4
Managing Oracle Database Processes
This chapter describes how to manage and monitor the processes of an Oracle
Database instance and contains the following topics:
■ About Dedicated and Shared Server Processes
■ Configuring Oracle Database for Shared Server
■ About Oracle Database Background Processes
■ Managing Processes for Parallel SQL Execution
■ Managing Processes for External Procedures
■ Terminating Sessions
■ Monitoring the Operation of Your Database

About Dedicated and Shared Server Processes
Oracle Database creates server processes to handle the requests of user processes
connected to an instance. A server process can be either of the following:
■ A dedicated server process, which services only one user process
■ A shared server process, which can service multiple user processes

Your database is always enabled to allow dedicated server processes, but you must
specifically configure and enable shared server by setting one or more initialization
parameters.

Dedicated Server Processes
Figure 4–1, "Oracle Database Dedicated Server Processes" illustrates how dedicated
server processes work. In this diagram two user processes are connected to the
database through dedicated server processes.
In general, it is better to be connected through a dispatcher and use a shared server
process. This is illustrated in Figure 4–2, "Oracle Database Shared Server Processes". A
shared server process can be more efficient because it keeps the number of processes
required for the running instance low.
In the following situations, however, users and administrators should explicitly
connect to an instance using a dedicated server process:
■ To submit a batch job (for example, when a job can allow little or no idle time for
  the server process)
■ To use Recovery Manager (RMAN) to back up, restore, or recover a database


To request a dedicated server connection when Oracle Database is configured for
shared server, users must connect using a net service name that is configured to use a
dedicated server. Specifically, the net service name value should include the
SERVER=DEDICATED clause in the connect descriptor.
See Also: Oracle Database Net Services Administrator's Guide for
more information about requesting a dedicated server connection
Figure 4–1 Oracle Database Dedicated Server Processes
[Figure: each client workstation runs a user process with application code; each user
process communicates through the program interface with its own dedicated server
process, which runs Oracle server code against the System Global Area on the
database server.]

Shared Server Processes
Consider an order entry system with dedicated server processes. A customer phones
the order desk and places an order, and the clerk taking the call enters the order into
the database. For most of the transaction, the clerk is on the telephone talking to the
customer. A server process is not needed during this time, so the server process
dedicated to the clerk's user process remains idle. The system is slower for other clerks
entering orders, because the idle server process is holding system resources.
Shared server architecture eliminates the need for a dedicated server process for each
connection (see Figure 4–2).


Figure 4–2 Oracle Database Shared Server Processes
[Figure: client user processes connect to dispatcher processes on the database server;
dispatchers place requests on a request queue in the System Global Area, shared
server processes running Oracle server code pick up and service the requests, and
replies return to clients through response queues and the dispatchers.]

In a shared server configuration, client user processes connect to a dispatcher. The
dispatcher can support multiple client connections concurrently. Each client
connection is bound to a virtual circuit, which is a piece of shared memory used by
the dispatcher for client database connection requests and replies. The dispatcher
places a virtual circuit on a common queue when a request arrives.
An idle shared server process picks up the virtual circuit from the common queue,
services the request, and relinquishes the virtual circuit before attempting to retrieve
another virtual circuit from the common queue. This approach enables a small pool of
server processes to serve a large number of clients. A significant advantage of shared
server architecture over the dedicated server model is the reduction of system
resources, enabling the support of an increased number of users.
For even better resource management, shared server can be configured for connection
pooling. Connection pooling lets a dispatcher support more users by enabling the
database server to time-out protocol connections and to use those connections to
service an active session. Further, shared server can be configured for session
multiplexing, which combines multiple sessions for transmission over a single
network connection in order to conserve the operating system's resources.


Shared server architecture requires Oracle Net Services. User processes targeting the
shared server must connect through Oracle Net Services, even if they are on the same
machine as the Oracle Database instance.
See Also: Oracle Database Net Services Administrator's Guide for
more detailed information about shared server, including features
such as connection pooling and session multiplexing

Configuring Oracle Database for Shared Server
Shared memory resources are preconfigured to allow the enabling of shared server at
run time. You need not configure it by specifying parameters in your initialization
parameter file, but you can do so if that better suits your environment. You can start
dispatchers and shared server processes (shared servers) dynamically using the ALTER
SYSTEM statement.
This section discusses how to enable shared server and how to set or alter shared
server initialization parameters. It contains the following topics:
■ Initialization Parameters for Shared Server
■ Enabling Shared Server
■ Configuring Dispatchers
■ Monitoring Shared Server

See Also: Oracle Database SQL Reference for further information
about the ALTER SYSTEM statement

Initialization Parameters for Shared Server
The following initialization parameters control shared server operation:
■ SHARED_SERVERS: Specifies the initial number of shared servers to start and the
  minimum number of shared servers to keep. This is the only required parameter
  for using shared servers.
■ MAX_SHARED_SERVERS: Specifies the maximum number of shared servers that
  can run simultaneously.
■ SHARED_SERVER_SESSIONS: Specifies the total number of shared server user
  sessions that can run simultaneously. Setting this parameter enables you to reserve
  user sessions for dedicated servers.
■ DISPATCHERS: Configures dispatcher processes in the shared server architecture.
■ MAX_DISPATCHERS: Specifies the maximum number of dispatcher processes that
  can run simultaneously. This parameter can be ignored for now. It will only be
  useful in a future release when the number of dispatchers is auto-tuned according
  to the number of concurrent connections.
■ CIRCUITS: Specifies the total number of virtual circuits that are available for
  inbound and outbound network sessions.
See Also: Oracle Database Reference for more information about
these initialization parameters


Enabling Shared Server
Shared server is enabled by setting the SHARED_SERVERS initialization parameter to a
value greater than 0. The other shared server initialization parameters need not be set.
Because shared server requires at least one dispatcher in order to work, a dispatcher is
brought up even if no dispatcher has been configured. Dispatchers are discussed in
"Configuring Dispatchers" on page 4-7.
Shared server can be started dynamically by setting the SHARED_SERVERS parameter
to a nonzero value with the ALTER SYSTEM statement, or SHARED_SERVERS can be
included at database startup in the initialization parameter file. If SHARED_SERVERS is
not included in the initialization parameter file, or is included but is set to 0, then
shared server is not enabled at database startup.
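For example, the following statement enables shared server dynamically with a
minimum of three shared servers (the value 3 is an arbitrary illustration):

ALTER SYSTEM SET SHARED_SERVERS = 3;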
Note: For backward compatibility, if SHARED_SERVERS is not
included in the initialization parameter file at database startup, but
DISPATCHERS is included and it specifies at least one dispatcher,
shared server is enabled. In this case, the default for
SHARED_SERVERS is 1.

However, if neither SHARED_SERVERS nor DISPATCHERS is
included in the initialization file, you cannot start shared server
after the instance is brought up by just altering the DISPATCHERS
parameter. You must specifically alter SHARED_SERVERS to a
nonzero value to start shared server.

Determining a Value for SHARED_SERVERS
The SHARED_SERVERS initialization parameter specifies the minimum number of
shared servers that you want created when the instance is started. After instance
startup, Oracle Database can dynamically adjust the number of shared servers based
on how busy existing shared servers are and the length of the request queue.
In typical systems, the number of shared servers stabilizes at a ratio of one shared
server for every ten connections. For OLTP applications, when the rate of requests is
low, or when the ratio of server usage to request is low, the connections-to-servers
ratio could be higher. In contrast, in applications where the rate of requests is high or
the server usage-to-request ratio is high, the connections-to-server ratio could be
lower.
The PMON (process monitor) background process cannot terminate shared servers
below the value specified by SHARED_SERVERS. Therefore, you can use this
parameter to stabilize the load and minimize strain on the system by preventing
PMON from terminating and then restarting shared servers because of coincidental
fluctuations in load.
If you know the average load on your system, you can set SHARED_SERVERS to an
optimal value. The following example shows how you can use this parameter:
Assume a database is being used by a telemarketing center staffed by 1000 agents. On
average, each agent spends 90% of the time talking to customers and only 10% of the
time looking up and updating records. To keep the shared servers from being
terminated as agents talk to customers and then spawned again as agents access the
database, a DBA specifies that the optimal number of shared servers is 100.
However, not all work shifts are staffed at the same level. On the night shift, only 200
agents are needed. Since SHARED_SERVERS is a dynamic parameter, a DBA reduces
the number of shared servers to 20 at night, thus allowing resources to be freed up for
other tasks such as batch jobs.
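In this scenario, the DBA could issue statements such as the following at shift
changes (the values come from the example above):

ALTER SYSTEM SET SHARED_SERVERS = 100;  -- day shift: 1000 agents
ALTER SYSTEM SET SHARED_SERVERS = 20;   -- night shift: 200 agents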

Decreasing the Number of Shared Server Processes
You can decrease the minimum number of shared servers that must be kept active by
dynamically setting the SHARED_SERVERS parameter to a lower value. Thereafter,
until the number of shared servers is decreased to the value of the SHARED_SERVERS
parameter, any shared servers that become inactive are marked by PMON for
termination.
The following statement reduces the number of shared servers:
ALTER SYSTEM SET SHARED_SERVERS = 5;

Setting SHARED_SERVERS to 0 disables shared server. For more information, please
refer to "Disabling Shared Servers" on page 4-12.

Limiting the Number of Shared Server Processes
The MAX_SHARED_SERVERS parameter specifies the maximum number of shared
servers that can be automatically created by PMON. It has no default value. If no value
is specified, then PMON starts as many shared servers as is required by the load,
subject to these limitations:
■ The process limit (set by the PROCESSES initialization parameter)
■ A minimum number of free process slots (at least one-eighth of the total process
  slots, or two slots if PROCESSES is set to less than 24)
■ System resources

Note: On Windows NT, take care when setting
MAX_SHARED_SERVERS to a high value, because each server is a
thread in a common process.

The value of SHARED_SERVERS overrides the value of MAX_SHARED_SERVERS.
Therefore, you can force PMON to start more shared servers than the
MAX_SHARED_SERVERS value by setting SHARED_SERVERS to a value higher than
MAX_SHARED_SERVERS. You can subsequently place a new upper limit on the number
of shared servers by dynamically altering the MAX_SHARED_SERVERS parameter to a
value higher than SHARED_SERVERS.
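For example, the following statement places a new upper limit of 40 shared servers
(the value is an arbitrary illustration):

ALTER SYSTEM SET MAX_SHARED_SERVERS = 40;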
The primary reason to limit the number of shared servers is to reserve resources, such
as memory and CPU time, for other processes. For example, consider the case of the
telemarketing center discussed previously:
The DBA wants to reserve two thirds of the resources for batch jobs at night. He sets
MAX_SHARED_SERVERS to less than one third of the maximum number of processes
(PROCESSES). By doing so, the DBA ensures that even if all agents happen to access
the database at the same time, batch jobs can connect to dedicated servers without
having to wait for the shared servers to be brought down after processing agents'
requests.
Another reason to limit the number of shared servers is to prevent the concurrent run
of too many server processes from slowing down the system due to heavy swapping,
although PROCESSES can serve as the upper bound for this rather than
MAX_SHARED_SERVERS.
Still other reasons to limit the number of shared servers are testing, debugging,
performance analysis, and tuning. For example, to see how many shared servers are
needed to efficiently support a certain user community, you can vary


MAX_SHARED_SERVERS from a very small number upward until no delay in response
time is noticed by the users.

Limiting the Number of Shared Server Sessions
The SHARED_SERVER_SESSIONS initialization parameter specifies the maximum
number of concurrent shared server user sessions. Setting this parameter, which is a
dynamic parameter, lets you reserve database sessions for dedicated servers. This in
turn ensures that administrative tasks that require dedicated servers, such as backing
up or recovering the database, are not preempted by shared server sessions.
This parameter has no default value. If it is not specified, the system can create shared
server sessions as needed, limited by the SESSIONS initialization parameter.
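Because SHARED_SERVER_SESSIONS is dynamic, you can set it with a statement
such as the following (the value 300 is an arbitrary illustration), reserving the
remaining sessions for dedicated servers:

ALTER SYSTEM SET SHARED_SERVER_SESSIONS = 300;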

Protecting Shared Memory
The CIRCUITS parameter sets a maximum limit on the number of virtual circuits that
can be created in shared memory. This parameter has no default. If it is not specified,
then the system can create circuits as needed, limited by the DISPATCHERS
initialization parameter and system resources.

Configuring Dispatchers
The DISPATCHERS initialization parameter configures dispatcher processes in the
shared server architecture. At least one dispatcher process is required for shared server
to work. If you do not specify a dispatcher, but you enable shared server by setting
SHARED_SERVERS to a nonzero value, then by default Oracle Database creates one
dispatcher for the TCP protocol. The equivalent explicit DISPATCHERS setting of the
initialization parameter for this configuration is:
dispatchers="(PROTOCOL=tcp)"

You can configure more dispatchers, using the DISPATCHERS initialization parameter,
if either of the following conditions applies:
■ You need to configure a protocol other than TCP/IP. You configure a protocol
  address with one of the following attributes of the DISPATCHERS parameter:
  – ADDRESS
  – DESCRIPTION
  – PROTOCOL
■ You want to configure one or more of the optional dispatcher attributes:
  – DISPATCHERS
  – CONNECTIONS
  – SESSIONS
  – TICKS
  – LISTENER
  – MULTIPLEX
  – POOL
  – SERVICE


Note: Database Configuration Assistant helps you configure this
parameter.

DISPATCHERS Initialization Parameter Attributes
This section provides brief descriptions of the attributes that can be specified with the
DISPATCHERS initialization parameter.
A protocol address is required and is specified using one or more of the following
attributes:
Attribute     Description
ADDRESS       Specify the network protocol address of the endpoint on which
              the dispatchers listen.
DESCRIPTION   Specify the network description of the endpoint on which the
              dispatchers listen, including the network protocol address. The
              syntax is as follows:
              (DESCRIPTION=(ADDRESS=...))
PROTOCOL      Specify the network protocol for which the dispatcher
              generates a listening endpoint. For example:
              (PROTOCOL=tcp)
              See the Oracle Database Net Services Reference for further
              information about protocol address syntax.

The following attribute specifies how many dispatchers this configuration should
have. It is optional and defaults to 1.

Attribute     Description
DISPATCHERS   Specify the initial number of dispatchers to start.

The following attributes tell the instance about the network attributes of each
dispatcher of this configuration. They are all optional.

Attribute     Description
CONNECTIONS   Specify the maximum number of network connections to allow
              for each dispatcher.
SESSIONS      Specify the maximum number of network sessions to allow for
              each dispatcher.
TICKS         Specify the duration of a TICK in seconds. A TICK is a unit of
              time in terms of which the connection pool timeout can be
              specified. Used for connection pooling.
LISTENER      Specify an alias name for the listeners with which the PMON
              process registers dispatcher information. Set the alias to a name
              that is resolved through a naming method.
MULTIPLEX     Used to enable the Oracle Connection Manager session
              multiplexing feature.
POOL          Used to enable connection pooling.
SERVICE       Specify the service names the dispatchers register with the
              listeners.


You can specify either an entire attribute name or a substring consisting of at least the
first three characters. For example, you can specify SESSIONS=3, SES=3, SESS=3, or
SESSI=3, and so forth.

See Also: Oracle Database Reference for more detailed descriptions
of the attributes of the DISPATCHERS initialization parameter

Determining the Number of Dispatchers
Once you know the number of possible connections for each process for the operating
system, calculate the initial number of dispatchers to create during instance startup,
for each network protocol, using the following formula:
Number of dispatchers =
CEIL ( max. concurrent sessions / connections for each dispatcher )

CEIL returns the result rounded up to the next whole integer.
For example, assume a system that can support 970 connections for each process, and
that has:
■ A maximum of 4000 sessions concurrently connected through TCP/IP, and
■ A maximum of 2500 sessions concurrently connected through TCP/IP with SSL

The DISPATCHERS attribute for TCP/IP should be set to a minimum of five
dispatchers (4000 / 970), and for TCP/IP with SSL to a minimum of three dispatchers
(2500 / 970):

DISPATCHERS='(PROT=tcp)(DISP=5)', '(PROT=tcps)(DISP=3)'

Depending on performance, you may need to adjust the number of dispatchers.

Setting the Initial Number of Dispatchers
You can specify multiple dispatcher configurations by setting DISPATCHERS to a
comma separated list of strings, or by specifying multiple DISPATCHERS parameters
in the initialization file. If you specify DISPATCHERS multiple times, the lines must be
adjacent to each other in the initialization parameter file. Internally, Oracle Database
assigns an INDEX value (beginning with zero) to each DISPATCHERS parameter. You
can later refer to that DISPATCHERS parameter in an ALTER SYSTEM statement by its
index number.
Some examples of setting the DISPATCHERS initialization parameter follow.
Example: Typical  This is a typical example of setting the DISPATCHERS initialization
parameter.
DISPATCHERS="(PROTOCOL=TCP)(DISPATCHERS=2)"

Example: Forcing the IP Address Used for Dispatchers The following hypothetical
example will create two dispatchers that will listen on the specified IP address. The
address must be a valid IP address for the host that the instance is on. (The host may
be configured with multiple IP addresses.)
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(HOST=144.25.16.201))(DISPATCHERS=2)"

Example: Forcing the Port Used by Dispatchers To force the dispatchers to use a
specific port as the listening endpoint, add the PORT attribute as follows:
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(PORT=5000))"
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(PORT=5001))"


Altering the Number of Dispatchers
You can control the number of dispatcher processes in the instance. Unlike the number
of shared servers, the number of dispatchers does not change automatically. You
change the number of dispatchers explicitly with the ALTER SYSTEM statement. In
this release of Oracle Database, you can increase the number of dispatchers to more
than the limit specified by the MAX_DISPATCHERS parameter. It is planned that
MAX_DISPATCHERS will be taken into consideration in a future release.
Monitor the following views to determine the load on the dispatcher processes:
■ V$QUEUE
■ V$DISPATCHER
■ V$DISPATCHER_RATE

See Also: Oracle Database Performance Tuning Guide for
information about monitoring these views to determine dispatcher
load and performance

If these views indicate that the load on the dispatcher processes is consistently high,
then performance may be improved by starting additional dispatcher processes to
route user requests. In contrast, if the load on dispatchers is consistently low, reducing
the number of dispatchers may improve performance.
To dynamically alter the number of dispatchers when the instance is running, use the
ALTER SYSTEM statement to modify the DISPATCHERS attribute setting for an
existing dispatcher configuration. You can also add new dispatcher configurations to
start dispatchers with different network attributes.
When you reduce the number of dispatchers for a particular dispatcher configuration,
the dispatchers are not immediately removed. Rather, as users disconnect, Oracle
Database terminates dispatchers down to the limit you specify in DISPATCHERS.
For example, suppose the instance was started with this DISPATCHERS setting in the
initialization parameter file:
DISPATCHERS='(PROT=tcp)(DISP=2)', '(PROT=tcps)(DISP=2)'

To increase the number of dispatchers for the TCP/IP protocol from 2 to 3, and
decrease the number of dispatchers for the TCP/IP with SSL protocol from 2 to 1, you
can issue the following statement:
ALTER SYSTEM SET DISPATCHERS = '(INDEX=0)(DISP=3)', '(INDEX=1)(DISP=1)';

or
ALTER SYSTEM SET DISPATCHERS = '(PROT=tcp)(DISP=3)', '(PROT=tcps)(DISP=1)';

Note: You need not specify (DISP=1). It is optional because 1 is
the default value for the DISPATCHERS attribute.

If fewer than three dispatcher processes currently exist for TCP/IP, the database
creates new ones. If more than one dispatcher process currently exists for TCP/IP with
SSL, then the database terminates the extra ones as the connected users disconnect.
Suppose that instead of changing the number of dispatcher processes for the TCP/IP
protocol, you want to add another TCP/IP dispatcher that supports connection
pooling. You can do so by entering the following statement:


ALTER SYSTEM SET DISPATCHERS = '(INDEX=2)(PROT=tcp)(POOL=on)';

The INDEX attribute is needed to add the new dispatcher configuration. If you omit
(INDEX=2) in the preceding statement, then the TCP/IP dispatcher configuration at
INDEX 0 will be changed to support connection pooling, and the number of
dispatchers for that configuration will be reduced to 1, which is the default when the
number of dispatchers (attribute DISPATCHERS) is not specified.
Notes on Altering Dispatchers
■ The INDEX keyword can be used to identify which dispatcher configuration to
modify. If you do not specify INDEX, then the first dispatcher configuration
matching the DESCRIPTION, ADDRESS, or PROTOCOL specified will be modified.
If no match is found among the existing dispatcher configurations, then a new
dispatcher will be added.
■ The INDEX value can range from 0 to n-1, where n is the current number of
dispatcher configurations. If your ALTER SYSTEM statement specifies an INDEX
value equal to n, a new dispatcher configuration will be added.
■ To see the values of the current dispatcher configurations (that is, the number of
dispatchers, whether connection pooling is on, and so forth), query the
V$DISPATCHER_CONFIG dynamic performance view. To see which dispatcher
configuration a dispatcher is associated with, query the CONF_INDX column of the
V$DISPATCHER view. Example queries follow this list.
■ When you change the DESCRIPTION, ADDRESS, PROTOCOL, CONNECTIONS,
TICKS, MULTIPLEX, or POOL attributes of a dispatcher configuration, the change
does not take effect for existing dispatchers but only for new dispatchers.
Therefore, in order for the change to be effective for all dispatchers associated with
a configuration, you must forcibly kill existing dispatchers after altering the
DISPATCHERS parameter, and let the database start new ones in their place with
the newly specified properties.
The attributes LISTENER and SERVICES are not subject to the same constraint.
They apply to existing dispatchers associated with the modified configuration.
Attribute SESSIONS applies to existing dispatchers only if its value is reduced.
However, if its value is increased, it is applied only to newly started dispatchers.
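
For example, the following queries show the current configurations and the
configuration index each running dispatcher belongs to. This is a sketch; the columns
shown are a subset of those documented in Oracle Database Reference.

SELECT CONF_INDX, NETWORK, DISPATCHERS, POOL FROM V$DISPATCHER_CONFIG;

SELECT NAME, STATUS, CONF_INDX FROM V$DISPATCHER;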

Shutting Down Specific Dispatcher Processes
With the ALTER SYSTEM statement, you leave it up to the database to determine
which dispatchers to shut down to reduce the number of dispatchers. Alternatively, it
is possible to shut down specific dispatcher processes. To identify the name of the
specific dispatcher process to shut down, use the V$DISPATCHER dynamic
performance view.
SELECT NAME, NETWORK FROM V$DISPATCHER;

Each dispatcher is uniquely identified by a name of the form Dnnn.
To shut down dispatcher D002, issue the following statement:
ALTER SYSTEM SHUTDOWN IMMEDIATE 'D002';

The IMMEDIATE keyword stops the dispatcher from accepting new connections and
the database immediately terminates all existing connections through that dispatcher.
After all sessions are cleaned up, the dispatcher process shuts down. If IMMEDIATE
were not specified, the dispatcher would wait until all of its users disconnected and all
of its connections terminated before shutting down.

Disabling Shared Servers
You disable shared server by setting SHARED_SERVERS to 0. No new client can
connect in shared mode. However, when you set SHARED_SERVERS to 0, Oracle
Database retains some shared servers until all shared server connections are closed.
The number of shared servers retained is either the number specified by the preceding
setting of SHARED_SERVERS or the value of the MAX_SHARED_SERVERS parameter,
whichever is smaller. If both SHARED_SERVERS and MAX_SHARED_SERVERS are set to
0, then all shared servers will terminate and requests from remaining shared server
clients will be queued until the value of SHARED_SERVERS or MAX_SHARED_SERVERS
is raised again.
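
For example, this statement disables shared server for new connections:

ALTER SYSTEM SET SHARED_SERVERS = 0;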
To terminate dispatchers once all shared server clients disconnect, enter this statement:
ALTER SYSTEM SET DISPATCHERS = '';

Monitoring Shared Server
The following views are useful for obtaining information about your shared server
configuration and for monitoring performance.
View                        Description

V$DISPATCHER                Provides information on the dispatcher processes,
                            including name, network address, status, various
                            usage statistics, and index number.

V$DISPATCHER_CONFIG         Provides configuration information about the
                            dispatchers.

V$DISPATCHER_RATE           Provides rate statistics for the dispatcher processes.

V$QUEUE                     Contains information on the shared server message
                            queues.

V$SHARED_SERVER             Contains information on the shared servers.

V$CIRCUIT                   Contains information about virtual circuits, which are
                            user connections to the database through dispatchers
                            and servers.

V$SHARED_SERVER_MONITOR     Contains information for tuning shared server.

V$SGA                       Contains size information about various system global
                            area (SGA) groups. May be useful when tuning shared
                            server.

V$SGASTAT                   Contains detailed statistical information about the
                            SGA, useful for tuning.

V$SHARED_POOL_RESERVED      Lists statistics to help tune the reserved pool and
                            space within the shared pool.
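
For example, a common health check (a sketch; see Oracle Database Reference for the
full V$QUEUE column list) reports the average wait time per request in the common
request queue:

SELECT DECODE(TOTALQ, 0, 'NO REQUESTS',
       WAIT/TOTALQ || ' HUNDREDTHS OF SECONDS') "AVERAGE WAIT TIME PER REQUEST"
  FROM V$QUEUE
 WHERE TYPE = 'COMMON';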

See Also:
■ Oracle Database Reference for detailed descriptions of these
views
■ Oracle Database Performance Tuning Guide for specific
information about monitoring and tuning shared server


About Oracle Database Background Processes
To maximize performance and accommodate many users, a multiprocess Oracle
Database system uses background processes. Background processes consolidate
functions that would otherwise be handled by multiple database programs running
for each user process. Background processes asynchronously perform I/O and
monitor other Oracle Database processes to provide increased parallelism for better
performance and reliability.
Table 4–1 describes the basic Oracle Database background processes, many of which
are discussed in more detail elsewhere in this book. The use of additional database
server features or options can cause more background processes to be present. For
example, when you use Advanced Queuing, the queue monitor (QMNn) background
process is present. When you specify the FILE_MAPPING initialization parameter for
mapping datafiles to physical devices on a storage subsystem, then the FMON process
is present.
Table 4–1    Oracle Database Background Processes

Database writer (DBWn)
    The database writer writes modified blocks from the database buffer cache to the
    datafiles. Oracle Database allows a maximum of 20 database writer processes
    (DBW0-DBW9 and DBWa-DBWj). The DB_WRITER_PROCESSES initialization
    parameter specifies the number of DBWn processes. The database selects an
    appropriate default setting for this initialization parameter or adjusts a
    user-specified setting based on the number of CPUs and the number of processor
    groups.
    For more information about setting the DB_WRITER_PROCESSES initialization
    parameter, see the Oracle Database Performance Tuning Guide.

Log writer (LGWR)
    The log writer process writes redo log entries to disk. Redo log entries are
    generated in the redo log buffer of the system global area (SGA). LGWR writes the
    redo log entries sequentially into a redo log file. If the database has a multiplexed
    redo log, then LGWR writes the redo log entries to a group of redo log files. See
    Chapter 6, "Managing the Redo Log" for information about the log writer process.

Checkpoint (CKPT)
    At specific times, all modified database buffers in the system global area are
    written to the datafiles by DBWn. This event is called a checkpoint. The checkpoint
    process is responsible for signalling DBWn at checkpoints and updating all the
    datafiles and control files of the database to indicate the most recent checkpoint.

System monitor (SMON)
    The system monitor performs recovery when a failed instance starts up again. In a
    Real Application Clusters database, the SMON process of one instance can perform
    instance recovery for other instances that have failed. SMON also cleans up
    temporary segments that are no longer in use and recovers dead transactions
    skipped during system failure and instance recovery because of file-read or offline
    errors. These transactions are eventually recovered by SMON when the tablespace
    or file is brought back online.

Process monitor (PMON)
    The process monitor performs process recovery when a user process fails. PMON
    is responsible for cleaning up the cache and freeing resources that the process was
    using. PMON also checks on the dispatcher processes (described later in this table)
    and server processes and restarts them if they have failed.

Archiver (ARCn)
    One or more archiver processes copy the redo log files to archival storage when
    they are full or a log switch occurs. Archiver processes are the subject of Chapter 7,
    "Managing Archived Redo Logs".


Recoverer (RECO)
    The recoverer process is used to resolve distributed transactions that are pending
    because of a network or system failure in a distributed database. At timed
    intervals, the local RECO attempts to connect to remote databases and
    automatically complete the commit or rollback of the local portion of any pending
    distributed transactions. For information about this process and how to start it,
    see Chapter 33, "Managing Distributed Transactions".

Dispatcher (Dnnn)
    Dispatchers are optional background processes, present only when the shared
    server configuration is used. Shared server was discussed previously in
    "Configuring Oracle Database for Shared Server" on page 4-4.

Global Cache Service (LMS)
    In a Real Application Clusters environment, this process manages resources and
    provides inter-instance resource control.

See Also: Oracle Database Concepts for more information about
Oracle Database background processes

Managing Processes for Parallel SQL Execution
Note: The parallel execution feature described in this section is
available with the Oracle Database Enterprise Edition.

This section describes how to manage parallel processing of SQL statements. In this
configuration, Oracle Database can divide the work of processing a SQL statement
among multiple parallel processes.
The execution of many SQL statements can be parallelized. The degree of parallelism
is the number of parallel execution servers that can be associated with a single
operation. The degree of parallelism is determined by any of the following:
■ A PARALLEL clause in a statement
■ For objects referred to in a query, the PARALLEL clause that was used when the
object was created or altered
■ A parallel hint inserted into the statement
■ A default determined by the database

An example of using parallel SQL execution is contained in "Parallelizing Table
Creation" on page 15-8.
The following topics are contained in this section:
■ About Parallel Execution Servers
■ Altering Parallel Execution for a Session

See Also:
■ Oracle Database Concepts for a description of parallel execution
■ Oracle Database Performance Tuning Guide for information about
using parallel hints


About Parallel Execution Servers
When an instance starts up, Oracle Database creates a pool of parallel execution
servers, which are available for any parallel operation. A process called the parallel
execution coordinator dispatches the execution of a pool of parallel execution servers
and coordinates the sending of results from all of these parallel execution servers back
to the user.
The parallel execution servers are enabled by default, because the default value of the
PARALLEL_MAX_SERVERS initialization parameter is greater than 0. The processes are
available for use by the various Oracle Database features that are capable of exploiting
parallelism. Related initialization parameters are tuned by the database for the
majority of users, but you can alter them as needed to suit your environment. For ease
of tuning, some parameters can be altered dynamically.
Parallelism can be used by a number of features, including transaction recovery,
replication, and SQL execution. In the case of parallel SQL execution, the topic
discussed in this book, parallel server processes remain associated with a statement
throughout its execution phase. When the statement is completely processed, these
processes become available to process other statements.
See Also: Oracle Database Data Warehousing Guide for more
information about using and tuning parallel execution, including
parallel SQL execution

Altering Parallel Execution for a Session
You control parallel SQL execution for a session using the ALTER SESSION statement.

Disabling Parallel SQL Execution
You disable parallel SQL execution with an ALTER SESSION DISABLE PARALLEL
DML|DDL|QUERY statement. All subsequent DML (INSERT, UPDATE, DELETE), DDL
(CREATE, ALTER), or query (SELECT) operations are executed serially after such a
statement is issued. They will be executed serially regardless of any PARALLEL clause
associated with the statement or parallel attribute associated with the table or indexes
involved.
The following statement disables parallel DDL operations:
ALTER SESSION DISABLE PARALLEL DDL;

Enabling Parallel SQL Execution
You enable parallel SQL execution with an ALTER SESSION ENABLE PARALLEL
DML|DDL|QUERY statement. Subsequently, when a PARALLEL clause or parallel hint is
associated with a statement, those DML, DDL, or query statements will execute in
parallel. By default, parallel execution is enabled for DDL and query statements.
A DML statement can be parallelized only if you specifically issue an ALTER
SESSION statement to enable parallel DML:
ALTER SESSION ENABLE PARALLEL DML;

Forcing Parallel SQL Execution
You can force parallel execution of all subsequent DML, DDL, or query statements for
which parallelization is possible with the ALTER SESSION FORCE PARALLEL
DML|DDL|QUERY statement. Additionally you can force a specific degree of
parallelism to be in effect, overriding any PARALLEL clause associated with
subsequent statements. If you do not specify a degree of parallelism in this statement,
the default degree of parallelism is used. However, a degree of parallelism specified in
a statement through a hint will override the degree being forced.
The following statement forces parallel execution of subsequent statements and sets
the overriding degree of parallelism to 5:
ALTER SESSION FORCE PARALLEL DDL PARALLEL 5;

Managing Processes for External Procedures
External procedures are procedures written in one language that are called from
another program in a different language. An example is a PL/SQL program calling
one or more C routines that are required to perform special-purpose processing.
These callable routines are stored in a dynamic link library (DLL), or a libunit in the
case of a Java class method, and are registered with the base language. Oracle
Database provides a special-purpose interface, the call specification (call spec), that
enables users to call external procedures from other languages.
To call an external procedure, an application alerts a network listener process, which in
turn starts an external procedure agent. The default name of the agent is extproc,
and this agent must reside on the same computer as the database server. Using the
network connection established by the listener, the application passes to the external
procedure agent the name of the DLL or libunit, the name of the external procedure,
and any relevant parameters. The external procedure agent then loads, DLL or libunit,
runs the external procedure, and passes back to the application any values returned by
the external procedure.
To control access to DLLs, the database administrator grants execute privileges on the
appropriate DLLs to application developers. The application developers write the
external procedures and grant execute privilege on specific external procedures to
other users.
Note: The external library (DLL file) must be statically linked. In
other words, it must not reference any external symbols from other
external libraries (DLL files). Oracle Database does not resolve such
symbols, so they can cause your external procedure to fail.

The environment for calling external procedures, consisting of tnsnames.ora and
listener.ora entries, is configured by default during the installation of your
database. You may need to perform additional network configuration steps for a
higher level of security. These steps are documented in the Oracle Database Net Services
Administrator's Guide.
See Also: Oracle Database Application Developer's Guide Fundamentals for information about external procedures

Terminating Sessions
Sometimes it is necessary to terminate current user sessions. For example, you might
want to perform an administrative operation and need to terminate all
non-administrative sessions. This section describes the various aspects of terminating
sessions, and contains the following topics:
■ Identifying Which Session to Terminate
■ Terminating an Active Session
■ Terminating an Inactive Session

When a session is terminated, any active transactions of the session are rolled back,
and resources held by the session (such as locks and memory areas) are immediately
released and available to other sessions.
You terminate a current session using the SQL statement ALTER SYSTEM KILL
SESSION. The following statement terminates the session whose session identifier is 7
and serial number is 15:
ALTER SYSTEM KILL SESSION '7,15';

Identifying Which Session to Terminate
To identify which session to terminate, specify the session index number and serial
number. To identify the session identifier (SID) and serial number of a session, query
the V$SESSION dynamic performance view. For example, the following query
identifies all sessions for the user jward:
SELECT SID, SERIAL#, STATUS
FROM V$SESSION
WHERE USERNAME = 'JWARD';

  SID  SERIAL#  STATUS
----- -------- --------
    7       15 ACTIVE
   12       63 INACTIVE

A session is ACTIVE when it is making a SQL call to Oracle Database. A session is
INACTIVE if it is not making a SQL call to the database.
See Also: Oracle Database Reference for a description of the status
values for a session

Terminating an Active Session
If a user session is processing a transaction (ACTIVE status) when you terminate the
session, the transaction is rolled back and the user immediately receives the following
message:
ORA-00028: your session has been killed

If, after receiving the ORA-00028 message, a user submits additional statements
before reconnecting to the database, Oracle Database returns the following message:
ORA-01012: not logged on

An active session cannot be interrupted when it is performing network I/O or rolling
back a transaction. Such a session cannot be terminated until the operation completes.
In this case, the session holds all resources until it is terminated. Additionally, the
session that issues the ALTER SYSTEM statement to terminate a session waits up to 60
seconds for the session to be terminated. If the operation that cannot be interrupted
continues past one minute, the issuer of the ALTER SYSTEM statement receives a
message indicating that the session has been marked to be terminated. A session
marked to be terminated is indicated in V$SESSION with a status of KILLED and a
server that is something other than PSEUDO.


Terminating an Inactive Session
If the session is not making a SQL call to Oracle Database (is INACTIVE) when it is
terminated, the ORA-00028 message is not returned immediately. The message is not
returned until the user subsequently attempts to use the terminated session.
When an inactive session has been terminated, the STATUS of the session in the
V$SESSION view is KILLED. The row for the terminated session is removed from
V$SESSION after the user attempts to use the session again and receives the
ORA-00028 message.
In the following example, an inactive session is terminated. First, V$SESSION is
queried to identify the SID and SERIAL# of the session, and then the session is
terminated.
SELECT SID, SERIAL#, STATUS, SERVER
FROM V$SESSION
WHERE USERNAME = 'JWARD';

  SID  SERIAL#  STATUS     SERVER
----- -------- ---------- ---------
    7       15 INACTIVE   DEDICATED
   12       63 INACTIVE   DEDICATED
2 rows selected.

ALTER SYSTEM KILL SESSION '7,15';
Statement processed.

SELECT SID, SERIAL#, STATUS, SERVER
FROM V$SESSION
WHERE USERNAME = 'JWARD';

  SID  SERIAL#  STATUS     SERVER
----- -------- ---------- ---------
    7       15 KILLED     PSEUDO
   12       63 INACTIVE   DEDICATED
2 rows selected.

Monitoring the Operation of Your Database
It is important that you monitor the operation of your database on a regular basis.
Doing so not only informs you about errors that have not yet come to your attention
but also gives you a better understanding of the normal operation of your database.
Being familiar with normal behavior in turn helps you recognize when something is
wrong.
This section describes some of the options available to you for monitoring the
operation of your database.

Server-Generated Alerts
A server-generated alert is a notification from the Oracle Database server of an
impending problem. The notification may contain suggestions for correcting the
problem. Notifications are also provided when the problem condition has been
cleared.
Alerts are automatically generated when a problem occurs or when data does not
match expected values for metrics, such as the following:
■ Physical Reads Per Second
■ User Commits Per Second
■ SQL Service Response Time

Server-generated alerts can be based on threshold levels or can be issued simply
because an event has occurred. Threshold-based alerts can be triggered at both threshold
warning and critical levels. The value of these levels can be customer-defined or
internal values, and some alerts have default threshold levels which you can change if
appropriate. For example, by default a server-generated alert is generated for
tablespace space usage when the percentage of space usage exceeds either the 85%
warning or 97% critical threshold level. Examples of alerts not based on threshold
levels are:
■ Snapshot Too Old
■ Resumable Session Suspended
■ Recovery Area Space Usage

An alert message is sent to the predefined persistent queue ALERT_QUE owned by the
user SYS. Oracle Enterprise Manager reads this queue and provides notifications
about outstanding server alerts, and sometimes suggests actions for correcting the
problem. The alerts are displayed on the Enterprise Manager console and can be
configured to send email or pager notifications to selected administrators. If an alert
cannot be written to the alert queue, a message about the alert is written to the Oracle
Database alert log.
Background processes periodically flush the data to the Automatic Workload
Repository to capture a history of metric values. The alert history table and
ALERT_QUE are purged automatically by the system at regular intervals.
The most convenient way to set and view threshold values is to use Enterprise
Manager. See Oracle Database 2 Day DBA for instructions.
See Also: Oracle Enterprise Manager Concepts for information
about alerts available with Oracle Enterprise Manager

Using APIs to Administer Server-Generated Alerts
You can view and change threshold settings for the server alert metrics using the
SET_THRESHOLD and GET_THRESHOLD procedures of the DBMS_SERVER_ALERT
PL/SQL package. The DBMS_AQ and DBMS_AQADM packages provide procedures for
accessing and reading alert messages in the alert queue.

See Also: Oracle Database PL/SQL Packages and Types Reference for
information about the DBMS_SERVER_ALERT, DBMS_AQ, and
DBMS_AQADM packages

Setting Threshold Levels The following example shows how to set thresholds with the
SET_THRESHOLD procedure for CPU time for each user call for an instance:
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    DBMS_SERVER_ALERT.CPU_TIME_PER_CALL, DBMS_SERVER_ALERT.OPERATOR_GE, '8000',
    DBMS_SERVER_ALERT.OPERATOR_GE, '10000', 1, 2, 'inst1',
    DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE, 'main.regress.rdbms.dev.us.oracle.com');
END;
/

In this example, a warning alert is issued when CPU time exceeds 8000 microseconds
for each user call and a critical alert is issued when CPU time exceeds 10,000
microseconds for each user call. The arguments include:


■ CPU_TIME_PER_CALL specifies the metric identifier. For a list of supported
metrics, see Oracle Database PL/SQL Packages and Types Reference.
■ The observation period is set to 1 minute. This period specifies the number of
minutes that the condition must deviate from the threshold value before the alert
is issued.
■ The number of consecutive occurrences is set to 2. This number specifies how
many times the metric value must violate the threshold values before the alert is
generated.
■ The name of the instance is set to inst1.
■ The constant DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE specifies the object
type on which the threshold is set. In this example, the service name is
main.regress.rdbms.dev.us.oracle.com.

Retrieving Threshold Information To retrieve threshold values, use the GET_THRESHOLD
procedure. For example:
DECLARE
  warning_operator        BINARY_INTEGER;
  warning_value           VARCHAR2(60);
  critical_operator       BINARY_INTEGER;
  critical_value          VARCHAR2(60);
  observation_period      BINARY_INTEGER;
  consecutive_occurrences BINARY_INTEGER;
BEGIN
  DBMS_SERVER_ALERT.GET_THRESHOLD(
    DBMS_SERVER_ALERT.CPU_TIME_PER_CALL, warning_operator, warning_value,
    critical_operator, critical_value, observation_period,
    consecutive_occurrences, 'inst1',
    DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE, 'main.regress.rdbms.dev.us.oracle.com');
  DBMS_OUTPUT.PUT_LINE('Warning operator:        ' || warning_operator);
  DBMS_OUTPUT.PUT_LINE('Warning value:           ' || warning_value);
  DBMS_OUTPUT.PUT_LINE('Critical operator:       ' || critical_operator);
  DBMS_OUTPUT.PUT_LINE('Critical value:          ' || critical_value);
  DBMS_OUTPUT.PUT_LINE('Observation period:      ' || observation_period);
  DBMS_OUTPUT.PUT_LINE('Consecutive occurrences: ' || consecutive_occurrences);
END;
/

You can also check specific threshold settings with the DBA_THRESHOLDS view. For
example:
SELECT metrics_name, warning_value, critical_value, consecutive_occurrences
FROM DBA_THRESHOLDS
WHERE metrics_name LIKE '%CPU Time%';

Additional APIs to Manage Server-Generated Alerts If you use your own tool rather than
Enterprise Manager to display alerts, you must subscribe to the ALERT_QUE, read the
ALERT_QUE, and display an alert notification after setting the threshold levels for an
alert. To create an agent and subscribe the agent to the ALERT_QUE, use the
CREATE_AQ_AGENT and ADD_SUBSCRIBER procedures of the DBMS_AQADM package.
Next you must associate a database user with the subscribing agent, because only a
user associated with the subscribing agent can access queued messages in the secure
ALERT_QUE. You must also assign the enqueue privilege to the user. Use the
ENABLE_DB_ACCESS and GRANT_QUEUE_PRIVILEGE procedures of the
DBMS_AQADM package.
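
The following is a minimal sketch of these steps, run as a suitably privileged user.
The agent name MY_ALERT_AGENT and the database user ALERT_ADMIN are
illustrative, not part of the documented example:

BEGIN
  -- Create an agent and subscribe it to the alert queue.
  DBMS_AQADM.CREATE_AQ_AGENT(agent_name => 'MY_ALERT_AGENT');
  DBMS_AQADM.ADD_SUBSCRIBER(
    queue_name => 'SYS.ALERT_QUE',
    subscriber => SYS.AQ$_AGENT('MY_ALERT_AGENT', NULL, NULL));
  -- Associate a database user with the subscribing agent so that the
  -- user can access queued messages in the secure ALERT_QUE.
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'MY_ALERT_AGENT',
    db_username => 'ALERT_ADMIN');
  -- Assign the queue privilege to the user, as described above.
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
    privilege  => 'ENQUEUE',
    queue_name => 'SYS.ALERT_QUE',
    grantee    => 'ALERT_ADMIN');
END;
/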


Optionally, you can register with the DBMS_AQ.REGISTER procedure to receive an
asynchronous notification when an alert is enqueued to ALERT_QUE. The notification
can be in the form of email, HTTP post, or PL/SQL procedure.
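As a sketch, a registration might look like the following; the agent name and the
callback procedure alert_admin.process_alert are hypothetical:

BEGIN
  DBMS_AQ.REGISTER(
    SYS.AQ$_REG_INFO_LIST(
      SYS.AQ$_REG_INFO('SYS.ALERT_QUE:MY_ALERT_AGENT',
                       DBMS_AQ.NAMESPACE_AQ,
                       'plsql://alert_admin.process_alert',
                       HEXTORAW('FF'))),
    1);
END;
/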
To read an alert message, you can use the DBMS_AQ.DEQUEUE procedure or
OCIAQDeq call. After the message has been dequeued, use the
DBMS_SERVER_ALERT.EXPAND_MESSAGE procedure to expand the text of the
message.

Viewing Alert Data
The following dictionary views provide information about server alerts:
■ DBA_THRESHOLDS lists the threshold settings defined for the instance.
■ DBA_OUTSTANDING_ALERTS describes the outstanding alerts in the database.
■ DBA_ALERT_HISTORY lists a history of alerts that have been cleared.
■ V$ALERT_TYPES provides information such as group and type for each alert.
■ V$METRICNAME contains the names, identifiers, and other information about the
system metrics.
■ V$METRIC and V$METRIC_HISTORY views contain system-level metric values in
memory.
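
For example, the following query (a sketch; see Oracle Database Reference for the full
column list) displays the reason and suggested action for each outstanding alert:

SELECT REASON, SUGGESTED_ACTION FROM DBA_OUTSTANDING_ALERTS;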
See Also: Oracle Database Reference for information on static data
dictionary views and dynamic performance views

Monitoring the Database Using Trace Files and the Alert Log
Each server and background process can write to an associated trace file. When an
internal error is detected by a process, it dumps information about the error to its trace
file. Some of the information written to a trace file is intended for the database
administrator, and other information is for Oracle Support Services. Trace file
information is also used to tune applications and instances.
The alert log is a special trace file. The alert log of a database is a chronological log of
messages and errors, and includes the following items:
■ All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock
errors (ORA-60) that occur
■ Administrative operations, such as CREATE, ALTER, and DROP statements and
STARTUP, SHUTDOWN, and ARCHIVELOG statements
■ Messages and errors relating to the functions of shared server and dispatcher
processes
■ Errors occurring during the automatic refresh of a materialized view
■ The values of all initialization parameters that had nondefault values at the time
the database and instance start

Oracle Database uses the alert log to record these operations as an alternative to
displaying the information on an operator's console (although some systems also
display information on the console). If an operation is successful, a "completed"
message is written in the alert log, along with a timestamp.
Initialization parameters controlling the location and size of trace files are:
■ BACKGROUND_DUMP_DEST
■ USER_DUMP_DEST
■ MAX_DUMP_FILE_SIZE

These parameters are discussed in the sections that follow.
See Also: Oracle Database Reference for information about
initialization parameters that control the writing to trace files

Using the Trace Files
Check the alert log and other trace files of an instance periodically to learn whether the
background processes have encountered errors. For example, when the log writer
process (LGWR) cannot write to a member of a log group, an error message indicating
the nature of the problem is written to the LGWR trace file and the database alert log.
Such an error message means that a media or I/O problem has occurred and should be
corrected immediately.
Oracle Database also writes values of initialization parameters to the alert log, in
addition to other important statistics.

Specifying the Location of Trace Files
All trace files for background processes and the alert log are written to the directory
specified by the initialization parameter BACKGROUND_DUMP_DEST. All trace files for
server processes are written to the directory specified by the initialization parameter
USER_DUMP_DEST. The names of trace files are operating system specific, but each file
usually includes the name of the process writing the file (such as LGWR and RECO).
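
For example, you can change these locations dynamically; the directory paths shown
here are illustrative only:

ALTER SYSTEM SET BACKGROUND_DUMP_DEST = '/u01/app/oracle/admin/prod/bdump';
ALTER SYSTEM SET USER_DUMP_DEST = '/u01/app/oracle/admin/prod/udump';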
See Also: Your operating system specific Oracle documentation
for information about the names of trace files

Controlling the Size of Trace Files
You can control the maximum size of all trace files (excluding the alert log) using the
initialization parameter MAX_DUMP_FILE_SIZE, which limits the file to the specified
number of operating system blocks. To control the size of an alert log, you must
manually delete the file when you no longer need it. Otherwise the database continues
to append to the file.
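
For example, the following statement limits each trace file to 10240 operating system
blocks (the value is illustrative):

ALTER SYSTEM SET MAX_DUMP_FILE_SIZE = 10240;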
You can safely delete the alert log while the instance is running, although you should
consider making an archived copy of it first. This archived copy could prove valuable
if you should have a future problem that requires investigating the history of an
instance.

Controlling When Oracle Database Writes to Trace Files
Background processes always write to a trace file when appropriate. In the case of the
ARCn background process, it is possible, through an initialization parameter, to
control the amount and type of trace information that is produced. This behavior is
described in "Controlling Trace Output Generated by the Archivelog Process" on
page 7-13. Other background processes do not have this flexibility.
Trace files are written on behalf of server processes whenever internal errors occur.
Additionally, setting the initialization parameter SQL_TRACE = TRUE causes the SQL
trace facility to generate performance statistics for the processing of all SQL statements
for an instance and write them to the USER_DUMP_DEST directory.
Optionally, you can request that trace files be generated for server processes.
Regardless of the current value of the SQL_TRACE initialization parameter, each
session can enable or disable trace logging on behalf of the associated server process

4-22 Oracle Database Administrator’s Guide

Monitoring the Operation of Your Database

by using the SQL statement ALTER SESSION SET SQL_TRACE. This example
enables the SQL trace facility for a specific session:
ALTER SESSION SET SQL_TRACE = TRUE;

Caution: The SQL trace facility for server processes can cause
significant system overhead resulting in severe performance
impact, so you should enable this feature only when collecting
statistics.

Use the DBMS_SESSION or the DBMS_MONITOR package if you want to control SQL
tracing for a session.
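
For example, the following sketch uses DBMS_MONITOR to enable tracing for the
session with SID 7 and serial number 15 (values illustrative, taken from the earlier
example):

BEGIN
  -- Enable tracing, including wait events, for one session.
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 7, serial_num => 15, waits => TRUE, binds => FALSE);
END;
/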

Reading the Trace File for Shared Server Sessions
If shared server is enabled, each session using a dispatcher is routed to a shared server
process, and trace information is written to the server trace file only if the session has
enabled tracing (or if an error is encountered). Therefore, to track tracing for a specific
session that connects using a dispatcher, you might have to explore several shared
server trace files. To help you, Oracle provides a command line utility program,
trcsess, which consolidates all trace information pertaining to a user session in one
place and orders the information by time.
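For example, a typical invocation (file names and session identifiers illustrative)
combines all trace files in the current directory that contain information for session 7,
serial number 15:

trcsess output=combined.trc session=7.15 *.trc
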
See Also: Oracle Database Performance Tuning Guide for
information about using the SQL trace facility and using TKPROF
and trcsess to interpret the generated trace files

Monitoring Locks
Locks are mechanisms that prevent destructive interaction between transactions
accessing the same resource. The resources can be either user objects, such as tables
and rows, or system objects not visible to users, such as shared data structures in
memory and data dictionary rows. Oracle Database automatically obtains and
manages necessary locks when executing SQL statements, so you need not be
concerned with such details. However, the database also lets you lock data manually.
A deadlock can occur when two or more users are waiting for data locked by each
other. Deadlocks prevent some transactions from continuing to work. Oracle Database
automatically detects deadlock situations and resolves them by rolling back one of the
statements involved in the deadlock, thereby releasing one set of the conflicting row
locks.
Oracle Database is designed to avoid deadlocks, and they are not common. Most often
they occur when transactions explicitly override the default locking of the database.
Deadlocks can affect the performance of your database, so Oracle provides some
scripts and views that enable you to monitor locks.
The utllockt.sql script displays, in a tree fashion, the sessions in the system that
are waiting for locks and the locks that they are waiting for. The location of this script
file is operating system dependent.
A second script, catblock.sql, creates the lock views that utllockt.sql needs, so
you must run it before running utllockt.sql.
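For example, on many platforms these scripts reside in the rdbms/admin directory
under the Oracle home (the location is an assumption here; it varies by operating
system), so you might run them from SQL*Plus as follows:

SQL> @?/rdbms/admin/catblock.sql
SQL> @?/rdbms/admin/utllockt.sql
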
The following views can help you to monitor locks:

View                 Description

V$LOCK               Lists the locks currently held by Oracle Database and
                     outstanding requests for a lock or latch

DBA_BLOCKERS         Displays a session if it is holding a lock on an object for
                     which another session is waiting

DBA_WAITERS          Displays a session if it is waiting for a locked object

DBA_DDL_LOCKS        Lists all DDL locks held in the database and all
                     outstanding requests for a DDL lock

DBA_DML_LOCKS        Lists all DML locks held in the database and all
                     outstanding requests for a DML lock

DBA_LOCK             Lists all locks or latches held in the database and all
                     outstanding requests for a lock or latch

DBA_LOCK_INTERNAL    Displays a row for each lock or latch that is being held,
                     and one row for each outstanding request for a lock or
                     latch

See Also:
■ Oracle Database Reference contains detailed descriptions of these
views
■ Oracle Database Concepts contains more information about locks.

Monitoring Wait Events
Wait events are statistics that are incremented by a server process to indicate that it
had to wait for an event to complete before being able to continue processing. A
session could wait for a variety of reasons, including waiting for more input, waiting
for the operating system to complete a service such as a disk write, or waiting for a
lock or latch.
When a session is waiting for resources, it is not doing any useful work. A large
number of waits is a source of concern. Wait event data reveals various symptoms of
problems that might be affecting performance, such as latch contention, buffer
contention, and I/O contention.
Oracle provides several views that display wait event statistics. A discussion of these
views and their role in instance tuning is contained in Oracle Database Performance
Tuning Guide.
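
As a simple sketch, the V$SESSION_WAIT view (listed in the next section) shows what
each session is currently waiting for:

SELECT SID, EVENT, STATE, SECONDS_IN_WAIT
  FROM V$SESSION_WAIT
 ORDER BY SECONDS_IN_WAIT DESC;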

Process and Session Views
This section lists some of the data dictionary views that you can use to monitor an
Oracle Database instance. These views are general in their scope. Other views, more
specific to a process, are discussed in the section of this book where the process is
described.
View                  Description

V$PROCESS             Contains information about the currently active processes

V$LOCKED_OBJECT       Lists all locks acquired by every transaction on the system

V$SESSION             Lists session information for each current session

V$SESS_IO             Contains I/O statistics for each user session

V$SESSION_LONGOPS     Displays the status of various operations that run for
                      longer than 6 seconds (in absolute time). These operations
                      currently include many backup and recovery functions,
                      statistics gathering, and query execution. More operations
                      are added for every Oracle Database release.

V$SESSION_WAIT        Lists the resources or events for which active sessions are
                      waiting

V$SYSSTAT             Contains system-wide statistics

V$RESOURCE_LIMIT      Provides information about current and maximum global
                      resource utilization for some system resources

V$SQLAREA             Contains statistics about shared SQL area and contains
                      one row for each SQL string. Also provides statistics about
                      SQL statements that are in memory, parsed, and ready for
                      execution

V$LATCH               Contains statistics for nonparent latches and summary
                      statistics for parent latches
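
For example, the following query (a sketch using documented columns) maps each
session to its operating system server process:

SELECT S.SID, S.SERIAL#, P.SPID, S.USERNAME, S.PROGRAM
  FROM V$SESSION S, V$PROCESS P
 WHERE S.PADDR = P.ADDR;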

See Also: Oracle Database Reference contains detailed descriptions
of these views


Part II
Oracle Database Structure and Storage
This part describes database structure in terms of its storage components and how to
create and manage those components. It contains the following chapters:
■ Chapter 5, "Managing Control Files"
■ Chapter 6, "Managing the Redo Log"
■ Chapter 7, "Managing Archived Redo Logs"
■ Chapter 8, "Managing Tablespaces"
■ Chapter 9, "Managing Datafiles and Tempfiles"
■ Chapter 10, "Managing the Undo Tablespace"

5
Managing Control Files
This chapter explains how to create and maintain the control files for your database
and contains the following topics:
■ What Is a Control File?
■ Guidelines for Control Files
■ Creating Control Files
■ Troubleshooting After Creating Control Files
■ Backing Up Control Files
■ Recovering a Control File Using a Current Copy
■ Dropping Control Files
■ Displaying Control File Information

See Also: Part III, "Automated File and Storage Management" for
information about control files that are both created and managed
by the Oracle Database server

What Is a Control File?
Every Oracle Database has a control file, which is a small binary file that records the
physical structure of the database. The control file includes:
■ The database name
■ Names and locations of associated datafiles and redo log files
■ The timestamp of the database creation
■ The current log sequence number
■ Checkpoint information

The control file must be available for writing by the Oracle Database server whenever
the database is open. Without the control file, the database cannot be mounted and
recovery is difficult.
The control file of an Oracle Database is created at the same time as the database. By
default, at least one copy of the control file is created during database creation. On
some operating systems the default is to create multiple copies. You should create two
or more copies of the control file during database creation. You can also create control
files later, if you lose control files or want to change particular settings in the control
files.


Guidelines for Control Files
This section describes guidelines you can use to manage the control files for a
database, and contains the following topics:
■ Provide Filenames for the Control Files
■ Multiplex Control Files on Different Disks
■ Back Up Control Files
■ Manage the Size of Control Files

Provide Filenames for the Control Files
You specify control file names using the CONTROL_FILES initialization parameter in
the database initialization parameter file (see "Creating Initial Control Files" on
page 5-3). The instance recognizes and opens all the listed files during startup, and
the instance writes to and maintains all listed control files during database operation.
If you do not specify files for CONTROL_FILES before database creation:
■ If you are not using Oracle-managed files, then the database creates a control file
and uses a default filename. The default name is operating system specific.
■ If you are using Oracle-managed files, then the initialization parameters you set to
enable that feature determine the name and location of the control files, as
described in Chapter 11, "Using Oracle-Managed Files".
■ If you are using Automatic Storage Management, you can place incomplete ASM
filenames in the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST
initialization parameters. ASM then automatically creates control files in the
appropriate places. See "About ASM Filenames" on page 12-34 and "Creating a
Database in ASM" on page 12-42 for more information.

Multiplex Control Files on Different Disks
Every Oracle Database should have at least two control files, each stored on a different
physical disk. If a control file is damaged due to a disk failure, the associated instance
must be shut down. Once the disk drive is repaired, the damaged control file can be
restored using the intact copy of the control file from the other disk and the instance
can be restarted. In this case, no media recovery is required.
The behavior of multiplexed control files is this:
■ The database writes to all filenames listed for the initialization parameter
CONTROL_FILES in the database initialization parameter file.
■ The database reads only the first file listed in the CONTROL_FILES parameter
during database operation.
■ If any of the control files become unavailable during database operation, the
instance becomes inoperable and should be aborted.

Note: Oracle strongly recommends that your database has a
minimum of two control files and that they are located on separate
physical disks.

One way to multiplex control files is to store a control file copy on every disk drive
that stores members of redo log groups, if the redo log is multiplexed. By storing
control files in these locations, you minimize the risk that all control files and all
groups of the redo log will be lost in a single disk failure.

Back Up Control Files
It is very important that you back up your control files. This is true initially, and every
time you change the physical structure of your database. Such structural changes
include:
■ Adding, dropping, or renaming datafiles
■ Adding or dropping a tablespace, or altering the read/write state of the
tablespace
■ Adding or dropping redo log files or groups

The methods for backing up control files are discussed in "Backing Up Control Files"
on page 5-7.

Manage the Size of Control Files
The main determinants of the size of a control file are the values set for the
MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and
MAXINSTANCES parameters in the CREATE DATABASE statement that created the
associated database. Increasing the values of these parameters increases the size of a
control file of the associated database.
See Also:
■ Your operating system specific Oracle documentation contains
more information about the maximum control file size.
■ Oracle Database SQL Reference for a description of the CREATE
DATABASE statement

Creating Control Files
This section describes ways to create control files, and contains the following topics:
■ Creating Initial Control Files
■ Creating Additional Copies, Renaming, and Relocating Control Files
■ Creating New Control Files

Creating Initial Control Files
The initial control files of an Oracle Database are created when you issue the CREATE
DATABASE statement. The names of the control files are specified by the
CONTROL_FILES parameter in the initialization parameter file used during database
creation. The filenames specified in CONTROL_FILES should be fully specified and are
operating system specific. The following is an example of a CONTROL_FILES
initialization parameter:
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u03/oracle/prod/control03.ctl)

If files with the specified names currently exist at the time of database creation, you
must specify the CONTROLFILE REUSE clause in the CREATE DATABASE statement,
or else an error occurs. Also, if the size of the old control file differs from the SIZE
parameter of the new one, you cannot use the REUSE clause.

The size of the control file changes between some releases of Oracle Database, as well
as when the number of files specified in the control file changes. Configuration
parameters such as MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY,
MAXDATAFILES, and MAXINSTANCES affect control file size.
You can subsequently change the value of the CONTROL_FILES initialization
parameter to add more control files or to change the names or locations of existing
control files.
See Also: Your operating system specific Oracle documentation
contains more information about specifying control files.

Creating Additional Copies, Renaming, and Relocating Control Files
You can create an additional control file copy for multiplexing by copying an existing
control file to a new location and adding the file name to the list of control files.
Similarly, you rename an existing control file by copying the file to its new name or
location, and changing the file name in the control file list. In both cases, to guarantee
that control files do not change during the procedure, shut down the database before
copying the control file.
To add a multiplexed copy of the current control file or to rename a control file:
1. Shut down the database.
2. Copy an existing control file to a new location, using operating system
commands.
3. Edit the CONTROL_FILES parameter in the database initialization parameter file
to add the new control file name, or to change the existing control filename.
4. Restart the database.
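
For example (file names illustrative, following the examples in this chapter), you
might copy an existing control file and extend the CONTROL_FILES parameter as
follows:

% cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control04.ctl

CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
                 /u02/oracle/prod/control02.ctl,
                 /u03/oracle/prod/control03.ctl,
                 /u04/oracle/prod/control04.ctl)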

Creating New Control Files
This section discusses when and how to create new control files.

When to Create New Control Files
It is necessary for you to create new control files in the following situations:
■ All control files for the database have been permanently damaged and you do not
have a control file backup.
■ You want to change the database name.
For example, you would change a database name if it conflicted with another
database name in a distributed environment.

Note: You can change the database name and DBID (internal
database identifier) using the DBNEWID utility. See Oracle Database
Utilities for information about using this utility.

■ The compatibility level is set to a value that is earlier than 10.2.0, and you must
make a change to an area of database configuration that relates to any of the
following parameters from the CREATE DATABASE or CREATE CONTROLFILE
commands: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and
MAXINSTANCES. If compatibility is 10.2.0 or later, you do not have to create new
control files when you make such a change; the control files automatically expand,
if necessary, to accommodate the new configuration information.


For example, assume that when you created the database or recreated the control
files, you set MAXLOGFILES to 3. Suppose that now you want to add a fourth redo
log file group to the database with the ALTER DATABASE command. If
compatibility is set to 10.2.0 or later, you can do so and the control files
automatically expand to accommodate the new log file information. However, with
compatibility set earlier than 10.2.0, your ALTER DATABASE command would
generate an error, and you would have to first create new control files.
For information on compatibility level, see "The COMPATIBLE Initialization
Parameter and Irreversible Compatibility" on page 2-34.

The CREATE CONTROLFILE Statement
You can create a new control file for a database using the CREATE CONTROLFILE
statement. The following statement creates a new control file for the prod database (a
database that formerly used a different database name):
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log',
'/u01/oracle/prod/redo01_02.log'),
GROUP 2 ('/u01/oracle/prod/redo02_01.log',
'/u01/oracle/prod/redo02_02.log'),
GROUP 3 ('/u01/oracle/prod/redo03_01.log',
'/u01/oracle/prod/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M,
'/u01/oracle/prod/rbs01.dbs' SIZE 5M,
'/u01/oracle/prod/users01.dbs' SIZE 5M,
'/u01/oracle/prod/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;

Cautions:
■ The CREATE CONTROLFILE statement can potentially damage
specified datafiles and redo log files. Omitting a filename can
cause loss of the data in that file, or loss of access to the entire
database. Use caution when issuing this statement and be sure
to follow the instructions in "Steps for Creating New Control
Files".
■ If the database had forced logging enabled before creating the
new control file, and you want it to continue to be enabled,
then you must specify the FORCE LOGGING clause in the
CREATE CONTROLFILE statement. See "Specifying FORCE
LOGGING Mode" on page 2-18.

See Also: Oracle Database SQL Reference describes the complete
syntax of the CREATE CONTROLFILE statement

Steps for Creating New Control Files
Complete the following steps to create a new control file.


1. Make a list of all datafiles and redo log files of the database.
   If you follow recommendations for control file backups as discussed in "Backing
   Up Control Files" on page 5-7, you will already have a list of datafiles and redo
   log files that reflect the current structure of the database. However, if you have
   no such list, executing the following statements will produce one.
   SELECT MEMBER FROM V$LOGFILE;
   SELECT NAME FROM V$DATAFILE;
   SELECT VALUE FROM V$PARAMETER WHERE NAME = 'control_files';
   If you have no such lists and your control file has been damaged so that the
   database cannot be opened, try to locate all of the datafiles and redo log files that
   constitute the database. Any files not specified in step 5 are not recoverable once
   a new control file has been created. Moreover, if you omit any of the files that
   make up the SYSTEM tablespace, you might not be able to recover the database.
2. Shut down the database.
   If the database is open, shut down the database normally if possible. Use the
   IMMEDIATE or ABORT clauses only as a last resort.
3. Back up all datafiles and redo log files of the database.
4. Start up a new instance, but do not mount or open the database:
   STARTUP NOMOUNT
5. Create a new control file for the database using the CREATE CONTROLFILE
   statement.
   When creating a new control file, specify the RESETLOGS clause if you have lost
   any redo log groups in addition to control files. In this case, you will need to
   recover from the loss of the redo logs (step 8). You must specify the RESETLOGS
   clause if you have renamed the database. Otherwise, select the NORESETLOGS
   clause.
6. Store a backup of the new control file on an offline storage device. See "Backing
   Up Control Files" on page 5-7 for instructions for creating a backup.
7. Edit the CONTROL_FILES initialization parameter for the database to indicate all
   of the control files now part of your database as created in step 5 (not including
   the backup control file). If you are renaming the database, edit the DB_NAME
   parameter in your instance parameter file to specify the new name.
8. Recover the database if necessary. If you are not recovering the database, skip to
   step 9.
   If you are creating the control file as part of recovery, recover the database. If the
   new control file was created using the NORESETLOGS clause (step 5), you can
   recover the database with complete, closed database recovery.
   If the new control file was created using the RESETLOGS clause, you must
   specify USING BACKUP CONTROL FILE. If you have lost online or archived redo
   logs or datafiles, use the procedures for recovering those files.

   See Also: Oracle Database Backup and Recovery Basics and Oracle
   Database Backup and Recovery Advanced User's Guide for information
   about recovering your database and methods of recovering a lost
   control file

9. Open the database using one of the following methods:


■ If you did not perform recovery, or you performed complete, closed database
recovery in step 8, open the database normally.
ALTER DATABASE OPEN;
■ If you specified RESETLOGS when creating the control file, use the ALTER
DATABASE statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;

The database is now open and available for use.

Troubleshooting After Creating Control Files
After issuing the CREATE CONTROLFILE statement, you may encounter some errors.
This section describes the most common control file errors:
■ Checking for Missing or Extra Files
■ Handling Errors During CREATE CONTROLFILE

Checking for Missing or Extra Files
After creating a new control file and using it to open the database, check the alert log
to see if the database has detected inconsistencies between the data dictionary and the
control file, such as a datafile that the data dictionary includes but the control file does
not list.
If a datafile exists in the data dictionary but not in the new control file, the database
creates a placeholder entry in the control file under the name MISSINGnnnn, where
nnnn is the file number in decimal. MISSINGnnnn is flagged in the control file as
being offline and requiring media recovery.
If the actual datafile corresponding to MISSINGnnnn is read-only or offline normal,
then you can make the datafile accessible by renaming MISSINGnnnn to the name of
the actual datafile. If MISSINGnnnn corresponds to a datafile that was not read-only
or offline normal, then you cannot use the rename operation to make the datafile
accessible, because the datafile requires media recovery that is precluded by the results
of RESETLOGS. In this case, you must drop the tablespace containing the datafile.
Conversely, if a datafile listed in the control file is not present in the data dictionary,
then the database removes references to it from the new control file. In both cases, the
database includes an explanatory message in the alert log to let you know what was
found.

Handling Errors During CREATE CONTROLFILE
If Oracle Database sends you an error (usually error ORA-01173, ORA-01176,
ORA-01177, ORA-01215, or ORA-01216) when you attempt to mount and open the
database after creating a new control file, the most likely cause is that you omitted a
file from the CREATE CONTROLFILE statement or included one that should not have
been listed. In this case, you should restore the files you backed up in step 3 on
page 5-6 and repeat the procedure from step 4, using the correct filenames.

Backing Up Control Files
Use the ALTER DATABASE BACKUP CONTROLFILE statement to back up your
control files. You have two options:


1. Back up the control file to a binary file (duplicate of existing control file) using
   the following statement:
   ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp';
2. Produce SQL statements that can later be used to re-create your control file:
   ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
   This command writes a SQL script to the database trace file where it can be
   captured and edited to reproduce the control file.
See Also: Oracle Database Backup and Recovery Basics for more
information on backing up your control files

Recovering a Control File Using a Current Copy
This section presents ways that you can recover your control file from a current
backup or from a multiplexed copy.

Recovering from Control File Corruption Using a Control File Copy
This procedure assumes that one of the control files specified in the CONTROL_FILES
parameter is corrupted, that the control file directory is still accessible, and that you
have a multiplexed copy of the control file.
1.  With the instance shut down, use an operating system command to overwrite the
    bad control file with a good copy:

    % cp /u03/oracle/prod/control03.ctl /u02/oracle/prod/control02.ctl

2.  Start SQL*Plus and open the database:

    SQL> STARTUP

Recovering from Permanent Media Failure Using a Control File Copy
This procedure assumes that one of the control files specified in the CONTROL_FILES
parameter is inaccessible due to a permanent media failure and that you have a
multiplexed copy of the control file.
1.  With the instance shut down, use an operating system command to copy the
    current copy of the control file to a new, accessible location:

    % cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control03.ctl

2.  Edit the CONTROL_FILES parameter in the initialization parameter file to replace
    the bad location with the new location:

    CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
                     /u02/oracle/prod/control02.ctl,
                     /u04/oracle/prod/control03.ctl)

3.  Start SQL*Plus and open the database:

    SQL> STARTUP

If you have multiplexed control files, you can get the database started up quickly by
editing the CONTROL_FILES initialization parameter. Remove the bad control file
from the CONTROL_FILES setting and you can restart the database immediately. Then
you can perform the reconstruction of the bad control file and, at some later time, shut
down and restart the database after editing the CONTROL_FILES initialization
parameter to include the recovered control file.
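For example, if control02.ctl is the damaged copy, a temporary parameter setting
such as the following (paths hypothetical) lets the instance start with the two
surviving copies:

CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
                 /u03/oracle/prod/control03.ctl)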

Dropping Control Files
You may want to drop control files from the database if, for example, the location of a
control file is no longer appropriate. Remember that the database should have at least
two control files at all times.
1.  Shut down the database.

2.  Edit the CONTROL_FILES parameter in the database initialization parameter file
    to delete the old control file name.

3.  Restart the database.
Note: This operation does not physically delete the unwanted
control file from the disk. Use operating system commands to
delete the unnecessary file after you have dropped the control file
from the database.

Displaying Control File Information
The following views display information about control files:
View                             Description
------------------------------   ----------------------------------------------------
V$DATABASE                       Displays database information from the control file
V$CONTROLFILE                    Lists the names of control files
V$CONTROLFILE_RECORD_SECTION     Displays information about control file record
                                 sections
V$PARAMETER                      Displays the names of control files as specified in
                                 the CONTROL_FILES initialization parameter

This example lists the names of the control files.
SQL> SELECT NAME FROM V$CONTROLFILE;

NAME
-------------------------------------
/u01/oracle/prod/control01.ctl
/u02/oracle/prod/control02.ctl
/u03/oracle/prod/control03.ctl
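Similarly, you can examine how each control file record section is being used. A
minimal sketch; output varies by database:

SELECT TYPE, RECORD_SIZE, RECORDS_TOTAL, RECORDS_USED
    FROM V$CONTROLFILE_RECORD_SECTION
    WHERE TYPE = 'DATAFILE';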


6
Managing the Redo Log
This chapter explains how to manage the online redo log. The current redo log is
always online, unlike archived copies of a redo log. Therefore, the online redo log is
usually referred to as simply the redo log.
This chapter contains the following topics:
■  What Is the Redo Log?
■  Planning the Redo Log
■  Creating Redo Log Groups and Members
■  Relocating and Renaming Redo Log Members
■  Dropping Redo Log Groups and Members
■  Forcing Log Switches
■  Verifying Blocks in Redo Log Files
■  Clearing a Redo Log File
■  Viewing Redo Log Information

See Also: Part III, "Automated File and Storage Management" for
information about redo log files that are both created and managed
by the Oracle Database server

What Is the Redo Log?
The most crucial structure for recovery operations is the redo log, which consists of
two or more preallocated files that store all changes made to the database as they
occur. Every instance of an Oracle Database has an associated redo log to protect the
database in case of an instance failure.

Redo Threads
When speaking in the context of multiple database instances, the redo log for each
database instance is also referred to as a redo thread. In typical configurations, only one
database instance accesses an Oracle Database, so only one thread is present. In an
Oracle Real Application Clusters environment, however, two or more instances
concurrently access a single database and each instance has its own thread of redo. A
separate redo thread for each instance avoids contention for a single set of redo log
files, thereby eliminating a potential performance bottleneck.
This chapter describes how to configure and manage the redo log on a standard
single-instance Oracle Database. The thread number can be assumed to be 1 in all
discussions and examples of statements. For information about redo log groups in a
Real Application Clusters environment, please refer to Oracle Database Oracle
Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.

Redo Log Contents
Redo log files are filled with redo records. A redo record, also called a redo entry, is
made up of a group of change vectors, each of which is a description of a change made
to a single block in the database. For example, if you change a salary value in an
employee table, you generate a redo record containing change vectors that describe
changes to the data segment block for the table, the undo segment data block, and the
transaction table of the undo segments.
Redo entries record data that you can use to reconstruct all changes made to the
database, including the undo segments. Therefore, the redo log also protects rollback
data. When you recover the database using redo data, the database reads the change
vectors in the redo records and applies the changes to the relevant blocks.
Redo records are buffered in a circular fashion in the redo log buffer of the SGA (see
"How Oracle Database Writes to the Redo Log" on page 6-2) and are written to one of
the redo log files by the Log Writer (LGWR) database background process. Whenever a
transaction is committed, LGWR writes the transaction redo records from the redo log
buffer of the SGA to a redo log file, and assigns a system change number (SCN) to
identify the redo records for each committed transaction. Only when all redo records
associated with a given transaction are safely on disk in the online logs is the user
process notified that the transaction has been committed.
Redo records can also be written to a redo log file before the corresponding transaction
is committed. If the redo log buffer fills, or another transaction commits, LGWR
flushes all of the redo log entries in the redo log buffer to a redo log file, even though
some redo records may not be committed. If necessary, the database can roll back these
changes.

How Oracle Database Writes to the Redo Log
The redo log of a database consists of two or more redo log files. The database requires
a minimum of two files to guarantee that one is always available for writing while the
other is being archived (if the database is in ARCHIVELOG mode). See "Managing
Archived Redo Logs" on page 7-1 for more information.
LGWR writes to redo log files in a circular fashion. When the current redo log file fills,
LGWR begins writing to the next available redo log file. When the last available redo
log file is filled, LGWR returns to the first redo log file and writes to it, starting the
cycle again. Figure 6–1 illustrates the circular writing of the redo log file. The numbers
next to each line indicate the sequence in which LGWR writes to each redo log file.
Filled redo log files are available to LGWR for reuse depending on whether archiving
is enabled.
■  If archiving is disabled (the database is in NOARCHIVELOG mode), a filled redo log
   file is available after the changes recorded in it have been written to the datafiles.
■  If archiving is enabled (the database is in ARCHIVELOG mode), a filled redo log file
   is available to LGWR after the changes recorded in it have been written to the
   datafiles and the file has been archived.

Figure 6–1 Reuse of Redo Log Files by LGWR

[Graphic: LGWR writes to online redo log files #1, #2, and #3 in a circular
sequence: 1, 4, 7, ... to file #1; 2, 5, 8, ... to file #2; 3, 6, 9, ... to file #3.]

Active (Current) and Inactive Redo Log Files
Oracle Database uses only one redo log file at a time to store redo records written
from the redo log buffer. The redo log file that LGWR is actively writing to is called the
current redo log file.
Redo log files that are required for instance recovery are called active redo log files.
Redo log files that are no longer required for instance recovery are called inactive redo
log files.
If you have enabled archiving (the database is in ARCHIVELOG mode), then the
database cannot reuse or overwrite an active online log file until one of the archiver
background processes (ARCn) has archived its contents. If archiving is disabled (the
database is in NOARCHIVELOG mode), then when the last redo log file is full, LGWR
continues by overwriting the first available active file.

Log Switches and Log Sequence Numbers
A log switch is the point at which the database stops writing to one redo log file and
begins writing to another. Normally, a log switch occurs when the current redo log file
is completely filled and writing must continue to the next redo log file. However, you
can configure log switches to occur at regular intervals, regardless of whether the
current redo log file is completely filled. You can also force log switches manually.
Oracle Database assigns each redo log file a new log sequence number every time a
log switch occurs and LGWR begins writing to it. When the database archives redo log
files, the archived log retains its log sequence number. A redo log file that is cycled
back for use is given the next available log sequence number.
Each online or archived redo log file is uniquely identified by its log sequence number.
During crash, instance, or media recovery, the database properly applies redo log files
in ascending order by using the log sequence number of the necessary archived and
redo log files.
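As a quick check of the current sequence numbers, you can query V$LOG; a minimal
sketch (output varies):

SELECT GROUP#, THREAD#, SEQUENCE#, STATUS FROM V$LOG;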

Planning the Redo Log
This section provides guidelines you should consider when configuring a database
instance redo log and contains the following topics:
■  Multiplexing Redo Log Files
■  Placing Redo Log Members on Different Disks
■  Setting the Size of Redo Log Members
■  Choosing the Number of Redo Log Files
■  Controlling Archive Lag

Multiplexing Redo Log Files
To protect against a failure involving the redo log itself, Oracle Database allows a
multiplexed redo log, meaning that two or more identical copies of the redo log can be
automatically maintained in separate locations. For the most benefit, these locations
should be on separate disks. Even if all copies of the redo log are on the same disk,
however, the redundancy can help protect against I/O errors, file corruption, and so
on. When redo log files are multiplexed, LGWR concurrently writes the same redo log
information to multiple identical redo log files, thereby eliminating a single point of
redo log failure.
Multiplexing is implemented by creating groups of redo log files. A group consists of a
redo log file and its multiplexed copies. Each identical copy is said to be a member of
the group. Each redo log group is defined by a number, such as group 1, group 2, and
so on.
Figure 6–2 Multiplexed Redo Log Files

[Graphic: Group 1 consists of members A_LOG1 on Disk A and B_LOG1 on Disk B;
Group 2 consists of members A_LOG2 on Disk A and B_LOG2 on Disk B. LGWR writes
to both members of Group 1 (1, 3, 5, ...), then to both members of Group 2
(2, 4, 6, ...), alternating between the groups.]

In Figure 6–2, A_LOG1 and B_LOG1 are both members of Group 1, A_LOG2 and
B_LOG2 are both members of Group 2, and so forth. Each member in a group must be
exactly the same size.
Each member of a log file group is concurrently active—that is, concurrently written to
by LGWR—as indicated by the identical log sequence numbers assigned by LGWR. In
Figure 6–2, first LGWR writes concurrently to both A_LOG1 and B_LOG1. Then it
writes concurrently to both A_LOG2 and B_LOG2, and so on. LGWR never writes
concurrently to members of different groups (for example, to A_LOG1 and B_LOG2).
Oracle recommends that you multiplex your redo log files.
Note: The loss of the log file data can be catastrophic if recovery is
required. Note that when you multiplex the redo log, the database
must increase the amount of I/O that it performs. Depending on
your configuration, this may impact overall database performance.

Responding to Redo Log Failure
Whenever LGWR cannot write to a member of a group, the database marks that
member as INVALID and writes an error message to the LGWR trace file and to the
database alert log to indicate the problem with the inaccessible files. The specific
reaction of LGWR when a redo log member is unavailable depends on the reason for
the lack of availability, as summarized in the table that follows.
Condition: LGWR can successfully write to at least one member in a group.
LGWR Action: Writing proceeds as normal. LGWR writes to the available members of
a group and ignores the unavailable members.

Condition: LGWR cannot access the next group at a log switch because the group
needs to be archived.
LGWR Action: Database operation temporarily halts until the group becomes
available or until the group is archived.

Condition: All members of the next group are inaccessible to LGWR at a log switch
because of media failure.
LGWR Action: Oracle Database returns an error, and the database instance shuts
down. In this case, you may need to perform media recovery on the database from
the loss of a redo log file. If the database checkpoint has moved beyond the lost
redo log, media recovery is not necessary, because the database has saved the data
recorded in the redo log to the datafiles. You need only drop the inaccessible redo
log group. If the database did not archive the bad log, use ALTER DATABASE CLEAR
UNARCHIVED LOG to disable archiving before the log can be dropped.

Condition: All members of a group suddenly become inaccessible to LGWR while it is
writing to them.
LGWR Action: Oracle Database returns an error and the database instance
immediately shuts down. In this case, you may need to perform media recovery. If
the media containing the log is not actually lost--for example, if the drive for the
log was inadvertently turned off--media recovery may not be needed. In this case,
you need only turn the drive back on and let the database perform automatic
instance recovery.

Legal and Illegal Configurations
In most cases, a multiplexed redo log should be symmetrical: all groups of the redo log
should have the same number of members. However, the database does not require
that a multiplexed redo log be symmetrical. For example, one group can have only one
member, and other groups can have two members. This configuration protects against
disk failures that temporarily affect some redo log members but leave others intact.
The only requirement for an instance redo log is that it have at least two groups.
Figure 6–3 shows legal and illegal multiplexed redo log configurations. The second
configuration is illegal because it has only one group.


Figure 6–3 Legal and Illegal Multiplexed Redo Log Configuration

[Graphic: In the legal configuration, Groups 1, 2, and 3 each have one member on
Disk A (A_LOG1, A_LOG2, A_LOG3) and one member on Disk B (B_LOG1, B_LOG2,
B_LOG3). In the illegal configuration, only Group 1 exists, with members A_LOG1
and B_LOG1.]

Placing Redo Log Members on Different Disks
When setting up a multiplexed redo log, place members of a group on different
physical disks. If a single disk fails, then only one member of a group becomes
unavailable to LGWR and other members remain accessible to LGWR, so the instance
can continue to function.
If you archive the redo log, spread redo log members across disks to eliminate
contention between the LGWR and ARCn background processes. For example, if you
have two groups of multiplexed redo log members (a duplexed redo log), place each
member on a different disk and set your archiving destination to a fifth disk. Doing so
will avoid contention between LGWR (writing to the members) and ARCn (reading
the members).


Datafiles should also be placed on different disks from redo log files to reduce
contention in writing data blocks and redo records.

Setting the Size of Redo Log Members
When setting the size of redo log files, consider whether you will be archiving the redo
log. Redo log files should be sized so that a filled group can be archived to a single
unit of offline storage media (such as a tape or disk), with the least amount of space on
the medium left unused. For example, suppose only one filled redo log group can fit
on a tape and 49% of the tape storage capacity remains unused. In this case, it is better
to decrease the size of the redo log files slightly, so that two log groups could be
archived on each tape.
All members of the same multiplexed redo log group must be the same size. Members
of different groups can have different sizes. However, there is no advantage in varying
file size between groups. If checkpoints are not set to occur between log switches,
make all groups the same size to guarantee that checkpoints occur at regular intervals.
The minimum size permitted for a redo log file is 4 MB.
See Also: Your operating system–specific Oracle documentation.
The default size of redo log files is operating system dependent.
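To review the current size of each group before resizing, a minimal sketch (output
varies):

SELECT GROUP#, BYTES/1024/1024 AS SIZE_MB FROM V$LOG;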

Choosing the Number of Redo Log Files
The best way to determine the appropriate number of redo log files for a database
instance is to test different configurations. The optimum configuration has the fewest
groups possible without hampering LGWR from writing redo log information.
In some cases, a database instance may require only two groups. In other situations, a
database instance may require additional groups to guarantee that a recycled group is
always available to LGWR. During testing, the easiest way to determine whether the
current redo log configuration is satisfactory is to examine the contents of the LGWR
trace file and the database alert log. If messages indicate that LGWR frequently has to
wait for a group because a checkpoint has not completed or a group has not been
archived, add groups.
Consider the parameters that can limit the number of redo log files before setting up or
altering the configuration of an instance redo log. The following parameters limit the
number of redo log files that you can add to a database:
■  The MAXLOGFILES parameter used in the CREATE DATABASE statement
   determines the maximum number of groups of redo log files for each database.
   Group values can range from 1 to MAXLOGFILES. When the compatibility level is
   set earlier than 10.2.0, the only way to override this upper limit is to re-create the
   database or its control file. Therefore, it is important to consider this limit before
   creating a database. When compatibility is set to 10.2.0 or later, you can exceed the
   MAXLOGFILES limit, and the control files expand as needed. If MAXLOGFILES is
   not specified for the CREATE DATABASE statement, then the database uses an
   operating system specific default value.
■  The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement
   determines the maximum number of members for each group. As with
   MAXLOGFILES, the only way to override this upper limit is to re-create the
   database or control file. Therefore, it is important to consider this limit before
   creating a database. If no MAXLOGMEMBERS parameter is specified for the CREATE
   DATABASE statement, then the database uses an operating system default value.
   Both limits are illustrated in the sketch after this list.
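The following abbreviated sketch shows both limits being set at database creation.
The database name, limit values, and file names are hypothetical, and a complete
CREATE DATABASE statement includes additional clauses:

CREATE DATABASE mynewdb
    MAXLOGFILES 16
    MAXLOGMEMBERS 4
    LOGFILE GROUP 1 ('/u01/oracle/mynewdb/redo01.log') SIZE 100M,
            GROUP 2 ('/u01/oracle/mynewdb/redo02.log') SIZE 100M;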


See Also:
■  Oracle Database Backup and Recovery Advanced User's Guide to
   learn how checkpoints and the redo log impact instance recovery
■  Your operating system specific Oracle documentation for the
   default and legal values of the MAXLOGFILES and
   MAXLOGMEMBERS parameters

Controlling Archive Lag
You can force all enabled redo log threads to switch their current logs at regular time
intervals. In a primary/standby database configuration, changes are made available to
the standby database by archiving redo logs at the primary site and then shipping
them to the standby database. The changes that are being applied by the standby
database can lag behind the changes that are occurring on the primary database,
because the standby database must wait for the changes in the primary database redo
log to be archived (into the archived redo log) and then shipped to it. To limit this lag,
you can set the ARCHIVE_LAG_TARGET initialization parameter. Setting this
parameter lets you specify in seconds how long that lag can be.

Setting the ARCHIVE_LAG_TARGET Initialization Parameter
When you set the ARCHIVE_LAG_TARGET initialization parameter, you cause the
database to examine the current redo log of the instance periodically. If the following
conditions are met, then the instance will switch the log:
■  The current log was created prior to n seconds ago, and the estimated archival
   time for the current log is m seconds (proportional to the number of redo blocks
   used in the current log), where n + m exceeds the value of the
   ARCHIVE_LAG_TARGET initialization parameter.
■  The current log contains redo records.

In an Oracle Real Application Clusters environment, the instance also causes other
threads to switch and archive their logs if they are falling behind. This can be
particularly useful when one instance in the cluster is more idle than the other
instances (as when you are running a 2-node primary/secondary configuration of
Oracle Real Application Clusters).
The ARCHIVE_LAG_TARGET initialization parameter specifies the target of how many
seconds of redo the standby could lose in the event of a primary shutdown or failure if
the Oracle Data Guard environment is not configured in a no-data-loss mode. It also
provides an upper limit of how long (in seconds) the current log of the primary
database can span. Because the estimated archival time is also considered, this is not
the exact log switch time.
The following initialization parameter setting sets the log switch interval to 30 minutes
(a typical value).
ARCHIVE_LAG_TARGET = 1800

A value of 0 disables this time-based log switching functionality. This is the default
setting.
You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no
standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set
specifically to force logs to be switched and archived.


ARCHIVE_LAG_TARGET is a dynamic parameter and can be set with the ALTER
SYSTEM SET statement.
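For example:

ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800;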
Caution: The ARCHIVE_LAG_TARGET parameter must be set to
the same value in all instances of an Oracle Real Application
Clusters environment. Failing to do so results in unpredictable
behavior.

Factors Affecting the Setting of ARCHIVE_LAG_TARGET
Consider the following factors when determining whether to set the
ARCHIVE_LAG_TARGET parameter and what value to use.
■  Overhead of switching (as well as archiving) logs
■  How frequently normal log switches occur as a result of log full conditions
■  How much redo loss is tolerated in the standby database

Setting ARCHIVE_LAG_TARGET may not be very useful if natural log switches already
occur more frequently than the interval specified. However, in the case of irregularities
of redo generation speed, the interval does provide an upper limit for the time range
each current log covers.
If the ARCHIVE_LAG_TARGET initialization parameter is set to a very low value, there
can be a negative impact on performance. This can force frequent log switches. Set the
parameter to a reasonable value so as not to degrade the performance of the primary
database.

Creating Redo Log Groups and Members
Plan the redo log of a database and create all required groups and members of redo log
files during database creation. However, there are situations where you might want to
create additional groups or members. For example, adding groups to a redo log can
correct redo log group availability problems.
To create new redo log groups and members, you must have the ALTER DATABASE
system privilege. A database can have up to MAXLOGFILES groups.
See Also: Oracle Database SQL Reference for a complete description
of the ALTER DATABASE statement

Creating Redo Log Groups
To create a new group of redo log files, use the SQL statement ALTER DATABASE with
the ADD LOGFILE clause.
The following statement adds a new group of redo logs to the database:
ALTER DATABASE
ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;

Note: Fully specify the filenames of new log members to indicate
where the operating system file should be created. Otherwise, the
files will be created in either the default or current directory of the
database server, depending upon your operating system.

You can also specify the number that identifies the group using the GROUP clause:

ALTER DATABASE
ADD LOGFILE GROUP 10 ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo')
SIZE 500K;

Using group numbers can make administering redo log groups easier. However, the
group number must be between 1 and MAXLOGFILES. Do not skip redo log file group
numbers (that is, do not number your groups 10, 20, 30, and so on), or you will
consume unnecessary space in the control files of the database.

Creating Redo Log Members
In some cases, it might not be necessary to create a complete group of redo log files. A
group could already exist, but not be complete because one or more members of the
group were dropped (for example, because of a disk failure). In this case, you can add
new members to an existing group.
To create new redo log members for an existing group, use the SQL statement ALTER
DATABASE with the ADD LOGFILE MEMBER clause. The following statement adds a
new redo log member to redo log group number 2:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;

Notice that filenames must be specified, but sizes need not be. The size of the new
members is determined from the size of the existing members of the group.
When using the ALTER DATABASE statement, you can alternatively identify the target
group by specifying all of the other members of the group in the TO clause, as shown
in the following example:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo');

Note: Fully specify the filenames of new log members to indicate
where the operating system files should be created. Otherwise, the
files will be created in either the default or current directory of the
database server, depending upon your operating system. You may
also note that the status of the new log member is shown as
INVALID. This is normal and it will change to active (blank) when
it is first used.

Relocating and Renaming Redo Log Members
You can use operating system commands to relocate redo logs, then use the ALTER
DATABASE statement to make their new names (locations) known to the database. This
procedure is necessary, for example, if the disk currently used for some redo log files is
going to be removed, or if datafiles and a number of redo log files are stored on the
same disk and should be separated to reduce contention.
To rename redo log members, you must have the ALTER DATABASE system privilege.
Additionally, you might also need operating system privileges to copy files to the
desired location and privileges to open and back up the database.
Before relocating your redo logs, or making any other structural changes to the
database, completely back up the database in case you experience problems while
performing the operation. As a precaution, after renaming or relocating a set of redo
log files, immediately back up the database control file.


Use the following steps for relocating redo logs. The example used to illustrate these
steps assumes:
■  The log files are located on two disks: diska and diskb.
■  The redo log is duplexed: one group consists of the members
   /diska/logs/log1a.rdo and /diskb/logs/log1b.rdo, and the second
   group consists of the members /diska/logs/log2a.rdo and
   /diskb/logs/log2b.rdo.
■  The redo log files located on diska must be relocated to diskc. The new
   filenames will reflect the new location: /diskc/logs/log1c.rdo and
   /diskc/logs/log2c.rdo.

Steps for Renaming Redo Log Members
1.  Shut down the database.

    SHUTDOWN

2.  Copy the redo log files to the new location.

    Operating system files, such as redo log members, must be copied using the
    appropriate operating system commands. See your operating system specific
    documentation for more information about copying files.

    Note: You can execute an operating system command to copy a
    file (or perform other operating system commands) without exiting
    SQL*Plus by using the HOST command. Some operating systems
    allow you to use a character in place of the word HOST. For
    example, you can use an exclamation point (!) in UNIX.

The following example uses operating system commands (UNIX) to move the
redo log members to a new location:
mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo
3.  Start up the database and mount it, but do not open it.

    CONNECT / AS SYSDBA
    STARTUP MOUNT

4.  Rename the redo log members.
Use the ALTER DATABASE statement with the RENAME FILE clause to rename the
database redo log files.
ALTER DATABASE
RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';

5.  Open the database for normal operation.
The redo log alterations take effect when the database is opened.
ALTER DATABASE OPEN;


Dropping Redo Log Groups and Members
In some cases, you may want to drop an entire group of redo log members. For
example, you may want to reduce the number of groups in an instance redo log. In a
different case, you may want to drop one or more specific redo log members. For
example, if a disk failure occurs, you may need to drop all the redo log files on the
failed disk so that the database does not try to write to the inaccessible files. In other
situations, particular redo log files become unnecessary. For example, a file might be
stored in an inappropriate location.

Dropping Log Groups
To drop a redo log group, you must have the ALTER DATABASE system privilege.
Before dropping a redo log group, consider the following restrictions and precautions:
■  An instance requires at least two groups of redo log files, regardless of the number
   of members in the groups. (A group comprises one or more members.)
■  You can drop a redo log group only if it is inactive. If you need to drop the current
   group, first force a log switch to occur.
■  Make sure a redo log group is archived (if archiving is enabled) before dropping it.
   To see whether this has happened, use the V$LOG view.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

   GROUP# ARC STATUS
--------- --- ----------------
        1 YES ACTIVE
        2 NO  CURRENT
        3 YES INACTIVE
        4 YES INACTIVE

Drop a redo log group with the SQL statement ALTER DATABASE with the DROP
LOGFILE clause.
The following statement drops redo log group number 3:
ALTER DATABASE DROP LOGFILE GROUP 3;

When a redo log group is dropped from the database, and you are not using the
Oracle-managed files feature, the operating system files are not deleted from disk.
Rather, the control files of the associated database are updated to drop the members of
the group from the database structure. After dropping a redo log group, make sure
that the drop completed successfully, and then use the appropriate operating system
command to delete the dropped redo log files.
When using Oracle-managed files, the cleanup of operating system files is done
automatically for you.

Dropping Redo Log Members
To drop a redo log member, you must have the ALTER DATABASE system privilege.
Consider the following restrictions and precautions before dropping individual redo
log members:
■  It is permissible to drop redo log files so that a multiplexed redo log becomes
   temporarily asymmetric. For example, if you use duplexed groups of redo log
   files, you can drop one member of one group, even though all other groups have
   two members each. However, you should rectify this situation immediately so that
   all groups have at least two members, and thereby eliminate the single point of
   failure possible for the redo log.
■  An instance always requires at least two valid groups of redo log files, regardless
   of the number of members in the groups. (A group comprises one or more
   members.) If the member you want to drop is the last valid member of the group,
   you cannot drop the member until the other members become valid. To see a redo
   log file status, use the V$LOGFILE view. A redo log file becomes INVALID if the
   database cannot access it. It becomes STALE if the database suspects that it is not
   complete or correct. A stale log file becomes valid again the next time its group is
   made the active group.
■  You can drop a redo log member only if it is not part of an active or current group.
   If you want to drop a member of an active group, first force a log switch to occur.
■  Make sure the group to which a redo log member belongs is archived (if archiving
   is enabled) before dropping the member. To see whether this has happened, use
   the V$LOG view.

To drop specific inactive redo log members, use the ALTER DATABASE statement with
the DROP LOGFILE MEMBER clause.
The following statement drops the redo log /oracle/dbs/log3c.rdo:
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';

When a redo log member is dropped from the database, the operating system file is
not deleted from disk. Rather, the control files of the associated database are updated
to drop the member from the database structure. After dropping a redo log file, make
sure that the drop completed successfully, and then use the appropriate operating
system command to delete the dropped redo log file.
To drop a member of an active group, you must first force a log switch.

Forcing Log Switches
A log switch occurs when LGWR stops writing to one redo log group and starts
writing to another. By default, a log switch occurs automatically when the current redo
log file group fills.
You can force a log switch to make the currently active group inactive and available for
redo log maintenance operations. For example, you want to drop the currently active
group, but are not able to do so until the group is inactive. You may also wish to force
a log switch if the currently active group needs to be archived at a specific time before
the members of the group are completely filled. This option is useful in configurations
with large redo log files that take a long time to fill.
To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER
SYSTEM statement with the SWITCH LOGFILE clause.
The following statement forces a log switch:
ALTER SYSTEM SWITCH LOGFILE;

Verifying Blocks in Redo Log Files
You can configure the database to use checksums to verify blocks in the redo log files.
If you set the initialization parameter DB_BLOCK_CHECKSUM to TRUE, the database
computes a checksum for each database block when it is written to disk, including
each redo log block as it is being written to the current log. The checksum is stored in
the header of the block.
Oracle Database uses the checksum to detect corruption in a redo log block. The
database verifies the redo log block when the block is read from an archived log
during recovery and when it writes the block to an archive log file. An error is raised
and written to the alert log if corruption is detected.
If corruption is detected in a redo log block while trying to archive it, the system
attempts to read the block from another member in the group. If the block is corrupted
in all members of the redo log group, then archiving cannot proceed.
The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can
be changed dynamically using the ALTER SYSTEM statement.
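For example:

ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;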
Note: There is a slight overhead and decrease in database
performance with DB_BLOCK_CHECKSUM enabled. Monitor your
database performance to decide if the benefit of using data block
checksums to detect corruption outweighs the performance impact.

See Also: Oracle Database Reference for a description of the
DB_BLOCK_CHECKSUM initialization parameter

Clearing a Redo Log File
A redo log file might become corrupted while the database is open, and ultimately
stop database activity because archiving cannot continue. In this situation the ALTER
DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without
shutting down the database.
The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:
■  If there are only two log groups
■  The corrupt redo log file belongs to the current group

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the
statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

This statement clears the corrupted redo logs and avoids archiving them. The cleared
redo logs are available for use even though they were not archived.
If you clear a log file that is needed for recovery of a backup, then you can no longer
recover from that backup. The database writes a message in the alert log describing the
backups from which you cannot recover.
Note: If you clear an unarchived redo log file, you should make
another backup of the database.

If you want to clear an unarchived redo log that is needed to bring an offline
tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER
DATABASE CLEAR LOGFILE statement.
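For example, with a hypothetical group number:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3 UNRECOVERABLE DATAFILE;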

If you clear a redo log needed to bring an offline tablespace online, you will not be able
to bring the tablespace online again. You will have to drop the tablespace or perform
an incomplete recovery. Note that tablespaces taken offline normal do not require
recovery.

Viewing Redo Log Information
The following views provide information on redo logs.
View             Description
--------------   ------------------------------------------------------------
V$LOG            Displays the redo log file information from the control file
V$LOGFILE        Identifies redo log groups and members and member status
V$LOG_HISTORY    Contains log history information

The following query returns the control file information about the redo log for a
database.
SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ   BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ----- -------- ------- --- --------- ------------- ---------
     1       1 10605  1048576       1 YES ACTIVE         11515628 16-APR-00
     2       1 10606  1048576       1 NO  CURRENT        11517595 16-APR-00
     3       1 10603  1048576       1 YES INACTIVE       11511666 16-APR-00
     4       1 10604  1048576       1 YES INACTIVE       11513647 16-APR-00

To see the names of all of the members of a group, use a query similar to the following:

SELECT * FROM V$LOGFILE;

GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         D:\ORANT\ORADATA\IDDB2\REDO04.LOG
     2         D:\ORANT\ORADATA\IDDB2\REDO03.LOG
     3         D:\ORANT\ORADATA\IDDB2\REDO02.LOG
     4         D:\ORANT\ORADATA\IDDB2\REDO01.LOG

If STATUS is blank for a member, then the file is in use.
See Also: Oracle Database Reference for detailed information about
these views


7
Managing Archived Redo Logs
This chapter describes how to archive redo data. It contains the following topics:
■  What Is the Archived Redo Log?
■  Choosing Between NOARCHIVELOG and ARCHIVELOG Mode
■  Controlling Archiving
■  Specifying the Archive Destination
■  Specifying the Mode of Log Transmission
■  Managing Archive Destination Failure
■  Controlling Trace Output Generated by the Archivelog Process
■  Viewing Information About the Archived Redo Log

See Also:
■  Part III, "Automated File and Storage Management" for
   information about creating an archived redo log that is both
   created and managed by the Oracle Database server
■  Oracle Database Oracle Clusterware and Oracle Real Application
   Clusters Administration and Deployment Guide for information
   specific to archiving in the Oracle Real Application Clusters
   environment

What Is the Archived Redo Log?
Oracle Database lets you save filled groups of redo log files to one or more offline
destinations, known collectively as the archived redo log, or more simply the archive
log. The process of turning redo log files into archived redo log files is called
archiving. This process is only possible if the database is running in ARCHIVELOG
mode. You can choose automatic or manual archiving.
An archived redo log file is a copy of one of the filled members of a redo log group. It
includes the redo entries and the unique log sequence number of the identical member
of the redo log group. For example, if you are multiplexing your redo log, and if group
1 contains identical member files a_log1 and b_log1, then the archiver process
(ARCn) will archive one of these member files. Should a_log1 become corrupted,
then ARCn can still archive the identical b_log1. The archived redo log contains a
copy of every group created since you enabled archiving.
When the database is running in ARCHIVELOG mode, the log writer process (LGWR)
cannot reuse and hence overwrite a redo log group until it has been archived. The
background process ARCn automates archiving operations when automatic archiving
is enabled. The database starts multiple archiver processes as needed to ensure that the
archiving of filled redo logs does not fall behind.
You can use archived redo logs to:
■  Recover a database
■  Update a standby database
■  Get information about the history of a database using the LogMiner utility

See Also: The following sources document the uses for archived
redo logs:
■  Oracle Database Backup and Recovery Basics
■  Oracle Data Guard Concepts and Administration discusses setting
   up and maintaining a standby database
■  Oracle Database Utilities contains instructions for using the
   LogMiner PL/SQL package

Choosing Between NOARCHIVELOG and ARCHIVELOG Mode
This section describes the issues you must consider when choosing to run your
database in NOARCHIVELOG or ARCHIVELOG mode, and contains these topics:
■  Running a Database in NOARCHIVELOG Mode
■  Running a Database in ARCHIVELOG Mode

The choice of whether to enable the archiving of filled groups of redo log files depends
on the availability and reliability requirements of the application running on the
database. If you cannot afford to lose any data in your database in the event of a disk
failure, use ARCHIVELOG mode. The archiving of filled redo log files can require you
to perform extra administrative operations.

Running a Database in NOARCHIVELOG Mode
When you run your database in NOARCHIVELOG mode, you disable the archiving of
the redo log. The database control file indicates that filled groups are not required to
be archived. Therefore, when a filled group becomes inactive after a log switch, the
group is available for reuse by LGWR.
NOARCHIVELOG mode protects a database from instance failure but not from media
failure. Only the most recent changes made to the database, which are stored in the
online redo log groups, are available for instance recovery. If a media failure occurs
while the database is in NOARCHIVELOG mode, you can only restore the database to
the point of the most recent full database backup. You cannot recover transactions
subsequent to that backup.
In NOARCHIVELOG mode you cannot perform online tablespace backups, nor can you
use online tablespace backups taken earlier while the database was in ARCHIVELOG
mode. To restore a database operating in NOARCHIVELOG mode, you can use only
whole database backups taken while the database is closed. Therefore, if you decide to
operate a database in NOARCHIVELOG mode, take whole database backups at regular,
frequent intervals.


Running a Database in ARCHIVELOG Mode
When you run a database in ARCHIVELOG mode, you enable the archiving of the redo
log. The database control file indicates that a group of filled redo log files cannot be
reused by LGWR until the group is archived. A filled group becomes available for
archiving immediately after a redo log switch occurs.
The archiving of filled groups has these advantages:
■  A database backup, together with online and archived redo log files, guarantees
   that you can recover all committed transactions in the event of an operating
   system or disk failure.
■  If you keep an archived log, you can use a backup taken while the database is
   open and in normal system use.
■  You can keep a standby database current with its original database by
   continuously applying the original archived redo logs to the standby.

You can configure an instance to archive filled redo log files automatically, or you can
archive manually. For convenience and efficiency, automatic archiving is usually best.
Figure 7–1 illustrates how the archiver process (ARC0 in this illustration) writes filled
redo log files to the database archived redo log.
If all databases in a distributed database operate in ARCHIVELOG mode, you can
perform coordinated distributed database recovery. However, if any database in a
distributed database is in NOARCHIVELOG mode, recovery of a global distributed
database (to make all databases consistent) is limited by the last full backup of any
database operating in NOARCHIVELOG mode.
Figure 7–1 Redo Log File Use in ARCHIVELOG Mode

[Graphic: As time passes, LGWR fills online redo log files in sequence (log 0001,
0002, 0003, 0004, ...), and the ARC0 process copies each filled online redo log
file to the archived redo log files.]


Controlling Archiving
This section describes how to set the archiving mode of the database and how to
control the archiving process. The following topics are discussed:
■  Setting the Initial Database Archiving Mode
■  Changing the Database Archiving Mode
■  Performing Manual Archiving
■  Adjusting the Number of Archiver Processes

See Also: Your Oracle operating system specific documentation
for additional information on controlling archiving modes

Setting the Initial Database Archiving Mode
You set the initial archiving mode as part of database creation in the CREATE
DATABASE statement. Usually, you can use the default of NOARCHIVELOG mode at
database creation because there is no need to archive the redo information generated
by that process. After creating the database, decide whether to change the initial
archiving mode.
If you specify ARCHIVELOG mode, you must have initialization parameters set that
specify the destinations for the archive log files (see "Specifying Archive Destinations"
on page 7-6).

Changing the Database Archiving Mode
To change the archiving mode of the database, use the ALTER DATABASE statement
with the ARCHIVELOG or NOARCHIVELOG clause. To change the archiving mode, you
must be connected to the database with administrator privileges (AS SYSDBA).
The following steps switch the database archiving mode from NOARCHIVELOG to
ARCHIVELOG:
1.  Shut down the database instance.

    SHUTDOWN

    An open database must first be closed and any associated instances shut down
    before you can switch the database archiving mode. You cannot change the mode
    from ARCHIVELOG to NOARCHIVELOG if any datafiles need media recovery.

2.  Back up the database.

    Before making any major change to a database, always back up the database to
    protect against any problems. This will be your final backup of the database in
    NOARCHIVELOG mode and can be used if something goes wrong during the
    change to ARCHIVELOG mode. See Oracle Database Backup and Recovery Basics for
    information about taking database backups.

3.  Edit the initialization parameter file to include the initialization parameters that
    specify the destinations for the archive log files (see "Specifying Archive
    Destinations" on page 7-6).

4.  Start a new instance and mount, but do not open, the database.

    STARTUP MOUNT

    To enable or disable archiving, the database must be mounted but not open.


5.  Change the database archiving mode. Then open the database for normal
    operations.

    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;

6.  Shut down the database.

    SHUTDOWN IMMEDIATE

7.  Back up the database.

    Changing the database archiving mode updates the control file. After changing the
    database archiving mode, you must back up all of your database files and control
    file. Any previous backup is no longer usable because it was taken in
    NOARCHIVELOG mode.
See Also: Oracle Database Oracle Clusterware and Oracle Real
Application Clusters Administration and Deployment Guide for more
information about switching the archiving mode when using Real
Application Clusters

Performing Manual Archiving
To operate your database in manual archiving mode, follow the procedure shown in
"Changing the Database Archiving Mode" on page 7-4. However, when you specify
the new mode in step 5, use the following statement:
ALTER DATABASE ARCHIVELOG MANUAL;

When you operate your database in manual ARCHIVELOG mode, you must archive
inactive groups of filled redo log files or your database operation can be temporarily
suspended. To archive a filled redo log group manually, connect with administrator
privileges. Ensure that the database is mounted but not open. Use the ALTER SYSTEM
statement with the ARCHIVE LOG clause to manually archive filled redo log files. The
following statement archives all unarchived log files:
ALTER SYSTEM ARCHIVE LOG ALL;

When you use manual archiving mode, you cannot specify any standby databases in
the archiving destinations.
Even when automatic archiving is enabled, you can use manual archiving for such
actions as rearchiving an inactive group of filled redo log members to another location.
In this case, it is possible for the instance to reuse the redo log group before you have
finished manually archiving, and thereby overwrite the files. If this happens, the
database writes an error message to the alert log.
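For example, one possible form of such a manual rearchiving statement, with a
hypothetical group number and destination path, is:

ALTER SYSTEM ARCHIVE LOG GROUP 2 TO '/disk2/rearchive';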

Adjusting the Number of Archiver Processes
The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of
ARCn processes that the database initially invokes. The default is two processes. There
is usually no need to specify this initialization parameter or to change its default value,
because the database starts additional archiver processes (ARCn) as needed to ensure
that the automatic processing of filled redo log files does not fall behind.
However, to avoid any runtime overhead of invoking additional ARCn processes, you
can set the LOG_ARCHIVE_MAX_PROCESSES initialization parameter to specify up to
ten ARCn processes to be started at instance startup. The
LOG_ARCHIVE_MAX_PROCESSES parameter is dynamic, and can be changed using
the ALTER SYSTEM statement. The database must be mounted but not open. The
following statement increases (or decreases) the number of ARCn processes currently
running:
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=3;
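To see the state of the archiver processes currently running, a minimal sketch
(output varies):

SELECT PROCESS, STATUS, STATE FROM V$ARCHIVE_PROCESSES;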

Specifying the Archive Destination
Before you can archive redo logs, you must determine the destination to which you
will archive and familiarize yourself with the various destination states. The dynamic
performance (V$) views, listed in "Viewing Information About the Archived Redo
Log" on page 7-14, provide all needed archive information.
The following topics are contained in this section:
■  Specifying Archive Destinations
■  Understanding Archive Destination Status

Specifying Archive Destinations
You can choose whether to archive redo logs to a single destination or multiplex them.
If you want to archive only to a single destination, you specify that destination in the
LOG_ARCHIVE_DEST initialization parameter. If you want to multiplex the archived
logs, you can choose whether to archive to up to ten locations (using the
LOG_ARCHIVE_DEST_n parameters) or to archive only to a primary and secondary
destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST). The
following table summarizes the multiplexing alternatives, which are further described
in the sections that follow.
Method  Initialization Parameter      Host      Example
------  ----------------------------  --------  ------------------------------------------
1       LOG_ARCHIVE_DEST_n            Local or  LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc'
        where n is an integer         remote    LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'
        from 1 to 10
2       LOG_ARCHIVE_DEST and          Local     LOG_ARCHIVE_DEST = '/disk1/arc'
        LOG_ARCHIVE_DUPLEX_DEST       only      LOG_ARCHIVE_DUPLEX_DEST = '/disk2/arc'

See Also:
■  Oracle Database Reference for additional information about the
   initialization parameters used to control the archiving of redo logs
■  Oracle Data Guard Concepts and Administration for information
   about using the LOG_ARCHIVE_DEST_n initialization
   parameter for specifying a standby destination. There are
   additional keywords that can be specified with this
   initialization parameter that are not discussed in this book.

Method 1: Using the LOG_ARCHIVE_DEST_n Parameter
Use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 10) to
specify from one to ten different destinations for archival. Each numerically suffixed
parameter uniquely identifies an individual destination.
You specify the location for LOG_ARCHIVE_DEST_n using the keywords explained in
the following table:

Keyword   Indicates             Example
--------  --------------------  ------------------------------------------
LOCATION  A local file system   LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc'
          location.
SERVICE   Remote archival       LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'
          through Oracle Net
          service name.

If you use the LOCATION keyword, specify a valid path name for your operating
system. If you specify SERVICE, the database translates the net service name through
the tnsnames.ora file to a connect descriptor. The descriptor contains the
information necessary for connecting to the remote database. The service name must
have an associated database SID, so that the database correctly updates the log history
of the control file for the standby database.
Perform the following steps to set the destination for archived redo logs using the
LOG_ARCHIVE_DEST_n initialization parameter:
1.  Use SQL*Plus to shut down the database.

    SHUTDOWN

2.  Set the LOG_ARCHIVE_DEST_n initialization parameter to specify from one to ten
archiving locations. The LOCATION keyword specifies an operating system specific
path name. For example, enter:
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'

If you are archiving to a standby database, use the SERVICE keyword to specify a
valid net service name from the tnsnames.ora file. For example, enter:
LOG_ARCHIVE_DEST_4 = 'SERVICE = standby1'
3.  Optionally, set the LOG_ARCHIVE_FORMAT initialization parameter, using %t to
include the thread number as part of the file name, %s to include the log sequence
number, and %r to include the resetlogs ID (a timestamp value represented in
ub4). Use capital letters (%T, %S, and %R) to pad the file name to the left with
zeroes.
Note: If the COMPATIBLE initialization parameter is set to 10.0 or
higher, the database requires the specification of resetlogs ID (%r)
when you include the LOG_ARCHIVE_FORMAT parameter. The
default for this parameter is operating system dependent. For
example, this is the default format for UNIX:

LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
The incarnation of a database changes when you open it with the
RESETLOGS option. Specifying %r causes the database to capture
the resetlogs ID in the archive log file name, enabling you to more
easily perform recovery from a backup of a previous database
incarnation. See Oracle Database Backup and Recovery Advanced User's
Guide for more information about this method of recovery.
The following example shows a setting of LOG_ARCHIVE_FORMAT:


LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc

This setting will generate archived logs as follows for thread 1; log sequence
numbers 100, 101, and 102; resetlogs ID 509210197. The identical resetlogs ID
indicates that the files are all from the same database incarnation:
/disk1/archive/arch_1_100_509210197.arc,
/disk1/archive/arch_1_101_509210197.arc,
/disk1/archive/arch_1_102_509210197.arc
/disk2/archive/arch_1_100_509210197.arc,
/disk2/archive/arch_1_101_509210197.arc,
/disk2/archive/arch_1_102_509210197.arc
/disk3/archive/arch_1_100_509210197.arc,
/disk3/archive/arch_1_101_509210197.arc,
/disk3/archive/arch_1_102_509210197.arc

Method 2: Using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST
To specify a maximum of two locations, use the LOG_ARCHIVE_DEST parameter to
specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST to
specify an optional secondary archive destination. All locations must be local.
Whenever the database archives a redo log, it archives it to every destination specified
by either set of parameters.
Perform the following steps to use method 2:
1.  Use SQL*Plus to shut down the database.

    SHUTDOWN

2.  Specify destinations for the LOG_ARCHIVE_DEST and
    LOG_ARCHIVE_DUPLEX_DEST parameters (you can also specify
    LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM statement).
    For example, enter:
LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
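You can also set the duplexed destination dynamically. For example, the following
ALTER SYSTEM statement (the path shown is illustrative) sets
LOG_ARCHIVE_DUPLEX_DEST without restarting the instance:

ALTER SYSTEM SET LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive';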

3.  Set the LOG_ARCHIVE_FORMAT initialization parameter as described in step 3 for
    method 1.

Understanding Archive Destination Status
Each archive destination has the following variable characteristics that determine its
status:
■   Valid/Invalid: indicates whether the disk location or service name information is
    specified and valid
■   Enabled/Disabled: indicates the availability state of the location and whether the
    database can use the destination
■   Active/Inactive: indicates whether there was a problem accessing the destination

Several combinations of these characteristics are possible. To obtain the current status
and other information about each destination for an instance, query the
V$ARCHIVE_DEST view.
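For example, a query such as the following (showing an abbreviated column list; see
Oracle Database Reference for all columns of V$ARCHIVE_DEST) returns the status of
each destination:

SELECT DEST_NAME, STATUS, DESTINATION FROM V$ARCHIVE_DEST;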


The characteristics determining a location's status that appear in the view are shown in
Table 7–1. Note that for a destination to be used, its characteristics must be valid,
enabled, and active.
Table 7–1    Destination Status

STATUS     Valid  Enabled  Active  Meaning
VALID      True   True     True    The user has properly initialized the destination,
                                   which is available for archiving.
INACTIVE   False  n/a      n/a     The user has not provided or has deleted the
                                   destination information.
ERROR      True   True     False   An error occurred creating or writing to the
                                   destination file; refer to error data.
FULL       True   True     False   Destination is full (no disk space).
DEFERRED   True   False    True    The user manually and temporarily disabled the
                                   destination.
DISABLED   True   False    False   The user manually and temporarily disabled the
                                   destination following an error; refer to error data.
BAD PARAM  n/a    n/a      n/a     A parameter error occurred; refer to error data.

The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization
parameter lets you control the availability state of the specified destination (n).
■   ENABLE indicates that the database can use the destination.
■   DEFER indicates that the location is temporarily disabled.
■   ALTERNATE indicates that the destination is an alternate. The availability state of
    an alternate destination is DEFER, unless there is a failure of its parent
    destination, in which case its state becomes ENABLE.
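For example, assuming a destination numbered 2, the following statements (the
destination number is illustrative) temporarily disable it and later re-enable it:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;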

Specifying the Mode of Log Transmission
The two modes of transmitting archived logs to their destination are normal archiving
transmission and standby transmission mode. Normal transmission involves
transmitting files to a local disk. Standby transmission involves transmitting files
through a network to either a local or remote standby database.

Normal Transmission Mode
In normal transmission mode, the archiving destination is another disk drive of the
database server. In this configuration archiving does not contend with other files
required by the instance and can complete more quickly. Specify the destination with
either the LOG_ARCHIVE_DEST_n or LOG_ARCHIVE_DEST parameters.
It is good practice to move archived redo log files and corresponding database
backups from the local disk to permanent inexpensive offline storage media such as
tape. A primary value of archived logs is database recovery, so you want to ensure that
these logs are safe should disaster strike your primary database.


Standby Transmission Mode
In standby transmission mode, the archiving destination is either a local or remote
standby database.
Caution: You can maintain a standby database on a local disk, but
Oracle strongly encourages you to maximize disaster protection by
maintaining your standby database at a remote site.

If you are operating your standby database in managed recovery mode, you can keep
your standby database synchronized with your source database by automatically
applying transmitted archive logs.
To transmit files successfully to a standby database, either ARCn or a server process
must do the following:
■   Recognize a remote location
■   Transmit the archived logs in conjunction with a remote file server (RFS) process
    that resides on the remote server

Each ARCn process has a corresponding RFS for each standby destination. For
example, if three ARCn processes are archiving to two standby databases, then Oracle
Database establishes six RFS connections.
You transmit archived logs through a network to a remote location by using Oracle
Net Services. Indicate a remote archival by specifying an Oracle Net service name as an
attribute of the destination. Oracle Database then translates the service name, through
the tnsnames.ora file, to a connect descriptor. The descriptor contains the
information necessary for connecting to the remote database. The service name must
have an associated database SID, so that the database correctly updates the log history
of the control file for the standby database.
The RFS process, which runs on the destination node, acts as a network server to the
ARCn client. Essentially, ARCn pushes information to RFS, which transmits it to the
standby database.
The RFS process, which is required when archiving to a remote destination, is
responsible for the following tasks:
■   Consuming network I/O from the ARCn process
■   Creating file names on the standby database by using the
    STANDBY_ARCHIVE_DEST parameter
■   Populating the log files at the remote site
■   Updating the standby database control file (which Recovery Manager can then use
    for recovery)

Archived redo logs are integral to maintaining a standby database, which is an exact
replica of a database. You can operate your database in standby archiving mode,
which automatically updates a standby database with archived redo logs from the
original database.


See Also:
■   Oracle Data Guard Concepts and Administration
■   Oracle Database Net Services Administrator's Guide for
    information about connecting to a remote database using a
    service name

Managing Archive Destination Failure
Sometimes archive destinations can fail, causing problems when you operate in
automatic archiving mode. Oracle Database provides procedures to help you minimize
the problems associated with destination failure. These procedures are discussed in the
sections that follow:
■   Specifying the Minimum Number of Successful Destinations
■   Rearchiving to a Failed Destination

Specifying the Minimum Number of Successful Destinations
The optional initialization parameter LOG_ARCHIVE_MIN_SUCCEED_DEST=n
determines the minimum number of destinations to which the database must
successfully archive a redo log group before it can reuse online log files. The default
value is 1. Valid values for n are 1 to 2 if you are using duplexing, or 1 to 10 if you are
multiplexing.
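For example, the following setting (the value is illustrative) requires that archiving
succeed to at least two destinations before the database can reuse the online log files:

LOG_ARCHIVE_MIN_SUCCEED_DEST = 2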

Specifying Mandatory and Optional Destinations
The LOG_ARCHIVE_DEST_n parameter lets you specify whether a destination is
OPTIONAL (the default) or MANDATORY. The LOG_ARCHIVE_MIN_SUCCEED_DEST=n
parameter uses all MANDATORY destinations plus some number of non-standby
OPTIONAL destinations to determine whether LGWR can overwrite the online log. The
following rules apply (an example follows the list):
■   Omitting the MANDATORY attribute for a destination is the same as specifying
    OPTIONAL.
■   You must have at least one local destination, which you can declare OPTIONAL or
    MANDATORY.
■   When you specify a value for LOG_ARCHIVE_MIN_SUCCEED_DEST=n, Oracle
    Database will treat at least one local destination as MANDATORY, because the
    minimum value for LOG_ARCHIVE_MIN_SUCCEED_DEST is 1.
■   If any MANDATORY destination fails, including a MANDATORY standby destination,
    Oracle Database ignores the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter.
■   The LOG_ARCHIVE_MIN_SUCCEED_DEST value cannot be greater than the
    number of destinations, nor can it be greater than the number of MANDATORY
    destinations plus the number of OPTIONAL local destinations.
■   If you DEFER a MANDATORY destination, and the database overwrites the online
    log without transferring the archived log to the standby site, then you must
    transfer the log to the standby manually.
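For example, the following settings (the paths and values are illustrative) declare one
MANDATORY local destination and one OPTIONAL local destination:

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc MANDATORY'
LOG_ARCHIVE_DEST_2 = 'LOCATION=/disk2/arc OPTIONAL'
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1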

If you are duplexing the archived logs, you can establish which destinations are
mandatory or optional by using the LOG_ARCHIVE_DEST and
LOG_ARCHIVE_DUPLEX_DEST parameters. The following rules apply:
■   Any destination declared by LOG_ARCHIVE_DEST is mandatory.
■   Any destination declared by LOG_ARCHIVE_DUPLEX_DEST is optional if
    LOG_ARCHIVE_MIN_SUCCEED_DEST = 1 and mandatory if
    LOG_ARCHIVE_MIN_SUCCEED_DEST = 2.

Specifying the Number of Successful Destinations: Scenarios
You can see the relationship between the LOG_ARCHIVE_DEST_n and
LOG_ARCHIVE_MIN_SUCCEED_DEST parameters most easily through sample
scenarios.
Scenario for Archiving to Optional Local Destinations

In this scenario, you archive to three local destinations, each of which you declare as
OPTIONAL. Table 7–2 illustrates the possible values for
LOG_ARCHIVE_MIN_SUCCEED_DEST=n in this case.
Table 7–2    LOG_ARCHIVE_MIN_SUCCEED_DEST Values for Scenario 1

Value         Meaning
1             The database can reuse log files only if at least one of the OPTIONAL
              destinations succeeds.
2             The database can reuse log files only if at least two of the OPTIONAL
              destinations succeed.
3             The database can reuse log files only if all of the OPTIONAL destinations
              succeed.
4 or greater  ERROR: The value is greater than the number of destinations.

This scenario shows that even though you do not explicitly set any of your
destinations to MANDATORY using the LOG_ARCHIVE_DEST_n parameter, the database
must successfully archive to one or more of these locations when
LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 1, 2, or 3.
Scenario for Archiving to Both Mandatory and Optional Destinations

Consider a case in which:
■   You specify two MANDATORY destinations.
■   You specify two OPTIONAL destinations.
■   No destination is a standby database.

Table 7–3 shows the possible values for LOG_ARCHIVE_MIN_SUCCEED_DEST=n.
Table 7–3    LOG_ARCHIVE_MIN_SUCCEED_DEST Values for Scenario 2

Value         Meaning
1             The database ignores the value and uses the number of MANDATORY
              destinations (in this example, 2).
2             The database can reuse log files even if no OPTIONAL destination
              succeeds.
3             The database can reuse logs only if at least one OPTIONAL destination
              succeeds.
4             The database can reuse logs only if both OPTIONAL destinations succeed.
5 or greater  ERROR: The value is greater than the number of destinations.

This case shows that the database must archive to the destinations you specify as
MANDATORY, regardless of whether you set LOG_ARCHIVE_MIN_SUCCEED_DEST to
archive to a smaller number of destinations.


Rearchiving to a Failed Destination
Use the REOPEN attribute of the LOG_ARCHIVE_DEST_n parameter to specify whether
and when ARCn should attempt to rearchive to a failed destination following an error.
REOPEN applies to all errors, not just OPEN errors.
REOPEN=n sets the minimum number of seconds before ARCn should try to reopen a
failed destination. The default value for n is 300 seconds. A value of 0 is the same as
turning off the REOPEN attribute; ARCn will not attempt to archive after a failure. If
you do not specify the REOPEN keyword, ARCn will never reopen a destination
following an error.
You cannot use REOPEN to specify the number of attempts ARCn should make to
reconnect and transfer archived logs. The REOPEN attempt either succeeds or fails.
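For example, the following setting (the path and interval are illustrative) directs ARCn
to wait at least 600 seconds after an error before retrying this mandatory destination:

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc MANDATORY REOPEN=600'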
When you specify REOPEN for an OPTIONAL destination, the database can overwrite
online logs if there is an error. If you specify REOPEN for a MANDATORY destination, the
database stalls the production database when it cannot successfully archive. In this
situation, consider the following options:
■   Archive manually to the failed destination.
■   Change the destination by deferring the destination, specifying the destination as
    optional, or changing the service.
■   Drop the destination.

When using the REOPEN keyword, note the following:
■   ARCn reopens a destination only when starting an archive operation from the
    beginning of the log file, never during a current operation. ARCn always retries the
    log copy from the beginning.
■   If you specified REOPEN, either with a specified time or with the default, ARCn
    checks to see whether the time of the recorded error plus the REOPEN interval is
    less than the current time. If it is, ARCn retries the log copy.
■   The REOPEN clause successfully affects the ACTIVE=TRUE destination state. The
    VALID and ENABLED states are not changed.

Controlling Trace Output Generated by the Archivelog Process
Background processes always write to a trace file when appropriate. (See the
discussion of this topic in "Monitoring the Database Using Trace Files and the Alert
Log" on page 4-21.) In the case of the archivelog process, you can control the output
that is generated to the trace file. You do this by setting the LOG_ARCHIVE_TRACE
initialization parameter to specify a trace level. The following values can be specified:
Trace Level  Meaning
0            Disable archivelog tracing. This is the default.
1            Track archival of redo log file.
2            Track archival status for each archivelog destination.
4            Track archival operational phase.
8            Track archivelog destination activity.
16           Track detailed archivelog destination activity.
32           Track archivelog destination parameter modifications.
64           Track ARCn process state activity.
128          Track FAL (fetch archived log) server related activities.
256          Supported in a future release.
512          Track asynchronous LGWR activity.
1024         Track RFS physical client activity.
2048         Track ARCn/RFS heartbeat activity.
4096         Track real-time apply activity.

You can combine tracing levels by specifying a value equal to the sum of the
individual levels that you would like to trace. For example, setting
LOG_ARCHIVE_TRACE=12 will generate trace level 8 and 4 output. You can set
different values for the primary and any standby database.
The default value for the LOG_ARCHIVE_TRACE parameter is 0. At this level, the
archivelog process generates appropriate alert and trace entries for error conditions.
You can change the value of this parameter dynamically using the ALTER SYSTEM
statement. The database must be mounted but not open. For example:
ALTER SYSTEM SET LOG_ARCHIVE_TRACE=12;

Changes initiated in this manner will take effect at the start of the next archiving
operation.
See Also: Oracle Data Guard Concepts and Administration for
information about using this parameter with a standby database

Viewing Information About the Archived Redo Log
You can display information about the archived redo logs using the following sources:
■   Dynamic Performance Views
■   The ARCHIVE LOG LIST Command

Dynamic Performance Views
Several dynamic performance views contain useful information about archived redo
logs, as summarized in the following table.
Dynamic Performance View  Description
V$DATABASE                Shows if the database is in ARCHIVELOG or NOARCHIVELOG
                          mode and if MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG            Displays historical archived log information from the control
                          file. If you use a recovery catalog, the RC_ARCHIVED_LOG
                          view contains similar information.
V$ARCHIVE_DEST            Describes the current instance, all archive destinations, and
                          the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES       Displays information about the state of the various archive
                          processes for an instance.
V$BACKUP_REDOLOG          Contains information about any backups of archived logs. If
                          you use a recovery catalog, the RC_BACKUP_REDOLOG view
                          contains similar information.
V$LOG                     Displays all redo log groups for the database and indicates
                          which need to be archived.
V$LOG_HISTORY             Contains log history information such as which logs have
                          been archived and the SCN range for each archived log.

For example, the following query displays which redo log group requires archiving:
SELECT GROUP#, ARCHIVED
FROM SYS.V$LOG;

GROUP#     ARC
---------- ---
         1 YES
         2 NO

To see the current archiving mode, query the V$DATABASE view:
SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
NOARCHIVELOG

See Also: Oracle Database Reference for detailed descriptions of
dynamic performance views

The ARCHIVE LOG LIST Command
The SQL*Plus command ARCHIVE LOG LIST displays archiving information for the
connected instance. For example:
SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\oracle\oradata\IDDB2\archive
Oldest online log sequence     11160
Next log sequence to archive   11163
Current log sequence           11163

This display tells you all the necessary information regarding the archived redo log
settings for the current instance:
■   The database is currently operating in ARCHIVELOG mode.
■   Automatic archiving is enabled.
■   The archived redo log destination is D:\oracle\oradata\IDDB2\archive.
■   The oldest filled redo log group has a sequence number of 11160.
■   The next filled redo log group to archive has a sequence number of 11163.
■   The current redo log file has a sequence number of 11163.

See Also: SQL*Plus User's Guide and Reference for more
information on the ARCHIVE LOG LIST command

8
Managing Tablespaces
This chapter describes the various aspects of tablespace management, and contains the
following topics:
■   Guidelines for Managing Tablespaces
■   Creating Tablespaces
■   Specifying Nonstandard Block Sizes for Tablespaces
■   Controlling the Writing of Redo Records
■   Altering Tablespace Availability
■   Using Read-Only Tablespaces
■   Renaming Tablespaces
■   Dropping Tablespaces
■   Managing the SYSAUX Tablespace
■   Diagnosing and Repairing Locally Managed Tablespace Problems
■   Migrating the SYSTEM Tablespace to a Locally Managed Tablespace
■   Transporting Tablespaces Between Databases
■   Viewing Tablespace Information

See Also: Chapter 11, "Using Oracle-Managed Files" for
information about creating datafiles and tempfiles that are both
created and managed by the Oracle Database server

Guidelines for Managing Tablespaces
Before working with tablespaces of an Oracle Database, familiarize yourself with the
guidelines provided in the following sections:
■   Using Multiple Tablespaces
■   Assigning Tablespace Quotas to Users

See Also: Oracle Database Concepts for a complete discussion of
database structure, space management, tablespaces, and datafiles

Using Multiple Tablespaces
Using multiple tablespaces allows you more flexibility in performing database
operations. When a database has multiple tablespaces, you can:


■   Separate user data from data dictionary data to reduce I/O contention.
■   Separate data of one application from the data of another to prevent multiple
    applications from being affected if a tablespace must be taken offline.
■   Store the datafiles of different tablespaces on different disk drives to reduce I/O
    contention.
■   Take individual tablespaces offline while others remain online, providing better
    overall availability.
■   Optimize tablespace use by reserving a tablespace for a particular type of
    database use, such as high update activity, read-only activity, or temporary
    segment storage.
■   Back up individual tablespaces.

Some operating systems set a limit on the number of files that can be open
simultaneously. Such limits can affect the number of tablespaces that can be
simultaneously online. To avoid exceeding your operating system limit, plan your
tablespaces efficiently. Create only enough tablespaces to fulfill your needs, and create
these tablespaces with as few files as possible. If you need to increase the size of a
tablespace, add one or two large datafiles, or create datafiles with autoextension
enabled, rather than creating many small datafiles.
Review your data in light of these factors and decide how many tablespaces you need
for your database design.

Assigning Tablespace Quotas to Users
Grant to users who will be creating tables, clusters, materialized views, indexes, and
other objects the privilege to create the object and a quota (space allowance or limit) in
the tablespace intended to hold the object segment.
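For example, the following statements (the user and tablespace names are illustrative)
grant the privilege to create tables and assign a 100M quota in the tablespace that will
hold the table segments:

GRANT CREATE TABLE TO hr_user;
ALTER USER hr_user QUOTA 100M ON users;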
See Also: Oracle Database Security Guide for information about
creating users and assigning tablespace quotas.

Creating Tablespaces
Before you can create a tablespace, you must create a database to contain it. The
primary tablespace in any database is the SYSTEM tablespace, which contains
information basic to the functioning of the database server, such as the data dictionary
and the system rollback segment. The SYSTEM tablespace is the first tablespace created
at database creation. It is managed as any other tablespace, but requires a higher level
of privilege and is restricted in some ways. For example, you cannot rename or drop
the SYSTEM tablespace or take it offline.
The SYSAUX tablespace, which acts as an auxiliary tablespace to the SYSTEM
tablespace, is also always created when you create a database. It contains information
about and the schemas used by various Oracle products and features, so that those
products do not require their own tablespaces. As is the case for the SYSTEM tablespace,
management of the SYSAUX tablespace requires a higher level of security and you
cannot rename or drop it. The management of the SYSAUX tablespace is discussed
separately in "Managing the SYSAUX Tablespace" on page 8-20.
The steps for creating tablespaces vary by operating system, but the first step is always
to use your operating system to create a directory structure in which your datafiles
will be allocated. On most operating systems, you specify the size and fully specified
filenames of datafiles when you create a new tablespace or alter an existing tablespace
by adding datafiles. Whether you are creating a new tablespace or modifying an
8-2 Oracle Database Administrator’s Guide

Creating Tablespaces

existing one, the database automatically allocates and formats the datafiles as
specified.
To create a new tablespace, use the SQL statement CREATE TABLESPACE or CREATE
TEMPORARY TABLESPACE. You must have the CREATE TABLESPACE system
privilege to create a tablespace. Later, you can use the ALTER TABLESPACE or ALTER
DATABASE statements to alter the tablespace. You must have the ALTER TABLESPACE
or ALTER DATABASE system privilege, correspondingly.
You can also use the CREATE UNDO TABLESPACE statement to create a special type of
tablespace called an undo tablespace, which is specifically designed to contain undo
records. These are records generated by the database that are used to roll back, or
undo, changes to the database for recovery, read consistency, or as requested by a
ROLLBACK statement. Creating and managing undo tablespaces is the subject of
Chapter 10, "Managing the Undo Tablespace".
The creation and maintenance of permanent and temporary tablespaces are discussed
in the following sections:
■   Locally Managed Tablespaces
■   Bigfile Tablespaces
■   Temporary Tablespaces
■   Multiple Temporary Tablespaces: Using Tablespace Groups

See Also:
■   Chapter 2, "Creating an Oracle Database" and your Oracle
    Database installation documentation for your operating system
    for information about tablespaces that are created at database
    creation
■   Oracle Database SQL Reference for more information about the
    syntax and semantics of the CREATE TABLESPACE, CREATE
    TEMPORARY TABLESPACE, ALTER TABLESPACE, and ALTER
    DATABASE statements.
■   "Specifying Database Block Sizes" on page 2-23 for information
    about initialization parameters necessary to create tablespaces
    with nonstandard block sizes

Locally Managed Tablespaces
Locally managed tablespaces track all extent information in the tablespace itself by
using bitmaps, resulting in the following benefits:
■   Fast, concurrent space operations. Space allocations and deallocations modify
    locally managed resources (bitmaps stored in header files).
■   Enhanced performance
■   Readable standby databases are allowed, because locally managed temporary
    tablespaces do not generate any undo or redo.
■   Space allocation is simplified, because when the AUTOALLOCATE clause is
    specified, the database automatically selects the appropriate extent size.
■   User reliance on the data dictionary is reduced, because the necessary information
    is stored in file headers and bitmap blocks.
■   Coalescing free extents is unnecessary for locally managed tablespaces.


All tablespaces, including the SYSTEM tablespace, can be locally managed.
The DBMS_SPACE_ADMIN package provides maintenance procedures for locally
managed tablespaces.
See Also:
■   "Creating a Locally Managed SYSTEM Tablespace" on
    page 2-11, "Migrating the SYSTEM Tablespace to a Locally
    Managed Tablespace" on page 8-24, and "Diagnosing and
    Repairing Locally Managed Tablespace Problems" on page 8-22
■   "Bigfile Tablespaces" on page 8-6 for information about creating
    another type of locally managed tablespace that contains only a
    single datafile or tempfile.
■   Oracle Database PL/SQL Packages and Types Reference for
    information on the DBMS_SPACE_ADMIN package

Creating a Locally Managed Tablespace
Create a locally managed tablespace by specifying LOCAL in the EXTENT
MANAGEMENT clause of the CREATE TABLESPACE statement. This is the default for
new permanent tablespaces, but you must specify the EXTENT MANAGEMENT LOCAL
clause if you want to specify either the AUTOALLOCATE clause or the UNIFORM clause.
You can have the database manage extents for you automatically with the
AUTOALLOCATE clause (the default), or you can specify that the tablespace is managed
with uniform extents of a specific size (UNIFORM).
If you expect the tablespace to contain objects of varying sizes requiring many extents
with different extent sizes, then AUTOALLOCATE is the best choice. AUTOALLOCATE is
also a good choice if it is not important for you to have a lot of control over space
allocation and deallocation, because it simplifies tablespace management. Some space
may be wasted with this setting, but the benefit of having Oracle Database manage
your space most likely outweighs this drawback.
If you want exact control over unused space, and you can predict exactly the space to
be allocated for an object or objects and the number and size of extents, then UNIFORM
is a good choice. This setting ensures that you will never have unusable space in your
tablespace.
When you do not explicitly specify the type of extent management, Oracle Database
determines extent management as follows:
■   If the CREATE TABLESPACE statement omits the DEFAULT storage clause, then
    the database creates a locally managed autoallocated tablespace.
■   If the CREATE TABLESPACE statement includes a DEFAULT storage clause, then
    the database considers the following:
    –   If you specified the MINIMUM EXTENT clause, the database evaluates whether
        the values of MINIMUM EXTENT, INITIAL, and NEXT are equal and the value
        of PCTINCREASE is 0. If so, the database creates a locally managed uniform
        tablespace with extent size = INITIAL. If the MINIMUM EXTENT, INITIAL,
        and NEXT parameters are not equal, or if PCTINCREASE is not 0, the database
        ignores any extent storage parameters you may specify and creates a locally
        managed, autoallocated tablespace.
    –   If you did not specify the MINIMUM EXTENT clause, the database evaluates
        only whether the storage values of INITIAL and NEXT are equal and
        PCTINCREASE is 0. If so, the tablespace is locally managed and uniform.
        Otherwise, the tablespace is locally managed and autoallocated.
The following statement creates a locally managed tablespace named lmtbsb and
specifies AUTOALLOCATE:
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

AUTOALLOCATE causes the tablespace to be system managed with a minimum extent
size of 64K.
The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is
managed with extents of uniform size. You can specify that size in the SIZE clause of
UNIFORM. If you omit SIZE, then the default size is 1M.
The following example creates a tablespace with uniform 128K extents. (In a database
with 2K blocks, each extent would be equivalent to 64 database blocks). Each 128K
extent is represented by a bit in the extent bitmap for this file.
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

You cannot specify the DEFAULT storage clause, MINIMUM EXTENT, or TEMPORARY
when you explicitly specify EXTENT MANAGEMENT LOCAL. If you want to create a
temporary locally managed tablespace, use the CREATE TEMPORARY TABLESPACE
statement.
Note: When you allocate a datafile for a locally managed
tablespace, you should allow space for metadata used for space
management (the extent bitmap or space header segment), which
are part of user space. For example, if you specify the UNIFORM
clause in the extent management clause but you omit the SIZE
parameter, then the default extent size is 1MB. In that case, the size
specified for the datafile must be larger (at least one block plus
space for the bitmap) than 1MB.

Specifying Segment Space Management in Locally Managed Tablespaces
In a locally managed tablespace, there are two methods that Oracle Database can use
to manage segment space: automatic and manual. Manual segment space management
uses linked lists called "freelists" to manage free space in the segment, while automatic
segment space management uses bitmaps. Automatic segment space management is
the more efficient method, and is the default for all new permanent, locally managed
tablespaces.
Automatic segment space management delivers better space utilization than manual
segment space management. It is also self-tuning, in that it scales with increasing
number of users or instances. In an Oracle Real Application Clusters environment,
automatic segment space management allows for a dynamic affinity of space to
instances. In addition, for many standard workloads, application performance with
automatic segment space management is better than the performance of a well-tuned
application using manual segment space management.
Although automatic segment space management is the default for all new permanent,
locally managed tablespaces, you can explicitly enable it with the SEGMENT SPACE
MANAGEMENT AUTO clause.


The following statement creates tablespace lmtbsb with automatic segment space
management:
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The SEGMENT SPACE MANAGEMENT MANUAL clause disables automatic segment
space management.
The segment space management that you specify at tablespace creation time applies to
all segments subsequently created in the tablespace. You cannot change the segment
space management mode of a tablespace.

Notes:
■   If you set extent management to LOCAL UNIFORM, then you
    must ensure that each extent contains at least 5 database blocks.
■   If you set extent management to LOCAL AUTOALLOCATE, and if
    the database block size is 16K or greater, then Oracle manages
    segment space by creating extents with a minimum size of 5
    blocks rounded up to 64K.

Locally managed tablespaces using automatic segment space management can be
created as single-file or bigfile tablespaces, as described in "Bigfile Tablespaces" on
page 8-6.

Altering a Locally Managed Tablespace
You cannot alter a locally managed tablespace to a locally managed temporary
tablespace, nor can you change its method of segment space management. Coalescing
free extents is unnecessary for locally managed tablespaces. However, you can use the
ALTER TABLESPACE statement on locally managed tablespaces for some operations,
including the following:
■   Adding a datafile. For example:
    ALTER TABLESPACE lmtbsb
    ADD DATAFILE '/u02/oracle/data/lmtbsb02.dbf' SIZE 1M;
■   Altering tablespace availability (ONLINE/OFFLINE). See "Altering Tablespace
    Availability" on page 8-13.
■   Making a tablespace read-only or read/write. See "Using Read-Only Tablespaces"
    on page 8-15.
■   Renaming a datafile, or enabling or disabling the autoextension of the size of a
    datafile in the tablespace. See Chapter 9, "Managing Datafiles and Tempfiles".

Bigfile Tablespaces
A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks)
datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles,
but the files cannot be as large. The benefits of bigfile tablespaces are the following:
■   A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile
    tablespace with 32K blocks can contain a 128 terabyte datafile. The maximum
    number of datafiles in an Oracle Database is limited (usually to 64K files).
    Therefore, bigfile tablespaces can significantly enhance the storage capacity of an
    Oracle Database.
■   Bigfile tablespaces can reduce the number of datafiles needed for a database. An
    additional benefit is that the DB_FILES initialization parameter and
    MAXDATAFILES parameter of the CREATE DATABASE and CREATE
    CONTROLFILE statements can be adjusted to reduce the amount of SGA space
    required for datafile information and the size of the control file.
■   Bigfile tablespaces simplify database management by providing datafile
    transparency. SQL syntax for the ALTER TABLESPACE statement lets you perform
    operations on tablespaces, rather than the underlying individual datafiles.

Bigfile tablespaces are supported only for locally managed tablespaces with automatic
segment space management, with three exceptions: locally managed undo tablespaces,
temporary tablespaces, and the SYSTEM tablespace.
Notes:
■   Bigfile tablespaces are intended to be used with Automatic
    Storage Management (ASM) or other logical volume managers
    that support striping or RAID, and dynamically extensible
    logical volumes.
■   Avoid creating bigfile tablespaces on a system that does not
    support striping because of negative implications for parallel
    query execution and RMAN backup parallelization.
■   Using bigfile tablespaces on platforms that do not support large
    file sizes is not recommended and can limit tablespace capacity.
    Refer to your operating system specific documentation for
    information about maximum supported file sizes.

Creating a Bigfile Tablespace
To create a bigfile tablespace, specify the BIGFILE keyword of the CREATE
TABLESPACE statement (CREATE BIGFILE TABLESPACE ...). Oracle Database
automatically creates a locally managed tablespace with automatic segment space
management. You can, but need not, specify EXTENT MANAGEMENT LOCAL and
SEGMENT SPACE MANAGEMENT AUTO in this statement. However, the database returns
an error if you specify EXTENT MANAGEMENT DICTIONARY or SEGMENT SPACE
MANAGEMENT MANUAL. The remaining syntax of the statement is the same as for the
CREATE TABLESPACE statement, but you can only specify one datafile. For example:
CREATE BIGFILE TABLESPACE bigtbs
DATAFILE '/u02/oracle/data/bigtbs01.dbf' SIZE 50G
...

You can specify SIZE in kilobytes (K), megabytes (M), gigabytes (G), or terabytes (T).
If the default tablespace type was set to BIGFILE at database creation, you need not
specify the keyword BIGFILE in the CREATE TABLESPACE statement. A bigfile
tablespace is created by default.
If the default tablespace type was set to BIGFILE at database creation, but you want to
create a traditional (smallfile) tablespace, then specify a CREATE SMALLFILE
TABLESPACE statement to override the default tablespace type for the tablespace that
you are creating.
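For example, the following statement (the names and sizes are illustrative) creates a
smallfile tablespace in a database whose default tablespace type is BIGFILE:

CREATE SMALLFILE TABLESPACE smalltbs
DATAFILE '/u02/oracle/data/smalltbs01.dbf' SIZE 50M;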


See Also: "Supporting Bigfile Tablespaces During Database
Creation" on page 2-16

Altering a Bigfile Tablespace
Two clauses of the ALTER TABLESPACE statement support datafile transparency
when you are using bigfile tablespaces:
■   RESIZE: The RESIZE clause lets you resize the single datafile in a bigfile
    tablespace to an absolute size, without referring to the datafile. For example:
    ALTER TABLESPACE bigtbs RESIZE 80G;
■   AUTOEXTEND (used outside of the ADD DATAFILE clause): With a bigfile
    tablespace, you can use the AUTOEXTEND clause outside of the ADD DATAFILE
    clause. For example:
    ALTER TABLESPACE bigtbs AUTOEXTEND ON NEXT 20G;

An error is raised if you specify an ADD DATAFILE clause for a bigfile tablespace.

Identifying a Bigfile Tablespace
The following views contain a BIGFILE column that identifies a tablespace as a bigfile
tablespace:
■   DBA_TABLESPACES
■   USER_TABLESPACES
■   V$TABLESPACE

You can also identify a bigfile tablespace by the relative file number of its single
datafile. That number is 1024 on most platforms, but 4096 on OS/390.

Temporary Tablespaces
A temporary tablespace contains transient data that persists only for the duration of
the session. Temporary tablespaces can improve the concurrency of multiple sort
operations, reduce their overhead, and avoid Oracle Database space management
operations. A temporary tablespace can be assigned to users with the CREATE USER
or ALTER USER statement and can be shared by multiple users.
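For example, the following statement (the user and tablespace names are illustrative)
assigns a temporary tablespace to an existing user:

ALTER USER scott TEMPORARY TABLESPACE temp;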
Within a temporary tablespace, all sort operations for a given instance and tablespace
share a single sort segment. Sort segments exist for every instance that performs sort
operations within a given tablespace. The sort segment is created by the first statement
that uses a temporary tablespace for sorting, after startup, and is released only at
shutdown. An extent cannot be shared by multiple transactions.
You can view the allocation and deallocation of space in a temporary tablespace sort
segment using the V$SORT_SEGMENT view. The V$TEMPSEG_USAGE view identifies
the current sort users in those segments.
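For example, a query such as the following (showing an abbreviated column list)
summarizes sort segment usage by tablespace:

SELECT TABLESPACE_NAME, CURRENT_USERS, TOTAL_BLOCKS, USED_BLOCKS, FREE_BLOCKS
FROM V$SORT_SEGMENT;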
You cannot explicitly create objects in a temporary tablespace.

Note: The exception to the preceding statement is a temporary
table. When you create a temporary table, its rows are stored in
your default temporary tablespace. See "Creating a Temporary
Table" on page 15-6 for more information.


See Also:
■   Oracle Database Security Guide for information about creating
    users and assigning temporary tablespaces
■   Oracle Database Reference for more information about the
    V$SORT_SEGMENT and V$TEMPSEG_USAGE views
■   Oracle Database Performance Tuning Guide for a discussion on
    tuning sorts

Creating a Locally Managed Temporary Tablespace
Because space management is much simpler and more efficient in locally managed
tablespaces, they are ideally suited for temporary tablespaces. Locally managed
temporary tablespaces use tempfiles, which do not modify data outside of the
temporary tablespace or generate any redo for temporary tablespace data. Because of
this, they enable you to perform on-disk sorting operations in a read-only or standby
database.
You also use different views for viewing information about tempfiles than you would
for datafiles. The V$TEMPFILE and DBA_TEMP_FILES views are analogous to the
V$DATAFILE and DBA_DATA_FILES views.
To create a locally managed temporary tablespace, you use the CREATE TEMPORARY
TABLESPACE statement, which requires that you have the CREATE TABLESPACE
system privilege.
The following statement creates a temporary tablespace in which each extent is 16M.
Each 16M extent (which is the equivalent of 8192 blocks when the standard block size
is 2K) is represented by a bit in the bitmap for the file.
CREATE TEMPORARY TABLESPACE lmtemp TEMPFILE '/u02/oracle/data/lmtemp01.dbf'
SIZE 20M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

The extent management clause is optional for temporary tablespaces because all
temporary tablespaces are created with locally managed extents of a uniform size. The
Oracle Database default for SIZE is 1M. But if you want to specify another value for
SIZE, you can do so as shown in the preceding statement.
The AUTOALLOCATE clause is not allowed for temporary tablespaces.
Note: On some operating systems, the database does not allocate
space for the tempfile until the tempfile blocks are actually
accessed. This delay in space allocation results in faster creation
and resizing of tempfiles, but it requires that sufficient disk space is
available when the tempfiles are later used. Please refer to your
operating system documentation to determine whether the
database allocates tempfile space in this way on your system.

Creating a Bigfile Temporary Tablespace
Just as for regular tablespaces, you can create single-file (bigfile) temporary
tablespaces. Use the CREATE BIGFILE TEMPORARY TABLESPACE statement to
create a single-tempfile tablespace. See the sections "Creating a Bigfile Tablespace" on
page 8-7 and "Altering a Bigfile Tablespace" on page 8-8 for information about bigfile
tablespaces, but consider that you are creating temporary tablespaces that use
tempfiles instead of datafiles.


Altering a Locally Managed Temporary Tablespace

Note: You cannot use the ALTER TABLESPACE statement, with
the TEMPORARY keyword, to change a locally managed permanent
tablespace into a locally managed temporary tablespace. You must
use the CREATE TEMPORARY TABLESPACE statement to create a
locally managed temporary tablespace.

Except for adding a tempfile, taking a tempfile offline, or bringing a tempfile online, as
illustrated in the following examples, you cannot use the ALTER TABLESPACE
statement for a locally managed temporary tablespace.
ALTER TABLESPACE lmtemp
ADD TEMPFILE '/u02/oracle/data/lmtemp02.dbf' SIZE 18M REUSE;
ALTER TABLESPACE lmtemp TEMPFILE OFFLINE;
ALTER TABLESPACE lmtemp TEMPFILE ONLINE;

Note: You cannot take a temporary tablespace offline. Instead, you
take its tempfile offline. The view V$TEMPFILE displays online status
for a tempfile.

However, the ALTER DATABASE statement can be used to alter tempfiles.
The following statements take offline and bring online tempfiles. They behave
identically to the last two ALTER TABLESPACE statements in the previous example.
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' OFFLINE;
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' ONLINE;

The following statement resizes a temporary file:
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

The following statement drops a temporary file and deletes the operating system file:
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP
INCLUDING DATAFILES;

The tablespace to which this tempfile belonged remains. A message is written to the
alert log for the tempfile that was deleted. If an operating system error prevents the
deletion of the file, the statement still succeeds, but a message describing the error is
written to the alert log.
It is also possible to use the ALTER DATABASE statement to enable or disable the
automatic extension of an existing tempfile, and to rename (RENAME FILE) a tempfile.
See Oracle Database SQL Reference for the required syntax.
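For example, the following statement (the file name and sizes are illustrative) enables
automatic extension of a tempfile:

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf'
AUTOEXTEND ON NEXT 10M MAXSIZE 200M;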
Note: To rename a tempfile, you take the tempfile offline, use
operating system commands to rename or relocate the tempfile, and
then use the ALTER DATABASE RENAME FILE command to update the
database controlfiles.


Multiple Temporary Tablespaces: Using Tablespace Groups
A tablespace group enables a user to consume temporary space from multiple
tablespaces. A tablespace group has the following characteristics:
■   It contains at least one tablespace. There is no explicit limit on the maximum
    number of tablespaces that are contained in a group.
■   It shares the namespace of tablespaces, so its name cannot be the same as any
    tablespace.
■   You can specify a tablespace group name wherever a tablespace name would
    appear when you assign a default temporary tablespace for the database or a
    temporary tablespace for a user.

You do not explicitly create a tablespace group. Rather, it is created implicitly when
you assign the first temporary tablespace to the group. The group is deleted when the
last temporary tablespace it contains is removed from it.
Using a tablespace group, rather than a single temporary tablespace, can alleviate
problems caused where one tablespace is inadequate to hold the results of a sort,
particularly on a table that has many partitions. A tablespace group enables parallel
execution servers in a single parallel operation to use multiple temporary tablespaces.
The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member
tablespaces.
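For example, the following query lists each group and its member tablespaces:

SELECT GROUP_NAME, TABLESPACE_NAME
FROM DBA_TABLESPACE_GROUPS
ORDER BY GROUP_NAME;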
See Also: Oracle Database Security Guide for more information
about assigning a temporary tablespace or tablespace group to a
user

Creating a Tablespace Group
You create a tablespace group implicitly when you include the TABLESPACE GROUP
clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement
and the specified tablespace group does not currently exist.
For example, if neither group1 nor group2 exists, then the following statements
create those groups, each of which has only the specified tablespace as a member:
CREATE TEMPORARY TABLESPACE lmtemp2 TEMPFILE '/u02/oracle/data/lmtemp201.dbf'
SIZE 50M
TABLESPACE GROUP group1;
ALTER TABLESPACE lmtemp TABLESPACE GROUP group2;

Changing Members of a Tablespace Group
You can add a tablespace to an existing tablespace group by specifying the existing
group name in the TABLESPACE GROUP clause of the CREATE TEMPORARY
TABLESPACE or ALTER TABLESPACE statement.
The following statement adds a tablespace to an existing group. It creates and adds
tablespace lmtemp3 to group1, so that group1 contains tablespaces lmtemp2 and
lmtemp3.
CREATE TEMPORARY TABLESPACE lmtemp3 TEMPFILE '/u02/oracle/data/lmtemp301.dbf'
SIZE 25M
TABLESPACE GROUP group1;

The following statement also adds a tablespace to an existing group, but in this case
because tablespace lmtemp2 already belongs to group1, it is in effect moved from
group1 to group2:

ALTER TABLESPACE lmtemp2 TABLESPACE GROUP group2;

Now group2 contains both lmtemp and lmtemp2, while group1 consists of only
lmtemp3.
You can remove a tablespace from a group as shown in the following statement:
ALTER TABLESPACE lmtemp3 TABLESPACE GROUP '';

Tablespace lmtemp3 no longer belongs to any group. Further, since there are no longer
any members of group1, this results in the implicit deletion of group1.

Assigning a Tablespace Group as the Default Temporary Tablespace
Use the ALTER DATABASE...DEFAULT TEMPORARY TABLESPACE statement to
assign a tablespace group as the default temporary tablespace for the database. For
example:
ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;

Any user who has not explicitly been assigned a temporary tablespace will now use
tablespaces lmtemp and lmtemp2.
If a tablespace group is specified as the default temporary tablespace, you cannot drop
any of its member tablespaces. You must first remove the tablespace from the
tablespace group. Likewise, you cannot drop a single temporary tablespace as long as
it is the default temporary tablespace.

Specifying Nonstandard Block Sizes for Tablespaces
You can create tablespaces with block sizes different from the standard database block
size, which is specified by the DB_BLOCK_SIZE initialization parameter. This feature
lets you transport tablespaces with unlike block sizes between databases.
Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a
tablespace with a block size different from the database standard block size. In order
for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE
and at least one DB_nK_CACHE_SIZE initialization parameter. Further, the integer
you specify in the BLOCKSIZE clause must correspond with the setting of one
DB_nK_CACHE_SIZE parameter. Although redundant, specifying a
BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE
initialization parameter, is allowed.
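For example, for a statement specifying BLOCKSIZE 8K to succeed in a database
whose standard block size is 4K, initialization parameter settings such as the following
(the values are illustrative) must already be in effect:

DB_CACHE_SIZE = 128M
DB_8K_CACHE_SIZE = 32M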
The following statement creates tablespace lmtbsb, but specifies a block size that
differs from the standard database block size (as specified by the DB_BLOCK_SIZE
initialization parameter):
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
BLOCKSIZE 8K;

See Also:
■   "Specifying Database Block Sizes" on page 2-23
■   "Setting the Buffer Cache Initialization Parameters" on
    page 2-30 for information about the DB_CACHE_SIZE and
    DB_nK_CACHE_SIZE parameter settings
■   "Transporting Tablespaces Between Databases" on page 8-25


Controlling the Writing of Redo Records
For some database operations, you can control whether the database generates redo
records. Without redo, no media recovery is possible. However, suppressing redo
generation can improve performance, and may be appropriate for easily recoverable
operations. An example of such an operation is a CREATE TABLE...AS SELECT
statement, which can be repeated in case of database or instance failure.
Specify the NOLOGGING clause in the CREATE TABLESPACE statement if you wish to
suppress redo when these operations are performed for objects within the tablespace.
If you do not include this clause, or if you specify LOGGING instead, then the database
generates redo when changes are made to objects in the tablespace. Redo is never
generated for temporary segments or in temporary tablespaces, regardless of the
logging attribute.
The logging attribute specified at the tablespace level is the default attribute for objects
created within the tablespace. You can override this default logging attribute by
specifying LOGGING or NOLOGGING at the schema object level--for example, in a
CREATE TABLE statement.
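For example, the following statements (the names are illustrative; reports_master is a
hypothetical existing table) create a NOLOGGING tablespace and then override that
default for one table created within it:

CREATE TABLESPACE reports DATAFILE '/u02/oracle/data/reports01.dbf' SIZE 50M
NOLOGGING;

CREATE TABLE report_copy TABLESPACE reports LOGGING
AS SELECT * FROM reports_master;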
If you have a standby database, NOLOGGING mode causes problems with the
availability and accuracy of the standby database. To overcome this problem, you can
specify FORCE LOGGING mode. When you include the FORCE LOGGING clause in the
CREATE TABLESPACE statement, you force the generation of redo records for all
operations that make changes to objects in a tablespace. This overrides any
specification made at the object level.
If you transport a tablespace that is in FORCE LOGGING mode to another database, the
new tablespace will not maintain the FORCE LOGGING mode.
See Also:
■   Oracle Database SQL Reference for information about operations
    that can be done in NOLOGGING mode
■   "Specifying FORCE LOGGING Mode" on page 2-18 for more
    information about FORCE LOGGING mode and for information
    about the effects of the FORCE LOGGING clause used with the
    CREATE DATABASE statement

Altering Tablespace Availability
You can take an online tablespace offline so that it is temporarily unavailable for
general use. The rest of the database remains open and available for users to access
data. Conversely, you can bring an offline tablespace online to make the schema
objects within the tablespace available to database users. The database must be open to
alter the availability of a tablespace.
To alter the availability of a tablespace, use the ALTER TABLESPACE statement. You
must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
See Also: "Altering Datafile Availability" on page 9-6 for
information about altering the availability of individual datafiles
within a tablespace

Taking Tablespaces Offline
You may want to take a tablespace offline for any of the following reasons:


■   To make a portion of the database unavailable while allowing normal access to the
    remainder of the database
■   To perform an offline tablespace backup (even though a tablespace can be backed
    up while online and in use)
■   To make an application and its group of tables temporarily unavailable while
    updating or maintaining the application
■   To rename or relocate tablespace datafiles. See "Renaming and Relocating
    Datafiles" on page 9-8 for details.

When a tablespace is taken offline, the database takes all the associated files offline.
You cannot take the following tablespaces offline:
■   SYSTEM
■   The undo tablespace
■   Temporary tablespaces

Before taking a tablespace offline, consider altering the tablespace allocation of any
users who have been assigned the tablespace as a default tablespace. Doing so is
advisable because those users will not be able to access objects in the tablespace while
it is offline.
You can specify any of the following parameters as part of the ALTER
TABLESPACE...OFFLINE statement:
Clause       Description
NORMAL       A tablespace can be taken offline normally if no error conditions
             exist for any of the datafiles of the tablespace. No datafile in the
             tablespace can be currently offline as the result of a write error.
             When you specify OFFLINE NORMAL, the database takes a
             checkpoint for all datafiles of the tablespace as it takes them
             offline. NORMAL is the default.
TEMPORARY    A tablespace can be taken offline temporarily, even if there are
             error conditions for one or more files of the tablespace. When
             you specify OFFLINE TEMPORARY, the database takes offline
             the datafiles that are not already offline, checkpointing them as
             it does so.
             If no files are offline, but you use the temporary clause, media
             recovery is not required to bring the tablespace back online.
             However, if one or more files of the tablespace are offline
             because of write errors, and you take the tablespace offline
             temporarily, the tablespace requires recovery before you can
             bring it back online.
IMMEDIATE    A tablespace can be taken offline immediately, without the
             database taking a checkpoint on any of the datafiles. When you
             specify OFFLINE IMMEDIATE, media recovery for the
             tablespace is required before the tablespace can be brought
             online. You cannot take a tablespace offline immediately if the
             database is running in NOARCHIVELOG mode.


Caution: If you must take a tablespace offline, use the NORMAL
clause (the default) if possible. This setting guarantees that the
tablespace will not require recovery to come back online, even if
after incomplete recovery you reset the redo log sequence using an
ALTER DATABASE OPEN RESETLOGS statement.

Specify TEMPORARY only when you cannot take the tablespace offline normally. In this
case, only the files taken offline because of errors need to be recovered before the
tablespace can be brought online. Specify IMMEDIATE only after trying both the
normal and temporary settings.
The following example takes the users tablespace offline normally:
ALTER TABLESPACE users OFFLINE NORMAL;
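The corresponding statements for the other settings use the same syntax; for
example (sketches against the same users tablespace):
ALTER TABLESPACE users OFFLINE TEMPORARY;
ALTER TABLESPACE users OFFLINE IMMEDIATE;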

Bringing Tablespaces Online
You can bring any tablespace in an Oracle Database online whenever the database is
open. A tablespace is normally online so that the data contained within it is available
to database users.
If a tablespace to be brought online was not taken offline "cleanly" (that is, using the
NORMAL clause of the ALTER TABLESPACE OFFLINE statement), you must first
perform media recovery on the tablespace before bringing it online. Otherwise, the
database returns an error and the tablespace remains offline.
See Also: Depending upon your archiving strategy, refer to one of
the following books for information about performing media
recovery:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide
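As a sketch, assuming the users tablespace was taken offline with the IMMEDIATE
clause and the database runs in ARCHIVELOG mode, recovery from SQL*Plus might
look like this:
RECOVER TABLESPACE users;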

The following statement brings the users tablespace online:
ALTER TABLESPACE users ONLINE;

Using Read-Only Tablespaces
Making a tablespace read-only prevents write operations on the datafiles in the
tablespace. The primary purpose of read-only tablespaces is to eliminate the need to
perform backup and recovery of large, static portions of a database. Read-only
tablespaces also provide a way to protect historical data so that users cannot
modify it. Making a tablespace read-only prevents updates on all tables in the
tablespace, regardless of a user's update privilege level.
Note: Making a tablespace read-only cannot in itself be used to
satisfy archiving or data publishing requirements, because the
tablespace can only be brought online in the database in which it
was created. However, you can meet such requirements by using
the transportable tablespace feature, as described in "Transporting
Tablespaces Between Databases" on page 8-25.


You can drop items, such as tables or indexes, from a read-only tablespace, but you
cannot create or alter objects in a read-only tablespace. You can execute statements that
update the file description in the data dictionary, such as ALTER TABLE...ADD or
ALTER TABLE...MODIFY, but you will not be able to use the new description
until the tablespace is made read/write.
Read-only tablespaces can be transported to other databases. And, because read-only
tablespaces can never be updated, they can reside on CD-ROM or WORM (Write
Once-Read Many) devices.
The following topics are discussed in this section:
■ Making a Tablespace Read-Only
■ Making a Read-Only Tablespace Writable
■ Creating a Read-Only Tablespace on a WORM Device
■ Delaying the Opening of Datafiles in Read-Only Tablespaces

See Also: "Transporting Tablespaces Between Databases" on
page 8-25

Making a Tablespace Read-Only
All tablespaces are initially created as read/write. Use the READ ONLY clause in the
ALTER TABLESPACE statement to change a tablespace to read-only. You must have
the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
Before you can make a tablespace read-only, the following conditions must be met:
■ The tablespace must be online. This is necessary to ensure that there is no undo
information that needs to be applied to the tablespace.
■ The tablespace cannot be the active undo tablespace or SYSTEM tablespace.
■ The tablespace must not currently be involved in an online backup, because the
end of a backup updates the header file of all datafiles in the tablespace.

For better performance while accessing data in a read-only tablespace, you can issue a
query that accesses all of the blocks of the tables in the tablespace just before making it
read-only. A simple query, such as SELECT COUNT (*), executed against each table
ensures that the data blocks in the tablespace can be subsequently accessed most
efficiently. This eliminates the need for the database to check the status of the
transactions that most recently modified the blocks.
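For example, a minimal sketch, assuming the tablespace contains two tables named
flights and crews (hypothetical names):
SELECT COUNT(*) FROM flights;
SELECT COUNT(*) FROM crews;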
The following statement makes the flights tablespace read-only:
ALTER TABLESPACE flights READ ONLY;

You can issue the ALTER TABLESPACE...READ ONLY statement while the database
is processing transactions. After the statement is issued, the tablespace is put into a
transitional read-only state. No transactions are allowed to make further changes
(using DML statements) to the tablespace. If a transaction attempts further changes, it
is terminated and rolled back. However, transactions that already made changes and
that attempt no further changes are allowed to commit or roll back.
When there are transactions waiting to commit, the ALTER TABLESPACE...READ
ONLY statement does not return immediately. It waits for all transactions started before
you issued the ALTER TABLESPACE statement to either commit or roll back.


Note: This transitional read-only state only occurs if the value of
the initialization parameter COMPATIBLE is 8.1.0 or greater. If this
parameter is set to a value less than 8.1.0, the ALTER TABLESPACE
...READ ONLY statement fails if any active transactions exist.

If you find it is taking a long time for the ALTER TABLESPACE statement to complete,
you can identify the transactions that are preventing the read-only state from taking
effect. You can then notify the owners of those transactions and decide whether to
terminate the transactions, if necessary.
The following example identifies the transaction entry for the ALTER
TABLESPACE...READ ONLY statement and notes its session address (saddr):
SELECT SQL_TEXT, SADDR
FROM V$SQLAREA,V$SESSION
WHERE V$SQLAREA.ADDRESS = V$SESSION.SQL_ADDRESS
AND SQL_TEXT LIKE 'alter tablespace%';

SQL_TEXT                                 SADDR
---------------------------------------- --------
alter tablespace tbs1 read only          80034AF0

The start SCN of each active transaction is stored in the V$TRANSACTION view.
Displaying this view sorted by ascending start SCN lists the transactions in execution
order. From the preceding example, you already know the session address of the
transaction entry for the read-only statement, and you can now locate it in the
V$TRANSACTION view. All transactions with a smaller start SCN, which indicates an
earlier execution, can potentially hold up the quiesce and subsequent read-only state
of the tablespace.
SELECT SES_ADDR, START_SCNB
FROM V$TRANSACTION
ORDER BY START_SCNB;

SES_ADDR START_SCNB
-------- ----------
800352A0       3621  --> waiting on this txn
80035A50       3623  --> waiting on this txn
80034AF0       3628  --> this is the ALTER TABLESPACE statement
80037910       3629  --> don't care about this txn

After making the tablespace read-only, it is advisable to back it up immediately. As
long as the tablespace remains read-only, no further backups of the tablespace are
necessary, because no changes can be made to it.
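As a sketch, an immediate backup of the now read-only tablespace could be taken
with RMAN (assuming a configured default channel):
BACKUP TABLESPACE flights;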
See Also: Depending upon your backup and recovery strategy,
refer to one of the following books for information about backing
up and recovering a database with read-only datafiles:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

Making a Read-Only Tablespace Writable
Use the READ WRITE keywords in the ALTER TABLESPACE statement to change a
tablespace to allow write operations. You must have the ALTER TABLESPACE or
MANAGE TABLESPACE system privilege.


A prerequisite to making the tablespace read/write is that all of the datafiles in the
tablespace, as well as the tablespace itself, must be online. Use the
DATAFILE...ONLINE clause of the ALTER DATABASE statement to bring a datafile
online. The V$DATAFILE view lists the current status of datafiles.
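For example, a sketch for checking datafile status and bringing an offline datafile
online (the file name shown is hypothetical):
SELECT NAME, STATUS FROM V$DATAFILE;
ALTER DATABASE DATAFILE '/u02/oracle/oradata/db1/flights01.dbf' ONLINE;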
The following statement makes the flights tablespace writable:
ALTER TABLESPACE flights READ WRITE;

Making a read-only tablespace writable updates the control file entry for the datafiles,
so that you can use the read-only version of the datafiles as a starting point for
recovery.

Creating a Read-Only Tablespace on a WORM Device
Follow these steps to create a read-only tablespace on a CD-ROM or WORM (Write
Once-Read Many) device.
1. Create a writable tablespace on another device. Create the objects that belong in
   the tablespace and insert your data.
2. Alter the tablespace to make it read-only.
3. Copy the datafiles of the tablespace onto the WORM device. Use operating system
   commands to copy the files.
4. Take the tablespace offline.
5. Rename the datafiles to coincide with the names of the datafiles you copied onto
   your WORM device. Use ALTER TABLESPACE with the RENAME DATAFILE
   clause. Renaming the datafiles changes their names in the control file.
6. Bring the tablespace back online.

Delaying the Opening of Datafiles in Read-Only Tablespaces
When substantial portions of a very large database are stored in read-only tablespaces
that are located on slow-access devices or hierarchical storage, you should consider
setting the READ_ONLY_OPEN_DELAYED initialization parameter to TRUE. This speeds
certain operations, primarily opening the database, by causing datafiles in read-only
tablespaces to be accessed for the first time only when an attempt is made to read data
stored within them.
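A minimal sketch, assuming the instance was started with a server parameter file;
the parameter is static, so the change takes effect at the next instance startup:
ALTER SYSTEM SET READ_ONLY_OPEN_DELAYED=TRUE SCOPE=SPFILE;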
Setting READ_ONLY_OPEN_DELAYED=TRUE has the following side effects:
■ A missing or bad read-only file is not detected at open time. It is only discovered
when there is an attempt to access it.
■ ALTER SYSTEM CHECK DATAFILES does not check read-only files.
■ ALTER TABLESPACE...ONLINE and ALTER DATABASE DATAFILE...ONLINE
do not check read-only files. They are checked only upon the first access.
■ V$RECOVER_FILE, V$BACKUP, and V$DATAFILE_HEADER do not access
read-only files. Read-only files are indicated in the results list with the error
"DELAYED OPEN", with zeroes for the values of other columns.
■ V$DATAFILE does not access read-only files. Read-only files have a size of "0"
listed.
■ V$RECOVER_LOG does not access read-only files. Logs they could need for
recovery are not added to the list.
■ ALTER DATABASE NOARCHIVELOG does not access read-only files. It proceeds
even if there is a read-only file that requires recovery.
Notes:
■ RECOVER DATABASE and ALTER DATABASE OPEN
RESETLOGS continue to access all read-only datafiles
regardless of the parameter value. If you want to avoid
accessing read-only files for these operations, those files should
be taken offline.
■ If a backup control file is used, the read-only status of some
files may be inaccurate. This can cause some of these operations
to return unexpected results. Care should be taken in this
situation.

Renaming Tablespaces
Using the RENAME TO clause of the ALTER TABLESPACE statement, you can rename a
permanent or temporary tablespace. For example, the following statement renames the
users tablespace:
ALTER TABLESPACE users RENAME TO usersts;

When you rename a tablespace, the database updates all references to the tablespace
name in the data dictionary, control file, and (online) datafile headers. The database
does not change the tablespace ID, so if this tablespace were, for example, the default
tablespace for a user, then the renamed tablespace would show as the default
tablespace for the user in the DBA_USERS view.
The following affect the operation of this statement:
■ The COMPATIBLE parameter must be set to 10.0 or higher.
■ If the tablespace being renamed is the SYSTEM tablespace or the SYSAUX
tablespace, then it is not renamed and an error is raised.
■ If any datafile in the tablespace is offline, or if the tablespace is offline, then the
tablespace is not renamed and an error is raised.
■ If the tablespace is read only, then datafile headers are not updated. This should
not be regarded as corruption; instead, it causes a message to be written to the
alert log indicating that datafile headers have not been renamed. The data
dictionary and control file are updated.
■ If the tablespace is the default temporary tablespace, then the corresponding entry
in the database properties table is updated and the DATABASE_PROPERTIES view
shows the new name.
■ If the tablespace is an undo tablespace and if the following conditions are met,
then the tablespace name is changed to the new tablespace name in the server
parameter file (SPFILE).
– The server parameter file was used to start up the database.
– The tablespace name is specified as the UNDO_TABLESPACE for any instance.
If a traditional initialization parameter file (PFILE) is being used, then a message is
written to the alert log stating that the initialization parameter file must be
manually changed.


Dropping Tablespaces
You can drop a tablespace and its contents (the segments contained in the tablespace)
from the database if the tablespace and its contents are no longer required. You must
have the DROP TABLESPACE system privilege to drop a tablespace.
Caution: Once a tablespace has been dropped, the data in the
tablespace is not recoverable. Therefore, make sure that all data
contained in a tablespace to be dropped will not be required in the
future. Also, immediately before and after dropping a tablespace
from a database, back up the database completely. This is strongly
recommended so that you can recover the database if you mistakenly
drop a tablespace, or if the database experiences a problem in the
future after the tablespace has been dropped.

When you drop a tablespace, the file pointers in the control file of the associated
database are removed. You can optionally direct Oracle Database to delete the
operating system files (datafiles) that constituted the dropped tablespace. If you do not
direct the database to delete the datafiles at the same time that it deletes the tablespace,
you must later use the appropriate commands of your operating system to delete
them.
You cannot drop a tablespace that contains any active segments. For example, if a table
in the tablespace is currently being used or the tablespace contains undo data needed
to roll back uncommitted transactions, you cannot drop the tablespace. The tablespace
can be online or offline, but it is best to take the tablespace offline before dropping it.
To drop a tablespace, use the DROP TABLESPACE statement. The following statement
drops the users tablespace, including the segments in the tablespace:
DROP TABLESPACE users INCLUDING CONTENTS;

If the tablespace is empty (does not contain any tables, views, or other structures), you
do not need to specify the INCLUDING CONTENTS clause. Use the CASCADE
CONSTRAINTS clause to drop all referential integrity constraints from tables outside
the tablespace that refer to primary and unique keys of tables inside the tablespace.
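For example, the following statement drops the users tablespace together with any
referential integrity constraints that reference it from other tablespaces:
DROP TABLESPACE users INCLUDING CONTENTS CASCADE CONSTRAINTS;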
To delete the datafiles associated with a tablespace at the same time that the tablespace
is dropped, use the INCLUDING CONTENTS AND DATAFILES clause. The following
statement drops the users tablespace and its associated datafiles:
DROP TABLESPACE users INCLUDING CONTENTS AND DATAFILES;

A message is written to the alert log for each datafile that is deleted. If an operating
system error prevents the deletion of a file, the DROP TABLESPACE statement still
succeeds, but a message describing the error is written to the alert log.
See Also: "Dropping Datafiles" on page 9-11

Managing the SYSAUX Tablespace
The SYSAUX tablespace was installed as an auxiliary tablespace to the SYSTEM
tablespace when you created your database. Some database components that formerly
created and used separate tablespaces now occupy the SYSAUX tablespace.
If the SYSAUX tablespace becomes unavailable, core database functionality will remain
operational. However, the database features that use the SYSAUX tablespace could fail
or function with limited capability.

Monitoring Occupants of the SYSAUX Tablespace
The list of registered occupants of the SYSAUX tablespace are discussed in "Creating
the SYSAUX Tablespace" on page 2-12. These components can use the SYSAUX
tablespace, and their installation provides the means of establishing their occupancy of
the SYSAUX tablespace.
You can monitor the occupants of the SYSAUX tablespace using the
V$SYSAUX_OCCUPANTS view. This view lists the following information about the
occupants of the SYSAUX tablespace:
■

Name of the occupant

■

Occupant description

■

Schema name

■

Move procedure

■

Current space usage

View information is maintained by the occupants.
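For example, a minimal sketch that lists occupants by space usage (column names as
documented in the Oracle Database Reference):
SELECT OCCUPANT_NAME, SCHEMA_NAME, MOVE_PROCEDURE, SPACE_USAGE_KBYTES
FROM V$SYSAUX_OCCUPANTS
ORDER BY SPACE_USAGE_KBYTES DESC;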
See Also: Oracle Database Reference for a detailed description of
the V$SYSAUX_OCCUPANTS view

Moving Occupants Out Of or Into the SYSAUX Tablespace
You will have an option at component install time to specify that you do not want the
component to reside in SYSAUX. Also, if you later decide that the component should
be relocated to a designated tablespace, you can use the move procedure for that
component, as specified in the V$SYSAUX_OCCUPANTS view, to perform the move.
For example, assume that you install Oracle Ultra Search into the default tablespace,
which is SYSAUX. Later you discover that Ultra Search is using up too much space. To
alleviate this space pressure on SYSAUX, you can call a PL/SQL move procedure
specified in the V$SYSAUX_OCCUPANTS view to relocate Ultra Search to another
tablespace.
The move procedure also lets you move a component from another tablespace into the
SYSAUX tablespace.
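As a hedged sketch, first look up the move procedure for the component, then
execute it with the destination tablespace as its argument (the procedure name and
destination tablespace below are hypothetical; always use the value returned by the
view):
SELECT OCCUPANT_NAME, MOVE_PROCEDURE
FROM V$SYSAUX_OCCUPANTS
WHERE OCCUPANT_NAME = 'ULTRASEARCH';

-- Hypothetical call; substitute the MOVE_PROCEDURE value returned above
EXECUTE WKSYS.MOVE_WK('USEARCH_TBS');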

Controlling the Size of the SYSAUX Tablespace
The SYSAUX tablespace is occupied by a number of database components (see
Table 2–2), and its total size is governed by the space consumed by those components.
The space consumed by the components, in turn, depends on which features or
functionality are being used and on the nature of the database workload.
The largest portion of the SYSAUX tablespace is occupied by the Automatic Workload
Repository (AWR). The space consumed by the AWR is determined by several factors,
including the number of active sessions in the system at any given time, the snapshot
interval, and the historical data retention period. A typical system with an average of
30 concurrent active sessions may require approximately 200 to 300 MB of space for its
AWR data. You can control the size of the AWR by changing the snapshot interval and
historical data retention period. For more information on managing the AWR snapshot
interval and retention period, please refer to Oracle Database Performance Tuning Guide.
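For instance, a sketch that sets the snapshot interval to 60 minutes and the retention
period to 15 days (the values are illustrative only):
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 60,         -- minutes between snapshots
    retention => 15 * 1440); -- retention period in minutes (15 days)
END;
/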
Another major occupant of the SYSAUX tablespace is the embedded Enterprise
Manager (EM) repository. This repository is used by Oracle Enterprise Manager
Database Control to store its metadata. The size of this repository depends on database
activity and on configuration-related information stored in the repository.

Other database components in the SYSAUX tablespace will grow in size only if their
associated features (for example, Oracle Ultra Search, Oracle Text, Oracle Streams) are
in use. If the features are not used, then these components do not have any significant
effect on the size of the SYSAUX tablespace.

Diagnosing and Repairing Locally Managed Tablespace Problems
Oracle Database includes the DBMS_SPACE_ADMIN package, which is a collection of
aids for diagnosing and repairing problems in locally managed tablespaces.
DBMS_SPACE_ADMIN Package Procedures
The following table lists the DBMS_SPACE_ADMIN package procedures. See Oracle
Database PL/SQL Packages and Types Reference for details on each procedure.

Procedure                        Description
-------------------------------  -----------------------------------------------
ASSM_SEGMENT_VERIFY              Verifies the integrity of segments created in
                                 tablespaces that have automatic segment space
                                 management enabled. Outputs a dump file named
                                 sid_ora_process_id.trc to the location specified
                                 in the USER_DUMP_DEST initialization parameter.
                                 Use SEGMENT_VERIFY for tablespaces with manual
                                 segment space management.
ASSM_TABLESPACE_VERIFY           Verifies the integrity of tablespaces that have
                                 automatic segment space management enabled.
                                 Outputs a dump file named
                                 sid_ora_process_id.trc to the location specified
                                 in the USER_DUMP_DEST initialization parameter.
                                 Use TABLESPACE_VERIFY for tablespaces with
                                 manual segment space management.
SEGMENT_CORRUPT                  Marks the segment corrupt or valid so that
                                 appropriate error recovery can be done.
SEGMENT_DROP_CORRUPT             Drops a segment currently marked corrupt
                                 (without reclaiming space).
SEGMENT_DUMP                     Dumps the segment header and bitmap blocks of a
                                 specific segment to a dump file named
                                 sid_ora_process_id.trc in the location specified
                                 in the USER_DUMP_DEST initialization parameter.
                                 Provides an option to select a slightly
                                 abbreviated dump, which includes the segment
                                 header and bitmap block summaries, without the
                                 percent-free states of each block.
SEGMENT_VERIFY                   Verifies the consistency of the extent map of
                                 the segment.
TABLESPACE_FIX_BITMAPS           Marks the appropriate DBA range (extent) as
                                 free or used in the bitmap.
TABLESPACE_FIX_SEGMENT_STATES    Fixes the state of the segments in a tablespace
                                 in which migration was stopped.
TABLESPACE_MIGRATE_FROM_LOCAL    Migrates a locally managed tablespace to a
                                 dictionary-managed tablespace.
TABLESPACE_MIGRATE_TO_LOCAL      Migrates a dictionary-managed tablespace to a
                                 locally managed tablespace.
TABLESPACE_REBUILD_BITMAPS       Rebuilds the appropriate bitmaps.
TABLESPACE_REBUILD_QUOTAS        Rebuilds quotas for a specific tablespace.
TABLESPACE_RELOCATE_BITMAPS      Relocates the bitmaps to the specified
                                 destination.
TABLESPACE_VERIFY                Verifies that the bitmaps and extent maps for
                                 the segments in the tablespace are synchronized.


The following scenarios describe typical situations in which you can use the
DBMS_SPACE_ADMIN package to diagnose and resolve problems.

Note: Some of these procedures can result in lost and
unrecoverable data if not used properly. You should work with
Oracle Support Services if you have doubts about these procedures.

See Also: Oracle Database PL/SQL Packages and Types Reference for
details about the DBMS_SPACE_ADMIN package

Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap)
The TABLESPACE_VERIFY procedure discovers that a segment has allocated blocks
that are marked free in the bitmap, but no overlap between segments is reported.
In this scenario, perform the following tasks (a sketch of these calls follows the list):
1. Call the SEGMENT_DUMP procedure to dump the ranges that the administrator
   allocated to the segment.
2. For each range, call the TABLESPACE_FIX_BITMAPS procedure with the
   TABLESPACE_EXTENT_MAKE_USED option to mark the space as used.
3. Call TABLESPACE_REBUILD_QUOTAS to rebuild quotas.
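A minimal sketch with hypothetical values (tablespace USERS, segment header at
relative file 4, block 33, and an affected range of blocks 34 through 83):
EXECUTE DBMS_SPACE_ADMIN.SEGMENT_DUMP('USERS', 4, 33);
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS('USERS', 4, 34, 83, -
  DBMS_SPACE_ADMIN.TABLESPACE_EXTENT_MAKE_USED);
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('USERS');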

Scenario 2: Dropping a Corrupted Segment
You cannot drop a segment because the bitmap has segment blocks marked "free". The
system has automatically marked the segment corrupted.
In this scenario, perform the following tasks:
1. Call the SEGMENT_VERIFY procedure with the
   SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps are reported, then
   proceed with steps 2 through 5.
2. Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the
   segment.
3. For each range, call TABLESPACE_FIX_BITMAPS with the
   TABLESPACE_EXTENT_MAKE_FREE option to mark the space as free.
4. Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry.
5. Call TABLESPACE_REBUILD_QUOTAS to rebuild quotas.

Scenario 3: Fixing Bitmap Where Overlap is Reported
The TABLESPACE_VERIFY procedure reports some overlapping. Because of previous
internal errors, some of the real data must be sacrificed.
After choosing the object to be sacrificed, in this case say, table t1, perform the
following tasks:
1. Make a list of all objects that t1 overlaps.
2. Drop table t1. If necessary, follow up by calling the SEGMENT_DROP_CORRUPT
   procedure.
3. Call the SEGMENT_VERIFY procedure on all objects that t1 overlapped. If
   necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark appropriate
   bitmap blocks as used.
4. Rerun the TABLESPACE_VERIFY procedure to verify that the problem is resolved.

Scenario 4: Correcting Media Corruption of Bitmap Blocks
A set of bitmap blocks has media corruption.
In this scenario, perform the following tasks:
1. Call the TABLESPACE_REBUILD_BITMAPS procedure, either on all bitmap blocks,
   or on a single block if only one is corrupt.
2. Call the TABLESPACE_REBUILD_QUOTAS procedure to rebuild quotas.
3. Call the TABLESPACE_VERIFY procedure to verify that the bitmaps are consistent.

Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace
Use the TABLESPACE_MIGRATE_TO_LOCAL procedure to migrate a
dictionary-managed tablespace to a locally managed tablespace. This operation is
done online, but space management operations are blocked until the migration has
been completed. This means that you can read or modify data while the migration is in
progress, but if you are loading a large amount of data that requires the allocation of
additional extents, then the operation may be blocked.
Assume that the database block size is 2K and the existing extent sizes in tablespace
tbs_1 are 10, 50, and 10,000 blocks (used, used, and free). The MINIMUM EXTENT
value is 20K (10 blocks). Allow the system to choose the bitmap allocation unit. The
value of 10 blocks is chosen, because it is the greatest common divisor and does
not exceed MINIMUM EXTENT.
The statement to convert tbs_1 to a locally managed tablespace is as follows:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('tbs_1');

If you choose to specify an allocation unit size, it must be a factor of the unit size
calculated by the system.
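For example, a sketch that specifies the allocation unit explicitly (here 10 blocks,
matching the system-calculated value above; the optional second parameter is
documented in Oracle Database PL/SQL Packages and Types Reference):
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('tbs_1', 10);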

Migrating the SYSTEM Tablespace to a Locally Managed Tablespace
Use the DBMS_SPACE_ADMIN package to migrate the SYSTEM tablespace from
dictionary-managed to locally managed. The following statement performs the
migration:
SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');

Before performing the migration, the following conditions must be met:
■ The database has a default temporary tablespace that is not SYSTEM.
■ There are no rollback segments in the dictionary-managed tablespace.
■ There is at least one online rollback segment in a locally managed tablespace, or if
using automatic undo management, an undo tablespace is online.
■ All tablespaces other than the tablespace containing the undo space (that is, the
tablespace containing the rollback segment or the undo tablespace) are in
read-only mode.
■ The system is in restricted mode.
■ There is a cold backup of the database.
All of these conditions, except for the cold backup, are enforced by the
TABLESPACE_MIGRATE_TO_LOCAL procedure.
Note: After the SYSTEM tablespace is migrated to locally
managed, any dictionary-managed tablespaces in the database
cannot be made read/write. If you want to be able to use the
dictionary-managed tablespaces in read/write mode, then Oracle
recommends that you first migrate these tablespaces to locally
managed before migrating the SYSTEM tablespace.

Transporting Tablespaces Between Databases
This section describes how to transport tablespaces between databases, and contains
the following topics:
■ Introduction to Transportable Tablespaces
■ About Transporting Tablespaces Across Platforms
■ Limitations on Transportable Tablespace Use
■ Compatibility Considerations for Transportable Tablespaces
■ Transporting Tablespaces Between Databases: A Procedure and Example
■ Using Transportable Tablespaces: Scenarios
Note: You must be using the Enterprise Edition of Oracle8i or
later to generate a transportable tablespace set. However, you can
use any edition of Oracle8i or later to import a transportable
tablespace set into an Oracle Database on the same platform. To
import a transportable tablespace set into an Oracle Database on a
different platform, both databases must have compatibility set to at
least 10.0. Please refer to "Compatibility Considerations for
Transportable Tablespaces" on page 8-28 for a discussion of
database compatibility for transporting tablespaces across release
levels.

Introduction to Transportable Tablespaces
You can use the Transportable Tablespaces feature to copy a set of tablespaces from
one Oracle Database to another.
Note: This method for transporting tablespaces requires that you
place the tablespaces to be transported in read-only mode until you
complete the transporting process. If this is undesirable, you can use
the Transportable Tablespaces from Backup feature, described in
Oracle Database Backup and Recovery Advanced User's Guide.

The tablespaces being transported can be either dictionary managed or locally
managed. Starting with Oracle9i, the transported tablespaces are not required to be of
the same block size as the target database standard block size.


Moving data using transportable tablespaces is much faster than performing either an
export/import or unload/load of the same data. This is because the datafiles
containing all of the actual data are just copied to the destination location, and you use
an export/import utility to transfer only the metadata of the tablespace objects to the
new database.
Note: The remainder of this chapter assumes that Data Pump is
the import/export utility used. However, the transportable
tablespaces feature supports both Data Pump and the original
import and export utilities, IMP and EXP, with one caveat: you
must use IMP and EXP for tablespaces containing XMLTypes. Refer
to Oracle Database Utilities for more information on these utilities
and to Oracle XML DB Developer's Guide for more information on
XMLTypes.

The transportable tablespace feature is useful in a number of scenarios, including:
■ Exporting and importing partitions in data warehousing tables
■ Publishing structured data on CDs
■ Copying multiple read-only versions of a tablespace on multiple databases
■ Archiving historical data
■ Performing tablespace point-in-time-recovery (TSPITR)

These scenarios are discussed in "Using Transportable Tablespaces: Scenarios" on
page 8-37.
There are two ways to transport a tablespace:
■ Manually, following the steps described in this section. This involves issuing
commands to SQL*Plus, RMAN, IMP/EXP, and Data Pump.
■ Using the Transport Tablespaces Wizard in Enterprise Manager
To run the Transport Tablespaces Wizard:
1. Log in to Enterprise Manager as a user that has the EXP_FULL_DATABASE
   role.
2. Click the Maintenance link to go to the Maintenance tab.
3. Under the heading Move Database Files, click Transport Tablespaces.

See Also: Oracle Database Data Warehousing Guide for information
about using transportable tablespaces in a data warehousing
environment

About Transporting Tablespaces Across Platforms
Starting with Oracle Database 10g, you can transport tablespaces across platforms.
This functionality can be used to:
■ Allow a database to be migrated from one platform to another
■ Provide an easier and more efficient means for content providers to publish
structured data and distribute it to customers running Oracle Database on
different platforms
■ Simplify the distribution of data from a data warehouse environment to data
marts, which are often running on smaller platforms
■ Enable the sharing of read-only tablespaces between Oracle Database installations
on different operating systems or platforms, assuming that your storage system is
accessible from those platforms and the platforms all have the same endianness, as
described in the sections that follow

Many, but not all, platforms are supported for cross-platform tablespace transport. You
can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are
supported, and to determine each platform's endian format (byte ordering). The
following query displays the platforms that support cross-platform tablespace
transport:
SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little

16 rows selected.

If the source platform and the target platform are of different endianness, then an
additional step must be done on either the source or target platform to convert the
tablespace being transported to the target format. If they are of the same endianness,
then no conversion is necessary and tablespaces can be transported as if they were on
the same platform.
Before a tablespace can be transported to a different platform, the datafile header must
identify the platform to which it belongs. In an Oracle Database with compatibility set
to 10.0.0 or later, you can accomplish this by making the datafile read/write at least
once.

Limitations on Transportable Tablespace Use
Be aware of the following limitations as you plan to transport tablespaces:
■ The source and target database must use the same character set and national
character set.
■ You cannot transport a tablespace to a target database in which a tablespace with
the same name already exists. However, you can rename either the tablespace to
be transported or the destination tablespace before the transport operation.


■ Objects with underlying objects (such as materialized views) or contained objects
(such as partitioned tables) are not transportable unless all of the underlying or
contained objects are in the tablespace set.
■ Beginning with Oracle Database 10g Release 2, you can transport tablespaces that
contain XMLTypes, but you must use the IMP and EXP utilities, not Data Pump.
When using EXP, ensure that the CONSTRAINTS and TRIGGERS parameters are set
to Y (the default).
The following query returns a list of tablespaces that contain XMLTypes:
SELECT DISTINCT p.tablespace_name
FROM dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
WHERE t.table_name = x.table_name
AND t.tablespace_name = p.tablespace_name
AND x.owner = u.username;

See Oracle XML DB Developer's Guide for information on XMLTypes.
Transporting tablespaces with XMLTypes has the following limitations:
■ The target database must have XML DB installed.
■ Schemas referenced by XMLType tables cannot be the XML DB standard
schemas.
■ Schemas referenced by XMLType tables cannot have cyclic dependencies.
■ Any row level security on XMLType tables is lost upon import. This is because
the access control lists (ACLs) that implement the row level security cannot be
imported, as the target database may not have the same set of users as the
source database.
■ If the schema for a transported XMLType table is not present in the target
database, it is imported and registered. If the schema already exists in the
target database, an error is returned unless the ignore=y option is set.

Additional limitations include the following:
Advanced Queues Transportable tablespaces do not support 8.0-compatible
advanced queues with multiple recipients.
SYSTEM Tablespace Objects You cannot transport the SYSTEM tablespace or objects
owned by the user SYS. Some examples of such objects are PL/SQL, Java classes,
callouts, views, synonyms, users, privileges, dimensions, directories, and sequences.
Opaque Types Types whose interpretation is application-specific and opaque to the
database (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not
converted as part of the cross-platform transport operation. Their actual structure is
known only to the application, so the application must address any endianness issues
after these types are moved to the new platform. Types and objects that use these
opaque types, either directly or indirectly, are also subject to this limitation.
Floating-Point Numbers BINARY_FLOAT and BINARY_DOUBLE types are
transportable using Data Pump but not the original export utility, EXP.

Compatibility Considerations for Transportable Tablespaces
When you create a transportable tablespace set, Oracle Database computes the lowest
compatibility level at which the target database must run. This is referred to as the
compatibility level of the transportable set. Beginning with Oracle Database 10g, a
tablespace can always be transported to a database with the same or higher
compatibility setting, whether the target database is on the same or a different
platform. The database signals an error if the compatibility level of the transportable
set is higher than the compatibility level of the target database.
The following table shows the minimum compatibility requirements of the source and
target tablespace in various scenarios. The source and target database need not have
the same compatibility setting.
Table 8–1  Minimum Compatibility Requirements

                                           Minimum Compatibility Setting
Transport Scenario                         Source Database  Target Database
-----------------------------------------  ---------------  ---------------
Databases on the same platform             8.0              8.0
Tablespace with different database block   9.0              9.0
size than the target database
Databases on different platforms           10.0             10.0

Transporting Tablespaces Between Databases: A Procedure and Example
The following steps summarize the process of transporting a tablespace. Details for
each step are provided in the subsequent example.
1. For cross-platform transport, check the endian format of both platforms by
   querying the V$TRANSPORTABLE_PLATFORM view.
   Ignore this step if you are transporting your tablespace set to the same platform.
2. Pick a self-contained set of tablespaces.
3. Generate a transportable tablespace set.
   A transportable tablespace set (or transportable set) consists of datafiles for the
   set of tablespaces being transported and an export file containing structural
   information (metadata) for the set of tablespaces. You use Data Pump or EXP to
   perform the export.

   Note: If any of the tablespaces contain XMLTypes, you must use EXP.

   If you are transporting the tablespace set to a platform with different endianness
   from the source platform, you must convert the tablespace set to the endianness of
   the target platform. You can perform a source-side conversion at this step in the
   procedure, or you can perform a target-side conversion as part of step 4.

   Note: This method of generating a transportable tablespace requires
   that you temporarily make the tablespace read-only. If this is
   undesirable, you can use the alternate method known as transportable
   tablespace from backup. See Oracle Database Backup and Recovery
   Advanced User's Guide for details.

4. Transport the tablespace set.
   Copy the datafiles and the export file to a place that is accessible to the target
   database.
   If you have transported the tablespace set to a platform with different endianness
   from the source platform, and you have not performed a source-side conversion to
   the endianness of the target platform, you should perform a target-side conversion
   now.
5. Import the tablespace set.
   Invoke the Data Pump utility or IMP to import the metadata for the set of
   tablespaces into the target database.

   Note: If any of the tablespaces contain XMLTypes, you must use IMP.

Example
The steps for transporting a tablespace are illustrated more fully in the example that
follows, where it is assumed the following datafiles and tablespaces exist:

Tablespace  Datafile
----------  ------------------------------------------
sales_1     /u01/oracle/oradata/salesdb/sales_101.dbf
sales_2     /u01/oracle/oradata/salesdb/sales_201.dbf

Step 1: Determine if Platforms are Supported and Endianness
This step is only necessary if you are transporting the tablespace set to a platform
different from the source platform.
If you are transporting the tablespace set to a platform different from the source
platform, then determine if cross-platform tablespace transport is supported for both
the source and target platforms, and determine the endianness of each platform. If
both platforms have the same endianness, no conversion is necessary. Otherwise you
must do a conversion of the tablespace set either at the source or target database.
If you are transporting sales_1 and sales_2 to a different platform, you can
execute the following query on each platform. If the query returns a row, the platform
supports cross-platform tablespace transport.
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

The following is the query result from the source platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Solaris[tm] OE (32-bit)   Big

The following is the result from the target platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Microsoft Windows NT      Little

You can see that the endian formats are different and thus a conversion is necessary for
transporting the tablespace set.

Step 2: Pick a Self-Contained Set of Tablespaces
There may be logical or physical dependencies between objects in the transportable set
and those outside of the set. You can only transport a set of tablespaces that is
self-contained. In this context "self-contained" means that there are no references from
inside the set of tablespaces pointing outside of the tablespaces. Some examples of
self-contained tablespace violations are:
■ An index inside the set of tablespaces is for a table outside of the set of tablespaces.

  Note: It is not a violation if a corresponding index for a table is
  outside of the set of tablespaces.

■ A partitioned table is partially contained in the set of tablespaces.
  The tablespace set you want to copy must contain either all partitions of a
  partitioned table, or none of the partitions of a partitioned table. If you want to
  transport a subset of a partitioned table, you must exchange the partitions into
  tables.
■ A referential integrity constraint points to a table across a set boundary.
  When transporting a set of tablespaces, you can choose to include referential
  integrity constraints. However, doing so can affect whether or not a set of
  tablespaces is self-contained. If you decide not to transport constraints, then the
  constraints are not considered as pointers.
■ A table inside the set of tablespaces contains a LOB column that points to LOBs
  outside the set of tablespaces.
■ An XML DB schema (*.xsd) that was registered by user A imports a global schema
  that was registered by user B, and the following is true: the default tablespace for
  user A is tablespace A, the default tablespace for user B is tablespace B, and only
  tablespace A is included in the set of tablespaces.

To determine whether a set of tablespaces is self-contained, you can invoke the
TRANSPORT_SET_CHECK procedure in the Oracle supplied package DBMS_TTS. You
must have been granted the EXECUTE_CATALOG_ROLE role (initially assigned to SYS)
to execute this procedure.
When you invoke the DBMS_TTS package, you specify the list of tablespaces in the
transportable set to be checked for self containment. You can optionally specify if
constraints must be included. For strict or full containment, you must additionally set
the TTS_FULL_CHECK parameter to TRUE.
The strict or full containment check is for cases that require capturing not only
references going outside the transportable set, but also those coming into the set.
Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects
must be fully contained or fully outside the transportable set.
For example, it is a violation to perform TSPITR on a tablespace containing a table t
but not its index i because the index and data will be inconsistent after the transport.
A full containment check ensures that there are no dependencies going outside or
coming into the transportable set. See the example for TSPITR in the Oracle Database
Backup and Recovery Advanced User's Guide.

Note: The default for transportable tablespaces is to check for self
containment rather than full containment.

The following statement can be used to determine whether tablespaces sales_1 and
sales_2 are self-contained, with referential integrity constraints taken into
consideration (indicated by TRUE):
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);


After invoking this PL/SQL package, you can see all violations by selecting from the
TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this
view is empty. The following example illustrates a case where there are two violations:
a foreign key constraint, dept_fk, across the tablespace set boundary, and a
partitioned table, jim.sales, that is partially contained in the tablespace set.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table
JIM.DEPT in tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set

These violations must be resolved before sales_1 and sales_2 are transportable. As
noted in the next step, one choice for bypassing the integrity constraint violation is to
not export the integrity constraints.
See Also:
■ Oracle Database PL/SQL Packages and Types Reference for more
information about the DBMS_TTS package
■ Oracle Database Backup and Recovery Advanced User's Guide for
information specific to using the DBMS_TTS package for
TSPITR

Step 3: Generate a Transportable Tablespace Set
Any privileged user can perform this step. However, you must have been assigned the
EXP_FULL_DATABASE role to perform a transportable tablespace export operation.
Note: This method of generating a transportable tablespace requires
that you temporarily make the tablespace read-only. If this is
undesirable, you can use the alternate method known as transportable
tablespace from backup. See Oracle Database Backup and Recovery
Advanced User's Guide for details.

After ensuring you have a self-contained set of tablespaces that you want to transport,
generate a transportable tablespace set by performing the following actions:
1. Make all tablespaces in the set you are copying read-only.
   SQL> ALTER TABLESPACE sales_1 READ ONLY;
   Tablespace altered.
   SQL> ALTER TABLESPACE sales_2 READ ONLY;
   Tablespace altered.
2. Invoke the Data Pump export utility on the host system and specify which
   tablespaces are in the transportable set.

   Note: If any of the tablespaces have XMLTypes, you must use EXP
   instead of Data Pump. Ensure that the CONSTRAINTS and TRIGGERS
   parameters are set to Y (the default).

   SQL> HOST
   $ EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
     TRANSPORT_TABLESPACES = sales_1,sales_2

You must always specify TRANSPORT_TABLESPACES, which determines the
mode of the export operation. In this example:
■ The DUMPFILE parameter specifies the name of the structural information
export file to be created, expdat.dmp.
■ The DIRECTORY parameter specifies the default directory object that points to
the operating system or Automatic Storage Management location of the dump
file. You must create the DIRECTORY object before invoking Data Pump, and
you must grant the READ and WRITE object privileges on the directory to
PUBLIC. See Oracle Database SQL Reference for information on the CREATE
DIRECTORY command.
■ Triggers and indexes are included in the export operation by default.

If you want to perform a transport tablespace operation with a strict containment
check, use the TRANSPORT_FULL_CHECK parameter, as shown in the following
example:
EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir
TRANSPORT_TABLESPACES=sales_1,sales_2 TRANSPORT_FULL_CHECK=Y

In this example, the Data Pump export utility verifies that there are no
dependencies between the objects inside the transportable set and objects outside
the transportable set. If the tablespace set being transported is not self-contained,
then the export fails and indicates that the transportable set is not self-contained.
You must then return to Step 1 to resolve all violations.
Note: The Data Pump utility is used to export only data
dictionary structural information (metadata) for the tablespaces. No
actual data is unloaded, so this operation goes relatively quickly
even for large tablespace sets.

3. When finished, exit back to SQL*Plus:
   $ EXIT

See Also: Oracle Database Utilities for information about using the
Data Pump utility

If sales_1 and sales_2 are being transported to a different platform, and the
endianness of the platforms is different, and if you want to convert before transporting
the tablespace set, then convert the datafiles composing the sales_1 and sales_2
tablespaces:
4. From SQL*Plus, return to the host system:
   SQL> HOST
5. The RMAN CONVERT command is used to do the conversion. Start RMAN and
   connect to the target database:
   $ RMAN TARGET /
   Recovery Manager: Release 10.1.0.0.0
   Copyright (c) 1995, 2003, Oracle Corporation.  All rights reserved.
   connected to target database: salesdb (DBID=3295731590)
6. Convert the datafiles into a temporary location on the source platform. In this
   example, assume that the temporary location, directory /temp, has already been
   created. The converted datafiles are assigned names by the system.
RMAN> CONVERT TABLESPACE sales_1,sales_2
2> TO PLATFORM 'Microsoft Windows NT'
3> FORMAT '/temp/%U';
Starting backup at 08-APR-03
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/salesdb/sales_101.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/salesdb/sales_201.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Finished backup at 08-APR-03

See Also: Oracle Database Backup and Recovery Reference for a
description of the RMAN CONVERT command

7. Exit Recovery Manager:
   RMAN> exit
   Recovery Manager complete.

Step 4: Transport the Tablespace Set
Transport both the datafiles and the export file of the tablespaces to a place that is
accessible to the target database.
If both the source and destination are file systems, you can use:
■ Any facility for copying flat files (for example, an operating system copy utility or
ftp)
■ The DBMS_FILE_TRANSFER package
■ RMAN
■ Any facility for publishing on CDs
If either the source or destination is an Automatic Storage Management (ASM) disk
group, you can use:
■ ftp to or from the /sys/asm virtual folder in the XML DB repository
See "Accessing Automatic Storage Management Files with the XML DB Virtual
Folder" on page 12-46 for more information.
■ The DBMS_FILE_TRANSFER package
■ RMAN
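As a sketch of the DBMS_FILE_TRANSFER option, the following copies one datafile
to a location visible to the target database (the directory objects SOURCE_DIR and
DEST_DIR are hypothetical and must be created beforehand):
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SOURCE_DIR',
    source_file_name             => 'sales_101.dbf',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'sales_101.dbf');
END;
/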


Caution: Exercise caution when using the UNIX dd utility to copy
raw-device files between databases. The dd utility can be used to
copy an entire source raw-device file, or it can be invoked with
options that instruct it to copy only a specific range of blocks from
the source raw-device file.

It is difficult to ascertain actual datafile size for a raw-device file
because of hidden control information that is stored as part of the
datafile. Thus, it is advisable when using the dd utility to specify
copying the entire source raw-device file contents.
If you are transporting the tablespace set to a platform with endianness that is
different from the source platform, and you have not yet converted the tablespace set,
you must do so now. This example assumes that you have completed the following
steps before the transport:
1.

Set the source tablespaces to be transported to be read-only.

2.

Use the export utility to create an export file (in our example, expdat.dmp).

Datafiles that are to be converted on the target platform can be moved to a temporary
location on the target platform. However, all datafiles, whether already converted or
not, must be moved to a designated location on the target database.
Now use RMAN to convert the necessary transported datafiles to the endian format of
the destination host format and deposit the results in /orahome/dbs, as shown in this
hypothetical example:
RMAN> CONVERT DATAFILE
2> '/hq/finance/work/tru/tbs_31.f',
3> '/hq/finance/work/tru/tbs_32.f',
4> '/hq/finance/work/tru/tbs_41.f'
5> TO PLATFORM="Solaris[tm] OE (32-bit)"
6> FROM PLATFORM="HP Tru64 UNIX"
7> DB_FILE_NAME_CONVERT=
8> "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
9> PARALLELISM=5;

You identify the datafiles by filename, not by tablespace name. Until the tablespace
metadata is imported, the local instance has no way of knowing the desired tablespace
names. The source and destination platforms are optional. RMAN determines the
source platform by examining the datafile, and the target platform defaults to the
platform of the host running the conversion.
See Also: "Copying Files Using the Database Server" on page 9-12
for information about using the DBMS_FILE_TRANSFER package
to copy the files that are being transported and their metadata


Step 5: Import the Tablespace Set

Note: If you are transporting a tablespace of a different block size
than the standard block size of the database receiving the
tablespace set, then you must first have a DB_nK_CACHE_SIZE
initialization parameter entry in the receiving database parameter
file.
For example, if you are transporting a tablespace with an 8K block
size into a database with a 4K standard block size, then you must
include a DB_8K_CACHE_SIZE initialization parameter entry in the
parameter file. If it is not already included in the parameter file, this
parameter can be set using the ALTER SYSTEM SET statement.
See Oracle Database Reference for information about specifying
values for the DB_nK_CACHE_SIZE initialization parameter.
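A minimal sketch for the 8K example above (the cache size value is illustrative only):
ALTER SYSTEM SET DB_8K_CACHE_SIZE = 16M;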
Any privileged user can perform this step. To import a tablespace set, perform the
following tasks:
1. Import the tablespace metadata using the Data Pump Import utility, impdp:

   Note: If any of the tablespaces contain XMLTypes, you must use IMP
   instead of Data Pump.

   IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
     TRANSPORT_DATAFILES=/salesdb/sales_101.dbf,/salesdb/sales_201.dbf
     REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams)

In this example we specify the following:
■ The DUMPFILE parameter specifies the exported file containing the metadata
for the tablespaces to be imported.
■ The DIRECTORY parameter specifies the directory object that identifies the
location of the dump file.
■ The TRANSPORT_DATAFILES parameter identifies all of the datafiles
containing the tablespaces to be imported.
■ The REMAP_SCHEMA parameter changes the ownership of database objects. If
you do not specify REMAP_SCHEMA, all database objects (such as tables and
indexes) are created in the same user schema as in the source database, and
those users must already exist in the target database. If they do not exist, then
the import utility returns an error. In this example, objects in the tablespace set
owned by dcranney in the source database will be owned by smith in the
target database after the tablespace set is imported. Similarly, objects owned
by jfee in the source database will be owned by williams in the target
database. In this case, the target database is not required to have users
dcranney and jfee, but must have users smith and williams.

After this statement executes successfully, all tablespaces in the set being copied
remain in read-only mode. Check the import logs to ensure that no error has
occurred.


When dealing with a large number of datafiles, specifying the list of datafile
names on the command line can be laborious, and can even exceed the command
line length limit. In this situation, you can use an import parameter file. For
example, you can invoke the Data Pump import utility as follows:
IMPDP system/password PARFILE='par.f'

where the parameter file, par.f contains the following:
DIRECTORY=dpump_dir
DUMPFILE=expdat.dmp
TRANSPORT_DATAFILES="'/db/sales_jan','/db/sales_feb'"
REMAP_SCHEMA=dcranney:smith
REMAP_SCHEMA=jfee:williams

See Also: Oracle Database Utilities for information about using the import utility

2.  If required, put the tablespaces into read/write mode as follows:
ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_2 READ WRITE;

Using Transportable Tablespaces: Scenarios
The following sections describe some uses for transportable tablespaces:
■   Transporting and Attaching Partitions for Data Warehousing
■   Publishing Structured Data on CDs
■   Mounting the Same Tablespace Read-Only on Multiple Databases
■   Archiving Historical Data Using Transportable Tablespaces
■   Using Transportable Tablespaces to Perform TSPITR

Transporting and Attaching Partitions for Data Warehousing
Typical enterprise data warehouses contain one or more large fact tables. These fact
tables can be partitioned by date, making the enterprise data warehouse a historical
database. You can build indexes to speed up star queries. Oracle recommends that you
build local indexes for such historically partitioned tables to avoid rebuilding global
indexes every time you drop the oldest partition from the historical database.
Suppose every month you would like to load one month of data into the data
warehouse. There is a large fact table in the data warehouse called sales, which has
the following columns:
CREATE TABLE sales (invoice_no NUMBER,
   sale_year  INT NOT NULL,
   sale_month INT NOT NULL,
   sale_day   INT NOT NULL)
PARTITION BY RANGE (sale_year, sale_month, sale_day)
  (partition jan98 VALUES LESS THAN (1998, 2, 1),
   partition feb98 VALUES LESS THAN (1998, 3, 1),
   partition mar98 VALUES LESS THAN (1998, 4, 1),
   partition apr98 VALUES LESS THAN (1998, 5, 1),
   partition may98 VALUES LESS THAN (1998, 6, 1),
   partition jun98 VALUES LESS THAN (1998, 7, 1));

You create a local non-prefixed index:


CREATE INDEX sales_index ON sales(invoice_no) LOCAL;

Initially, all partitions are empty, and are in the same default tablespace. Each month,
you want to create one partition and attach it to the partitioned sales table.
Suppose it is July 1998, and you would like to load the July sales data into the
partitioned table. In a staging database, you create a new tablespace, ts_jul. You also
create a table, jul_sales, in that tablespace with exactly the same column types as
the sales table. You can create the table jul_sales using the CREATE TABLE ... AS
SELECT statement. After creating and populating jul_sales, you can also create an
index, jul_sale_index, for the table, indexing the same column as the local index in
the sales table. After building the index, transport the tablespace ts_jul to the data
warehouse.
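A minimal sketch of the staging-side DDL, assuming a database link named
dw_link back to the warehouse (the link name is hypothetical; any method that
reproduces the column types of sales will do):

CREATE TABLE jul_sales
  TABLESPACE ts_jul
  AS SELECT * FROM sales@dw_link WHERE 1 = 0;  -- copies column types only

After populating jul_sales (for example, with SQL*Loader or INSERT statements),
create the matching index:

CREATE INDEX jul_sale_index ON jul_sales (invoice_no)
  TABLESPACE ts_jul;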
In the data warehouse, add a partition to the sales table for the July sales data. This
also creates another partition for the local non-prefixed index:
ALTER TABLE sales ADD PARTITION jul98 VALUES LESS THAN (1998, 8, 1);

Attach the transported table jul_sales to the table sales by exchanging it with the
new partition:
ALTER TABLE sales EXCHANGE PARTITION jul98 WITH TABLE jul_sales
INCLUDING INDEXES
WITHOUT VALIDATION;

This statement places the July sales data into the new partition jul98, attaching the
new data to the partitioned table. This statement also converts the index
jul_sale_index into a partition of the local index for the sales table. This
statement should return immediately, because it only operates on the structural
information and it simply switches database pointers. If you know that the data in the
new partition does not overlap with data in previous partitions, you are advised to
specify the WITHOUT VALIDATION clause. Otherwise, the statement goes through all
the new data in the new partition in an attempt to validate the range of that partition.
If all partitions of the sales table came from the same staging database (the staging
database is never destroyed), the exchange statement always succeeds. In general,
however, if data in a partitioned table comes from different databases, it is possible
that the exchange operation may fail. For example, if the jan98 partition of sales did
not come from the same staging database, the preceding exchange operation can fail,
returning the following error:
ORA-19728: data object number conflict between table JUL_SALES and partition JAN98
in table SALES

To resolve this conflict, move the offending partition by issuing the following
statement:
ALTER TABLE sales MOVE PARTITION jan98;

Then retry the exchange operation.
After the exchange succeeds, you can safely drop jul_sales and jul_sale_index
(both are now empty). Thus you have successfully loaded the July sales data into your
data warehouse.

Publishing Structured Data on CDs
Transportable tablespaces provide a way to publish structured data on CDs. A data
provider can load a tablespace with data to be published, generate the transportable
set, and copy the transportable set to a CD. This CD can then be distributed.

When customers receive this CD, they can add the CD contents to an existing database
without having to copy the datafiles from the CD to disk storage. For example,
suppose on a Windows NT machine D: drive is the CD drive. You can import a
transportable set with datafile catalog.f and export file expdat.dmp as follows:
IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
TRANSPORT_DATAFILES='D:\catalog.f'

You can remove the CD while the database is still up. Subsequent queries to the
tablespace return an error indicating that the database cannot open the datafiles on the
CD. However, operations to other parts of the database are not affected. Placing the
CD back into the drive makes the tablespace readable again.
Removing the CD is the same as removing the datafiles of a read-only tablespace. If
you shut down and restart the database, the database indicates that it cannot find the
removed datafile and does not open the database (unless you set the initialization
parameter READ_ONLY_OPEN_DELAYED to TRUE). When
READ_ONLY_OPEN_DELAYED is set to TRUE, the database reads the file only when
someone queries the transported tablespace. Thus, when transporting a tablespace
from a CD, you should always set the READ_ONLY_OPEN_DELAYED initialization
parameter to TRUE, unless the CD is permanently attached to the database.
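For example, because READ_ONLY_OPEN_DELAYED is a static parameter, a sketch of
setting it for the next restart (assuming a server parameter file is in use) is:

ALTER SYSTEM SET READ_ONLY_OPEN_DELAYED = TRUE SCOPE=SPFILE;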

Mounting the Same Tablespace Read-Only on Multiple Databases
You can use transportable tablespaces to mount a tablespace read-only on multiple
databases. In this way, separate databases can share the same data on disk instead of
duplicating data on separate disks. The tablespace datafiles must be accessible by all
databases. To avoid database corruption, the tablespace must remain read-only in all
the databases mounting the tablespace.
The following are two scenarios for mounting the same tablespace read-only on
multiple databases:
■   The tablespace originates in a database that is separate from the databases that will share the tablespace.
    You generate a transportable set in the source database, put the transportable set onto a disk that is accessible to all databases, and then import the metadata into each database on which you want to mount the tablespace.
■   The tablespace already belongs to one of the databases that will share the tablespace.
    It is assumed that the datafiles are already on a shared disk. In the database where the tablespace already exists, you make the tablespace read-only, generate the transportable set, and then import the tablespace into the other databases, leaving the datafiles in the same location on the shared disk.

You can make a disk accessible by multiple computers in several ways. You can use
either a cluster file system or raw disk. You can also use network file system (NFS), but
be aware that if a user queries the shared tablespace while NFS is down, the database
will hang until the NFS operation times out.
Later, you can drop the read-only tablespace in some of the databases. Doing so does
not modify the datafiles for the tablespace. Thus, the drop operation does not corrupt
the tablespace. Do not make the tablespace read/write unless only one database is
mounting the tablespace.


Archiving Historical Data Using Transportable Tablespaces
Since a transportable tablespace set is a self-contained set of files that can be imported
into any Oracle Database, you can archive old/historical data in an enterprise data
warehouse using the transportable tablespace procedures described in this chapter.
See Also: Oracle Database Data Warehousing Guide for more details

Using Transportable Tablespaces to Perform TSPITR
You can use transportable tablespaces to perform tablespace point-in-time recovery
(TSPITR).
See Also: Oracle Database Backup and Recovery Advanced User's
Guide for information about how to perform TSPITR using
transportable tablespaces

Moving Databases Across Platforms Using Transportable Tablespaces
You can use the transportable tablespace feature to migrate a database to a different
platform by creating a new database on the destination platform and performing a
transport of all the user tablespaces. See Oracle Database Backup and Recovery Advanced
User's Guide for more information.
You cannot transport the SYSTEM tablespace. Therefore, objects such as sequences,
PL/SQL packages, and other objects that depend on the SYSTEM tablespace are not
transported. You must either create these objects manually on the destination database,
or use Data Pump to transport the objects that are not moved by transportable
tablespace.

Viewing Tablespace Information
The following data dictionary and dynamic performance views provide useful
information about the tablespaces of a database.
View                                 Description
V$TABLESPACE                         Name and number of all tablespaces from the control file.
DBA_TABLESPACES, USER_TABLESPACES    Descriptions of all (or user accessible) tablespaces.
DBA_TABLESPACE_GROUPS                Displays the tablespace groups and the tablespaces that belong to them.
DBA_SEGMENTS, USER_SEGMENTS          Information about segments within all (or user accessible) tablespaces.
DBA_EXTENTS, USER_EXTENTS            Information about data extents within all (or user accessible) tablespaces.
DBA_FREE_SPACE, USER_FREE_SPACE      Information about free extents within all (or user accessible) tablespaces.
V$DATAFILE                           Information about all datafiles, including tablespace number of owning tablespace.
V$TEMPFILE                           Information about all tempfiles, including tablespace number of owning tablespace.
DBA_DATA_FILES                       Shows files (datafiles) belonging to tablespaces.
DBA_TEMP_FILES                       Shows files (tempfiles) belonging to temporary tablespaces.
V$TEMP_EXTENT_MAP                    Information for all extents in all locally managed temporary tablespaces.
V$TEMP_EXTENT_POOL                   For locally managed temporary tablespaces: the state of temporary space cached and used by each instance.
V$TEMP_SPACE_HEADER                  Shows space used/free for each tempfile.
DBA_USERS                            Default and temporary tablespaces for all users.
DBA_TS_QUOTAS                        Lists tablespace quotas for all users.
V$SORT_SEGMENT                       Information about every sort segment in a given instance. The view is only updated when the tablespace is of the TEMPORARY type.
V$TEMPSEG_USAGE                      Describes temporary (sort) segment usage by user for temporary or permanent tablespaces.

The following are just a few examples of using some of these views.
See Also: Oracle Database Reference for complete descriptions of these views

Example 1: Listing Tablespaces and Default Storage Parameters
To list the names and default storage parameters of all tablespaces in a database, use
the following query on the DBA_TABLESPACES view:
SELECT TABLESPACE_NAME "TABLESPACE",
INITIAL_EXTENT "INITIAL_EXT",
NEXT_EXTENT "NEXT_EXT",
MIN_EXTENTS "MIN_EXT",
MAX_EXTENTS "MAX_EXT",
PCT_INCREASE
FROM DBA_TABLESPACES;
TABLESPACE  INITIAL_EXT  NEXT_EXT  MIN_EXT  MAX_EXT  PCT_INCREASE
----------  -----------  --------  -------  -------  ------------
RBS             1048576   1048576        2       40             0
SYSTEM           106496    106496        1       99             1
TEMP             106496    106496        1       99             0
TESTTBS           57344     16384        2       10             1
USERS             57344     57344        1       99             1

Example 2: Listing the Datafiles and Associated Tablespaces of a Database
To list the names, sizes, and associated tablespaces of a database, enter the following
query on the DBA_DATA_FILES view:
SELECT FILE_NAME, BLOCKS, TABLESPACE_NAME
FROM DBA_DATA_FILES;
FILE_NAME                                BLOCKS  TABLESPACE_NAME
--------------------------------------   ------  ---------------
/U02/ORACLE/IDDB3/DBF/RBS01.DBF            1536  RBS
/U02/ORACLE/IDDB3/DBF/SYSTEM01.DBF         6586  SYSTEM
/U02/ORACLE/IDDB3/DBF/TEMP01.DBF           6400  TEMP
/U02/ORACLE/IDDB3/DBF/TESTTBS01.DBF        6400  TESTTBS
/U02/ORACLE/IDDB3/DBF/USERS01.DBF           384  USERS


Example 3: Displaying Statistics for Free Space (Extents) of Each Tablespace
To produce statistics about free extents and coalescing activity for each tablespace in
the database, enter the following query:
SELECT TABLESPACE_NAME "TABLESPACE", FILE_ID,
COUNT(*)
"PIECES",
MAX(blocks) "MAXIMUM",
MIN(blocks) "MINIMUM",
AVG(blocks) "AVERAGE",
SUM(blocks) "TOTAL"
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME, FILE_ID;
TABLESPACE  FILE_ID  PIECES  MAXIMUM  MINIMUM  AVERAGE  TOTAL
----------  -------  ------  -------  -------  -------  -----
RBS               2       1      955      955      955    955
SYSTEM            1       1      119      119      119    119
TEMP              4       1     6399     6399     6399   6399
TESTTBS           5       5     6364        3     1278   6390
USERS             3       1      363      363      363    363

PIECES shows the number of free space extents in the tablespace file, MAXIMUM and
MINIMUM show the largest and smallest contiguous area of space in database blocks,
AVERAGE shows the average size in blocks of a free space extent, and TOTAL shows the
amount of free space in each tablespace file in blocks. This query is useful when you
are going to create a new object or you know that a segment is about to extend, and
you want to make sure that there is enough space in the containing tablespace.


9  Managing Datafiles and Tempfiles
This chapter describes the various aspects of datafile and tempfile management, and
contains the following topics:
■   Guidelines for Managing Datafiles
■   Creating Datafiles and Adding Datafiles to a Tablespace
■   Changing Datafile Size
■   Altering Datafile Availability
■   Renaming and Relocating Datafiles
■   Dropping Datafiles
■   Verifying Data Blocks in Datafiles
■   Copying Files Using the Database Server
■   Mapping Files to Physical Devices
■   Viewing Datafile Information

See Also: Part III, "Automated File and Storage Management" for information
about creating datafiles and tempfiles that are both created and managed by
the Oracle Database server

Guidelines for Managing Datafiles
Datafiles are physical files of the operating system that store the data of all logical
structures in the database. They must be explicitly created for each tablespace.
Note: Tempfiles are a special class of datafiles that are associated only with
temporary tablespaces. Information in this chapter applies to both datafiles
and tempfiles except where differences are noted. Tempfiles are further
described in "Creating a Locally Managed Temporary Tablespace" on page 8-9.

Oracle Database assigns each datafile two associated file numbers, an absolute file
number and a relative file number, that are used to uniquely identify it. These
numbers are described in the following table:


Type of File Number   Description
Absolute              Uniquely identifies a datafile in the database. This file number can be used in many SQL statements that reference datafiles in place of using the file name. The absolute file number can be found in the FILE# column of the V$DATAFILE or V$TEMPFILE view, or in the FILE_ID column of the DBA_DATA_FILES or DBA_TEMP_FILES view.
Relative              Uniquely identifies a datafile within a tablespace. For small and medium size databases, relative file numbers usually have the same value as the absolute file number. However, when the number of datafiles in a database exceeds a threshold (typically 1023), the relative file number differs from the absolute file number. In a bigfile tablespace, the relative file number is always 1024 (4096 on the OS/390 platform).
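For example, to compare the two numbers side by side (V$DATAFILE exposes the
absolute number as FILE# and the relative number as RFILE#):

SELECT FILE#, RFILE#, NAME FROM V$DATAFILE;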

This section describes aspects of managing datafiles, and contains the following topics:
■   Determine the Number of Datafiles
■   Determine the Size of Datafiles
■   Place Datafiles Appropriately
■   Store Datafiles Separate from Redo Log Files

Determine the Number of Datafiles
At least one datafile is required for the SYSTEM and SYSAUX tablespaces of a database.
Your database should contain several other tablespaces with their associated datafiles
or tempfiles. The number of datafiles that you anticipate creating for your database
can affect the settings of initialization parameters and the specification of CREATE
DATABASE statement clauses.
Be aware that your operating system might impose limits on the number of datafiles
contained in your Oracle Database. Also consider that the number of datafiles, and
how and where they are allocated can affect the performance of your database.
Note: One means of controlling the number of datafiles in your database and
simplifying their management is to use bigfile tablespaces. Bigfile tablespaces
comprise a single, very large datafile and are especially useful in ultra large
databases and where a logical volume manager is used for managing operating
system files. Bigfile tablespaces are discussed in "Bigfile Tablespaces" on
page 8-6.

Consider the following guidelines when determining the number of datafiles for your
database.

Determine a Value for the DB_FILES Initialization Parameter
When starting an Oracle Database instance, the DB_FILES initialization parameter
indicates the amount of SGA space to reserve for datafile information and thus, the
maximum number of datafiles that can be created for the instance. This limit applies
for the life of the instance. You can change the value of DB_FILES (by changing the
initialization parameter setting), but the new value does not take effect until you shut
down and restart the instance.
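For example, a sketch of raising the limit when a server parameter file is in
use (the value 400 is illustrative):

ALTER SYSTEM SET DB_FILES = 400 SCOPE=SPFILE;
-- The new value takes effect only after the instance is restarted.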


When determining a value for DB_FILES, take the following into consideration:
■   If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit without first shutting down the database.
■   If the value of DB_FILES is too high, memory is unnecessarily consumed.

Consider Possible Limitations When Adding Datafiles to a Tablespace
You can add datafiles to traditional smallfile tablespaces, subject to the following
limitations:
■   Operating systems often impose a limit on the number of files a process can open simultaneously. More datafiles cannot be created when the operating system limit of open files is reached.
■   Operating systems impose limits on the number and size of datafiles.
■   The database imposes a maximum limit on the number of datafiles for any Oracle Database opened by any instance. This limit is operating system specific.
■   You cannot exceed the number of datafiles specified by the DB_FILES initialization parameter.
■   When you issue CREATE DATABASE or CREATE CONTROLFILE statements, the MAXDATAFILES parameter specifies an initial size of the datafile portion of the control file. However, if you attempt to add a new file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the control file will expand automatically so that the datafiles section can accommodate more files.

Consider the Performance Impact
The number of datafiles contained in a tablespace, and ultimately the database, can
have an impact upon performance.
Oracle Database allows more datafiles in the database than the operating system
defined limit. The database DBWn processes can open all online datafiles. Oracle
Database is capable of treating open file descriptors as a cache, automatically closing
files when the number of open file descriptors reaches the operating system-defined
limit. This can have a negative performance impact. When possible, adjust the
operating system limit on open file descriptors so that it is larger than the number of
online datafiles in the database.
See Also:
■   Your operating system specific Oracle documentation for more information on operating system limits
■   Oracle Database SQL Reference for more information about the MAXDATAFILES parameter of the CREATE DATABASE or CREATE CONTROLFILE statement

Determine the Size of Datafiles
When creating a tablespace, you should estimate the potential size of database objects
and create sufficient datafiles. Later, if needed, you can create additional datafiles and
add them to a tablespace to increase the total amount of disk space allocated to it, and
consequently the database. Preferably, place datafiles on multiple devices to ensure
that data is spread evenly across all devices.


Place Datafiles Appropriately
Tablespace location is determined by the physical location of the datafiles that
constitute that tablespace. Use the hardware resources of your computer
appropriately. For example, if several disk drives are available to store the
database, consider placing potentially contending datafiles on separate disks.
This way, when users query information, both disk drives can work
simultaneously, retrieving data at the same time.
See Also: Oracle Database Performance Tuning Guide for
information about I/O and the placement of datafiles

Store Datafiles Separate from Redo Log Files
Datafiles should not be stored on the same disk drive that stores the database redo log
files. If the datafiles and redo log files are stored on the same disk drive and that disk
drive fails, the files cannot be used in your database recovery procedures.
If you multiplex your redo log files, then the likelihood of losing all of your redo log
files is low, so you can store datafiles on the same drive as some redo log files.

Creating Datafiles and Adding Datafiles to a Tablespace
You can create datafiles and associate them with a tablespace using any of the
statements listed in the following table. In all cases, you can either specify the file
specifications for the datafiles being created, or you can use the Oracle-managed files
feature to create files that are created and managed by the database server. The table
includes a brief description of the statement, as used to create datafiles, and references
the section of this book where use of the statement is specifically described:
SQL Statement                        Description                                  Additional Information
CREATE TABLESPACE                    Creates a tablespace and the datafiles       "Creating Tablespaces" on page 8-2
                                     that comprise it
CREATE TEMPORARY TABLESPACE          Creates a locally managed temporary          "Creating a Locally Managed Temporary
                                     tablespace and the tempfiles (tempfiles      Tablespace" on page 8-9
                                     are a special kind of datafile) that
                                     comprise it
ALTER TABLESPACE ... ADD DATAFILE    Creates and adds a datafile to a             "Altering a Locally Managed Temporary
                                     tablespace                                   Tablespace" on page 8-10
ALTER TABLESPACE ... ADD TEMPFILE    Creates and adds a tempfile to a             "Creating a Locally Managed Temporary
                                     temporary tablespace                         Tablespace" on page 8-9
CREATE DATABASE                      Creates a database and associated            "Manually Creating an Oracle Database"
                                     datafiles                                    on page 2-2
ALTER DATABASE ... CREATE DATAFILE   Creates a new empty datafile in place of     See Oracle Database Backup and Recovery
                                     an old one; useful to re-create a datafile   Advanced User's Guide.
                                     that was lost with no backup

If you add new datafiles to a tablespace and do not fully specify the filenames, the
database creates the datafiles in the default database directory or the current directory,
depending upon your operating system. Oracle recommends you always specify a
fully qualified name for a datafile. Unless you want to reuse existing files, make sure
the new filenames do not conflict with other files. Old files that have been previously
dropped will be overwritten.
If a statement that creates a datafile fails, the database removes any created operating
system files. However, because of the large number of potential errors that can occur
with file systems and storage subsystems, there can be situations where you must
manually remove the files using operating system commands.

Changing Datafile Size
This section describes the various ways to alter the size of a datafile, and contains the
following topics:
■   Enabling and Disabling Automatic Extension for a Datafile
■   Manually Resizing a Datafile

Enabling and Disabling Automatic Extension for a Datafile
You can create datafiles or alter existing datafiles so that they automatically increase in
size when more space is needed in the database. The file size increases in specified
increments up to a specified maximum.
Setting your datafiles to extend automatically provides these advantages:
■   Reduces the need for immediate intervention when a tablespace runs out of space
■   Ensures applications will not halt or be suspended because of failures to allocate extents

To determine whether a datafile is auto-extensible, query the DBA_DATA_FILES view
and examine the AUTOEXTENSIBLE column.
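For example:

SELECT FILE_NAME, AUTOEXTENSIBLE FROM DBA_DATA_FILES;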
You can specify automatic file extension by specifying an AUTOEXTEND ON clause
when you create datafiles using the following SQL statements:
■   CREATE DATABASE
■   ALTER DATABASE
■   CREATE TABLESPACE
■   ALTER TABLESPACE

You can enable or disable automatic file extension for existing datafiles, or
manually resize a datafile, using the ALTER DATABASE statement. For a bigfile
tablespace, you can perform these operations using the ALTER TABLESPACE statement.
The following example enables automatic extension for a datafile added to the users
tablespace:
ALTER TABLESPACE users
ADD DATAFILE '/u02/oracle/rbdb1/users03.dbf' SIZE 10M
AUTOEXTEND ON
NEXT 512K
MAXSIZE 250M;

The value of NEXT is the minimum size of the increments added to the file when it
extends. The value of MAXSIZE is the maximum size to which the file can
automatically extend.
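To enable automatic extension for an existing datafile, a statement along these
lines can be used (reusing the file from the example above):

ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf'
    AUTOEXTEND ON NEXT 512K MAXSIZE 250M;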
The next example disables the automatic extension for the datafile.
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf'
    AUTOEXTEND OFF;

See Also: Oracle Database SQL Reference for more information about the SQL
statements for creating or altering datafiles

Manually Resizing a Datafile
You can manually increase or decrease the size of a datafile using the ALTER
DATABASE statement. This enables you to add more space to your database without
adding more datafiles. This is beneficial if you are concerned about reaching the
maximum number of datafiles allowed in your database.
For a bigfile tablespace you can use the ALTER TABLESPACE statement to resize a
datafile. You are not allowed to add a datafile to a bigfile tablespace.
Manually reducing the sizes of datafiles enables you to reclaim unused space in the
database. This is useful for correcting errors in estimates of space requirements.
In the next example, assume that the datafile /u02/oracle/rbdb1/stuff01.dbf
has extended up to 250M. However, because its tablespace now stores smaller objects,
the datafile can be reduced in size.
The following statement decreases the size of datafile
/u02/oracle/rbdb1/stuff01.dbf:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf'
RESIZE 100M;

Note: It is not always possible to decrease the size of a file to a specific
value. It could be that the file contains data beyond the specified decreased
size, in which case the database will return an error.

Altering Datafile Availability
You can alter the availability of individual datafiles or tempfiles by taking them offline
or bringing them online. Offline datafiles are unavailable to the database and cannot
be accessed until they are brought back online.
Reasons for altering datafile availability include the following:
■   You want to perform an offline backup of a datafile.
■   You want to rename or relocate a datafile. You must first take it offline or take the tablespace offline.
■   The database has problems writing to a datafile and automatically takes the datafile offline. Later, after resolving the problem, you can bring the datafile back online manually.
■   A datafile becomes missing or corrupted. You must take it offline before you can open the database.

The datafiles of a read-only tablespace can be taken offline or brought online, but
bringing a file online does not affect the read-only status of the tablespace. You cannot
write to the datafile until the tablespace is returned to the read/write state.


Note: You can make all datafiles of a tablespace temporarily unavailable by
taking the tablespace itself offline. You must leave these files in the
tablespace to bring the tablespace back online, although you can relocate or
rename them following procedures similar to those shown in "Renaming and
Relocating Datafiles" on page 9-8.

For more information, see "Taking Tablespaces Offline" on page 8-13.
To take a datafile offline or bring it online, you must have the ALTER DATABASE
system privilege. To take all datafiles or tempfiles offline using the ALTER
TABLESPACE statement, you must have the ALTER TABLESPACE or MANAGE
TABLESPACE system privilege. In an Oracle Real Application Clusters environment,
the database must be open in exclusive mode.
This section describes ways to alter datafile availability, and contains the following
topics:
■   Bringing Datafiles Online or Taking Offline in ARCHIVELOG Mode
■   Taking Datafiles Offline in NOARCHIVELOG Mode
■   Altering the Availability of All Datafiles or Tempfiles in a Tablespace

Bringing Datafiles Online or Taking Offline in ARCHIVELOG Mode
To bring an individual datafile online, issue the ALTER DATABASE statement and
include the DATAFILE clause. The following statement brings the specified
datafile online:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' ONLINE;

To take the same file offline, issue the following statement:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' OFFLINE;

Note: To use this form of the ALTER DATABASE statement, the
database must be in ARCHIVELOG mode. This requirement prevents
you from accidentally losing the datafile, since taking the datafile
offline while in NOARCHIVELOG mode is likely to result in losing
the file.

Taking Datafiles Offline in NOARCHIVELOG Mode
To take a datafile offline when the database is in NOARCHIVELOG mode, use the ALTER
DATABASE statement with both the DATAFILE and OFFLINE FOR DROP clauses.
■   The OFFLINE keyword causes the database to mark the datafile OFFLINE, whether or not it is corrupted, so that you can open the database.
■   The FOR DROP keywords mark the datafile for subsequent dropping. Such a datafile can no longer be brought back online.


Note: This operation does not actually drop the datafile. It remains in the
data dictionary, and you must drop it yourself using one of the following
methods:
■   An ALTER TABLESPACE ... DROP DATAFILE statement. After an OFFLINE FOR DROP, this method works for dictionary managed tablespaces only.
■   A DROP TABLESPACE ... INCLUDING CONTENTS AND DATAFILES statement.
■   If the preceding methods fail, an operating system command to delete the datafile. This is the least desirable method, as it leaves references to the datafile in the data dictionary and control files.

The following statement takes the specified datafile offline and marks it to be dropped:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' OFFLINE FOR DROP;

Altering the Availability of All Datafiles or Tempfiles in a Tablespace
Clauses of the ALTER TABLESPACE statement allow you to change the online or
offline status of all of the datafiles or tempfiles within a tablespace. Specifically, the
statements that affect online/offline status are:
■   ALTER TABLESPACE ... DATAFILE {ONLINE|OFFLINE}
■   ALTER TABLESPACE ... TEMPFILE {ONLINE|OFFLINE}

You are required only to enter the tablespace name, not the individual datafiles or
tempfiles. All of the datafiles or tempfiles are affected, but the online/offline status of
the tablespace itself is not changed.
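For example, to take all datafiles of the users tablespace offline while leaving
the tablespace status unchanged (the tablespace name is illustrative):

ALTER TABLESPACE users DATAFILE OFFLINE;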
In most cases the preceding ALTER TABLESPACE statements can be issued whenever
the database is mounted, even if it is not open. However, the database must not be
open if the tablespace is the SYSTEM tablespace, an undo tablespace, or the default
temporary tablespace. The ALTER DATABASE DATAFILE and ALTER DATABASE
TEMPFILE statements also have ONLINE/OFFLINE clauses; however, in those
statements you must enter all of the filenames for the tablespace.
The syntax is different from the ALTER TABLESPACE...ONLINE|OFFLINE
statement that alters tablespace availability, because that is a different operation. The
ALTER TABLESPACE statement takes datafiles offline as well as the tablespace, but it
cannot be used to alter the status of a temporary tablespace or its tempfile(s).

Renaming and Relocating Datafiles
You can rename datafiles to either change their names or relocate them. Some possible
procedures for doing this are described in the following sections:
■   Procedures for Renaming and Relocating Datafiles in a Single Tablespace
■   Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces

When you rename and relocate datafiles with these procedures, only the pointers to
the datafiles, as recorded in the database control file, are changed. The procedures do
not physically rename any operating system files, nor do they copy files at the


operating system level. Renaming and relocating datafiles involves several steps. Read
the steps and examples carefully before performing these procedures.

Procedures for Renaming and Relocating Datafiles in a Single Tablespace
This section suggests some procedures for renaming and relocating datafiles
that can be used for a single tablespace. You must have ALTER TABLESPACE system
privileges.
See Also: "Taking Tablespaces Offline" on page 8-13 for more
information about taking tablespaces offline in preparation for
renaming or relocating datafiles

Procedure for Renaming Datafiles in a Single Tablespace
To rename datafiles in a single tablespace, complete the following steps:
1.  Take the tablespace that contains the datafiles offline. The database must
    be open. For example:

    ALTER TABLESPACE users OFFLINE NORMAL;

2.  Rename the datafiles using the operating system.

3.  Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to
    change the filenames within the database. For example, the following
    statement renames the datafiles /u02/oracle/rbdb1/user1.dbf and
    /u02/oracle/rbdb1/user2.dbf to /u02/oracle/rbdb1/users01.dbf and
    /u02/oracle/rbdb1/users02.dbf, respectively:
ALTER TABLESPACE users
RENAME DATAFILE '/u02/oracle/rbdb1/user1.dbf',
'/u02/oracle/rbdb1/user2.dbf'
TO '/u02/oracle/rbdb1/users01.dbf',
'/u02/oracle/rbdb1/users02.dbf';

Always provide complete filenames (including their paths) to properly identify
the old and new datafiles. In particular, specify the old datafile name exactly as it
appears in the DBA_DATA_FILES view of the data dictionary.
4.  Back up the database. After making any structural changes to a database,
    always perform an immediate and complete backup.

Procedure for Relocating Datafiles in a Single Tablespace
Here is a sample procedure for relocating a datafile.
Assume the following conditions:
■   An open database has a tablespace named users that is made up of datafiles all located on the same disk.
■   The datafiles of the users tablespace are to be relocated to different and separate disk drives.
■   You are currently connected with administrator privileges to the open database.
■   You have a current backup of the database.

Complete the following steps:


1.  If you do not know the specific file names or sizes, you can obtain this
    information by issuing the following query of the data dictionary view
    DBA_DATA_FILES:

    SQL> SELECT FILE_NAME, BYTES FROM DBA_DATA_FILES
      2> WHERE TABLESPACE_NAME = 'USERS';

    FILE_NAME                                       BYTES
    ------------------------------------------ ----------
    /u02/oracle/rbdb1/users01.dbf                102400000
    /u02/oracle/rbdb1/users02.dbf                102400000

2.  Take the tablespace containing the datafiles offline:

    ALTER TABLESPACE users OFFLINE NORMAL;

3.  Copy the datafiles to their new locations and rename them using the
    operating system. You can copy the files using the DBMS_FILE_TRANSFER
    package discussed in "Copying Files Using the Database Server" on page 9-12.

    Note: You can temporarily exit SQL*Plus to execute an operating system
    command to copy a file by using the SQL*Plus HOST command.
4.  Rename the datafiles within the database. The datafile pointers for the
    files that make up the users tablespace, recorded in the control file of
    the associated database, must now be changed from the old names to the new
    names. Use the ALTER TABLESPACE...RENAME DATAFILE statement:
ALTER TABLESPACE users
RENAME DATAFILE '/u02/oracle/rbdb1/users01.dbf',
'/u02/oracle/rbdb1/users02.dbf'
TO '/u03/oracle/rbdb1/users01.dbf',
'/u04/oracle/rbdb1/users02.dbf';

5.  Back up the database. After making any structural changes to a database,
    always perform an immediate and complete backup.

Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces
You can rename and relocate datafiles in one or more tablespaces using the ALTER
DATABASE RENAME FILE statement. This method is the only choice if you want to
rename or relocate datafiles of several tablespaces in one operation. You must have the
ALTER DATABASE system privilege.
Note: To rename or relocate datafiles of the SYSTEM tablespace, the default
temporary tablespace, or the active undo tablespace you must use this ALTER
DATABASE method because you cannot take these tablespaces offline.

To rename datafiles in multiple tablespaces, follow these steps.

1.  Ensure that the database is mounted but closed.


    Note: Optionally, the database does not have to be closed, but the datafiles
    (or tempfiles) must be offline.

2.  Copy the datafiles to be renamed to their new locations and new names,
    using the operating system. You can copy the files using the
    DBMS_FILE_TRANSFER package discussed in "Copying Files Using the Database
    Server" on page 9-12.

3.  Use ALTER DATABASE to rename the file pointers in the database control
    file. For example, the following statement renames the datafiles
    /u02/oracle/rbdb1/sort01.dbf and /u02/oracle/rbdb1/user3.dbf to
    /u02/oracle/rbdb1/temp01.dbf and /u02/oracle/rbdb1/users03.dbf, respectively:

    ALTER DATABASE
        RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
                    '/u02/oracle/rbdb1/user3.dbf'
                 TO '/u02/oracle/rbdb1/temp01.dbf',
                    '/u02/oracle/rbdb1/users03.dbf';

Always provide complete filenames (including their paths) to properly identify
the old and new datafiles. In particular, specify the old datafile names exactly as
they appear in the DBA_DATA_FILES view.
4.  Back up the database. After making any structural changes to a database,
    always perform an immediate and complete backup.

Dropping Datafiles
You use the DROP DATAFILE and DROP TEMPFILE clauses of the ALTER TABLESPACE
command to drop a single datafile or tempfile. The datafile must be empty. (A datafile
is considered to be empty when no extents remain allocated from it.) When you drop a
datafile or tempfile, references to the datafile or tempfile are removed from the data
dictionary and control files, and the physical file is deleted from the file system or
Automatic Storage Management (ASM) disk group.
The following example drops the datafile identified by the alias example_df3.f in
the ASM disk group DGROUP1. The datafile belongs to the example tablespace.
ALTER TABLESPACE example DROP DATAFILE '+DGROUP1/example_df3.f';

The next example drops the tempfile lmtemp02.dbf, which belongs to the lmtemp
tablespace.
ALTER TABLESPACE lmtemp DROP TEMPFILE '/u02/oracle/data/lmtemp02.dbf';

This is equivalent to the following statement:
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP
INCLUDING DATAFILES;

See Oracle Database SQL Reference for ALTER TABLESPACE syntax details.
Restrictions for Dropping Datafiles
The following are restrictions for dropping datafiles and tempfiles:
■   The database must be open.
■   If a datafile is not empty, it cannot be dropped. If you must remove a datafile that is not empty and that cannot be made empty by dropping schema objects, you must drop the tablespace that contains the datafile.
■   You cannot drop the first or only datafile in a tablespace. This means that DROP DATAFILE cannot be used with a bigfile tablespace.
■   You cannot drop datafiles in a read-only tablespace.
■   You cannot drop datafiles in the SYSTEM tablespace.
■   If a datafile in a locally managed tablespace is offline, it cannot be dropped.

See Also: "Dropping Tablespaces" on page 8-20

Verifying Data Blocks in Datafiles
If you want to configure the database to use checksums to verify data blocks, set the
initialization parameter DB_BLOCK_CHECKSUM to TRUE. This causes the DBWn
process and the direct loader to calculate a checksum for each block and to store the
checksum in the block header when writing the block to disk.
The checksum is verified when the block is read, but only if DB_BLOCK_CHECKSUM is
TRUE and the last write of the block stored a checksum. If corruption is detected, the
database returns message ORA-01578 and writes information about the corruption to
the alert log.
The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can
be changed dynamically using the ALTER SYSTEM statement. Regardless of the
setting of this parameter, checksums are always used to verify data blocks in the
SYSTEM tablespace.
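For example:

ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;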
See Also: Oracle Database Reference for more information about the
DB_BLOCK_CHECKSUM initialization parameter

Copying Files Using the Database Server
You do not necessarily have to use the operating system to copy a file within a
database, or transfer a file between databases as you would do when using the
transportable tablespace feature. You can use the DBMS_FILE_TRANSFER package, or
you can use Streams propagation. Using Streams is not discussed in this book, but an
example of using the DBMS_FILE_TRANSFER package is shown in "Copying a File on
a Local File System" on page 9-13.
The DBMS_FILE_TRANSFER package can use a local file system or an Automatic
Storage Management (ASM) disk group as the source or destination for a file transfer.
Only Oracle database files (datafiles, tempfiles, controlfiles, and so on) can be involved
in transfers to and from ASM.
Caution: Do not use the DBMS_FILE_TRANSFER package to copy or
transfer a file that is being modified by a database because doing so
may result in an inconsistent file.

On UNIX systems, the owner of a file created by the DBMS_FILE_TRANSFER package
is the owner of the shadow process running the instance. Normally, this owner
is ORACLE. A file created using DBMS_FILE_TRANSFER is always writable and
readable by all processes in the database, but nonprivileged users who need to
read or write such a file directly may need access from a system administrator.


This section contains the following topics:
■   Copying a File on a Local File System
■   Third-Party File Transfer
■   File Transfer and the DBMS_SCHEDULER Package
■   Advanced File Transfer Mechanisms

See Also:
■   Oracle Streams Concepts and Administration
■   "Transporting Tablespaces Between Databases" on page 8-25
■   Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_FILE_TRANSFER package

Copying a File on a Local File System
This section includes an example that uses the COPY_FILE procedure in the
DBMS_FILE_TRANSFER package to copy a file on a local file system. The following
example copies a binary file named db1.dat from the /usr/admin/source
directory to the /usr/admin/destination directory as db1_copy.dat on a local
file system:
1.  In SQL*Plus, connect as an administrative user who can grant privileges and
    create directory objects using SQL.

2.  Use the SQL command CREATE DIRECTORY to create a directory object for the
    directory from which you want to copy the file. A directory object is
    similar to an alias for the directory. For example, to create a directory
    object called SOURCE_DIR for the /usr/admin/source directory on your
    computer system, execute the following statement:

    CREATE DIRECTORY SOURCE_DIR AS '/usr/admin/source';

3.  Use the SQL command CREATE DIRECTORY to create a directory object for the
    directory into which you want to copy the binary file. For example, to
    create a directory object called DEST_DIR for the /usr/admin/destination
    directory on your computer system, execute the following statement:

    CREATE DIRECTORY DEST_DIR AS '/usr/admin/destination';

4.  Grant the required privileges to the user who will run the COPY_FILE
    procedure. In this example, the strmadmin user runs the procedure.

    GRANT EXECUTE ON DBMS_FILE_TRANSFER TO strmadmin;
    GRANT READ ON DIRECTORY source_dir TO strmadmin;
    GRANT WRITE ON DIRECTORY dest_dir TO strmadmin;

5.  Connect as strmadmin user:

    CONNECT strmadmin/strmadminpw

6.  Run the COPY_FILE procedure to copy the file:

    BEGIN
      DBMS_FILE_TRANSFER.COPY_FILE(
        source_directory_object      => 'SOURCE_DIR',
        source_file_name             => 'db1.dat',
        destination_directory_object => 'DEST_DIR',
        destination_file_name        => 'db1_copy.dat');
    END;
    /

Caution: Do not use the DBMS_FILE_TRANSFER package to copy or transfer a file
that is being modified by a database because doing so may result in an
inconsistent file.

Third-Party File Transfer
Although the procedures in the DBMS_FILE_TRANSFER package typically are invoked
as local procedure calls, they can also be invoked as remote procedure calls. A remote
procedure call lets you copy a file within a database even when you are connected to a
different database. For example, you can make a copy of a file on database DB, even if
you are connected to another database, by executing the following remote procedure
call:
DBMS_FILE_TRANSFER.COPY_FILE@DB(...)

Using remote procedure calls enables you to copy a file between two databases, even if
you are not connected to either database. For example, you can connect to database A
and then transfer a file from database B to database C. In this example, database A is
the third party because it is neither the source of nor the destination for the transferred
file.
A third-party file transfer can both push and pull a file. Continuing with the previous
example, you can perform a third-party file transfer if you have a database link from A
to either B or C, and that database has a database link to the other database. Database A
does not need a database link to both B and C.
For example, if you have a database link from A to B, and another database link from B
to C, then you can run the following procedure at A to transfer a file from B to C:
DBMS_FILE_TRANSFER.PUT_FILE@B(...)

This configuration pushes the file.
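As a sketch, assuming directory objects SRC_DIR on B and DEST_DIR on C, and a
database link C_LINK from B to C (all names hypothetical), the push run at A
might look like:

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE@B(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'db1.dat',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'db1.dat',
    destination_database         => 'C_LINK');
END;
/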
Alternatively, if you have a database link from A to C, and another database link from
C to B, then you can run the following procedure at database A to transfer a file from B
to C:
DBMS_FILE_TRANSFER.GET_FILE@C(...)

This configuration pulls the file.

File Transfer and the DBMS_SCHEDULER Package
You can use the DBMS_SCHEDULER package to transfer files automatically within a
single database and between databases. Third-party file transfers are also supported
by the DBMS_SCHEDULER package. You can monitor a long-running file transfer done
by the Scheduler using the V$SESSION_LONGOPS dynamic performance view at the
databases reading or writing the file. Any database links used by a Scheduler job must
be fixed user database links.
You can use a restartable Scheduler job to improve the reliability of file transfers
automatically, especially if there are intermittent failures. If a file transfer fails before
the destination file is closed, then you can restart the file transfer from the beginning
once the database has removed any partially written destination file. Hence you
should consider using a restartable Scheduler job to transfer a file if the rest of the job
is restartable. See Chapter 27, "Using the Scheduler" for more information on
Scheduler jobs.
Note: If a single restartable job transfers several files, then you should
consider restart scenarios in which some of the files have been transferred
already and some have not been transferred yet.
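A minimal sketch of such a restartable job follows; the job name, directory
objects, and the fixed user database link DEST_DB_LINK are all hypothetical:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'push_db1_dat',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN
                     DBMS_FILE_TRANSFER.PUT_FILE(
                       ''SOURCE_DIR'', ''db1.dat'',
                       ''DEST_DIR'', ''db1.dat'', ''DEST_DB_LINK'');
                   END;');
  -- Let the Scheduler restart the transfer from the beginning after a failure.
  DBMS_SCHEDULER.SET_ATTRIBUTE('push_db1_dat', 'restartable', TRUE);
  DBMS_SCHEDULER.ENABLE('push_db1_dat');
END;
/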

Advanced File Transfer Mechanisms
You can create more sophisticated file transfer mechanisms using both the
DBMS_FILE_TRANSFER package and the DBMS_SCHEDULER package. For example,
when several databases have a copy of the file you want to transfer, you can consider
factors such as source availability, source load, and communication bandwidth to the
destination database when deciding which source database to contact first and which
source databases to try if failures occur. In this case, the information about these
factors must be available to you, and you must create the mechanism that considers
these factors.
As another example, when early completion time is more important than load, you can
submit a number of Scheduler jobs to transfer files in parallel. As a final example,
knowing something about file layout on the source and destination databases enables
you to minimize disk contention by performing or scheduling simultaneous transfers
only if they use different I/O devices.

Mapping Files to Physical Devices
In an environment where datafiles are simply file system files or are created
directly on a raw device, it is relatively straightforward to see the
association between a tablespace and the underlying device. Oracle Database
provides views, such as DBA_TABLESPACES, DBA_DATA_FILES, and V$DATAFILE, that
provide a mapping of files onto devices. These mappings, along with device
statistics, can be used to evaluate I/O performance.
However, with the introduction of host based Logical Volume Managers (LVM), and
sophisticated storage subsystems that provide RAID (Redundant Array of Inexpensive
Disks) features, it is not easy to determine file to device mapping. This poses a
problem because it becomes difficult to determine your "hottest" files when they are
hidden behind a "black box". This section presents the Oracle Database approach to
resolving this problem.
The following topics are contained in this section:
■   Overview of Oracle Database File Mapping Interface
■   How the Oracle Database File Mapping Interface Works
■   Using the Oracle Database File Mapping Interface
■   File Mapping Examples


Note: This section presents an overview of the Oracle Database file mapping
interface and explains how to use the DBMS_STORAGE_MAP package and dynamic
performance views to expose the mapping of files onto physical devices. You can
more easily access this functionality through Oracle Enterprise Manager (EM),
which provides an easy-to-use graphical interface for mapping files to physical
devices.

Overview of Oracle Database File Mapping Interface
To acquire an understanding of I/O performance, one must have detailed knowledge
of the storage hierarchy in which files reside. Oracle Database provides a
mechanism to show a complete mapping of a file to intermediate layers of logical
volumes to actual physical devices. This is accomplished through a set of
dynamic performance views (V$ views). Using these views, you can locate the
exact disk on which any block of a file resides.
To build these views, storage vendors must provide mapping libraries that are
responsible for mapping their particular I/O stack elements. The database
communicates with these libraries through an external non-Oracle Database process
that is spawned by a background process called FMON. FMON is responsible for
managing the mapping information. Oracle provides a PL/SQL package,
DBMS_STORAGE_MAP, that you use to invoke mapping operations that populate the
mapping views.
Note: The file mapping interface is not available on Windows platforms.

How the Oracle Database File Mapping Interface Works
This section describes the components of the Oracle Database file mapping interface
and how the interface works. It contains the following topics:
■   Components of File Mapping
■   Mapping Structures
■   Example of Mapping Structures
■   Configuration ID

Components of File Mapping
The following figure shows the components of the file mapping mechanism.

Figure 9–1 Components of File Mapping
[Figure: the Oracle instance (SGA and the FMON background process) communicates
with the external process FMPUTL, which loads and calls the vendor mapping
libraries, mapping lib0 through mapping libn.]

The following sections briefly describe these components and how they work
together to populate the mapping views:
■   FMON
■   External Process (FMPUTL)
■   Mapping Libraries

FMON  FMON is a background process started by the database whenever the
FILE_MAPPING initialization parameter is set to TRUE. FMON is responsible for:
■   Building mapping information, which is stored in the SGA. This information is composed of the following structures:
    –   Files
    –   File system extents
    –   Elements
    –   Subelements
    These structures are explained in "Mapping Structures" on page 9-18.
■   Refreshing mapping information when a change occurs because of:
    –   Changes to datafiles (size)
    –   Addition or deletion of datafiles
    –   Changes to the storage configuration (not frequent)
■   Saving mapping information in the data dictionary to maintain a view of the information that is persistent across startup and shutdown operations
■   Restoring mapping information into the SGA at instance startup. This avoids the need for a potentially expensive complete rebuild of the mapping information on every instance startup.

You help control this mapping using procedures that are invoked with the
DBMS_STORAGE_MAP package.
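For example, a minimal sketch of enabling file mapping and then building the
map for the entire I/O stack (the extent limit of 10000 passed to MAP_ALL is
illustrative):

ALTER SYSTEM SET FILE_MAPPING = TRUE;

EXECUTE DBMS_STORAGE_MAP.MAP_ALL(10000);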
External Process (FMPUTL) FMON spawns an external non-Oracle Database process
called FMPUTL, that communicates directly with the vendor supplied mapping
libraries. This process obtains the mapping information through all levels of the I/O
stack, assuming that mapping libraries exist for all levels. On some platforms the
external process requires that the SETUID bit is set to ON because root privileges are
needed to map through all levels of the I/O mapping stack.


The external process is responsible for discovering the mapping libraries and
dynamically loading them into its address space.
Mapping Libraries Oracle Database uses mapping libraries to discover mapping
information for the elements that are owned by a particular mapping library.
Through these mapping libraries, information about individual I/O stack
elements is communicated. This information is used to populate dynamic
performance views that can be queried by users.
Mapping libraries need to exist for all levels of the stack for the mapping to be
complete, and different libraries may own their own parts of the I/O mapping stack.
For example, a VERITAS VxVM library would own the stack elements related to the
VERITAS Volume Manager, and an EMC library would own all EMC storage specific
layers of the I/O mapping stack.
Mapping libraries are vendor supplied. However, Oracle currently supplies a mapping
library for EMC storage. The mapping libraries available to a database server are
identified in a special file named filemap.ora.

Mapping Structures
The mapping structures and the Oracle Database representation of these structures are
described in this section. You will need to understand this information in order to
interpret the information in the mapping views.
The following are the primary structures that compose the mapping information:
■ Files
  A file mapping structure provides a set of attributes for a file, including file size,
  number of file system extents that the file is composed of, and the file type.
■ File system extents
  A file system extent mapping structure describes a contiguous chunk of blocks
  residing on one element. This includes the device offset, the extent size, the file
  offset, the type (data or parity), and the name of the element where the extent
  resides.
  Note: File system extents are not the same as Oracle Database
  extents. File system extents are physical contiguous blocks of data
  written to a device as managed by the file system. Oracle Database
  extents are logical structures managed by the database, such as
  tablespace extents.
■ Elements
  An element mapping structure is the abstract mapping structure that describes a
  storage component within the I/O stack. Elements may be mirrors, stripes,
  partitions, RAID5, concatenated elements, and disks. These structures are the
  mapping building blocks.
■ Subelements
  A subelement mapping structure describes the link between an element and the
  next elements in the I/O mapping stack. This structure contains the subelement
  number, size, the element name where the subelement exists, and the element
  offset.

All of these mapping structures are illustrated in the following example.

Example of Mapping Structures
Consider an Oracle Database that is composed of two datafiles, X and Y. Both files X
and Y reside on a file system mounted on volume A. File X is composed of two extents,
while file Y is composed of only one extent.
The two extents of File X and the one extent of File Y both map to Element A. Element
A is striped to Elements B and C. Element A maps to Elements B and C by way of
Subelements B0 and C1, respectively.
Element B is a partition of Element D (a physical disk), and is mapped to Element D by
way of subelement D0.
Element C is mirrored over Elements E and F (both physical disks), and is mirrored to
those physical disks by way of Subelements E0 and F1, respectively.
All of the mapping structures are illustrated in Figure 9–2.
Figure 9–2 Illustration of Mapping Structures
[Figure 9–2 shows file extents 1 and 2 of File X and file extent 1 of File Y mapping to
Element A; Element A mapping through Subelements B0 and C1 to Elements B and C;
Element B mapping through Subelement D0 to Element D; and Element C mapping
through Subelements E0 and F1 to Elements E and F.]

Note that the mapping structures represented are sufficient to describe the entire
mapping information for the Oracle Database instance and consequently to map every
logical block within the file into an (element name, element offset) tuple (or more, in
the case of mirroring) at each level within the I/O stack.

Configuration ID
The configuration ID captures the version information associated with elements or
files. The vendor library provides the configuration ID and updates it whenever a
change occurs. Without a configuration ID, there is no way for the database to tell
whether the mapping has changed.
There are two kinds of configuration IDs:
■ Persistent
  These configuration IDs are persistent across instance shutdown.
■ Non-persistent
  These configuration IDs are not persistent across instance shutdown. The database
  is only capable of refreshing the mapping information while the instance is up.

Using the Oracle Database File Mapping Interface
This section discusses how to use the Oracle Database file mapping interface. It
contains the following topics:
■ Enabling File Mapping
■ Using the DBMS_STORAGE_MAP Package
■ Obtaining Information from the File Mapping Views

Enabling File Mapping
The following steps enable the file mapping feature:
1. Ensure that a valid filemap.ora file exists in the /opt/ORCLfmap/prot1_32/etc
   directory for 32-bit platforms, or in the /opt/ORCLfmap/prot1_64/etc directory
   for 64-bit platforms.

   Caution: While the format and content of the filemap.ora file are
   discussed here, it is for informational reasons only. The filemap.ora
   file is created by the database when your system is installed. Until
   such time that vendors supply their own libraries, there will be
   only one entry in the filemap.ora file, and that is the
   Oracle-supplied EMC library. This file should be modified
   manually, by uncommenting this entry, only if an EMC Symmetrix
   array is available.

   The filemap.ora file is the configuration file that describes all of the available
   mapping libraries. FMON requires that a filemap.ora file exist and that it point
   to a valid path to mapping libraries. Otherwise, it will not start successfully.
   The following row must be included in filemap.ora for each library:
   lib=vendor_name:mapping_library_path
   where:
   ■ vendor_name should be Oracle for the EMC Symmetrix library
   ■ mapping_library_path is the full path of the mapping library
   Note that the ordering of the libraries in this file is extremely important. The
   libraries are queried based on their order in the configuration file.
   The file mapping service can be started even if no mapping libraries are
   available. The filemap.ora file still needs to be present, even if it is empty. In
   this case, the mapping service is constrained in the sense that new mapping
   information cannot be discovered. Only restore and drop operations are allowed
   in such a configuration. A sketch of a filemap.ora entry follows this step.
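For illustration, an entry in the documented lib=vendor_name:mapping_library_path
format might look as follows. The library filename shown here is hypothetical; the
actual path is written into filemap.ora when your system is installed:

# Uncomment only if an EMC Symmetrix array is available (hypothetical path):
# lib=Oracle:/opt/ORCLfmap/prot1_32/lib/libfmap_emc.so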
2. Set the FILE_MAPPING initialization parameter to TRUE.
   The instance does not have to be shut down to set this parameter. You can set it
   using the following ALTER SYSTEM statement:
   ALTER SYSTEM SET FILE_MAPPING=TRUE;
3. Invoke the appropriate DBMS_STORAGE_MAP mapping procedure. You have two
   options:
   ■ In a cold startup scenario, the Oracle Database has just started and no mapping
     operation has been invoked yet. You execute the
     DBMS_STORAGE_MAP.MAP_ALL procedure to build the mapping information
     for the entire I/O subsystem associated with the database.
   ■ In a warm start scenario where the mapping information is already built, you
     have the option to invoke the DBMS_STORAGE_MAP.MAP_SAVE procedure to
     save the mapping information in the data dictionary. (Note that this procedure
     is invoked in DBMS_STORAGE_MAP.MAP_ALL() by default.) This forces all of
     the mapping information in the SGA to be flushed to disk.
     Once you restart the database, use DBMS_STORAGE_MAP.RESTORE() to
     restore the mapping information into the SGA. If needed,
     DBMS_STORAGE_MAP.MAP_ALL() can be called to refresh the mapping
     information.
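The following is a minimal sketch of the complete sequence, assuming that a valid
filemap.ora file exists as described in step 1:

ALTER SYSTEM SET FILE_MAPPING=TRUE;

-- Cold start: build mapping information for the entire I/O subsystem
EXECUTE DBMS_STORAGE_MAP.MAP_ALL;

-- Warm start: persist the mapping information before shutdown ...
EXECUTE DBMS_STORAGE_MAP.MAP_SAVE;

-- ... and reload it into the SGA after the next startup
EXECUTE DBMS_STORAGE_MAP.RESTORE;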

Using the DBMS_STORAGE_MAP Package
The DBMS_STORAGE_MAP package enables you to control the mapping operations.
The various procedures available to you are described in the following table.
Procedure       Use to:
MAP_OBJECT      Build the mapping information for the database object
                identified by object name, owner, and type
MAP_ELEMENT     Build mapping information for the specified element
MAP_FILE        Build mapping information for the specified filename
MAP_ALL         Build entire mapping information for all types of database
                files (excluding archive logs)
DROP_ELEMENT    Drop the mapping information for a specified element
DROP_FILE       Drop the file mapping information for the specified filename
DROP_ALL        Drop all mapping information in the SGA for this instance
SAVE            Save into the data dictionary the required information needed
                to regenerate the entire mapping
RESTORE         Load the entire mapping information from the data dictionary
                into the shared memory of the instance
LOCK_MAP        Lock the mapping information in the SGA for this instance
UNLOCK_MAP      Unlock the mapping information in the SGA for this instance

See Also:
■ Oracle Database PL/SQL Packages and Types Reference for a
  description of the DBMS_STORAGE_MAP package
■ "File Mapping Examples" on page 9-23 for an example of using
  the DBMS_STORAGE_MAP package

Obtaining Information from the File Mapping Views
Mapping information generated by the DBMS_STORAGE_MAP package is captured in
dynamic performance views. Brief descriptions of these views are presented here.

View                   Description
V$MAP_LIBRARY          Contains a list of all mapping libraries that have been
                       dynamically loaded by the external process
V$MAP_FILE             Contains a list of all file mapping structures in the shared
                       memory of the instance
V$MAP_FILE_EXTENT      Contains a list of all file system extent mapping structures
                       in the shared memory of the instance
V$MAP_ELEMENT          Contains a list of all element mapping structures in the SGA
                       of the instance
V$MAP_EXT_ELEMENT      Contains supplementary information for all element mapping
                       structures
V$MAP_SUBELEMENT       Contains a list of all subelement mapping structures in the
                       shared memory of the instance
V$MAP_COMP_LIST        Contains supplementary information for all element mapping
                       structures
V$MAP_FILE_IO_STACK    The hierarchical arrangement of storage containers for the
                       file, displayed as a series of rows. Each row represents a
                       level in the hierarchy.
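For example, after a successful MAP_ALL() operation you might verify which mapping
libraries were loaded with a query such as the following (a minimal sketch; the exact
columns returned are described in Oracle Database Reference):

SELECT * FROM V$MAP_LIBRARY;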

See Also: Oracle Database Reference for a complete description of
the dynamic performance views

However, the information generated by the DBMS_STORAGE_MAP.MAP_OBJECT
procedure is captured in a global temporary table named MAP_OBJECT. This table
displays the hierarchical arrangement of storage containers for objects. Each row in the
table represents a level in the hierarchy. A description of the MAP_OBJECT table
follows.
Column          Datatype        Description
OBJECT_NAME     VARCHAR2(2000)  Name of the object
OBJECT_OWNER    VARCHAR2(2000)  Owner of the object
OBJECT_TYPE     VARCHAR2(2000)  Object type
FILE_MAP_IDX    NUMBER          File index (corresponds to FILE_MAP_IDX in
                                V$MAP_FILE)
DEPTH           NUMBER          Element depth within the I/O stack
ELEM_IDX        NUMBER          Index corresponding to element
CU_SIZE         NUMBER          Contiguous set of logical blocks of the file, in HKB
                                (half KB) units, that is resident contiguously on the
                                element
STRIDE          NUMBER          Number of HKB between contiguous units (CU) in
                                the file that are contiguous on this element. Used
                                in RAID5 and striped files.
NUM_CU          NUMBER          Number of contiguous units that are adjacent to
                                each other on this element that are separated by
                                STRIDE HKB in the file. In RAID5, the number of
                                contiguous units also includes the parity stripes.
ELEM_OFFSET     NUMBER          Element offset in HKB units
FILE_OFFSET     NUMBER          Offset in HKB units from the start of the file to the
                                first byte of the contiguous units
DATA_TYPE       VARCHAR2(2000)  Datatype (DATA, PARITY, or DATA AND PARITY)
PARITY_POS      NUMBER          Position of the parity. Only for RAID5. This field is
                                needed to distinguish the parity from the data part.
PARITY_PERIOD   NUMBER          Parity period. Only for RAID5.

File Mapping Examples
The following examples illustrate some of the powerful capabilities of the Oracle
Database file mapping feature, including:
■ The ability to map all the database files that span a particular device
■ The ability to map a particular file into its corresponding devices
■ The ability to map a particular database object, including its block distribution at
  all levels within the I/O stack
Consider an Oracle Database instance that is composed of two datafiles:
■ t_db1.f
■ t_db2.f
These files are created on a Solaris UFS file system mounted on a VERITAS VxVM
host-based striped volume, /dev/vx/dsk/ipfdg/ipf-vol1, that consists of the
following host devices, as externalized from an EMC Symmetrix array:
■ /dev/vx/rdmp/c2t1d0s2
■ /dev/vx/rdmp/c2t1d1s2
Note that the following examples require the execution of a MAP_ALL() operation.

Example 1: Map All Database Files that Span a Device
The following query returns all Oracle Database files associated with the
/dev/vx/rdmp/c2t1d1s2 host device:
SELECT UNIQUE me.ELEM_NAME, mf.FILE_NAME
FROM V$MAP_FILE_IO_STACK fs, V$MAP_FILE mf, V$MAP_ELEMENT me
WHERE mf.FILE_MAP_IDX = fs.FILE_MAP_IDX
AND me.ELEM_IDX = fs.ELEM_IDX
AND me.ELEM_NAME = '/dev/vx/rdmp/c2t1d1s2';

The query results are:
ELEM_NAME                  FILE_NAME
------------------------   --------------------------------
/dev/vx/rdmp/c2t1d1s2      /oracle/dbs/t_db1.f
/dev/vx/rdmp/c2t1d1s2      /oracle/dbs/t_db2.f

Example 2: Map a File into Its Corresponding Devices
The following query displays a topological graph of the /oracle/dbs/t_db1.f
datafile:
WITH fv AS
  (SELECT FILE_MAP_IDX, FILE_NAME FROM V$MAP_FILE
   WHERE FILE_NAME = '/oracle/dbs/t_db1.f')
SELECT fv.FILE_NAME, LPAD(' ', 4 * (LEVEL - 1)) || el.ELEM_NAME ELEM_NAME
FROM V$MAP_SUBELEMENT sb, V$MAP_ELEMENT el, fv,
  (SELECT UNIQUE ELEM_IDX FROM V$MAP_FILE_IO_STACK io, fv
   WHERE io.FILE_MAP_IDX = fv.FILE_MAP_IDX) fs
WHERE el.ELEM_IDX = sb.CHILD_IDX
AND fs.ELEM_IDX = el.ELEM_IDX
START WITH sb.PARENT_IDX IN
  (SELECT DISTINCT ELEM_IDX
   FROM V$MAP_FILE_EXTENT fe, fv
   WHERE fv.FILE_MAP_IDX = fe.FILE_MAP_IDX)
CONNECT BY PRIOR sb.CHILD_IDX = sb.PARENT_IDX;

The resulting topological graph is:
FILE_NAME               ELEM_NAME
----------------------- -------------------------------------------------
/oracle/dbs/t_db1.f     _sym_plex_/dev/vx/rdsk/ipfdg/ipf-vol1_-1_-1
/oracle/dbs/t_db1.f         _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_0_0
/oracle/dbs/t_db1.f             /dev/vx/rdmp/c2t1d0s2
/oracle/dbs/t_db1.f                 _sym_symdev_000183600407_00C
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00C_0
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00C_1
/oracle/dbs/t_db1.f         _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_1_0
/oracle/dbs/t_db1.f             /dev/vx/rdmp/c2t1d1s2
/oracle/dbs/t_db1.f                 _sym_symdev_000183600407_00D
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00D_0
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00D_1

Example 3: Map a Database Object
This example displays the block distribution at all levels within the I/O stack for the
scott.bonus table.
A MAP_OBJECT() operation must first be executed as follows:
EXECUTE DBMS_STORAGE_MAP.MAP_OBJECT('BONUS','SCOTT','TABLE');

The query is as follows:
SELECT io.OBJECT_NAME o_name, io.OBJECT_OWNER o_owner, io.OBJECT_TYPE o_type,
       mf.FILE_NAME, me.ELEM_NAME, io.DEPTH,
       (SUM(io.CU_SIZE * (io.NUM_CU - DECODE(io.PARITY_PERIOD, 0, 0,
        TRUNC(io.NUM_CU / io.PARITY_PERIOD)))) / 2) o_size
FROM MAP_OBJECT io, V$MAP_ELEMENT me, V$MAP_FILE mf
WHERE io.OBJECT_NAME  = 'BONUS'
AND   io.OBJECT_OWNER = 'SCOTT'
AND   io.OBJECT_TYPE  = 'TABLE'
AND   me.ELEM_IDX     = io.ELEM_IDX
AND   mf.FILE_MAP_IDX = io.FILE_MAP_IDX
GROUP BY io.ELEM_IDX, io.FILE_MAP_IDX, me.ELEM_NAME, mf.FILE_NAME, io.DEPTH,
         io.OBJECT_NAME, io.OBJECT_OWNER, io.OBJECT_TYPE
ORDER BY io.DEPTH;

The following is the result of the query. Note that the o_size column is expressed in
KB.

O_NAME O_OWNER O_TYPE FILE_NAME           ELEM_NAME                                      DEPTH O_SIZE
------ ------- ------ ------------------- ---------------------------------------------- ----- ------
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f /dev/vx/dsk/ipfdg/ipf-vol1                         0     20
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_plex_/dev/vx/rdsk/ipfdg/ipf-vol1_-1_-1        1     20
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_1_0     2     12
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_2_0     2      8
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f /dev/vx/rdmp/c2t1d1s2                              3     12
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f /dev/vx/rdmp/c2t1d2s2                              3      8
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_symdev_000183600407_00D                       4     12
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_symdev_000183600407_00E                       4      8
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_hyper_000183600407_00D_0                      5     12
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_hyper_000183600407_00D_1                      5     12
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_hyper_000183600407_00E_0                      6      8
BONUS  SCOTT   TABLE  /oracle/dbs/t_db1.f _sym_hyper_000183600407_00E_1                      6      8

Viewing Datafile Information
The following data dictionary views provide useful information about the datafiles of
a database:

View                 Description
DBA_DATA_FILES       Provides descriptive information about each datafile,
                     including the tablespace to which it belongs and the file ID.
                     The file ID can be used to join with other views for detail
                     information.
DBA_EXTENTS          DBA view describes the extents comprising all segments in
USER_EXTENTS         the database. Contains the file ID of the datafile containing
                     the extent. USER view describes extents of the segments
                     belonging to objects owned by the current user.
DBA_FREE_SPACE       DBA view lists the free extents in all tablespaces. Includes
USER_FREE_SPACE      the file ID of the datafile containing the extent. USER view
                     lists the free extents in the tablespaces accessible to the
                     current user.
V$DATAFILE           Contains datafile information from the control file
V$DATAFILE_HEADER    Contains information from datafile headers

This example illustrates the use of one of these views, V$DATAFILE.

SELECT NAME,
       FILE#,
       STATUS,
       CHECKPOINT_CHANGE# "CHECKPOINT"
FROM   V$DATAFILE;

NAME                             FILE#  STATUS   CHECKPOINT
-------------------------------- -----  -------  ----------
/u01/oracle/rbdb1/system01.dbf       1  SYSTEM         3839
/u02/oracle/rbdb1/temp01.dbf         2  ONLINE         3782
/u02/oracle/rbdb1/users03.dbf        3  OFFLINE        3782

FILE# lists the file number of each datafile; the first datafile in the SYSTEM tablespace
created with the database is always file 1. STATUS lists other information about a
datafile. If a datafile is part of the SYSTEM tablespace, its status is SYSTEM (unless it
requires recovery). If a datafile in a non-SYSTEM tablespace is online, its status is
ONLINE. If a datafile in a non-SYSTEM tablespace is offline, its status can be either
OFFLINE or RECOVER. CHECKPOINT lists the final SCN (system change number)
written for the most recent checkpoint of a datafile.

See Also: Oracle Database Reference for complete descriptions of
these views

10
Managing the Undo Tablespace
This chapter describes how to manage the undo tablespace, which stores information
used to roll back changes to the Oracle Database. It contains the following topics:
■ What Is Undo?
■ Introduction to Automatic Undo Management
■ Setting the Undo Retention Period
■ Sizing the Undo Tablespace
■ Managing Undo Tablespaces
■ Migrating to Automatic Undo Management
■ Viewing Information About Undo

See Also: Part III, "Automated File and Storage Management" for
information about creating an undo tablespace whose datafiles are
both created and managed by the Oracle Database server.

What Is Undo?
Every Oracle Database must have a method of maintaining information that is used to
roll back, or undo, changes to the database. Such information consists of records of the
actions of transactions, primarily before they are committed. These records are
collectively referred to as undo.
Undo records are used to:
■ Roll back transactions when a ROLLBACK statement is issued
■ Recover the database
■ Provide read consistency
■ Analyze data as of an earlier point in time by using Oracle Flashback Query
■ Recover from logical corruptions using Oracle Flashback features

When a ROLLBACK statement is issued, undo records are used to undo changes that
were made to the database by the uncommitted transaction. During database recovery,
undo records are used to undo any uncommitted changes applied from the redo log to
the datafiles. Undo records provide read consistency by maintaining the before image
of the data for users who are accessing the data at the same time that another user is
changing it.


Introduction to Automatic Undo Management
This section introduces the concepts of Automatic Undo Management and discusses
the following topics:
■ Overview of Automatic Undo Management
■ Undo Retention

Overview of Automatic Undo Management
Oracle provides a fully automated mechanism, referred to as automatic undo
management, for managing undo information and space. In this management mode,
you create an undo tablespace, and the server automatically manages undo segments
and space among the various active sessions.
You set the UNDO_MANAGEMENT initialization parameter to AUTO to enable automatic
undo management. A default undo tablespace is then created at database creation. An
undo tablespace can also be created explicitly. The methods of creating an undo
tablespace are explained in "Creating an Undo Tablespace" on page 10-7.
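For example, a text initialization parameter file for a database running in automatic
undo management mode might contain the following line:

UNDO_MANAGEMENT = AUTO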
When the instance starts, the database automatically selects the first available undo
tablespace. If no undo tablespace is available, then the instance starts without an undo
tablespace and stores undo records in the SYSTEM tablespace. This is not
recommended in normal circumstances, and an alert message is written to the alert log
file to warn that the system is running without an undo tablespace.
If the database contains multiple undo tablespaces, you can optionally specify at
startup that you want to use a specific undo tablespace. This is done by setting the
UNDO_TABLESPACE initialization parameter, as shown in this example:
UNDO_TABLESPACE = undotbs_01

In this case, if you have not already created the undo tablespace (in this example,
undotbs_01), the STARTUP command fails. The UNDO_TABLESPACE parameter can
be used to assign a specific undo tablespace to an instance in an Oracle Real
Application Clusters environment.
The following is a summary of the initialization parameters for automatic undo
management:

Initialization Parameter   Description
UNDO_MANAGEMENT            If AUTO, use automatic undo management. The default
                           is MANUAL.
UNDO_TABLESPACE            An optional dynamic parameter specifying the name of
                           an undo tablespace. This parameter should be used only
                           when the database has multiple undo tablespaces and
                           you want to direct the database instance to use a
                           particular undo tablespace.

When automatic undo management is enabled, if the initialization parameter file
contains parameters relating to manual undo management, they are ignored.
See Also: Oracle Database Reference for complete descriptions of
initialization parameters used in automatic undo management


Undo Retention
After a transaction is committed, undo data is no longer needed for rollback or
transaction recovery purposes. However, for consistent read purposes, long-running
queries may require this old undo information for producing older images of data
blocks. Furthermore, the success of several Oracle Flashback features can also depend
upon the availability of older undo information. For these reasons, it is desirable to
retain the old undo information for as long as possible.
When automatic undo management is enabled, there is always a current undo
retention period, which is the minimum amount of time that Oracle Database
attempts to retain old undo information before overwriting it. Old (committed) undo
information that is older than the current undo retention period is said to be expired.
Old undo information with an age that is less than the current undo retention period is
said to be unexpired.
Oracle Database automatically tunes the undo retention period based on undo
tablespace size and system activity. You can specify a minimum undo retention period
(in seconds) by setting the UNDO_RETENTION initialization parameter. The database
makes its best effort to honor the specified minimum undo retention period, provided
that the undo tablespace has space available for new transactions. When available
space for new transactions becomes short, the database begins to overwrite expired
undo. If the undo tablespace has no space for new transactions after all expired undo
is overwritten, the database may begin overwriting unexpired undo information. If
any of this overwritten undo information is required for consistent read in a current
long-running query, the query could fail with the snapshot too old error message.
The following points explain the exact impact of the UNDO_RETENTION parameter on
undo retention:
■ The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The
  database may overwrite unexpired undo information when tablespace space
  becomes low.
■ For an undo tablespace with the AUTOEXTEND option enabled, the database
  attempts to honor the minimum retention period specified by UNDO_RETENTION.
  When space is low, instead of overwriting unexpired undo information, the
  tablespace auto-extends. If the MAXSIZE clause is specified for an auto-extending
  undo tablespace, when the maximum size is reached, the database may begin to
  overwrite unexpired undo information.

Retention Guarantee
To guarantee the success of long-running queries or Oracle Flashback operations, you
can enable retention guarantee. If retention guarantee is enabled, the specified
minimum undo retention is guaranteed; the database never overwrites unexpired
undo data even if it means that transactions fail due to lack of space in the undo
tablespace. If retention guarantee is not enabled, the database can overwrite unexpired
undo when space is low, thus lowering the undo retention for the system. This option
is disabled by default.
WARNING: Enabling retention guarantee can cause multiple DML
operations to fail. Use with caution.

You enable retention guarantee by specifying the RETENTION GUARANTEE clause for
the undo tablespace when you create it with either the CREATE DATABASE or CREATE
UNDO TABLESPACE statement. Or, you can later specify this clause in an ALTER
TABLESPACE statement. You disable retention guarantee with the RETENTION
NOGUARANTEE clause.
You can use the DBA_TABLESPACES view to determine the retention guarantee setting
for the undo tablespace. A column named RETENTION contains a value of
GUARANTEE, NOGUARANTEE, or NOT APPLY (used for tablespaces other than the undo
tablespace).
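The following is a minimal sketch of enabling the guarantee on an existing undo
tablespace and then checking the setting. The tablespace name undotbs_01 is carried
over from the examples elsewhere in this chapter:

ALTER TABLESPACE undotbs_01 RETENTION GUARANTEE;

SELECT TABLESPACE_NAME, RETENTION
FROM DBA_TABLESPACES
WHERE CONTENTS = 'UNDO';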

Automatic Tuning of Undo Retention
Oracle Database automatically tunes the undo retention period based on how the
undo tablespace is configured.
■ If the undo tablespace is fixed size, the database tunes the retention period for the
  best possible undo retention for that tablespace size and the current system load.
  This tuned retention period can be significantly greater than the specified
  minimum retention period.
■ If the undo tablespace is configured with the AUTOEXTEND option, the database
  tunes the undo retention period to be somewhat longer than the longest-running
  query on the system at that time. Again, this tuned retention period can be greater
  than the specified minimum retention period.

Note: Automatic tuning of undo retention is not supported for LOBs.
This is because undo information for LOBs is stored in the segment
itself and not in the undo tablespace. For LOBs, the database attempts
to honor the minimum undo retention period specified by
UNDO_RETENTION. However, if space becomes low, unexpired LOB
undo information may be overwritten.

You can determine the current retention period by querying the
TUNED_UNDORETENTION column of the V$UNDOSTAT view. This view contains one
row for each 10-minute statistics collection interval over the last 4 days. (Beyond 4
days, the data is available in the DBA_HIST_UNDOSTAT view.)
TUNED_UNDORETENTION is given in seconds.
select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
       to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
from v$undostat order by end_time;

BEGIN_TIME      END_TIME        TUNED_UNDORETENTION
--------------- --------------- -------------------
04-FEB-05 00:01 04-FEB-05 00:11               12100
...
07-FEB-05 23:21 07-FEB-05 23:31               86700
07-FEB-05 23:31 07-FEB-05 23:41               86700
07-FEB-05 23:41 07-FEB-05 23:51               86700
07-FEB-05 23:51 07-FEB-05 23:52               86700

576 rows selected.

See Oracle Database Reference for more information about V$UNDOSTAT.
Undo Retention Tuning and Alert Thresholds For a fixed size undo tablespace, the
database calculates the maximum undo retention period based on database statistics
and on the size of the undo tablespace. For optimal undo management, rather than
tuning based on 100% of the tablespace size, the database tunes the undo retention
period based on 85% of the tablespace size, or on the warning alert threshold
percentage for space used, whichever is lower. (The warning alert threshold defaults to
85%, but can be changed.) Therefore, if you set the warning alert threshold of the undo
tablespace below 85%, this may reduce the tuned length of the undo retention period.
For more information on tablespace alert thresholds, see "Managing Tablespace Alerts"
on page 14-1.

Setting the Undo Retention Period
You set the undo retention period by setting the UNDO_RETENTION initialization
parameter. This parameter specifies the desired minimum undo retention period in
seconds. As described in "Undo Retention" on page 10-3, the current undo retention
period may be automatically tuned to be greater than UNDO_RETENTION, or, unless
retention guarantee is enabled, less than UNDO_RETENTION if space is low.
To set the undo retention period:
■ Do one of the following:
  – Set UNDO_RETENTION in the initialization parameter file.
    UNDO_RETENTION = 1800
  – Change UNDO_RETENTION at any time using the ALTER SYSTEM statement:
    ALTER SYSTEM SET UNDO_RETENTION = 2400;
The effect of an UNDO_RETENTION parameter change is immediate, but it can only be
honored if the current undo tablespace has enough space.

Sizing the Undo Tablespace
You can size the undo tablespace appropriately either by using automatic extension of
the undo tablespace or by using the Undo Advisor for a fixed sized tablespace.

Using Auto-Extensible Tablespaces
Oracle Database supports automatic extension of the undo tablespace to facilitate
capacity planning of the undo tablespace in the production environment. When the
system is first running in the production environment, you may be unsure of the space
requirements of the undo tablespace. In this case, you can enable automatic extension
of the undo tablespace so that it automatically increases in size when more space is
needed. You do so by including the AUTOEXTEND keyword when you create the undo
tablespace.

Sizing Fixed-Size Undo Tablespaces
If you have decided on a fixed-size undo tablespace, the Undo Advisor can help you
estimate needed capacity. You can access the Undo Advisor through Enterprise
Manager or through the DBMS_ADVISOR PL/SQL package. Enterprise Manager is the
preferred method of accessing the advisor. For more information on using the Undo
Advisor through Enterprise Manager, please refer to Oracle Database 2 Day DBA.
The Undo Advisor relies for its analysis on data collected in the Automatic Workload
Repository (AWR). It is therefore important that the AWR have adequate workload
statistics available so that the Undo Advisor can make accurate recommendations. For
newly created databases, adequate statistics may not be available immediately. In such
cases, an auto-extensible undo tablespace can be used.


An adjustment to the collection interval and retention period for AWR statistics can
affect the precision and the type of recommendations that the advisor produces. See
"Automatic Workload Repository" on page 1-21 for more information.
To use the Undo Advisor, you first estimate these two values:
■ The length of your expected longest running query
  After the database has been up for a while, you can view the Longest Running
  Query field on the Undo Management page of Enterprise Manager.
■ The longest interval that you will require for flashback operations
  For example, if you expect to run Flashback Queries for up to 48 hours in the past,
  your flashback requirement is 48 hours.
You then take the maximum of these two undo retention values and use that value to
look up the required undo tablespace size on the Undo Advisor graph.

The Undo Advisor PL/SQL Interface
You can activate the Undo Advisor by creating an undo advisor task through the
advisor framework. The following example creates an undo advisor task to evaluate
the undo tablespace. The name of the advisor is 'Undo Advisor'. The analysis is based
on Automatic Workload Repository snapshots, which you must specify by setting
parameters START_SNAPSHOT and END_SNAPSHOT. In the following example, the
START_SNAPSHOT is "1" and END_SNAPSHOT is "2".
DECLARE
  tid    NUMBER;
  tname  VARCHAR2(30);
  oid    NUMBER;
BEGIN
  DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
  DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
  DBMS_ADVISOR.EXECUTE_TASK(tname);
END;
/

After you have created the advisor task, you can view the output and
recommendations in the Automatic Database Diagnostic Monitor in Enterprise
Manager. This information is also available in the DBA_ADVISOR_* data dictionary
views.
See Also:
■ Oracle Database 2 Day DBA for more information on using
  advisors and "Using the Segment Advisor" on page 14-16 for an
  example of creating an advisor task for a different advisor
■ Oracle Database Reference for information about the
  DBA_ADVISOR_* data dictionary views

Managing Undo Tablespaces
This section describes the various steps involved in undo tablespace management and
contains the following sections:

■ Creating an Undo Tablespace
■ Altering an Undo Tablespace
■ Dropping an Undo Tablespace
■ Switching Undo Tablespaces
■ Establishing User Quotas for Undo Space

Creating an Undo Tablespace
There are two methods of creating an undo tablespace. The first method creates the
undo tablespace when the CREATE DATABASE statement is issued. This occurs when
you are creating a new database, and the instance is started in automatic undo
management mode (UNDO_MANAGEMENT = AUTO). The second method is used with
an existing database. It uses the CREATE UNDO TABLESPACE statement.
You cannot create database objects in an undo tablespace. It is reserved for
system-managed undo data.
Oracle Database enables you to create a single-file undo tablespace. Single-file, or
bigfile, tablespaces are discussed in "Bigfile Tablespaces" on page 8-6.

Using CREATE DATABASE to Create an Undo Tablespace
You can create a specific undo tablespace using the UNDO TABLESPACE clause of the
CREATE DATABASE statement.
The following statement illustrates using the UNDO TABLESPACE clause in a CREATE
DATABASE statement. The undo tablespace is named undotbs_01 and one datafile,
/u01/oracle/rbdb1/undo0101.dbf, is allocated for it.
CREATE DATABASE rbdb1
CONTROLFILE REUSE
.
.
.
UNDO TABLESPACE undotbs_01 DATAFILE '/u01/oracle/rbdb1/undo0101.dbf';

If the undo tablespace cannot be created successfully during CREATE DATABASE, the
entire CREATE DATABASE operation fails. You must clean up the database files,
correct the error and retry the CREATE DATABASE operation.
The CREATE DATABASE statement also lets you create a single-file undo tablespace at
database creation. This is discussed in "Supporting Bigfile Tablespaces During
Database Creation" on page 2-16.
See Also: Oracle Database SQL Reference for the syntax for using
the CREATE DATABASE statement to create an undo tablespace

Using the CREATE UNDO TABLESPACE Statement
The CREATE UNDO TABLESPACE statement is the same as the CREATE
TABLESPACE statement, but the UNDO keyword is specified. The database determines
most of the attributes of the undo tablespace, but you can specify the DATAFILE
clause.
This example creates the undotbs_02 undo tablespace with the AUTOEXTEND option:
CREATE UNDO TABLESPACE undotbs_02
DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE AUTOEXTEND ON;


You can create more than one undo tablespace, but only one of them can be active at
any one time.
See Also: Oracle Database SQL Reference for the syntax for using
the CREATE UNDO TABLESPACE statement to create an undo
tablespace

Altering an Undo Tablespace
Undo tablespaces are altered using the ALTER TABLESPACE statement. However,
since most aspects of undo tablespaces are system managed, you need only be
concerned with the following actions:
■

Adding a datafile

■

Renaming a datafile

■

Bringing a datafile online or taking it offline

■

Beginning or ending an open backup on a datafile

■

Enabling and disabling undo retention guarantee

These are also the only attributes you are permitted to alter.
If an undo tablespace runs out of space, or you want to prevent it from doing so, you
can add more files to it or resize existing datafiles.
The following example adds another datafile to undo tablespace undotbs_01:
ALTER TABLESPACE undotbs_01
ADD DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' AUTOEXTEND ON NEXT 1M
MAXSIZE UNLIMITED;

You can use the ALTER DATABASE...DATAFILE statement to resize or extend a
datafile.
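For example, the following statement (a sketch reusing the datafile added above)
resizes an undo tablespace datafile to 100 MB:

ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' RESIZE 100M;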
See Also:
■ "Changing Datafile Size" on page 9-5
■ Oracle Database SQL Reference for ALTER TABLESPACE syntax

Dropping an Undo Tablespace
Use the DROP TABLESPACE statement to drop an undo tablespace. The following
example drops the undo tablespace undotbs_01:
DROP TABLESPACE undotbs_01;

An undo tablespace can only be dropped if it is not currently used by any instance. If
the undo tablespace contains any outstanding transactions (for example, a transaction
died but has not yet been recovered), the DROP TABLESPACE statement fails.
However, since DROP TABLESPACE drops an undo tablespace even if it contains
unexpired undo information (within retention period), you must be careful not to drop
an undo tablespace if undo information is needed by some existing queries.
DROP TABLESPACE for undo tablespaces behaves like DROP
TABLESPACE...INCLUDING CONTENTS. All contents of the undo tablespace are
removed.
See Also: Oracle Database SQL Reference for DROP TABLESPACE
syntax


Switching Undo Tablespaces
You can switch from using one undo tablespace to another. Because the
UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER
SYSTEM SET statement can be used to assign a new undo tablespace.
The following statement switches to a new undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

Assuming undotbs_01 is the current undo tablespace, after this command
successfully executes, the instance uses undotbs_02 in place of undotbs_01 as its
undo tablespace.
If any of the following conditions exist for the tablespace being switched to, an error is
reported and no switching occurs:
■ The tablespace does not exist
■ The tablespace is not an undo tablespace
■ The tablespace is already being used by another instance (in a RAC environment
  only)

The database is online while the switch operation is performed, and user transactions
can be executed while this command is being executed. When the switch operation
completes successfully, all transactions started after the switch operation began are
assigned to transaction tables in the new undo tablespace.
The switch operation does not wait for transactions in the old undo tablespace to
commit. If there are any pending transactions in the old undo tablespace, the old undo
tablespace enters into a PENDING OFFLINE mode (status). In this mode, existing
transactions can continue to execute, but undo records for new user transactions
cannot be stored in this undo tablespace.
An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch
operation completes successfully. A PENDING OFFLINE undo tablespace cannot be
used by another instance, nor can it be dropped. Eventually, after all active
transactions have committed, the undo tablespace automatically goes from the
PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo tablespace
is available for other instances (in an Oracle Real Application Cluster environment).
If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), then the
current undo tablespace is switched out and the next available undo tablespace is
switched in. Use this statement with care because there may be no undo tablespace
available.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';

Establishing User Quotas for Undo Space
The Oracle Database Resource Manager can be used to establish user quotas for undo
space. The Database Resource Manager directive UNDO_POOL allows DBAs to limit the
amount of undo space consumed by a group of users (resource consumer group).
You can specify an undo pool for each consumer group. An undo pool controls the
amount of total undo that can be generated by a consumer group. When the total undo
generated by a consumer group exceeds its undo limit, the current UPDATE transaction
generating the undo is terminated. No other members of the consumer group can
perform further updates until undo space is freed from the pool.
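The following is a minimal sketch of setting an undo pool with the Database Resource
Manager. The plan and consumer group names are hypothetical, and the undo_pool
value is specified in kilobytes:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',   -- hypothetical plan
    group_or_subplan => 'DEV_GROUP',      -- hypothetical consumer group
    comment          => 'Limit undo space for the DEV_GROUP consumer group',
    undo_pool        => 10240);           -- undo quota, in KB
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/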


When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo
space.
See Also: Chapter 24, "Using the Database Resource Manager"

Migrating to Automatic Undo Management
If you are currently using rollback segments to manage undo space, Oracle strongly
recommends that you migrate your database to automatic undo management. Oracle
Database provides a function that returns information on how to size your new undo
tablespace based on the configuration and usage of the rollback segments in your
system. DBA privileges are required to execute this function:

SET SERVEROUTPUT ON
DECLARE
  utbsiz_in_MB NUMBER;
BEGIN
  utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  -- Display the result; the original example computes the value silently.
  DBMS_OUTPUT.PUT_LINE('Suggested undo tablespace size (MB): ' || utbsiz_in_MB);
END;
/

The function returns the sizing information (the suggested undo tablespace size, in
megabytes) directly.

Viewing Information About Undo
This section lists views that are useful for viewing information about undo space in the
automatic undo management mode and provides some examples. In addition to views
listed here, you can obtain information from the views available for viewing
tablespace and datafile information. Please refer to "Viewing Datafile Information" on
page 9-25 for descriptions of those views.
Oracle Database also provides proactive help in managing tablespace disk space use
by alerting you when tablespaces run low on available space. Please refer to
"Managing Tablespace Alerts" on page 14-1 for information on how to set alert
thresholds for the undo tablespace.
In addition to the proactive undo space alerts, Oracle Database also provides alerts if
your system has long-running queries that cause SNAPSHOT TOO OLD errors. To
prevent excessive alerts, the long query alert is issued at most once every 24 hours.
When the alert is generated, you can check the Undo Advisor Page of Enterprise
Manager to get more information about the undo tablespace.
The following dynamic performance views are useful for obtaining space information
about the undo tablespace:

View                 Description
V$UNDOSTAT           Contains statistics for monitoring and tuning undo space.
                     Use this view to help estimate the amount of undo space
                     required for the current workload. The database also uses
                     this information to help tune undo usage in the system.
                     This view is meaningful only in automatic undo management
                     mode.
V$ROLLSTAT           For automatic undo management mode, information reflects
                     behavior of the undo segments in the undo tablespace.
V$TRANSACTION        Contains undo segment information.
DBA_UNDO_EXTENTS     Shows the status and size of each extent in the undo
                     tablespace.
DBA_HIST_UNDOSTAT    Contains statistical snapshots of V$UNDOSTAT information.
                     Please refer to Oracle Database 2 Day DBA for more
                     information.

See Also: Oracle Database Reference for complete descriptions of
the views used in automatic undo management mode

The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on
undo space in the current instance. Statistics are available for undo space
consumption, transaction concurrency, the tuning of undo retention, and the length
and SQL ID of long-running queries in the instance.
Each row in the view contains statistics collected in the instance for a ten-minute
interval. The rows are in descending order by the BEGIN_TIME column value. Each
row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column
represents the data collected for the particular statistic in that time interval. The first
row of the view contains statistics for the (partial) current time period. The view
contains a total of 576 rows, spanning a 4 day cycle.
The following example shows the results of a query on the V$UNDOSTAT view.
SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME,
UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM v$UNDOSTAT WHERE rownum <= 144;
BEGIN_TIME          END_TIME               UNDOTSN   UNDOBLKS   TXNCOUNT   MAXCON
------------------- ------------------- ---------- ---------- ---------- --------
10/28/2004 14:25:12 10/28/2004 14:32:17          8         74   12071108        3
10/28/2004 14:15:12 10/28/2004 14:25:12          8         49   12070698        2
10/28/2004 14:05:12 10/28/2004 14:15:12          8        125   12070220        1
10/28/2004 13:55:12 10/28/2004 14:05:12          8         99   12066511        3
...
10/27/2004 14:45:12 10/27/2004 14:55:12          8         15   11831676        1
10/27/2004 14:35:12 10/27/2004 14:45:12          8        154   11831165        2

144 rows selected.

The preceding example shows how undo space is consumed in the system for the
previous 24 hours from the time 14:35:12 on 10/27/2004.


Part III
Automated File and Storage Management
Part III describes how to use the Oracle-managed files and Automatic Storage
Management features to simplify management of your database files.
You use the Oracle-managed files feature to specify file system directories in which
Oracle automatically creates and manages files for you at the database object level. For
example, you need only specify that you want to create a tablespace; you do not need
to specify the DATAFILE clause. This feature works well with a logical volume
manager (LVM).
Automatic Storage Management provides a logical volume manager that is integrated
into the Oracle database and eliminates the need for you to purchase a third party
product. Oracle creates Oracle-managed files within "disk groups" that you specify
and provides redundancy and striping.
This part contains the following chapters:
■ Chapter 11, "Using Oracle-Managed Files"
■ Chapter 12, "Using Automatic Storage Management"

11
Using Oracle-Managed Files
This chapter discusses the use of Oracle-managed files and contains the following
topics:
■ What Are Oracle-Managed Files?
■ Enabling the Creation and Use of Oracle-Managed Files
■ Creating Oracle-Managed Files
■ Behavior of Oracle-Managed Files
■ Scenarios for Using Oracle-Managed Files

What Are Oracle-Managed Files?
Using Oracle-managed files simplifies the administration of an Oracle Database.
Oracle-managed files eliminate the need for you, the DBA, to directly manage the
operating system files comprising an Oracle Database. You specify operations in terms
of database objects rather than filenames. The database internally uses standard file
system interfaces to create and delete files as needed for the following database
structures:
■ Tablespaces
■ Redo log files
■ Control files
■ Archived logs
■ Block change tracking files
■ Flashback logs
■ RMAN backups

Through initialization parameters, you specify the file system directory to be used for
a particular type of file. The database then ensures that a unique file, an
Oracle-managed file, is created and deleted when no longer needed.
This feature does not affect the creation or naming of administrative files such as trace
files, audit files, alert logs, and core files.


See Also: Chapter 12, "Using Automatic Storage Management"
for information about the Oracle Database integrated storage
management system that extends the power of Oracle-managed
files. With Oracle-managed files, files are created and managed
automatically for you, but with Automatic Storage Management
you get the additional benefits of features such as file redundancy
and striping, without the need to purchase a third-party logical
volume manager.

Who Can Use Oracle-Managed Files?
Oracle-managed files are most useful for the following types of databases:
■ Databases that are supported by the following:
  – A logical volume manager that supports striping/RAID and dynamically
    extensible logical volumes
  – A file system that provides large, extensible files
■ Low end or test databases

The Oracle-managed files feature is not intended to ease administration of systems
that use raw disks. This feature provides better integration with operating system
functionality for disk space allocation. Since there is no operating system support for
allocation of raw disks (it is done manually), this feature cannot help. On the other
hand, because Oracle-managed files require that you use the operating system file
system (unlike raw disks), you lose control over how files are laid out on the disks and
thus, you lose some I/O tuning ability.

What Is a Logical Volume Manager?
A logical volume manager (LVM) is a software package available with most operating
systems. Sometimes it is called a logical disk manager (LDM). It allows pieces of
multiple physical disks to be combined into a single contiguous address space that
appears as one disk to higher layers of software. An LVM can make the logical volume
have better capacity, performance, reliability, and availability characteristics than any
of the underlying physical disks. It uses techniques such as mirroring, striping,
concatenation, and RAID 5 to implement these characteristics.
Some LVMs allow the characteristics of a logical volume to be changed after it is
created, even while it is in use. The volume may be resized or mirrored, or it may be
relocated to different physical disks.

What Is a File System?
A file system is a data structure built inside a contiguous disk address space. A file
manager (FM) is a software package that manipulates file systems, but it is sometimes
called the file system. All operating systems have file managers. The primary task of a
file manager is to allocate and deallocate disk space into files within a file system.
A file system allows the disk space to be allocated to a large number of files. Each file
is made to appear as a contiguous address space to applications such as Oracle
Database. The files may not actually be contiguous within the disk space of the file
system. Files can be created, read, written, resized, and deleted. Each file has a name
associated with it that is used to refer to the file.
A file system is commonly built on top of a logical volume constructed by an LVM.
Thus all the files in a particular file system have the same performance, reliability, and
availability characteristics inherited from the underlying logical volume. A file system
is a single pool of storage that is shared by all the files in the file system. If a file system
is out of space, then none of the files in that file system can grow. Space available in
one file system does not affect space in another file system. However, some LVM/FM
combinations allow space to be added or removed from a file system.
An operating system can support multiple file systems. Multiple file systems are
constructed to give different storage characteristics to different files, as well as to
divide the available disk space into pools that do not affect each other.

Benefits of Using Oracle-Managed Files
Consider the following benefits of using Oracle-managed files:
■ They make the administration of the database easier.
  There is no need to invent filenames and define specific storage requirements. A
  consistent set of rules is used to name all relevant files. The file system defines the
  characteristics of the storage and the pool where it is allocated.
■ They reduce corruption caused by administrators specifying the wrong file.
  Each Oracle-managed file and filename is unique. Using the same file in two
  different databases is a common mistake that can cause very long down times and
  loss of committed transactions. Using two different names that refer to the same
  file is another mistake that causes major corruptions.
■ They reduce wasted disk space consumed by obsolete files.
  Oracle Database automatically removes old Oracle-managed files when they are
  no longer needed. Much disk space is wasted in large systems simply because no
  one is sure if a particular file is still required. This also simplifies the
  administrative task of removing files that are no longer required on disk and
  prevents the mistake of deleting the wrong file.
■ They simplify creation of test and development databases.
  You can minimize the time spent making decisions regarding file structure and
  naming, and you have fewer file management tasks. You can focus better on
  meeting the actual requirements of your test or development database.
■ Oracle-managed files make development of portable third-party tools easier.
  Oracle-managed files eliminate the need to put operating system specific file
  names in SQL scripts.

Oracle-Managed Files and Existing Functionality
Using Oracle-managed files does not eliminate any existing functionality. Existing
databases are able to operate as they always have. New files can be created as
managed files while old ones are administered in the old way. Thus, a database can
have a mixture of Oracle-managed and unmanaged files.

Enabling the Creation and Use of Oracle-Managed Files
The following initialization parameters allow the database server to use the
Oracle-managed files feature:


Initialization Parameter         Description
DB_CREATE_FILE_DEST              Defines the location of the default file system
                                 directory where the database creates datafiles or
                                 tempfiles when no file specification is given in the
                                 creation operation. Also used as the default file
                                 system directory for redo log and control files if
                                 DB_CREATE_ONLINE_LOG_DEST_n is not specified.
DB_CREATE_ONLINE_LOG_DEST_n      Defines the location of the default file system
                                 directory for redo log files and control file creation
                                 when no file specification is given in the creation
                                 operation. You can use this initialization parameter
                                 multiple times, where n specifies a multiplexed
                                 copy of the redo log or control file. You can specify
                                 up to five multiplexed copies.
DB_RECOVERY_FILE_DEST            Defines the location of the default file system
                                 directory where the database creates RMAN
                                 backups when no format option is used, archived
                                 logs when no other local destination is configured,
                                 and flashback logs. Also used as the default file
                                 system directory for redo log and control files if
                                 DB_CREATE_ONLINE_LOG_DEST_n is not specified.

The file system directory specified by any of these parameters must already exist:
the database does not create it. The directory must also have permissions to allow the
database to create the files in it.
The default location is used whenever a location is not explicitly specified for the
operation creating the file. The database creates the filename, and a file thus created is
an Oracle-managed file.
These initialization parameters are dynamic, and can be set using the ALTER
SYSTEM or ALTER SESSION statement, as shown in the following example.
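A minimal sketch of setting these parameters dynamically (the directory paths are
illustrative):

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
ALTER SESSION SET DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata';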
See Also:
■ Oracle Database Reference for additional information about
  initialization parameters
■ "How Oracle-Managed Files Are Named" on page 11-6

Setting the DB_CREATE_FILE_DEST Initialization Parameter
Include the DB_CREATE_FILE_DEST initialization parameter in your initialization
parameter file to identify the default location for the database server to create:
■ Datafiles
■ Tempfiles
■ Redo log files
■ Control files
■ Block change tracking files

You specify the name of a file system directory that becomes the default location for
the creation of the operating system files for these entities. The following example sets
/u01/oradata as the default directory to use when creating Oracle-managed files:
DB_CREATE_FILE_DEST = '/u01/oradata'


Setting the DB_RECOVERY_FILE_DEST Parameter
Include the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE
parameters in your initialization parameter file to identify the default location in
which Oracle Database should create:
■ Redo log files
■ Control files
■ RMAN backups (datafile copies, control file copies, backup pieces, control file autobackups)
■ Archived logs
■ Flashback logs

You specify the name of a file system directory that becomes the default location for
creation of the operating system files for these entities. For example:
DB_RECOVERY_FILE_DEST      = '/u01/oradata'
DB_RECOVERY_FILE_DEST_SIZE = 20G

Setting the DB_CREATE_ONLINE_LOG_DEST_n Initialization Parameter
Include the DB_CREATE_ONLINE_LOG_DEST_n initialization parameter in your
initialization parameter file to identify the default location for the database server to
create:
■ Redo log files
■ Control files

You specify the name of a file system directory that becomes the default location for
the creation of the operating system files for these entities. You can specify up to five
multiplexed locations.
For the creation of redo log files and control files only, this parameter overrides any default
location specified in the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST
initialization parameters. If you do not specify a DB_CREATE_FILE_DEST parameter,
but you do specify the DB_CREATE_ONLINE_LOG_DEST_n parameter, then only redo
log files and control files can be created as Oracle-managed files.
It is recommended that you specify at least two parameters. For example:
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'

This allows multiplexing, which provides greater fault-tolerance for the redo log and
control file if one of the destinations fails.

Creating Oracle-Managed Files
If you have met any of the following conditions, then Oracle Database creates
Oracle-managed files for you, as appropriate, when no file specification is given in the
creation operation:
■ You have included any of the DB_CREATE_FILE_DEST,
DB_RECOVERY_FILE_DEST, or DB_CREATE_ONLINE_LOG_DEST_n initialization
parameters in your initialization parameter file.


■ You have issued the ALTER SYSTEM statement to dynamically set any of the
DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST, or
DB_CREATE_ONLINE_LOG_DEST_n initialization parameters.
■ You have issued the ALTER SESSION statement to dynamically set any of the
DB_CREATE_FILE_DEST, DB_RECOVERY_FILE_DEST, or
DB_CREATE_ONLINE_LOG_DEST_n initialization parameters.

If a statement that creates an Oracle-managed file encounters an error or does not
complete due to some failure, then any Oracle-managed files created by the statement
are automatically deleted as part of the recovery from the error or failure. However,
because of the large number of potential errors that can occur with file systems and
storage subsystems, there can be situations where you must manually remove the files
using operating system commands. When an Oracle-managed file is created, its
filename is written to the alert log. You can use this information to find a file that
must be removed manually.
The following topics are discussed in this section:
■ How Oracle-Managed Files Are Named
■ Creating Oracle-Managed Files at Database Creation
■ Creating Datafiles for Tablespaces Using Oracle-Managed Files
■ Creating Tempfiles for Temporary Tablespaces Using Oracle-Managed Files
■ Creating Control Files Using Oracle-Managed Files
■ Creating Redo Log Files Using Oracle-Managed Files
■ Creating Archived Logs Using Oracle-Managed Files

How Oracle-Managed Files Are Named
The filenames of Oracle-managed files comply with the Optimal Flexible Architecture
(OFA) standard for file naming. The assigned names are intended to meet the
following requirements:
■ Database files are easily distinguishable from all other files.
■ Files of one database type are easily distinguishable from other database types.
■ Files are clearly associated with important attributes specific to the file type. For
example, a datafile name may include the tablespace name to allow for easy
association of datafile to tablespace, or an archived log name may include the
thread, sequence, and creation date.

No two Oracle-managed files are given the same name. The name that is used for
creation of an Oracle-managed file is constructed from three sources:
■ The default creation location
■ A file name template that is chosen based on the type of the file. The template also
depends on the operating system platform and whether or not automatic storage
management is used.
■ A unique string created by the Oracle Database server or the operating system.
This ensures that file creation does not damage an existing file and that the file
cannot be mistaken for some other file.

As a specific example, filenames for Oracle-managed files have the following format
on a Solaris file system:
destination_prefix/o1_mf_%t_%u_.dbf

where:
■ destination_prefix is destination_location/db_unique_name/file_type (for
example, the file_type directory for a datafile is datafile), where:
  – destination_location is the location specified in DB_CREATE_FILE_DEST
  – db_unique_name is the globally unique name (DB_UNIQUE_NAME
    initialization parameter) of the target database. If there is no
    DB_UNIQUE_NAME parameter, then the DB_NAME initialization parameter
    value is used.
■ %t is the tablespace name.
■ %u is an eight-character string that guarantees uniqueness.

For example, assume the following parameter settings:
DB_CREATE_FILE_DEST = /u01/oradata
DB_UNIQUE_NAME = PAYROLL

Then an example datafile name would be:
/u01/oradata/PAYROLL/datafile/o1_mf_tbs1_2ixh90q_.dbf

Names for other file types are similar. Names on other platforms are also similar,
subject to the constraints of the naming rules of the platform.
The examples on the following pages use Oracle-managed file names as they might
appear with a Solaris file system as an OMF destination.
Caution: Do not rename an Oracle-managed file. The database
identifies an Oracle-managed file based on its name. If you rename
the file, the database can no longer recognize it as an
Oracle-managed file and does not manage it as one.

Creating Oracle-Managed Files at Database Creation
The behavior of the CREATE DATABASE statement for creating database structures
when using Oracle-managed files is discussed in this section.

See Also: Oracle Database SQL Reference for a description of the CREATE DATABASE statement
Specifying Control Files at Database Creation
At database creation, the control file is created in the files specified by the
CONTROL_FILES initialization parameter. If the CONTROL_FILES parameter is not set
and at least one of the initialization parameters required for the creation of
Oracle-managed files is set, then an Oracle-managed control file is created in the
default control file destinations. In order of precedence, the default destination is
defined as follows:
■ One or more control files as specified in the DB_CREATE_ONLINE_LOG_DEST_n
initialization parameter. The file in the first directory is the primary control file.
When DB_CREATE_ONLINE_LOG_DEST_n is specified, the database does not
create a control file in DB_CREATE_FILE_DEST or in DB_RECOVERY_FILE_DEST
(the flash recovery area).
■ If no value is specified for DB_CREATE_ONLINE_LOG_DEST_n, but values are set
for both the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization
parameters, then the database creates one control file in each location. The control
file in the location specified in DB_CREATE_FILE_DEST is the primary control file.
■ If a value is specified only for DB_CREATE_FILE_DEST, then the database creates
one control file in that location.
■ If a value is specified only for DB_RECOVERY_FILE_DEST, then the database
creates one control file in that location.

If the CONTROL_FILES parameter is not set and none of these initialization
parameters are set, then the Oracle Database default behavior is operating system
dependent. At least one copy of a control file is created in an operating system
dependent default location. Any copies of control files created in this fashion are not
Oracle-managed files, and you must add a CONTROL_FILES initialization parameter
to any initialization parameter file.
If the database creates an Oracle-managed control file, and if there is a server
parameter file, then the database creates a CONTROL_FILES initialization parameter
entry in the server parameter file. If there is no server parameter file, then you must
manually include a CONTROL_FILES initialization parameter entry in the text
initialization parameter file.
See Also: Chapter 5, "Managing Control Files"

Specifying Redo Log Files at Database Creation
The LOGFILE clause is not required in the CREATE DATABASE statement, and
omitting it provides a simple means of creating Oracle-managed redo log files. If the
LOGFILE clause is omitted, then redo log files are created in the default redo log file
destinations. In order of precedence, the default destination is defined as follows:
■ If the DB_CREATE_ONLINE_LOG_DEST_n initialization parameter is set, then the
database creates a log file member in each directory specified, up to the value of
the MAXLOGMEMBERS initialization parameter.
■ If the DB_CREATE_ONLINE_LOG_DEST_n parameter is not set, but both the
DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization
parameters are set, then the database creates one Oracle-managed log file member
in each of those locations. The log file in the DB_CREATE_FILE_DEST destination
is the first member.
■ If only the DB_CREATE_FILE_DEST initialization parameter is specified, then the
database creates a log file member in that location.
■ If only the DB_RECOVERY_FILE_DEST initialization parameter is specified, then
the database creates a log file member in that location.

The default size of an Oracle-managed redo log file is 100 MB.
Optionally, you can create Oracle-managed redo log files, and override default
attributes, by including the LOGFILE clause but omitting a filename. Redo log files are
created the same way, except for the following: If no filename is provided in the
LOGFILE clause of CREATE DATABASE, and none of the initialization parameters
required for creating Oracle-managed files are provided, then the CREATE DATABASE
statement fails.


See Also: Chapter 6, "Managing the Redo Log"

Specifying the SYSTEM and SYSAUX Tablespace Datafiles at Database Creation
The DATAFILE or SYSAUX DATAFILE clause is not required in the CREATE
DATABASE statement, and omitting it provides a simple means of creating
Oracle-managed datafiles for the SYSTEM and SYSAUX tablespaces. If the DATAFILE
clause is omitted, then one of the following actions occurs:
■ If DB_CREATE_FILE_DEST is set, then one Oracle-managed datafile for the
SYSTEM tablespace and another for the SYSAUX tablespace are created in the
DB_CREATE_FILE_DEST directory.
■ If DB_CREATE_FILE_DEST is not set, then the database creates one SYSTEM and
one SYSAUX tablespace datafile whose names and sizes are operating system
dependent. Any SYSTEM or SYSAUX tablespace datafile created in this manner is
not an Oracle-managed file.

The default size for an Oracle-managed datafile is 100 MB and the file is
autoextensible. When autoextension is required, the database extends the datafile by
its existing size or 100 MB, whichever is smaller. You can also explicitly specify the
autoextensible unit using the NEXT parameter of the STORAGE clause when you
specify the datafile (in a CREATE or ALTER TABLESPACE operation).
Optionally, you can create an Oracle-managed datafile for the SYSTEM or SYSAUX
tablespace and override default attributes. This is done by including the DATAFILE
clause, omitting a filename, but specifying overriding attributes. When a filename is
not supplied and the DB_CREATE_FILE_DEST parameter is set, an Oracle-managed
datafile for the SYSTEM or SYSAUX tablespace is created in the
DB_CREATE_FILE_DEST directory with the attributes you specify overriding the
defaults. However, if a filename is not supplied and the DB_CREATE_FILE_DEST
parameter is not set, then the CREATE DATABASE statement fails.
When overriding the default attributes of an Oracle-managed file, if a SIZE value is
specified but no AUTOEXTEND clause is specified, then the datafile is not
autoextensible.

Specifying the Undo Tablespace Datafile at Database Creation
The DATAFILE subclause of the UNDO TABLESPACE clause is optional and a filename
is not required in the file specification. If a filename is not supplied and the
DB_CREATE_FILE_DEST parameter is set, then an Oracle-managed datafile is created
in the DB_CREATE_FILE_DEST directory. If DB_CREATE_FILE_DEST is not set, then
the statement fails with a syntax error.
The UNDO TABLESPACE clause itself is optional in the CREATE DATABASE statement.
If it is not supplied, and automatic undo management mode is enabled, then a default
undo tablespace named SYS_UNDOTBS is created and a 10 MB datafile that is
autoextensible is allocated as follows:
■ If DB_CREATE_FILE_DEST is set, then an Oracle-managed datafile is created in
the indicated directory.
■ If DB_CREATE_FILE_DEST is not set, then the datafile location is operating
system specific.
See Also: Chapter 10, "Managing the Undo Tablespace"


Specifying the Default Temporary Tablespace Tempfile at Database Creation
The TEMPFILE subclause is optional for the DEFAULT TEMPORARY TABLESPACE
clause and a filename is not required in the file specification. If a filename is not
supplied and the DB_CREATE_FILE_DEST parameter is set, then an Oracle-managed
tempfile is created in the DB_CREATE_FILE_DEST directory. If
DB_CREATE_FILE_DEST is not set, then the CREATE DATABASE statement fails with
a syntax error.
The DEFAULT TEMPORARY TABLESPACE clause itself is optional. If it is not specified,
then no default temporary tablespace is created.
The default size for an Oracle-managed tempfile is 100 MB and the file is
autoextensible with an unlimited maximum size.

CREATE DATABASE Statement Using Oracle-Managed Files: Examples
This section contains examples of the CREATE DATABASE statement when using the
Oracle-managed files feature.
CREATE DATABASE: Example 1  This example creates a database with the following
Oracle-managed files:
■ A SYSTEM tablespace datafile in directory /u01/oradata that is 100 MB and
autoextensible up to an unlimited size.
■ A SYSAUX tablespace datafile in directory /u01/oradata that is 100 MB and
autoextensible up to an unlimited size. The tablespace is locally managed with
automatic segment-space management.
■ Two online log groups with two members of 100 MB each, one each in
/u02/oradata and /u03/oradata.
■ If automatic undo management mode is enabled, then an undo tablespace datafile
in directory /u01/oradata that is 10 MB and autoextensible up to an unlimited
size. An undo tablespace named SYS_UNDOTBS is created.
■ If no CONTROL_FILES initialization parameter is specified, then two control files,
one each in /u02/oradata and /u03/oradata. The control file in
/u02/oradata is the primary control file.

The following parameter settings relating to Oracle-managed files are included in the
initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'

The following statement is issued at the SQL prompt:
SQL> CREATE DATABASE sample;

CREATE DATABASE: Example 2  This example creates a database with the following
Oracle-managed files:
■ A 100 MB SYSTEM tablespace datafile in directory /u01/oradata that is
autoextensible up to an unlimited size.
■ A SYSAUX tablespace datafile in directory /u01/oradata that is 100 MB and
autoextensible up to an unlimited size. The tablespace is locally managed with
automatic segment-space management.
■ Two redo log files of 100 MB each in directory /u01/oradata. They are not
multiplexed.
■ An undo tablespace datafile in directory /u01/oradata that is 10 MB and
autoextensible up to an unlimited size. An undo tablespace named SYS_UNDOTBS
is created.
■ A control file in /u01/oradata.

In this example, it is assumed that:
■ No DB_CREATE_ONLINE_LOG_DEST_n initialization parameters are specified in
the initialization parameter file.
■ No CONTROL_FILES initialization parameter was specified in the initialization
parameter file.
■ Automatic undo management mode is enabled.

The following statements are issued at the SQL prompt:
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE DATABASE sample2;

This database configuration is not recommended for a production database. The
example illustrates how a very low-end database or simple test database can easily be
created. To better protect this database from failures, at least one more control file
should be created and the redo log should be multiplexed.
CREATE DATABASE: Example 3  In this example, the file sizes for the Oracle-managed
files for the default temporary tablespace and undo tablespace are specified. A
database with the following Oracle-managed files is created:
■ A 400 MB SYSTEM tablespace datafile in directory /u01/oradata. Because SIZE
is specified, the file is not autoextensible.
■ A 200 MB SYSAUX tablespace datafile in directory /u01/oradata. Because SIZE
is specified, the file is not autoextensible. The tablespace is locally managed with
automatic segment-space management.
■ Two redo log groups with two members of 100 MB each, one each in directories
/u02/oradata and /u03/oradata.
■ For the default temporary tablespace dflt_ts, a 10 MB tempfile in directory
/u01/oradata. Because SIZE is specified, the file is not autoextensible.
■ For the undo tablespace undo_ts, a 10 MB datafile in directory /u01/oradata.
Because SIZE is specified, the file is not autoextensible.
■ If no CONTROL_FILES initialization parameter was specified, then two control
files, one each in directories /u02/oradata and /u03/oradata. The control file
in /u02/oradata is the primary control file.

The following parameter settings are included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'

The following statement is issued at the SQL prompt:
SQL> CREATE DATABASE sample3 DATAFILE SIZE 400M
  2>    SYSAUX DATAFILE SIZE 200M
  3>    DEFAULT TEMPORARY TABLESPACE dflt_ts TEMPFILE SIZE 10M
  4>    UNDO TABLESPACE undo_ts DATAFILE SIZE 10M;

Creating Datafiles for Tablespaces Using Oracle-Managed Files
The following statements that can create datafiles are relevant to the discussion in this
section:
■ CREATE TABLESPACE
■ CREATE UNDO TABLESPACE
■ ALTER TABLESPACE ... ADD DATAFILE

When creating a tablespace, either a regular tablespace or an undo tablespace, the
DATAFILE clause is optional. When you include the DATAFILE clause, the filename is
optional. If the DATAFILE clause or filename is not provided, then the following rules
apply:
■ If the DB_CREATE_FILE_DEST initialization parameter is specified, then an
Oracle-managed datafile is created in the location specified by the parameter.
■ If the DB_CREATE_FILE_DEST initialization parameter is not specified, then the
statement creating the datafile fails.

When you add a datafile to a tablespace with the ALTER TABLESPACE ... ADD
DATAFILE statement, the filename is optional. If the filename is not specified, then the
same rules apply as discussed in the previous paragraph.
By default, an Oracle-managed datafile for a regular tablespace is 100 MB and is
autoextensible with an unlimited maximum size. However, if in your DATAFILE
clause you override these defaults by specifying a SIZE value (and no AUTOEXTEND
clause), then the datafile is not autoextensible.
See Also:
■ "Specifying the SYSTEM and SYSAUX Tablespace Datafiles at Database Creation" on page 11-9
■ "Specifying the Undo Tablespace Datafile at Database Creation" on page 11-9
■ Chapter 8, "Managing Tablespaces"

CREATE TABLESPACE: Examples
The following are some examples of creating tablespaces with Oracle-managed files.
See Also: Oracle Database SQL Reference for a description of the CREATE TABLESPACE statement

CREATE TABLESPACE: Example 1  The following example sets the default location
for datafile creations to /u01/oradata and then creates a tablespace tbs_1 with a
datafile in that location. The datafile is 100 MB and is autoextensible with an unlimited
maximum size.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE TABLESPACE tbs_1;

CREATE TABLESPACE: Example 2  This example creates a tablespace named tbs_2
with a datafile in the directory /u01/oradata. The datafile initial size is 400 MB, and
because the SIZE clause is specified, the datafile is not autoextensible.
The following parameter setting is included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'


The following statement is issued at the SQL prompt:
SQL> CREATE TABLESPACE tbs_2 DATAFILE SIZE 400M;

CREATE TABLESPACE: Example 3  This example creates a tablespace named tbs_3
with an autoextensible datafile in the directory /u01/oradata with a maximum size
of 800 MB and an initial size of 100 MB.

The following parameter setting is included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'

The following statement is issued at the SQL prompt:
SQL> CREATE TABLESPACE tbs_3 DATAFILE AUTOEXTEND ON MAXSIZE 800M;

CREATE TABLESPACE: Example 4  The following example sets the default location
for datafile creations to /u01/oradata and then creates a tablespace named tbs_4 in
that directory with two datafiles. Both datafiles have an initial size of 200 MB, and
because a SIZE value is specified, they are not autoextensible.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE TABLESPACE tbs_4 DATAFILE SIZE 200M, SIZE 200M;

CREATE UNDO TABLESPACE: Example
The following example creates an undo tablespace named undotbs_1 with a datafile
in the directory /u01/oradata. The datafile for the undo tablespace is 100 MB and is
autoextensible with an unlimited maximum size.
The following parameter setting is included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'

The following statement is issued at the SQL prompt:
SQL> CREATE UNDO TABLESPACE undotbs_1;

See Also: Oracle Database SQL Reference for a description of the CREATE UNDO TABLESPACE statement

ALTER TABLESPACE: Example
This example adds an Oracle-managed autoextensible datafile to the tbs_1
tablespace. The datafile has an initial size of 100 MB and a maximum size of 800 MB.
The following parameter setting is included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'

The following statement is entered at the SQL prompt:
SQL> ALTER TABLESPACE tbs_1 ADD DATAFILE AUTOEXTEND ON MAXSIZE 800M;

See Also: Oracle Database SQL Reference for a description of the ALTER TABLESPACE statement

Creating Tempfiles for Temporary Tablespaces Using Oracle-Managed Files
The following statements that create tempfiles are relevant to the discussion in this
section:
■ CREATE TEMPORARY TABLESPACE
■ ALTER TABLESPACE ... ADD TEMPFILE

When creating a temporary tablespace, the TEMPFILE clause is optional. If you include
the TEMPFILE clause, then the filename is optional. If the TEMPFILE clause or
filename is not provided, then the following rules apply:
■ If the DB_CREATE_FILE_DEST initialization parameter is specified, then an
Oracle-managed tempfile is created in the location specified by the parameter.
■ If the DB_CREATE_FILE_DEST initialization parameter is not specified, then the
statement creating the tempfile fails.

When you add a tempfile to a tablespace with the ALTER TABLESPACE ... ADD
TEMPFILE statement, the filename is optional. If the filename is not specified, then the
same rules apply as discussed in the previous paragraph.
When overriding the default attributes of an Oracle-managed file, if a SIZE value is
specified but no AUTOEXTEND clause is specified, then the tempfile is not
autoextensible.
See Also: "Specifying the Default Temporary Tablespace Tempfile
at Database Creation" on page 11-10

CREATE TEMPORARY TABLESPACE: Example
The following example sets the default location for datafile creations to
/u01/oradata and then creates a tablespace named temptbs_1 with a tempfile in
that location. The tempfile is 100 MB and is autoextensible with an unlimited
maximum size.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE TEMPORARY TABLESPACE temptbs_1;

See Also: Oracle Database SQL Reference for a description of the CREATE TABLESPACE statement

ALTER TABLESPACE... ADD TEMPFILE: Example
The following example sets the default location for datafile creations to
/u03/oradata and then adds a tempfile in the default location to a tablespace named
temptbs_1. The tempfile initial size is 100 MB. It is autoextensible with an unlimited
maximum size.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u03/oradata';
SQL> ALTER TABLESPACE temptbs_1 ADD TEMPFILE;

See Also: Oracle Database SQL Reference for a description of the ALTER TABLESPACE statement

Creating Control Files Using Oracle-Managed Files
When you issue the CREATE CONTROLFILE statement, a control file is created (or
reused, if REUSE is specified) in the files specified by the CONTROL_FILES
initialization parameter. If the CONTROL_FILES parameter is not set, then the control
file is created in the default control file destinations. The default destination is
determined according to the precedence documented in "Specifying Control Files at
Database Creation" on page 11-7.
If Oracle Database creates an Oracle-managed control file, and there is a server
parameter file, then the database creates a CONTROL_FILES initialization parameter
entry in the server parameter file. If there is no server parameter file, then you must
create a CONTROL_FILES initialization parameter entry manually and include it in the
initialization parameter file.
If the datafiles in the database are Oracle-managed files, then the database-generated
filenames for the files must be supplied in the DATAFILE clause of the statement.
If the redo log files are Oracle-managed files, then the NORESETLOGS or RESETLOGS
keyword determines what can be supplied in the LOGFILE clause:
■ If the NORESETLOGS keyword is used, then the database-generated filenames for
the Oracle-managed redo log files must be supplied in the LOGFILE clause.
■ If the RESETLOGS keyword is used, then the redo log file names can be supplied
as with the CREATE DATABASE statement. See "Specifying Redo Log Files at
Database Creation" on page 11-8.

The sections that follow contain examples of using the CREATE CONTROLFILE
statement with Oracle-managed files.
See Also:
■ Oracle Database SQL Reference for a description of the CREATE CONTROLFILE statement
■ "Specifying Control Files at Database Creation" on page 11-7

CREATE CONTROLFILE Using NORESETLOGS Keyword: Example
The following CREATE CONTROLFILE statement is generated by an ALTER
DATABASE BACKUP CONTROLFILE TO TRACE statement for a database with
Oracle-managed datafiles and redo log files:
CREATE CONTROLFILE
DATABASE sample
LOGFILE
GROUP 1 ('/u01/oradata/SAMPLE/onlinelog/o1_mf_1_o220rtt9_.log',
'/u02/oradata/SAMPLE/onlinelog/o1_mf_1_v2o0b2i3_.log')
SIZE 100M,
GROUP 2 ('/u01/oradata/SAMPLE/onlinelog/o1_mf_2_p22056iw_.log',
'/u02/oradata/SAMPLE/onlinelog/o1_mf_2_p02rcyg3_.log')
SIZE 100M
NORESETLOGS
DATAFILE '/u01/oradata/SAMPLE/datafile/o1_mf_system_xu34ybm2_.dbf'
SIZE 100M,
'/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_aawbmz51_.dbf'
SIZE 100M,
'/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_apqbmz51_.dbf'
SIZE 100M
MAXLOGFILES 5
MAXLOGHISTORY 100
MAXDATAFILES 10
MAXINSTANCES 2
ARCHIVELOG;

CREATE CONTROLFILE Using RESETLOGS Keyword: Example
The following is an example of a CREATE CONTROLFILE statement with the
RESETLOGS option. Some combination of DB_CREATE_FILE_DEST,
DB_RECOVERY_FILE_DEST, and DB_CREATE_ONLINE_LOG_DEST_n must be set.
CREATE CONTROLFILE
DATABASE sample
RESETLOGS


DATAFILE '/u01/oradata/SAMPLE/datafile/o1_mf_system_aawbmz51_.dbf',
'/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_axybmz51_.dbf',
'/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_azzbmz51_.dbf'
SIZE 100M
MAXLOGFILES 5
MAXLOGHISTORY 100
MAXDATAFILES 10
MAXINSTANCES 2
ARCHIVELOG;

Later, you must issue the ALTER DATABASE OPEN RESETLOGS statement to
re-create the redo log files. This is discussed in "Using the ALTER DATABASE OPEN
RESETLOGS Statement" on page 11-17. If the previous log files are Oracle-managed
files, then they are not deleted.

Creating Redo Log Files Using Oracle-Managed Files
Redo log files are created at database creation time. They can also be created when you
issue either of the following statements:
■ ALTER DATABASE ADD LOGFILE
■ ALTER DATABASE OPEN RESETLOGS

See Also: Oracle Database SQL Reference for a description of the ALTER DATABASE statement

Using the ALTER DATABASE ADD LOGFILE Statement
The ALTER DATABASE ADD LOGFILE statement lets you later add a new group to
your current redo log. The filename in the ADD LOGFILE clause is optional if you are
using Oracle-managed files. If a filename is not provided, then a redo log file is created
in the default log file destination. The default destination is determined according to
the precedence documented in "Specifying Redo Log Files at Database Creation" on
page 11-8.
If a filename is not provided and you have not provided one of the initialization
parameters required for creating Oracle-managed files, then the statement returns an
error.
The default size for an Oracle-managed log file is 100 MB.
You continue to add and drop redo log file members by specifying complete filenames.
See Also:
■ "Specifying Redo Log Files at Database Creation" on page 11-8
■ "Creating Control Files Using Oracle-Managed Files" on page 11-14

Adding New Redo Log Files: Example  The following example creates a log group
with a member in /u01/oradata and another member in /u02/oradata. The size
of each log file is 100 MB.
The following parameter settings are included in the initialization parameter file:
DB_CREATE_ONLINE_LOG_DEST_1 = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u02/oradata'

The following statement is issued at the SQL prompt:


SQL> ALTER DATABASE ADD LOGFILE;

Using the ALTER DATABASE OPEN RESETLOGS Statement
If you previously created a control file specifying RESETLOGS and either did not
specify filenames or specified nonexistent filenames, then the database creates redo log
files for you when you issue the ALTER DATABASE OPEN RESETLOGS statement.
The rules for determining the directories in which to store redo log files, when none
are specified in the control file, are the same as those discussed in "Specifying Redo
Log Files at Database Creation" on page 11-8.
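For example, assuming the control file was created with RESETLOGS and at least one
of the Oracle-managed files initialization parameters is set, the following statement
re-creates the redo log files in the default destinations:
SQL> ALTER DATABASE OPEN RESETLOGS;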

Creating Archived Logs Using Oracle-Managed Files
Archived logs are created in the DB_RECOVERY_FILE_DEST location when:
■ The ARCn or LGWR background process archives an online redo log, or
■ An ALTER SYSTEM ARCHIVE LOG CURRENT statement is issued.

For example, assume that the following parameter settings are included in the
initialization parameter file:
DB_RECOVERY_FILE_DEST_SIZE = 20G
DB_RECOVERY_FILE_DEST      = '/u01/oradata'
LOG_ARCHIVE_DEST_1         = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'
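With these settings, archived logs are created as Oracle-managed files under the
DB_RECOVERY_FILE_DEST location. As a sketch, assuming the database is running in
ARCHIVELOG mode, forcing the current log to be archived produces such a file:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
The resulting archived log is written beneath /u01/oradata with a
database-generated filename.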

Behavior of Oracle-Managed Files
The filenames of Oracle-managed files are accepted in SQL statements wherever a
filename is used to identify an existing file. These filenames, like other filenames, are
stored in the control file and, if using Recovery Manager (RMAN) for backup and
recovery, in the RMAN catalog. They are visible in all of the usual fixed and dynamic
performance views that are available for monitoring datafiles and tempfiles (for
example, V$DATAFILE or DBA_DATA_FILES).
The following are some examples of statements using database-generated filenames:
SQL> ALTER DATABASE
2> RENAME FILE '/u01/oradata/mydb/datafile/o1_mf_tbs01_ziw3bopb_.dbf'
3> TO '/u01/oradata/mydb/tbs0101.dbf';
SQL> ALTER DATABASE
2> DROP LOGFILE '/u01/oradata/mydb/onlinelog/o1_mf_1_wo94n2xi_.log';
SQL> ALTER TABLE emp
2> ALLOCATE EXTENT
3> (DATAFILE '/u01/oradata/mydb/datafile/o1_mf_tbs1_2ixfh90q_.dbf');

You can back up and restore Oracle-managed datafiles, tempfiles, and control files as
you would the corresponding non-Oracle-managed files. Using database-generated
filenames does not impact the use of logical backup files such as export files. This is
particularly important for tablespace point-in-time recovery (TSPITR) and
transportable tablespace export files.
There are some cases where Oracle-managed files behave differently. These are
discussed in the sections that follow.


Dropping Datafiles and Tempfiles
Unlike files that are not managed by the database, when an Oracle-managed datafile
or tempfile is dropped, the filename is removed from the control file and the file is
automatically deleted from the file system. The statements that delete Oracle-managed
files when they are dropped are:
■ DROP TABLESPACE
■ ALTER DATABASE TEMPFILE ... DROP

You can also use these statements, which always delete files, Oracle-managed or not:
■ ALTER TABLESPACE ... DROP DATAFILE
■ ALTER TABLESPACE ... DROP TEMPFILE
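For illustration, assuming a tablespace tbs_1 whose files are Oracle-managed, and a
tempfile with the hypothetical Oracle-managed name shown, either of the following
removes the file from both the control file and the file system:
SQL> DROP TABLESPACE tbs_1;
SQL> ALTER DATABASE TEMPFILE '/u01/oradata/SAMPLE/datafile/o1_mf_temp_1a2b3c4d_.tmp' DROP;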

Dropping Redo Log Files
When an Oracle-managed redo log file is dropped, its Oracle-managed files are
deleted. You specify the group or members to be dropped. The following statements
drop and delete redo log files:
■ ALTER DATABASE DROP LOGFILE
■ ALTER DATABASE DROP LOGFILE MEMBER
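For example, assuming a redo log group 3 whose members are Oracle-managed files,
the following statement drops the group and deletes its member files from the file
system:
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;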

Renaming Files
The following statements are used to rename files:
■ ALTER DATABASE RENAME FILE
■ ALTER TABLESPACE ... RENAME DATAFILE

These statements do not actually rename the files on the operating system, but rather,
the names in the control file are changed. If the old file is an Oracle-managed file and it
exists, then it is deleted. You must specify each filename using the conventions for
filenames on your operating system when you issue this statement.

Managing Standby Databases
The datafiles, control files, and redo log files in a standby database can be managed by
the database. This is independent of whether Oracle-managed files are used on the
primary database.
When recovery of a standby database encounters redo for the creation of a datafile, if
the datafile is an Oracle-managed file, then the recovery process creates an empty file
in the local default file system location. This allows the redo for the new file to be
applied immediately without any human intervention.
When recovery of a standby database encounters redo for the deletion of a tablespace,
it deletes any Oracle-managed datafiles in the local file system. Note that this is
independent of the INCLUDING DATAFILES option issued at the primary database.

Scenarios for Using Oracle-Managed Files
This section further demonstrates the use of Oracle-managed files by presenting
scenarios of their use.


Scenario 1: Create and Manage a Database with Multiplexed Redo Logs
In this scenario, a DBA creates a database where the datafiles and redo log files are
created in separate directories. The redo log files and control files are multiplexed. The
database uses an undo tablespace, and has a default temporary tablespace. The
following are tasks involved with creating and maintaining this database.
1. Setting the initialization parameters
The DBA includes three generic file creation defaults in the initialization
parameter file before creating the database. Automatic undo management mode is
also specified.
DB_CREATE_FILE_DEST = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'
UNDO_MANAGEMENT = AUTO

The DB_CREATE_FILE_DEST parameter sets the default file system directory for
the datafiles and tempfiles.
The DB_CREATE_ONLINE_LOG_DEST_1 and DB_CREATE_ONLINE_LOG_DEST_2
parameters set the default file system directories for redo log file and control file
creation. Each redo log file and control file is multiplexed across the two
directories.
2. Creating a database
Once the initialization parameters are set, the database can be created by using this
statement:
SQL> CREATE DATABASE sample
  2>   DEFAULT TEMPORARY TABLESPACE dflttmp;

Because a DATAFILE clause is not present and the DB_CREATE_FILE_DEST
initialization parameter is set, the SYSTEM tablespace datafile is created in the
default file system (/u01/oradata in this scenario). The filename is uniquely
generated by the database. The file is autoextensible with an initial size of 100 MB
and an unlimited maximum size. The file is an Oracle-managed file. A similar
datafile is created for the SYSAUX tablespace.
Because a LOGFILE clause is not present, two redo log groups are created. Each
log group has two members, with one member in the
DB_CREATE_ONLINE_LOG_DEST_1 location and the other member in the
DB_CREATE_ONLINE_LOG_DEST_2 location. The filenames are uniquely
generated by the database. The log files are created with a size of 100 MB. The log
file members are Oracle-managed files.
Similarly, because the CONTROL_FILES initialization parameter is not present,
and two DB_CREATE_ONLINE_LOG_DEST_n initialization parameters are
specified, two control files are created. The control file located in the
DB_CREATE_ONLINE_LOG_DEST_1 location is the primary control file; the
control file located in the DB_CREATE_ONLINE_LOG_DEST_2 location is a
multiplexed copy. The filenames are uniquely generated by the database. They are
Oracle-managed files. Assuming there is a server parameter file, a
CONTROL_FILES initialization parameter is generated.
Automatic undo management mode is specified, but because an undo tablespace
is not specified and the DB_CREATE_FILE_DEST initialization parameter is set, a
default undo tablespace named SYS_UNDOTBS is created in the directory
specified by DB_CREATE_FILE_DEST. The datafile is a 10 MB datafile that is
autoextensible. It is an Oracle-managed file.
Lastly, a default temporary tablespace named dflttmp is specified. Because
DB_CREATE_FILE_DEST is included in the parameter file, the tempfile for
dflttmp is created in the directory specified by that parameter. The tempfile is
100 MB and is autoextensible with an unlimited maximum size. It is an
Oracle-managed file.
The resultant file tree, with generated filenames, is as follows:
/u01
/oradata
/SAMPLE
/datafile
/o1_mf_system_cmr7t30p_.dbf
/o1_mf_sysaux_cmr7t88p_.dbf
/o1_mf_sys_undotbs_2ixfh90q_.dbf
/o1_mf_dflttmp_157se6ff_.tmp
/u02
/oradata
/SAMPLE
/onlinelog
/o1_mf_1_0orrm31z_.log
/o1_mf_2_2xyz16am_.log
/controlfile
/o1_mf_cmr7t30p_.ctl
/u03
/oradata
/SAMPLE
/onlinelog
/o1_mf_1_ixfvm8w9_.log
/o1_mf_2_q89tmp28_.log
/controlfile
/o1_mf_x1sr8t36_.ctl

The internally generated filenames can be seen when selecting from the usual
views. For example:
SQL> SELECT NAME FROM V$DATAFILE;

NAME
----------------------------------------------------------
/u01/oradata/SAMPLE/datafile/o1_mf_system_cmr7t30p_.dbf
/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_cmr7t88p_.dbf
/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_2ixfh90q_.dbf

3 rows selected

The name is also printed to the alert log when the file is created.
3. Managing control files
The control file was created when the database was created, and a
CONTROL_FILES initialization parameter was added to the parameter file. If
necessary, the DBA can re-create the control file or build a new one for the
database using the CREATE CONTROLFILE statement.
The correct Oracle-managed filenames must be used in the DATAFILE and
LOGFILE clauses. The ALTER DATABASE BACKUP CONTROLFILE TO TRACE
statement generates a script with the correct filenames. Alternatively, the filenames
can be found by selecting from the V$DATAFILE, V$TEMPFILE, and V$LOGFILE
views. The following example re-creates the control file for the sample database:
CREATE CONTROLFILE REUSE
DATABASE sample
LOGFILE
GROUP 1('/u02/oradata/SAMPLE/onlinelog/o1_mf_1_0orrm31z_.log',
'/u03/oradata/SAMPLE/onlinelog/o1_mf_1_ixfvm8w9_.log'),
GROUP 2('/u02/oradata/SAMPLE/onlinelog/o1_mf_2_2xyz16am_.log',
'/u03/oradata/SAMPLE/onlinelog/o1_mf_2_q89tmp28_.log')
NORESETLOGS
DATAFILE '/u01/oradata/SAMPLE/datafile/o1_mf_system_cmr7t30p_.dbf',
'/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_cmr7t88p_.dbf',
'/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_2ixfh90q_.dbf',
'/u01/oradata/SAMPLE/datafile/o1_mf_dflttmp_157se6ff_.tmp'
MAXLOGFILES 5
MAXLOGHISTORY 100
MAXDATAFILES 10
MAXINSTANCES 2
ARCHIVELOG;

The control file created by this statement is located as specified by the
CONTROL_FILES initialization parameter that was generated when the database
was created. The REUSE clause causes any existing files to be overwritten.
4. Managing the redo log
To create a new group of redo log files, the DBA can use the ALTER DATABASE
ADD LOGFILE statement. The following statement adds a log file with a member
in the DB_CREATE_ONLINE_LOG_DEST_1 location and a member in the
DB_CREATE_ONLINE_LOG_DEST_2 location. These files are Oracle-managed
files.
SQL> ALTER DATABASE ADD LOGFILE;

Log file members continue to be added and dropped by specifying complete
filenames.
The GROUP clause can be used to drop a log group. In the following example the
operating system file associated with each Oracle-managed log file member is
automatically deleted.
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
5. Managing tablespaces
The default storage for all datafiles for future tablespace creations in the sample
database is the location specified by the DB_CREATE_FILE_DEST initialization
parameter (/u01/oradata in this scenario). Any datafiles for which no filename
is specified are created in the file system specified by the initialization parameter
DB_CREATE_FILE_DEST. For example:
SQL> CREATE TABLESPACE tbs_1;

The preceding statement creates a tablespace whose storage is in /u01/oradata.
A datafile is created with an initial size of 100 MB, and it is autoextensible with an
unlimited maximum size. The datafile is an Oracle-managed file.
When the tablespace is dropped, the Oracle-managed files for the tablespace are
automatically removed. The following statement drops the tablespace and all the
Oracle-managed files used for its storage:


SQL> DROP TABLESPACE tbs_1;

Once the first datafile is full, the database does not automatically create a new
datafile. More space can be added to the tablespace by adding another
Oracle-managed datafile. The following statement adds another datafile in the
location specified by DB_CREATE_FILE_DEST:
SQL> ALTER TABLESPACE tbs_1 ADD DATAFILE;

The default file system can be changed by changing the initialization parameter.
This does not change any existing datafiles. It only affects future creations. This
can be done dynamically using the following statement:
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u04/oradata';
6. Archiving redo information
Archiving of redo log files is no different for Oracle-managed files than it is for
unmanaged files. A file system location for the archived log files can be specified
using the LOG_ARCHIVE_DEST_n initialization parameters. The filenames are
formed based on the LOG_ARCHIVE_FORMAT parameter or its default. The
archived logs are not Oracle-managed files.

7. Backup, restore, and recover
Because an Oracle-managed file is compatible with standard operating system
files, you can use operating system utilities to back up or restore Oracle-managed
files. All existing methods for backing up, restoring, and recovering the database
work for Oracle-managed files.

Scenario 2: Create and Manage a Database with Database and Flash Recovery Areas
In this scenario, a DBA creates a database where the control files and redo log files are
multiplexed. Archived logs and RMAN backups are created in the flash recovery area.
The following tasks are involved in creating and maintaining this database:
1. Setting the initialization parameters
The DBA includes the following generic file creation defaults:
DB_CREATE_FILE_DEST = '/u01/oradata'
DB_RECOVERY_FILE_DEST_SIZE = 10G
DB_RECOVERY_FILE_DEST = '/u02/oradata'
LOG_ARCHIVE_DEST_1 = 'LOCATION = USE_DB_RECOVERY_FILE_DEST'

The DB_CREATE_FILE_DEST parameter sets the default file system directory for
datafiles, tempfiles, control files, and redo logs.
The DB_RECOVERY_FILE_DEST parameter sets the default file system directory
for control files, redo logs, and RMAN backups.
The LOG_ARCHIVE_DEST_1 configuration
'LOCATION=USE_DB_RECOVERY_FILE_DEST' redirects archived logs to the
DB_RECOVERY_FILE_DEST location.
The DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters set the
default directory for log file and control file creation. Each redo log and control file
is multiplexed across the two directories.
2. Creating a database
3. Managing control files
4. Managing the redo log
5. Managing tablespaces
Tasks 2, 3, 4, and 5 are the same as in Scenario 1, except that the control files and
redo logs are multiplexed across the DB_CREATE_FILE_DEST and
DB_RECOVERY_FILE_DEST locations.

6. Archiving redo log information
Archiving online logs is no different for Oracle-managed files than it is for
unmanaged files. The archived logs are created in DB_RECOVERY_FILE_DEST
and are Oracle-managed files.

7. Backup, restore, and recover
An Oracle-managed file is compatible with standard operating system files, so you
can use operating system utilities to back up or restore Oracle-managed files. All
existing methods for backing up, restoring, and recovering the database work for
Oracle-managed files. When no format option is specified, all disk backups by
RMAN are created in the DB_RECOVERY_FILE_DEST location. The backups are
Oracle-managed files.

Scenario 3: Adding Oracle-Managed Files to an Existing Database
Assume in this case that an existing database does not have any Oracle-managed files,
but the DBA would like to create new tablespaces with Oracle-managed files and
locate them in directory /u03/oradata.
1. Setting the initialization parameters
To allow automatic datafile creation, set the DB_CREATE_FILE_DEST
initialization parameter to the file system directory in which to create the datafiles.
This can be done dynamically as follows:
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u03/oradata';

2. Creating tablespaces
Once DB_CREATE_FILE_DEST is set, the DATAFILE clause can be omitted from a
CREATE TABLESPACE statement. The datafile is created in the location specified
by DB_CREATE_FILE_DEST by default. For example:
SQL> CREATE TABLESPACE tbs_2;

When the tbs_2 tablespace is dropped, its datafiles are automatically deleted.


12
Using Automatic Storage Management
This chapter discusses some of the concepts behind Automatic Storage Management
and describes how to use it. It contains the following topics:
■ What Is Automatic Storage Management?
■ Administering an Automatic Storage Management Instance
■ Administering Automatic Storage Management Disk Groups
■ Using Automatic Storage Management in the Database
■ Migrating a Database to Automatic Storage Management
■ Accessing Automatic Storage Management Files with the XML DB Virtual Folder
■ Viewing Information About Automatic Storage Management

What Is Automatic Storage Management?
Automatic Storage Management (ASM) is an integrated file system and volume
manager expressly built for Oracle database files. ASM provides the performance of
raw I/O with the easy management of a file system. It simplifies database
administration by eliminating the need for you to directly manage potentially
thousands of Oracle database files. It does this by enabling you to divide all available
storage into disk groups. You manage a small set of disk groups and ASM automates
the placement of the database files within those disk groups.
In the SQL statements that you use for creating database structures such as
tablespaces, control files, and redo and archive log files, you specify file location in
terms of disk groups. ASM then creates and manages the associated underlying files
for you.
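For example, assuming a disk group named dgroup1 exists, you can place a new
tablespace in it simply by naming the disk group as the file destination; ASM
generates and manages the underlying datafile (the tablespace name here is
illustrative):
SQL> CREATE TABLESPACE sample_ts DATAFILE '+dgroup1';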
Mirroring and Striping
ASM extends the power of Oracle-managed files. With Oracle-managed files, files are
created and managed automatically for you, but with ASM you get the additional
benefits of features such as mirroring and striping.
ASM divides files into 1 MB extents and spreads each file's extents evenly across all
disks in the disk group. This optimizes performance and disk utilization, making
manual I/O performance tuning unnecessary.
ASM mirroring is more flexible than operating system mirrored disks because ASM
mirroring enables the redundancy level to be specified on a per-file basis. Thus two
files can share the same disk group with one file being mirrored while the other is not.
Mirroring takes place at the extent level. If a file is mirrored, then, depending upon the
redundancy level set for the file, each extent has one or more mirrored copies, and
mirrored copies are always kept on different disks in the disk group. Table 12–1
describes the three mirroring options that ASM supports on a per-file basis.
Table 12–1  ASM Mirroring Options

Mirroring Option    Description
----------------    -----------------------------------------------------------
2-way mirroring     Each extent has 1 mirrored copy.
3-way mirroring     Each extent has 2 mirrored copies.
Unprotected         ASM provides no mirroring. Used when mirroring is provided
                    by the disk subsystem itself.

Dynamic Storage Configuration
ASM enables you to change the storage configuration without having to take the
database offline. It automatically rebalances—redistributes file data evenly across all
the disks of the disk group—after you add disks to or drop disks from a disk group.
Should a disk failure occur, ASM automatically rebalances to restore full redundancy
for files that had extents on the failed disk. When you replace the failed disk with a
new disk, ASM rebalances the disk group to spread data evenly across all disks,
including the replacement disk.
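A minimal sketch of such a change, assuming a disk group named dgroup1 and a new
device /devices/diska5 that ASM can discover (both names are illustrative):
SQL> ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska5';
ASM then rebalances the disk group in the background, with no database downtime.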
Interoperability with Existing Databases
ASM does not eliminate any existing database functionality. Existing databases using
file systems or with storage on raw devices can operate as they always have. New files
can be created as ASM files while existing ones are administered as before. Databases
can have a mixture of ASM files and non-ASM files.
ASM Instance
ASM is implemented as a special kind of Oracle instance, with its own System Global
Area (SGA) and background processes. The ASM instance typically has a much
smaller SGA than a database instance.
Single Instance and Clustered Environments
Each database server that has database files managed by ASM needs to be running an
ASM instance. A single ASM instance can service one or more single-instance
databases on a stand-alone server. Each ASM disk group can be shared among all the
databases on the server.
In a clustered environment, each node runs an ASM instance, and the ASM instances
communicate with each other on a peer-to-peer basis. This is true both for Real
Application Clusters (RAC) environments and for non-RAC clustered environments
where multiple single-instance databases across multiple nodes share a clustered pool
of storage that is managed by ASM. If a node is part of a RAC system, then the
peer-to-peer communications service is already installed on that server. If the node is
part of a cluster where RAC is not installed, then Oracle Clusterware, Cluster Ready
Services (CRS), must be installed on that node.
An ASM instance on a node in a cluster can manage storage simultaneously for one or
more RAC database instances and one or more single instance databases.
See Also: Oracle Database Concepts for an overview of Automatic
Storage Management


Overview of the Components of Automatic Storage Management
The components of Automatic Storage Management (ASM) are disk groups, disks,
failure groups, files, and templates.
Disk Groups
The primary component of ASM is the disk group. A disk group consists of a
grouping of disks that are managed together as a unit. You configure ASM by creating
disk groups to store database files. Oracle provides SQL statements that create and
manage disk groups, their contents, and their metadata.
The disk group type determines the levels of mirroring that files in the disk group can
be created with. You specify disk group type when you create the disk group.
Table 12–2 lists ASM disk group types, their supported mirroring levels, and their
default mirroring levels. The default mirroring level indicates the mirroring level that
a file is created with unless you designate a different mirroring level. (See Templates,
later in this section.)
Table 12–2  Mirroring Options for Each Disk Group Type

Disk Group Type        Supported Mirroring Levels     Default Mirroring Level
-------------------    ---------------------------    -----------------------
Normal redundancy      2-way, 3-way,                  2-way
                       Unprotected (none)
High redundancy        3-way                          3-way
External redundancy    Unprotected (none)             Unprotected
If you do not specify a disk group type (redundancy level) when you create a disk
group, the disk group defaults to normal redundancy.
As Table 12–2 indicates, files in a high redundancy disk group are always 3-way
mirrored, files in an external redundancy disk group have no ASM mirroring, and files
in a normal redundancy disk group can be 2-way or 3-way mirrored or unprotected,
with 2-way mirroring as the default. Mirroring level for each file is set by templates,
which are described later in this section.
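As a sketch, the following statement creates a disk group of the default (normal
redundancy) type from two disks; the disk group name and device paths are
illustrative:
SQL> CREATE DISKGROUP dgroup1 DISK '/devices/diska1', '/devices/diskb1';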
Disks
The disks in a disk group are referred to as ASM disks. On Windows operating
systems, an ASM disk is always a partition. On all other platforms, an ASM disk can
be:
■ A partition of a logical unit number (LUN)
■ A network-attached file

Note: Although you can also present a volume (a logical collection of disks) for
management by ASM, it is not recommended to run ASM on top of another
host-based volume manager.
When an ASM instance starts, it automatically discovers all available ASM disks.
Discovery is the process of determining every disk device to which the ASM instance
has been given I/O permissions (by some operating system mechanism), and of
examining the contents of the first block of such disks to see if they are recognized as
belonging to a disk group. ASM discovers disks in the paths that are listed in the
ASM_DISKSTRING initialization parameter, or, if that parameter is NULL, in an
operating system–dependent default path.
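To see what the instance has discovered, you can query the V$ASM_DISK view in the
ASM instance; a sketch (the output depends on your configuration):
SQL> SELECT name, path FROM V$ASM_DISK;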
Failure Groups
Failure groups define ASM disks that share a common potential failure mechanism.
An example of a failure group is a set of SCSI disks sharing the same SCSI controller.
Failure groups are used to determine which ASM disks to use for storing redundant
copies of data. For example, if two-way mirroring is specified for a file, ASM
automatically stores redundant copies of file extents in separate failure groups. Failure
groups apply only to normal and high redundancy disk groups. You define the failure
groups in a disk group when you create or alter the disk group.

[Figure: Disk Group 1 consists of two failure groups. Failure Group 1 contains Disks 1 through 4, attached to Disk Controller 1; Failure Group 2 contains Disks 5 through 8, attached to Disk Controller 2.]

Files
Files written on ASM disks are ASM files, whose names are automatically generated
by ASM. You can specify user-friendly alias names (or just aliases) for ASM files, and
you can create a hierarchical directory structure for these aliases. Each ASM file is
completely contained within a single disk group, and is evenly spaced over all of the
ASM disks in the disk group.
Templates
Templates are collections of file attribute values, and are used to set mirroring and
striping attributes of each type of database file (datafile, control file, redo log file, and
so on) created in ASM disk groups. Each disk group has a default template associated
with each file type. See "Managing Disk Group Templates" on page 12-29 for more
information.
You can also create your own templates to meet unique requirements. You can then
include a template name when creating a file, thereby assigning desired attributes on
an individual file basis rather than on the basis of file type. See "About ASM
Filenames" on page 12-34 for more information.

Administering an Automatic Storage Management Instance
Administering an Automatic Storage Management (ASM) instance is similar to
managing a database instance, but with fewer tasks. An ASM instance does not require
a database instance to be running for you to administer it.
You can perform all ASM administration tasks with SQL*Plus.
Note: To administer ASM with SQL*Plus, you must set the
ORACLE_SID environment variable to the ASM SID before you start
SQL*Plus. If ASM and the database have different Oracle homes, you
must also set the ORACLE_HOME environment variable to point to the
ASM Oracle home. Depending on platform, you may have to change
other environment variables as well. See "Selecting an Instance with
Environment Variables" on page 1-7 for more information.

The default ASM SID for a single instance database is +ASM, and the
default SID for ASM on Real Application Clusters nodes is +ASMnode#.
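For example, on a UNIX system with the Bourne, Bash, or Korn shell, you might enter the following before starting SQL*Plus (the ASM Oracle home path shown is illustrative):
% ORACLE_SID=+ASM; export ORACLE_SID
% ORACLE_HOME=/u01/app/oracle/product/10.2.0/asm; export ORACLE_HOME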
You can also use Oracle Enterprise Manager (EM) or the Database Configuration
Assistant (DBCA) for configuring and altering disk groups. DBCA eases the
configuration and creation of your database while EM provides an integrated
graphical approach for managing both your ASM instance and database instance. See
Appendix A of Oracle Database 2 Day DBA for instructions on administering an ASM
instance with EM.
This section contains the following topics:
■

Installing ASM

■

Authentication for Accessing an ASM Instance

■

Setting Initialization Parameters for an ASM Instance

■

Starting Up an ASM Instance

■

Shutting Down an ASM Instance

See Also: Oracle Database 2 Day DBA for information on administering ASM with Enterprise Manager.

Installing ASM
Because ASM is integrated into the database server, you use the Oracle Universal
Installer (OUI) and the Database Configuration Assistant (DBCA) to install and
initially configure it. OUI has options to either install and configure a database that
uses ASM for storage management, or to install and configure an ASM instance by
itself, without creating a database instance. Refer to the Oracle Database Installation
Guide for your operating system for details on installing ASM.
ASM Installation Tips
Keep the following in mind when installing ASM:
■

When running more than one database instance on a single server or node, it is recommended that you install ASM in its own Oracle home on that server or node. This is advisable even if you are running only one database instance but plan to add one or more database instances to the server or node in the future.
With separate Oracle homes, you can upgrade and patch ASM and databases independently, and you can deinstall database software without impacting the ASM instance.
If an ASM instance does not already exist and you select the OUI option to install and configure ASM only, OUI installs ASM in its own Oracle home.
■

If you are running a single database instance on a server or node and have no plans to add one or more database instances to this server or node, ASM and the database can share a single Oracle home.
If an ASM instance does not already exist and you select the OUI option to create a database that uses ASM for storage, OUI creates this single-home configuration.

■

When you install ASM in a single-instance configuration, DBCA creates a separate server parameter file (SPFILE) and password file for the ASM instance.

■

When installing ASM in a clustered environment where the ASM Oracle home is shared among all nodes, DBCA creates an SPFILE for ASM. In a clustered environment without a shared ASM Oracle home, DBCA installs a text initialization parameter file (PFILE) for ASM on each node.

■

Before installing ASM, you may want to install the ASM support library, ASMLib. ASMLib is an application program interface (API) developed by Oracle to simplify the operating system–to-database interface, and to exploit the capabilities and strengths of vendors' storage arrays. The purpose of ASMLib, which is an optional add-on to ASM, is to provide an alternative interface for the ASM-enabled kernel to discover and access block devices. It provides storage and operating system vendors the opportunity to supply extended storage-related features. These features provide benefits such as improved performance and greater data integrity.
See the ASM page of the Oracle Technology Network web site at
http://www.oracle.com/technology/products/database/asm for more
information on ASMLib. To download ASMLib for Linux, go to
http://www.oracle.com/technology/tech/linux/asmlib.

Authentication for Accessing an ASM Instance
ASM security considerations derive from the fact that a particular ASM instance is
tightly bound to one or more database instances operating on the same server. In
effect, the ASM instance is a logical extension of these database instances. Both the
ASM instance and the database instances must have equivalent operating system
access rights (read/write) to the disk group member disks. For UNIX this is typically
provided through shared UNIX group membership. See the Oracle Database Installation
Guide for your operating system for information on how to ensure that the ASM and
database instances have the proper access to member disks.
ASM instances do not have a data dictionary, so the only way to connect to one is as an
administrator. This means that you use operating system authentication and connect
as SYSDBA, or when connecting remotely through Oracle Net Services, you use a
password file.

Operating System Authentication for ASM
Using operating system authentication, the authorization to connect locally with the
SYSDBA privilege is granted through the use of a special operating system user group,
generically referred to as OSDBA. (On UNIX, OSDBA is typically the dba group.) See
"Using Operating System Authentication" on page 1-14 for more information about
OSDBA.
By default, members of the OSDBA group are authorized to connect with the SYSDBA
privilege on all instances on the node, including the ASM instance. Users who connect
to the ASM instance with SYSDBA privilege have complete administrative access to all
disk groups that are managed by that ASM instance.
Note: The user that is the software owner for the database Oracle home (typically, the user named oracle) must be a member of the OSDBA group defined for the ASM Oracle home. This is automatically the case when ASM and a single instance of Oracle Database share the same Oracle home. If you install the ASM and database instances in separate Oracle homes, you must ensure that you configure the proper group memberships; otherwise, the database instance will not be able to connect to the ASM instance.

Password File Authentication for ASM
To enable remote administration of ASM (through Oracle Net Services), a password
file must be created for ASM. A password file is also required for Enterprise Manager
to connect to ASM.
The Oracle Database Configuration Assistant (DBCA) creates a password file for ASM
when it initially configures ASM disk groups. Like a database password file, the only
user added to the password file upon creation is SYS. If you want to add other users to
the password file, you must share the password file with a database instance and add
the users with the database.
If you configure an ASM instance without using DBCA, you must create a password
file yourself. See "Creating and Maintaining a Password File" on page 1-16 for more
information.
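For example, a minimal sketch of creating a password file for a single-instance ASM installation with the orapwd utility (the password shown is a placeholder; the file name follows the orapw<SID> convention):
% orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=sys_password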

Setting Initialization Parameters for an ASM Instance
The ASM instance has its own initialization parameter file, which, like that of the
database instance, can be a server parameter file (SPFILE) or a text file.
Note: For ASM installations in clustered environments, server parameter files (SPFILEs) are not used unless there is a shared ASM Oracle home. Without a shared ASM Oracle home, each ASM instance gets its own text initialization parameter file (PFILE).

The ASM parameter file name is distinguished from the database file name by the SID
embedded in the name. (The SID for ASM defaults to +ASM for a single-instance
database and +ASMnode# for Real Application Clusters nodes.) The same rules for file
name, default location, and search order that apply to the database initialization
parameter file apply to the ASM initialization parameter file. Thus, on single-instance
Unix platforms, the server parameter file for ASM would have the following path:
$ORACLE_HOME/dbs/spfile+ASM.ora

For more information about initialization parameter files, see "Understanding
Initialization Parameters" on page 2-19.
Some initialization parameters are specifically relevant to an ASM instance. Of those
initialization parameters intended for a database instance, only a few are relevant to an
ASM instance. You can set those parameters at database creation time using Database
Configuration Assistant or later using Enterprise Manager. The remainder of this
section describes setting the parameters manually by editing the initialization
parameter file.
Initialization Parameters for ASM Instances
The following initialization parameters relate to an ASM instance. Parameters that
start with ASM_ cannot be set in database instances.
INSTANCE_TYPE
Must be set to ASM.
Note: This is the only required parameter. All other parameters take suitable defaults for most environments.

ASM_POWER_LIMIT
The default power for disk rebalancing.
Default: 1, Range: 0 – 11
See Also: "Tuning Rebalance Operations" on page 12-9

ASM_DISKSTRING
A comma-separated list of strings that limits the set of disks that ASM discovers. May include wildcard characters. Only disks that match one of the strings are discovered. String format depends on the ASM library in use and on the operating system. The standard system library for ASM supports glob pattern matching.
For example, on a Solaris server that does not use ASMLib, to limit discovery to disks that are in the /dev/rdsk/ directory, ASM_DISKSTRING would be set to:
/dev/rdsk/*
Note that the asterisk cannot be omitted. To limit discovery to disks in this directory that have a name that ends in s3 or s4, ASM_DISKSTRING would be set to:
/dev/rdsk/*s3,/dev/rdsk/*s4
This could be simplified to:
/dev/rdsk/*s[34]
The ? character, when used as the first character of a path, expands to the Oracle home directory. Depending on the operating system, when the ? character is used elsewhere in the path, it is a wildcard for a single character.
Default: NULL. A NULL value causes ASM to search a default path for all disks in the system to which the ASM instance has read/write access. The default search path is platform-specific. See the Oracle Database Administrator's Reference for UNIX Systems on OTN for a list of default search paths for UNIX platforms. For the Windows platform, the default search path is \\.\ORCLDISK*. See the Oracle Database Installation Guide for Windows or the Real Application Clusters Quick Installation Guide for Oracle Database Standard Edition for Windows for more information.
See Also: "Improving Disk Discovery Time" on page 12-9

ASM_DISKGROUPS
A list of the names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT statement is used.
Default: NULL (If this parameter is not specified, then no disk groups are mounted.)
This parameter is dynamic, and if you are using a server parameter file (SPFILE), you should not need to alter this value manually. ASM automatically adds a disk group to this parameter when the disk group is successfully created or mounted, and automatically removes a disk group from this parameter when the disk group is dropped or dismounted. However, when using a text initialization parameter file (PFILE), you must edit the initialization parameter file to add the name of any disk group that you want automatically mounted at instance startup, and remove the name of any disk group that you no longer want automatically mounted.
Note: Issuing the ALTER DISKGROUP...ALL MOUNT or ALTER DISKGROUP...ALL DISMOUNT command does not affect the value of this parameter.
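Putting these parameters together, a minimal text initialization parameter file for an ASM instance might look like the following sketch (the disk string and disk group names are illustrative):
INSTANCE_TYPE = ASM
ASM_POWER_LIMIT = 1
ASM_DISKSTRING = '/dev/rdsk/*s3', '/dev/rdsk/*s4'
ASM_DISKGROUPS = DGROUP1, DGROUP2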

Tuning Rebalance Operations
If the POWER clause is not specified in an ALTER DISKGROUP command, or when
rebalance is implicitly invoked by adding or dropping a disk, the rebalance power
defaults to the value of the ASM_POWER_LIMIT initialization parameter. You can
adjust this parameter dynamically. The higher the limit, the faster a rebalance
operation may complete. Lower values cause rebalancing to take longer, but consume
fewer processing and I/O resources. This leaves these resources available for other
applications, such as the database. The default value of 1 minimizes disruption to
other applications. The appropriate value is dependent upon your hardware
configuration as well as performance and availability requirements.
If a rebalance is in progress because a disk is manually or automatically dropped,
increasing the power of the rebalance shortens the window during which redundant
copies of that data on the dropped disk are reconstructed on other disks.
The V$ASM_OPERATION view provides information that can be used for adjusting
ASM_POWER_LIMIT and the resulting power of rebalance operations. The
V$ASM_OPERATION view also gives an estimate in the EST_MINUTES column of the
amount of time remaining for the rebalance operation to complete. You can see the
effect of changing the rebalance power by observing the change in the time estimate.
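For example, the following statements raise the default rebalance power and then check the progress of a running rebalance operation (the value 5 is illustrative):
ALTER SYSTEM SET ASM_POWER_LIMIT = 5;
SELECT operation, state, power, sofar, est_work, est_minutes
  FROM v$asm_operation;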
See Also: "Manually Rebalancing a Disk Group" on page 12-24 for
more information.

Improving Disk Discovery Time
The value for the ASM_DISKSTRING initialization parameter is an operating
system–dependent value used by ASM to limit the set of paths that the discovery
process uses to search for disks. When a new disk is added to a disk group, each ASM
instance that has the disk group mounted must be able to discover the new disk using
its ASM_DISKSTRING.
In many cases, the default value (NULL) is sufficient. Using a more restrictive value
may reduce the time required for ASM to perform discovery, and thus improve disk
group mount time or the time for adding a disk to a disk group. It may be necessary to
dynamically change the ASM_DISKSTRING before adding a disk so that the new disk
will be discovered through this parameter.
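For example, before adding disks from a new directory, you might extend the discovery string as follows (the paths are illustrative; the new value must still match all disks already used by mounted disk groups):
ALTER SYSTEM SET ASM_DISKSTRING = '/dev/rdsk/*', '/devices/newdisks/*';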
Note that the default value of ASM_DISKSTRING may not find all disks in all
situations. If your site is using a third-party vendor ASMLib, that vendor may have
discovery string conventions you must use for ASM_DISKSTRING. In addition, if your
installation uses multipathing software, the software may place pseudo-devices in a
path that is different from the operating system default. Consult the multipathing
vendor documentation for details.

Behavior of Database Initialization Parameters in an ASM Instance
If you specify a database instance initialization parameter in an ASM initialization
parameter file, it can have one of these effects:
■

If the parameter is not valid in the ASM instance, it produces an ORA-15021 error.

■

If the database parameter is valid in an ASM instance, for example parameters relating to dump destinations and some buffer cache parameters, ASM accepts the parameter. In general, ASM selects appropriate defaults for any database parameters that are relevant to the ASM instance.

Behavior of ASM Initialization Parameters in a Database Instance
If you specify any of the ASM-specific parameters (names starting with ASM_) in a
database instance parameter file, you receive an ORA-15021 error.

Starting Up an ASM Instance
ASM instances are started similarly to Oracle database instances with some minor
differences:
■

To connect to the ASM instance with SQL*Plus, you must set the ORACLE_SID environment variable to the ASM SID. (The default ASM SID for a single instance database is +ASM, and the default SID for ASM on Real Application Clusters nodes is +ASMnode#.) Depending on your operating system and whether or not you installed ASM in its own Oracle home, you may have to change other environment variables. For more information, see "Selecting an Instance with Environment Variables" on page 1-7.

■

The initialization parameter file, which can be a server parameter file, must contain:
INSTANCE_TYPE = ASM
This parameter signals the Oracle executable that an ASM instance is starting and not a database instance.

■

The STARTUP command, rather than trying to mount and open a database, tries to mount the disk groups specified by the initialization parameter ASM_DISKGROUPS. If ASM_DISKGROUPS is blank, the ASM instance starts and warns that no disk groups were mounted. You can then mount disk groups with the ALTER DISKGROUP...MOUNT command, as shown in the example following this list.
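For example, to mount a disk group manually after starting the instance (the disk group name is illustrative):
ALTER DISKGROUP dgroup1 MOUNT;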

The SQL*Plus STARTUP command parameters are interpreted by ASM as follows:

STARTUP Parameter    Description
-----------------    ------------------------------------------------------------
FORCE                Issues a SHUTDOWN ABORT to the ASM instance before
                     restarting it.
MOUNT, OPEN          Mounts the disk groups specified in the ASM_DISKGROUPS
                     initialization parameter. This is the default if no command
                     parameter is specified.
NOMOUNT              Starts up the ASM instance without mounting any disk groups.

The following is a sample SQL*Plus session in which an ASM instance is started.
% sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected to an idle instance.
SQL> STARTUP
ASM instance started

Total System Global Area   71303168 bytes
Fixed Size                  1069292 bytes
Variable Size              45068052 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

ASM Instance Memory Requirements
ASM instances are smaller than database instances. A 64 MB SGA should be sufficient
for all but the largest ASM installations. Total memory footprint for a typical ASM
instance is approximately 100 MB.

CSS Requirement
The Cluster Synchronization Services (CSS) daemon is required to enable
synchronization between ASM and its client database instances. The CSS daemon is
normally started (and configured to start upon reboot) when you use Database
Configuration Assistant (DBCA) to create your database. If you did not use DBCA to
create the database, you must ensure that the CSS daemon is running before you start
the ASM instance.
CSS Daemon on the Linux and Unix Platforms To determine if the CSS daemon is running,
issue the command crsctl check cssd. If you receive the message CSS appears
healthy, the CSS daemon is running.
To start the CSS daemon and configure the host to always start the daemon upon
reboot, do the following:
1.

Log in to the host as root.

2.

Ensure that $ORACLE_HOME/bin is in your PATH environment variable.

3.

Enter the following command:
localconfig add

CSS Daemon on the Windows Platform You can also use the crsctl and localconfig
commands on the Windows platform to check the status of the CSS daemon or to start
it. If you prefer to use Windows GUI tools, you can do the following:
To determine if the CSS daemon is properly configured and running, double-click the
Services icon in the Windows Control Panel, and look for the OracleCSService
service. Its status should be Started and its startup type should be Automatic.
Refer to the Windows documentation for information on how to start a Windows
service and how to configure it for Automatic startup.
Disk Discovery
When an ASM instance initializes, ASM discovers and examines the contents of all of
the disks that are in the paths designated in the ASM_DISKSTRING initialization
parameter. For purposes of this discussion, a "disk" is an ASM disk as defined in
"Overview of the Components of Automatic Storage Management" on page 12-3. Disk
discovery also takes place when you:
■

Run the ALTER DISKGROUP...ADD DISK and ALTER DISKGROUP...RESIZE DISK commands.

■

Query the V$ASM_DISKGROUP and V$ASM_DISK views.

After a disk is successfully discovered, it appears in the V$ASM_DISK view. Disks that
belong to a disk group—that is, that have a disk group name in the disk header—have
a header status of MEMBER. Disks that were discovered but that have not yet been
assigned to a disk group have a header status of either CANDIDATE or PROVISIONED.
Note: The PROVISIONED header status implies that an additional
platform-specific action has been taken by an administrator to make
the disk available for ASM. For example, on Windows, the
administrator used asmtool or asmtoolg to stamp the disk with a
header, or on Linux, the administrator used ASMLib to prepare the
disk for ASM.

The following query shows six MEMBER disks and one CANDIDATE disk.
SQL> select name, header_status, path from v$asm_disk;

NAME         HEADER_STATUS PATH
------------ ------------- -------------------------
             CANDIDATE     /dev/rdsk/disk07
DISK06       MEMBER        /dev/rdsk/disk06
DISK05       MEMBER        /dev/rdsk/disk05
DISK04       MEMBER        /dev/rdsk/disk04
DISK03       MEMBER        /dev/rdsk/disk03
DISK02       MEMBER        /dev/rdsk/disk02
DISK01       MEMBER        /dev/rdsk/disk01

7 rows selected.

Discovery Rules
Rules for discovery are as follows:
■

ASM discovers no more than 10,000 disks. That is, if more than 10,000 disks match the ASM_DISKSTRING initialization parameter, only the first 10,000 are discovered.

■

ASM does not discover a disk that contains an operating system partition table, even if the disk is in an ASM_DISKSTRING search path and ASM has read/write permission on the disk.

■

If ASM recognizes a disk header as that of an Oracle object, such as the header of an Oracle datafile, the disk is discovered, but can only be added to a disk group with the FORCE keyword. Such a disk appears in V$ASM_DISK with a header status of FOREIGN.

In addition, ASM identifies the following configuration errors during discovery:
■

Multiple paths to the same disk
In this case, if the disk is part of a disk group, disk group mount fails. If the disk is being added to a disk group with the ADD DISK or CREATE DISKGROUP command, the command fails. To correct the error, restrict ASM_DISKSTRING so that it does not include multiple paths to the same disk, or if you are using multipathing software, ensure that you include only the pseudo-device in ASM_DISKSTRING.

■

Multiple ASM disks with the same disk header
This can be caused by a bit copy of one disk onto another. In this case, disk group mount fails.

Disk Group Recovery
As with any other file system or volume manager, if an ASM instance fails, then all Oracle database instances on the same node that use a disk group managed by that ASM instance also fail. In a single ASM instance configuration, if the
ASM instance fails while ASM metadata is open for update, then after the ASM
instance reinitializes, it reads the disk group log and recovers all transient changes.
With multiple ASM instances sharing disk groups, if one ASM instance should fail,
another ASM instance automatically recovers transient ASM metadata changes caused
by the failed instance. The failure of an Oracle database instance is not significant here
because only ASM instances update ASM metadata.

Shutting Down an ASM Instance
Automatic Storage Management shutdown is initiated by issuing the SHUTDOWN
command in SQL*Plus.
You must first ensure that the ORACLE_SID environment variable is set to the ASM
SID to connect to the ASM instance. Depending on your operating system and
whether or not you installed ASM in its own Oracle home, you may have to change
other environment variables before starting SQL*Plus. For more information, see
"Selecting an Instance with Environment Variables" on page 1-7.
% sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected.
SQL> SHUTDOWN NORMAL

The table that follows lists the SHUTDOWN modes and describes the behavior of the
ASM instance in each mode.
SHUTDOWN Mode

Action Taken By Automatic Storage Management

NORMAL, IMMEDIATE, or
TRANSACTIONAL

ASM waits for any in-progress SQL to complete before doing an
orderly dismount of all disk groups and shutting down the ASM
instance. If any database instances are connected to the ASM
instance, the SHUTDOWN command returns an error and leaves
the ASM instance running.

ABORT

The ASM instance immediately shuts down without the orderly
dismount of disk groups. This causes recovery upon the next
startup of ASM. If any database instance is connected to the ASM
instance, the database instance aborts.

It is strongly recommended that you shut down all database instances that use the
ASM instance before shutting down the ASM instance.
Administering Automatic Storage Management Disk Groups
This section explains how to create and manage your Automatic Storage Management
(ASM) disk groups. If you have one or more database instances that use ASM, you can
keep them open and running while you administer disk groups.
You can administer ASM disk groups with Database Configuration Assistant (DBCA),
Enterprise Manager (EM), or SQL*Plus. In addition, the ASM command-line utility,
ASMCMD, enables you to easily view and manipulate files and directories within disk
groups.
This section provides details on administration with SQL*Plus, but includes
background information that applies to all administration methods.
The SQL statements introduced in this section are only available in an ASM instance.
Note: All sample SQL*Plus sessions in this section assume that the ORACLE_SID environment variable is changed to the ASM SID before starting SQL*Plus. Depending on your operating system and whether or not you installed ASM in its own Oracle home, other environment variables may have to be changed as well. For more information, see "Selecting an Instance with Environment Variables" on page 1-7.
The following topics are contained in this section:
■

Considerations and Guidelines for Configuring Disk Groups

■

Creating a Disk Group

■

Altering the Disk Membership of a Disk Group

■

Mounting and Dismounting Disk Groups

■

Checking Internal Consistency of Disk Group Metadata

■

Dropping Disk Groups

■

Managing Disk Group Directories

■

Managing Alias Names for ASM Filenames

■

Dropping Files and Associated Aliases from a Disk Group

■

Managing Disk Group Templates

See Also:
■

Appendix A of Oracle Database 2 Day DBA for instructions on administering ASM disk groups with Enterprise Manager.

■

Oracle Database Utilities for details on the ASMCMD utility.

Considerations and Guidelines for Configuring Disk Groups
The following are some considerations and guidelines to be aware of as you configure
disk groups.

Determining the Number of Disk Groups
The following criteria can help you determine the number of disk groups that you
create:
■

Disks in a given disk group should have similar size and performance characteristics. If you have several different types of disks in terms of size and performance, then it would be better to form several disk groups accordingly.

■

For recovery reasons, you might feel more comfortable having separate disk groups for your database files and flash recovery area files. Using this approach, even with the loss of one disk group, the database would still be intact.
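For example, a sketch of this two-disk-group layout (the disk group names and size are illustrative) would point the database instance's file destination initialization parameters at the two disk groups:
DB_CREATE_FILE_DEST = '+DGDATA'
DB_RECOVERY_FILE_DEST = '+DGFRA'
DB_RECOVERY_FILE_DEST_SIZE = 10G
Note that these are database instance parameters, not ASM instance parameters.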

Performance Characteristics when Grouping Disks
ASM eliminates the need for manual I/O tuning. However, to ensure consistent
performance, you should avoid placing dissimilar disks in the same disk group. For
example, the newest and fastest disks might reside in a disk group reserved for the
database work area, and slower drives could reside in a disk group reserved for the
flash recovery area.
ASM load balances file activity by uniformly distributing file extents across all disks in
a disk group. For this technique to be effective it is important that the disks used in a
disk group be of similar performance characteristics.
There may be situations where it is acceptable to temporarily have disks of different
sizes and performance co-existing in a disk group. This would be the case when
migrating from an old set of disks to a new set of disks. The new disks would be
added and the old disks dropped. As the old disks are dropped, their storage is
migrated to the new disks while the disk group is online.

Effects of Adding and Dropping Disks from a Disk Group
ASM automatically rebalances whenever disks are added or dropped. For a normal
drop operation (without the FORCE option), a disk is not released from a disk group
until data is moved off of the disk through rebalancing. Likewise, a newly added disk
cannot support its share of the I/O workload until rebalancing completes. It is more
efficient to add or drop multiple disks at the same time so that they are rebalanced as a
single operation. This avoids unnecessary movement of data.
For a drop operation, when rebalance is complete, ASM takes the disk offline
momentarily, and then drops it, setting disk header status to FORMER.
You can add or drop disks without shutting down the database. However, a
performance impact on I/O activity may result.

How ASM Handles Disk Failures
Depending on the redundancy level of a disk group and how failure groups are
defined, the failure of one or more disks could result in either of the following:
■

The disks are first taken offline and then automatically dropped. The disk group remains mounted and serviceable, and, thanks to mirroring, all disk group data remains accessible. Following the disk drop, ASM performs a rebalance to restore full redundancy for the data on the failed disks.

■

The entire disk group is automatically dismounted, which means loss of data accessibility.

Disk failure in this context means individual spindle failure or failure of another disk
subsystem component, such as power supply, a controller, or host bus adapter. Here
are the rules for how ASM handles disk failures:
■

A failure group is considered to have failed if at least one disk in the failure group fails.

■

A normal redundancy disk group can tolerate the failure of one failure group. If only one failure group fails, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks (including the surviving disks in the failed failure group) to restore redundancy for the data in the failed disks. If more than one failure group fails, ASM dismounts the disk group.

■

A high redundancy disk group can tolerate the failure of two failure groups. If one or two failure groups fail, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks to restore redundancy for the data in the failed disks. If more than two failure groups fail, ASM dismounts the disk group.

■

An external redundancy disk group cannot tolerate the failure of any disks in the disk group. Any kind of disk failure causes ASM to dismount the disk group.

When considering these rules, remember that if a disk is not explicitly assigned to a
failure group with the CREATE DISKGROUP command, ASM puts the disk in its own
failure group. Also, failure of one disk in a failure group does not affect the other disks
in a failure group. For example, a failure group could consist of 6 disks connected to
the same disk controller. If one of the 6 disks fails, the other 5 disks can continue to
operate. The failed disk is dropped from the disk group and the other 5 remain in the
disk group. Depending on the rules stated previously, the disk group may then remain
mounted, or it may be dismounted.
When ASM drops a disk, the disk is not automatically added back to the disk group
when it is repaired or replaced. You must issue an ALTER DISKGROUP...ADD DISK
command to return the disk to the disk group. Similarly, when ASM automatically
dismounts a disk group, you must issue an ALTER DISKGROUP...MOUNT command
to remount the disk group.
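For example, a sketch of returning a repaired or replaced disk to service, and of remounting a disk group that ASM dismounted after a failure (the disk group name and path are illustrative):
ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska3';
ALTER DISKGROUP dgroup1 MOUNT;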

Failure Groups and Mirroring
Mirroring of metadata and user data is achieved through failure groups. System
reliability can be hampered if an insufficient number of failure groups are provided.
Consequently, failure group configuration is very important to creating a highly
reliable system. Here are some rules and guidelines for failure groups:
■

Each disk in a disk group belongs to exactly one failure group.

■

After a disk has been assigned to a failure group, it cannot be reassigned to another failure group. If it needs to be in another failure group, it can be dropped from the disk group and then added back. Because the choice of failure group depends on hardware configuration, a disk does not need to be reassigned unless it is physically moved.

■

It is best if all failure groups are the same size. Failure groups of different sizes can lead to wasted disk space.

■

ASM requires at least two failure groups to create a normal redundancy disk group and at least three failure groups to create a high redundancy disk group. This implies that if you do not explicitly define failure groups, a normal redundancy disk group requires at least two disks, and a high redundancy disk group requires at least three disks.

■

Most systems do not need to explicitly define failure groups. The default behavior of putting every disk in its own failure group works well for most installations. Failure groups are only needed for large, complex systems that need to protect against failures other than individual spindle failures.

■

Choice of failure groups depends on the kinds of failures that need to be tolerated without loss of data availability. For small numbers of disks (fewer than 20), it is usually best to use default failure groups, where every disk is in its own failure group. This is true even for large numbers of disks when the main concern is spindle failure. If there is a need to protect against the simultaneous loss of multiple disk drives due to a single component failure, you can specify failure groups. For example, a disk group may be constructed from several small modular disk arrays. If the system needs to continue operation when an entire modular array fails, a failure group should consist of all the disks in one module. If one module fails, a rebalance occurs on the surviving modules to restore redundancy to the data that was on the failed module. You must place disks in the same failure group if they depend on a common piece of hardware whose failure needs to be tolerated with no loss of availability.

■

Having a small number of large failure groups may actually reduce availability in some cases. For example, half the disks in a disk group could be on one power supply, while the other half are on a different power supply. If this arrangement is used to divide the disk group into two failure groups, tripping the breaker on one power supply could drop half the disks in the disk group. Restoring redundancy for data on the dropped disks would require copying all the data from the surviving disks. This can be done online, but consumes a lot of I/O and leaves the disk group unprotected against a spindle failure during the copy. However, if each disk were in its own failure group, the disk group would be dismounted when the breaker tripped (assuming that this caused more failure groups to fail than the disk group can tolerate). Resetting the breaker would allow the disk group to be manually remounted, and no data copying would be needed.

Managing Capacity in Disk Groups
You must have sufficient spare capacity in each disk group to handle the largest failure
that you are willing to tolerate. After one or more disks fail, the process of restoring
redundancy for all data requires space from the surviving disks in the disk group. If
not enough space remains, some files may end up with reduced redundancy.
Reduced redundancy means that one or more extents in the file are not mirrored at the
expected level. For example, a reduced redundancy file in a high redundancy disk
group has at least one file extent with two or fewer total copies of the extent instead of
three. In the case of unprotected files, data extents could be missing altogether. Other
causes of reduced redundancy files are disks running out of space or an insufficient
number of failure groups. The V$ASM_FILE column REDUNDANCY_LOWERED indicates
a file with reduced redundancy.
The following guidelines help ensure that you have sufficient space to restore full
redundancy for all disk group data after the failure of one or more disks.
■

In a normal redundancy disk group, it is best to have enough free space in your disk group to tolerate the loss of all disks in one failure group. The amount of free space should be equivalent to the size of the largest failure group.

■

In a high redundancy disk group, it is best to have enough free space to cope with the loss of all disks in two failure groups. The amount of free space should be equivalent to the sum of the sizes of the two largest failure groups.

The V$ASM_DISKGROUP view contains columns that help you manage capacity:
■

REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be available in the disk group to restore full redundancy after the worst failure that can be tolerated by the disk group.

■

USABLE_FILE_MB indicates the amount of free space, adjusted for mirroring, that is available for new files.
USABLE_FILE_MB is computed by subtracting REQUIRED_MIRROR_FREE_MB from
total free space in the disk group and then adjusting for mirroring. For example, in a
normal redundancy disk group, where by default mirrored files take up disk space
equal to twice their size, if 4 GB of actual usable file space remains, USABLE_FILE_MB
equals roughly 2 GB. You can then add up to a 2 GB file.
The following query shows capacity metrics for a normal redundancy disk group that consists of six 1 GB (1024 MB) disks, each in its own failure group:
select name, type, total_mb, free_mb, required_mirror_free_mb,
usable_file_mb from v$asm_diskgroup;

NAME         TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------------ ------ ---------- ---------- ----------------------- --------------
DISKGROUP1   NORMAL       6144       3768                    1024           1372

The REQUIRED_MIRROR_FREE_MB column shows that 1 GB of extra capacity must be available to restore full redundancy after one or more disks fail. Note that the first three numeric columns in the query results are raw numbers. That is, they do not take redundancy into account. Only the last column is adjusted for normal redundancy. Notice that:
FREE_MB - REQUIRED_MIRROR_FREE_MB = 2 * USABLE_FILE_MB
or
3768 - 1024 = 2 * 1372 = 2744

Negative Values of USABLE_FILE_MB  Due to the relationship between FREE_MB, REQUIRED_MIRROR_FREE_MB, and USABLE_FILE_MB, USABLE_FILE_MB can go negative. Although this is not necessarily a critical situation, it does mean that:
■

Depending on the value of FREE_MB, you may not be able to create new files.

■

The next failure may result in files with reduced redundancy.

If USABLE_FILE_MB becomes negative, it is strongly recommended that you add more space to the disk group as soon as possible.

Scalability
ASM imposes the following limits:
■

63 disk groups in a storage system

■

10,000 ASM disks in a storage system

■

4 petabyte maximum storage for each ASM disk

■

40 exabyte maximum storage for each storage system

■

1 million files for each disk group

■

Maximum file sizes as shown in the following table:

Disk Group Type        Maximum File Size
-------------------    -----------------
External redundancy    35 TB
Normal redundancy      5.8 TB
High redundancy        3.9 TB
Creating a Disk Group
You use the CREATE DISKGROUP statement to create disk groups. This statement
enables you to assign a name to the disk group and to specify the disks that are to be
formatted as ASM disks belonging to the disk group. You specify the disks as one or
more operating system dependent search strings that ASM then uses to find the disks.
You can specify the disks as belonging to specific failure groups, and you can specify
the redundancy level for the disk group.
If you want ASM to mirror files, you specify the redundancy level as NORMAL
REDUNDANCY (2-way mirroring by default for most file types) or HIGH REDUNDANCY
(3-way mirroring for all files). You specify EXTERNAL REDUNDANCY if you want no
mirroring by ASM. For example, you might choose EXTERNAL REDUNDANCY if you
want to use storage array protection features. See the Oracle Database SQL Reference for
more information on redundancy levels. See "Overview of the Components of
Automatic Storage Management" on page 12-3 and "Failure Groups and Mirroring" on
page 12-16 for information on failure groups.
ASM programmatically determines the size of each disk. If for some reason this is not
possible, or if you want to restrict the amount of space used on a disk, you are able to
specify a SIZE clause for each disk. ASM creates operating system–independent
names for the disks in a disk group that you can use to reference the disks in other
SQL statements. Optionally, you can provide your own name for a disk using the
NAME clause. Disk names are available in the V$ASM_DISK view.
The ASM instance ensures that any disk being included in the newly created disk
group is addressable and is not already a member of another disk group. This requires
reading the first block of the disk to determine if it already belongs to a disk group. If
not, a header is written. It is not possible for a disk to be a member of multiple disk
groups.
If a disk has a header indicating that it is part of another disk group, you can force it to become a member of the disk group you are creating by specifying the FORCE clause.
For example, a disk with an ASM header might have failed temporarily, so that its
header could not be cleared when it was dropped from its disk group. After the disk is
repaired, it is no longer part of any disk group, but it still has an ASM header. The
FORCE flag is required to use the disk in a new disk group. The original disk group
must not be mounted, and the disk must have a disk group header, otherwise the
operation fails. Note that if you do this, you may cause another disk group to become
unusable. If you specify NOFORCE, which is the default, you receive an error if you
attempt to include a disk that already belongs to another disk group.
The CREATE DISKGROUP statement mounts the disk group for the first time, and adds
the disk group name to the ASM_DISKGROUPS initialization parameter if a server
parameter file is being used. If a text initialization parameter file is being used and you
want the disk group to be automatically mounted at instance startup, then you must
remember to add the disk group name to the ASM_DISKGROUPS initialization
parameter before the next time that you shut down and restart the ASM instance.
Creating a Disk Group: Example
The following examples assume that the ASM_DISKSTRING initialization parameter is
set to '/devices/*'. Assume the following:
■

ASM disk discovery identifies the following disks in directory /devices:
/devices/diska1
/devices/diska2
/devices/diska3
/devices/diska4
/devices/diskb1
/devices/diskb2
/devices/diskb3
/devices/diskb4

■

The disks diska1 - diska4 are on a separate SCSI controller from disks diskb1 - diskb4.

The following SQL*Plus session illustrates starting an ASM instance and creating a
disk group named dgroup1.
% SQLPLUS /NOLOG
SQL> CONNECT / AS SYSDBA
Connected to an idle instance.
SQL> STARTUP NOMOUNT
SQL> CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
2 FAILGROUP controller1 DISK
3 '/devices/diska1',
4 '/devices/diska2',
5 '/devices/diska3',
6 '/devices/diska4'
7 FAILGROUP controller2 DISK
8 '/devices/diskb1',
9 '/devices/diskb2',
10 '/devices/diskb3',
11 '/devices/diskb4';

In this example, dgroup1 is composed of eight disks that are defined as belonging to
either failure group controller1 or controller2. Because NORMAL REDUNDANCY
level is specified for the disk group, ASM provides mirroring for each type of database
file according to the mirroring settings in the system default templates.
For example, in the system default templates shown in Table 12–5 on page 12-30,
default redundancy for the online redo log files (ONLINELOG template) for a normal
redundancy disk group is MIRROR. This means that when one copy of a redo log file
extent is written to a disk in failure group controller1, a mirrored copy of the file
extent is written to a disk in failure group controller2. You can see that to support
the default mirroring of a normal redundancy disk group, at least two failure groups
must be defined.
Note: If you do not specify failure groups, each disk is automatically placed in its own failure group.

Because no NAME clauses are provided for any of the disks being included in the disk
group, the disks are assigned the names of dgroup1_0001, dgroup1_0002, ...,
dgroup1_0008.
Note: If you do not provide a NAME clause and you assigned a label
to a disk through ASMLib, the label is used as the disk name.

Altering the Disk Membership of a Disk Group
After you create a disk group, you can change its composition by adding more disks, resizing disks, or dropping disks. You use clauses of the ALTER
DISKGROUP statement to perform these actions. You can perform multiple operations
with one ALTER DISKGROUP statement.
ASM automatically rebalances when the composition of a disk group changes. Because
rebalancing can be a long running operation, the ALTER DISKGROUP statement by
default does not wait until the operation is complete before returning. To monitor
progress of these long running operations, query the V$ASM_OPERATION dynamic
performance view.
If you want the ALTER DISKGROUP statement to wait until the rebalance operation is
complete before returning, you can add the REBALANCE WAIT clause. This is
especially useful in scripts. The statement also accepts a REBALANCE NOWAIT clause,
which invokes the default behavior of conducting the rebalance operation
asynchronously in the background. You can interrupt a rebalance running in wait
mode by typing CTRL-C on most platforms. This causes the command to return
immediately with the message ORA-01013: user requested cancel of
current operation, and to continue the operation asynchronously. Typing
CTRL-C does not cancel the rebalance operation or any disk add, drop, or resize
operations.
To control the speed and resource consumption of the rebalance operation, you can
include the REBALANCE POWER clause in statements that add, drop, or resize disks.
See "Manually Rebalancing a Disk Group" on page 12-24 for more information on this
clause.

Adding Disks to a Disk Group
You use the ADD clause of the ALTER DISKGROUP statement to add disks to a disk
group, or to add a failure group to the disk group. The ALTER DISKGROUP clauses
that you can use when adding disks to a disk group are similar to those that can be
used when specifying the disks to be included when initially creating a disk group.
This is discussed in "Creating a Disk Group" on page 12-19.
The new disks will gradually start to carry their share of the workload as rebalancing
progresses.
ASM behavior when adding disks to a disk group is best illustrated through examples.
Adding Disks to a Disk Group: Example 1 The following statement adds disks to
dgroup1:
ALTER DISKGROUP dgroup1 ADD DISK
'/devices/diska5' NAME diska5,
'/devices/diska6' NAME diska6;

Because no FAILGROUP clauses are included in the ALTER DISKGROUP statement,
each disk is assigned to its own failure group. The NAME clauses assign names to the
disks, otherwise they would have been assigned system-generated names.
Note: If you do not provide a NAME clause and you assigned a label
to a disk through ASMLib, the label is used as the disk name.
Adding Disks to a Disk Group: Example 2 The statements presented in this example
demonstrate the interactions of disk discovery with the ADD DISK operation.

Assume that disk discovery now identifies the following disks in directory /devices:
/devices/diska1 -- member of dgroup1
/devices/diska2 -- member of dgroup1
/devices/diska3 -- member of dgroup1
/devices/diska4 -- member of dgroup1
/devices/diska5 -- candidate disk
/devices/diska6 -- candidate disk
/devices/diska7 -- candidate disk
/devices/diska8 -- candidate disk
/devices/diskb1 -- member of dgroup1
/devices/diskb2 -- member of dgroup1
/devices/diskb3 -- member of dgroup1
/devices/diskb4 -- member of dgroup2
/devices/diskc1 -- member of dgroup2
/devices/diskc2 -- member of dgroup2
/devices/diskc3 -- member of dgroup3
/devices/diskc4 -- candidate disk
/devices/diskd1 -- candidate disk
/devices/diskd2 -- candidate disk
/devices/diskd3 -- candidate disk
/devices/diskd4 -- candidate disk
/devices/diskd5 -- candidate disk
/devices/diskd6 -- candidate disk
/devices/diskd7 -- candidate disk
/devices/diskd8 -- candidate disk
On a server that runs Unix, Solaris, or Linux and that does not have ASMLib installed,
issuing the following statement would successfully add disks /devices/diska5
through /devices/diska8 to dgroup1.
ALTER DISKGROUP dgroup1 ADD DISK
'/devices/diska[5678]';

The following statement would successfully add disks /devices/diska5 and
/devices/diskd5 to dgroup1.
ALTER DISKGROUP dgroup1 ADD DISK
'/devices/disk*5';

The following statement would fail because /devices/diska1 - /devices/diska4
already belong to dgroup1.
ALTER DISKGROUP dgroup1 ADD DISK
'/devices/diska*';

The following statement would fail because the search string matches disks that are
contained in other disk groups. Specifically, /devices/diska4 belongs to disk group
dgroup1 and /devices/diskb4 belongs to disk group dgroup2.
ALTER DISKGROUP dgroup1 ADD DISK
'/devices/disk*4';

The following statement would successfully add /devices/diska5 and
/devices/diskd1 through /devices/diskd8 to disk group dgroup1. It does not
matter that /devices/diskd5 is included in both search strings. This statement runs
with a rebalance power of 5, and does not return until the rebalance operation is
complete.
ALTER DISKGROUP dgroup1 ADD DISK
'/devices/disk*5',
'/devices/diskd*'
REBALANCE POWER 5 WAIT;

The following use of the FORCE clause enables /devices/diskc3 to be added to
dgroup2, even though it is a current member of dgroup3.
ALTER DISKGROUP dgroup2 ADD DISK
'/devices/diskc3' FORCE;

For this statement to succeed, dgroup3 cannot be mounted.

Dropping Disks from Disk Groups
To drop disks from a disk group, use the DROP DISK clause of the ALTER DISKGROUP
statement. You can also drop all of the disks in specified failure groups using the DROP
DISKS IN FAILGROUP clause.
When a disk is dropped, the disk group is rebalanced by moving all of the file extents
from the dropped disk to other disks in the disk group. A drop disk operation may fail
if not enough space is available on the other disks. If you intend to add some disks and
drop others, it is prudent to add disks first to ensure that enough space is available for
the drop operation. The best approach is to perform both the add and drop with the
same ALTER DISKGROUP statement. This can reduce total time spent rebalancing.
WARNING: The ALTER DISKGROUP...DROP DISK statement
returns before the drop and rebalance operations are complete. Do
not reuse, remove, or disconnect the dropped disk until the
HEADER_STATUS column for this disk in the V$ASM_DISK view
changes to FORMER. You can query the V$ASM_OPERATION view to
determine the amount of time remaining for the drop/rebalance
operation to complete. For more information, see Oracle Database
SQL Reference and Oracle Database Reference.

If you specify the FORCE clause for the drop operation, the disk is dropped even if
Automatic Storage Management cannot read or write to the disk. You cannot use the
FORCE flag when dropping a disk from an external redundancy disk group.
Caution: A DROP FORCE operation leaves data at reduced
redundancy for as long as it takes for the subsequent rebalance
operation to complete. This increases your exposure to data loss if
there is a subsequent disk failure during rebalancing. DROP FORCE
should be used only with great care.
Dropping Disks from Disk Groups: Example 1 This example drops diska5 (the
operating system independent name assigned to /devices/diska5 in "Adding
Disks to a Disk Group: Example 1" on page 12-21) from disk group dgroup1.
ALTER DISKGROUP dgroup1 DROP DISK diska5;

Dropping Disks from Disk Groups: Example 2 This example drops diska5 from
disk group dgroup1, and also illustrates how multiple actions are possible with one
ALTER DISKGROUP statement.
ALTER DISKGROUP dgroup1 DROP DISK diska5
ADD FAILGROUP failgrp1 DISK '/devices/diska9' NAME diska9;
Resizing Disks in Disk Groups
The RESIZE clause of ALTER DISKGROUP enables you to perform the following
operations:
■

Resize all disks in the disk group

■

Resize specific disks

■

Resize all of the disks in a specified failure group

If you do not specify a new size in the SIZE clause then ASM uses the size of the disk
as returned by the operating system. This could be a means of recovering disk space
when you had previously restricted the size of the disk by specifying a size smaller
than disk capacity.
The new size is written to the ASM disk header record and if the size of the disk is
increasing, then the new space is immediately available for allocation. If the size is
decreasing, rebalancing must relocate file extents beyond the new size limit to
available space below the limit. If the rebalance operation can successfully relocate all
extents, then the new size is made permanent, otherwise the rebalance fails.
Resizing Disks in Disk Groups: Example  The following example resizes all of the disks in failure group failgrp1 of disk group dgroup1. If the new size is greater than disk capacity, the statement will fail.
ALTER DISKGROUP dgroup1
RESIZE DISKS IN FAILGROUP failgrp1 SIZE 100G;

Undropping Disks in Disk Groups
The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel
all pending drops of disks within disk groups. If a drop disk operation has already
completed, then this statement cannot be used to restore it. This statement cannot be
used to restore disks that are being dropped as the result of a DROP DISKGROUP
statement, or for disks that are being dropped using the FORCE clause.
Undropping Disks in Disk Groups: Example The following example cancels the
dropping of disks from disk group dgroup1:
ALTER DISKGROUP dgroup1 UNDROP DISKS;

Manually Rebalancing a Disk Group
You can manually rebalance the files in a disk group using the REBALANCE clause of
the ALTER DISKGROUP statement. This would normally not be required, because
ASM automatically rebalances disk groups when their composition changes. You
might want to do a manual rebalance operation if you want to control the speed of
what would otherwise be an automatic rebalance operation.
The POWER clause of the ALTER DISKGROUP...REBALANCE statement specifies the
degree of parallelization, and thus the speed of the rebalance operation. It can be set to
a value from 0 to 11. A value of 0 halts a rebalancing operation until the statement is
either implicitly or explicitly reinvoked. The default rebalance power is set by the
ASM_POWER_LIMIT initialization parameter. See "Tuning Rebalance Operations" on
page 12-9 for more information.
The power level of an ongoing rebalance operation can be changed by entering the
rebalance statement with a new level.
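For example, the following statement (a sketch, assuming a rebalance of dgroup2 is under way) raises the power level of the ongoing operation to 8:

ALTER DISKGROUP dgroup2 REBALANCE POWER 8;
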
The ALTER DISKGROUP...REBALANCE command by default returns immediately so
that you can issue other commands while the rebalance operation takes place
asynchronously in the background. You can query the V$ASM_OPERATION view for
the status of the rebalance operation.
If you want the ALTER DISKGROUP...REBALANCE command to wait until the
rebalance operation is complete before returning, you can add the WAIT keyword to
the REBALANCE clause. This is especially useful in scripts. The command also accepts a
NOWAIT keyword, which invokes the default behavior of conducting the rebalance
operation asynchronously. You can interrupt a rebalance running in wait mode by
typing CTRL-C on most platforms. This causes the command to return immediately
with the message ORA-01013: user requested cancel of current
operation, and to continue the rebalance operation asynchronously.
Additional rules for the rebalance operation include the following:
■ The ALTER DISKGROUP...REBALANCE statement uses the resources of the single node upon which it is started.
■ ASM can perform one rebalance at a time on a given instance.
■ Rebalancing continues across a failure of the ASM instance performing the rebalance.
■ The REBALANCE clause (with its associated POWER and WAIT/NOWAIT keywords) can also be used in ALTER DISKGROUP commands that add, drop, or resize disks.

Manually Rebalancing a Disk Group: Example The following example manually
rebalances the disk group dgroup2. The command does not return until the rebalance
operation is complete.
ALTER DISKGROUP dgroup2 REBALANCE POWER 5 WAIT;

See Also: "Tuning Rebalance Operations" on page 12-9

Mounting and Dismounting Disk Groups
Disk groups that are specified in the ASM_DISKGROUPS initialization parameter are
mounted automatically at ASM instance startup. This makes them available to all
database instances running on the same node as ASM. The disk groups are
dismounted at ASM instance shutdown. ASM also automatically mounts a disk group
when you initially create it, and dismounts a disk group if you drop it.
There may be times that you want to mount or dismount disk groups manually. For
these actions use the ALTER DISKGROUP...MOUNT or ALTER
DISKGROUP...DISMOUNT statement. You can mount or dismount disk groups by
name, or specify ALL.
If you try to dismount a disk group that contains open files, the statement will fail,
unless you also specify the FORCE clause.
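For example, the following statement (shown for illustration) forcibly dismounts dgroup1 even if it contains open files:

ALTER DISKGROUP dgroup1 DISMOUNT FORCE;
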
Dismounting Disk Groups: Example
The following statement dismounts all disk groups that are currently mounted to the
ASM instance:
ALTER DISKGROUP ALL DISMOUNT;

Mounting Disk Groups: Example
The following statement mounts disk group dgroup1:
ALTER DISKGROUP dgroup1 MOUNT;

Checking Internal Consistency of Disk Group Metadata
You can check the internal consistency of disk group metadata using the ALTER
DISKGROUP...CHECK statement. Checking can be specified for specific files in a disk
group, specific disks or all disks in a disk group, or specific failure groups within a
disk group. The disk group must be mounted in order to perform these checks.
If any errors are detected, an error message is displayed and details of the errors are
written to the alert log. Automatic Storage Management attempts to correct any errors,
unless you specify the NOREPAIR clause in your ALTER DISKGROUP...CHECK
statement.
The following statement checks for consistency in the metadata for all disks in the
dgroup1 disk group:
ALTER DISKGROUP dgroup1 CHECK ALL;

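If you prefer to detect errors without attempting repairs, the NOREPAIR clause can be added, as in this sketch:

ALTER DISKGROUP dgroup1 CHECK ALL NOREPAIR;
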
See Oracle Database SQL Reference for additional CHECK clause syntax.

Dropping Disk Groups
The DROP DISKGROUP statement enables you to delete an ASM disk group and
optionally, all of its files. You can specify the INCLUDING CONTENTS clause if you
want any files that may still be contained in the disk group also to be deleted. The
default is EXCLUDING CONTENTS, which provides syntactic consistency and prevents
you from dropping the disk group if it has any contents.
The ASM instance must be started and the disk group must be mounted with none of
the disk group files open, in order for the DROP DISKGROUP statement to succeed.
The statement does not return until the disk group has been dropped.
When you drop a disk group, ASM dismounts the disk group and removes the disk
group name from the ASM_DISKGROUPS initialization parameter if a server parameter
file is being used. If a text initialization parameter file is being used, and the disk
group is mentioned in the ASM_DISKGROUPS initialization parameter, then you must
remember to remove the disk group name from the ASM_DISKGROUPS initialization
parameter before the next time that you shut down and restart the ASM instance.
The following statement deletes dgroup1:
DROP DISKGROUP dgroup1;

After ensuring that none of the files contained in dgroup1 are open, ASM rewrites the
header of each disk in the disk group to remove ASM formatting information. The
statement does not specify INCLUDING CONTENTS, so the drop operation will fail if
the disk group contains any files.
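To drop the disk group and all of its files in one statement, you could instead specify INCLUDING CONTENTS (shown here for illustration):

DROP DISKGROUP dgroup1 INCLUDING CONTENTS;
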

Managing Disk Group Directories
ASM disk groups contain a system-generated hierarchical directory structure for
storing ASM files. The system-generated filename that ASM assigns to each file
represents a path in this directory hierarchy. The following is an example of a
system-generated filename:
+dgroup2/sample/controlfile/Current.256.541956473

The plus sign represents the root of the ASM file system. The dgroup2 directory is the
parent directory for all files in the dgroup2 disk group. The sample directory is the
parent directory for all files in the sample database, and the controlfile directory
contains all control files for the sample database.

You can create your own directories within this hierarchy to store aliases that you
create. Thus, in addition to having user-friendly alias names for ASM files, you can
have user-friendly paths to those names.
This section describes how to use the ALTER DISKGROUP statement to create a
directory structure for aliases. It also describes how you can rename a directory or
drop a directory.
See Also:
■ The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases and directories in ASM disk groups.
■ "About ASM Filenames" on page 12-34 for a discussion of ASM filenames and how they are formed

Creating a New Directory
Use the ADD DIRECTORY clause of the ALTER DISKGROUP statement to create a
hierarchical directory structure for alias names for ASM files. Use the slash character
(/) to separate components of the directory path. The directory path must start with
the disk group name, preceded by a plus sign (+), followed by any subdirectory names
of your choice.
The parent directory must exist before attempting to create a subdirectory or alias in
that directory.
Creating a New Directory: Example 1 The following statement creates a hierarchical directory for disk group dgroup1, which can contain, for example, the alias name +dgroup1/mydir/control_file1:

ALTER DISKGROUP dgroup1 ADD DIRECTORY '+dgroup1/mydir';

Creating a New Directory: Example 2 Assuming no subdirectory exists under the directory +dgroup1/mydir, the following statement fails:

ALTER DISKGROUP dgroup1
ADD DIRECTORY '+dgroup1/mydir/first_dir/second_dir';

Renaming a Directory
The RENAME DIRECTORY clause of the ALTER DISKGROUP statement enables you to
rename a directory. System created directories (those containing system-generated
names) cannot be renamed.
Renaming a Directory: Example The following statement renames a directory:

ALTER DISKGROUP dgroup1 RENAME DIRECTORY '+dgroup1/mydir'
TO '+dgroup1/yourdir';

Dropping a Directory
You can delete a directory using the DROP DIRECTORY clause of the ALTER
DISKGROUP statement. You cannot drop a system created directory. You cannot drop a
directory containing alias names unless you also specify the FORCE clause.
Dropping a Directory: Example This statement deletes a directory along with its
contents:
ALTER DISKGROUP dgroup1 DROP DIRECTORY '+dgroup1/yourdir' FORCE;


Managing Alias Names for ASM Filenames
Alias names (or just "aliases") are intended to provide a more user-friendly means of
referring to ASM files, rather than using the system-generated filenames.
You can create an alias for a file when you create it in the database, or you can add an
alias to an existing file using the ADD ALIAS clause of the ALTER DISKGROUP
statement. You can create an alias in any system-generated or user-created ASM
directory. You cannot create an alias at the root level (+), however.
The V$ASM_ALIAS view contains a row for every alias name known to the ASM
instance. This view also contains ASM directories.
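For example, a query along the following lines (a sketch; see Oracle Database Reference for the full column list of V$ASM_ALIAS) lists only the user-created aliases in mounted disk groups:

SELECT name, file_number
FROM V$ASM_ALIAS
WHERE system_created = 'N';
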
Note: V$ASM_ALIAS also contains a row for every system-generated filename. These rows are indicated by the value 'Y' in the column SYSTEM_CREATED.

See Also: The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases.

Adding an Alias Name for an ASM Filename
Use the ADD ALIAS clause of the ALTER DISKGROUP statement to create an alias
name for an ASM filename. The alias name must consist of the full directory path and
the alias itself.
Adding an Alias Name for an ASM Filename: Example 1 The following statement
adds a new alias name for a system-generated file name:
ALTER DISKGROUP dgroup1 ADD ALIAS '+dgroup1/mydir/second.dbf'
FOR '+dgroup1/sample/datafile/mytable.342.3';

Adding an Alias Name for an ASM Filename: Example 2 This statement illustrates
another means of specifying the ASM filename for which the alias is to be created. It
uses the numeric form of the ASM filename, which is an abbreviated and derived form
of the system-generated filename.
ALTER DISKGROUP dgroup1 ADD ALIAS '+dgroup1/mydir/second.dbf'
FOR '+dgroup1.342.3';

Renaming an Alias Name for an ASM Filename
Use the RENAME ALIAS clause of the ALTER DISKGROUP statement to rename an
alias for an ASM filename. The old and the new alias names must consist of the full
directory paths of the alias names.
Renaming an Alias Name for an ASM Filename: Example The following statement

renames an alias:
ALTER DISKGROUP dgroup1 RENAME ALIAS '+dgroup1/mydir/datafile.dbf'
TO '+dgroup1/payroll/compensation.dbf';

Dropping an Alias Name for an ASM Filename
Use the DROP ALIAS clause of the ALTER DISKGROUP statement to drop an alias for
an ASM filename. The alias name must consist of the full directory path and the alias
itself. The underlying file to which the alias refers is unchanged.

Dropping an Alias Name for an ASM Filename: Example 1 The following statement
drops an alias:
ALTER DISKGROUP dgroup1 DROP ALIAS '+dgroup1/payroll/compensation.dbf';

Dropping an Alias Name for an ASM Filename: Example 2 The following statement
will fail because it attempts to drop a system-generated filename. This is not allowed:
ALTER DISKGROUP dgroup1
DROP ALIAS '+dgroup1/sample/datafile/mytable.342.3';

Dropping Files and Associated Aliases from a Disk Group
You can delete ASM files and their associated alias names from a disk group using the
DROP FILE clause of the ALTER DISKGROUP statement. You must use a fully
qualified filename, a numeric filename, or an alias name when specifying the file that
you want to delete.
Some reasons why you might need to delete files include:
■ Files created using aliases are not Oracle-managed files. Consequently, they are not automatically deleted.
■ A point in time recovery of a database might restore the database to a time before a tablespace was created. The restore does not delete the tablespace, but there is no reference to the tablespace (or its datafile) in the restored database. You can manually delete the datafile.

Dropping an alias does not drop the underlying file on the file system.
Dropping Files and Associated Aliases from a Disk Group: Example 1
The following statement uses the alias name for the file to delete both the file and the
alias:
ALTER DISKGROUP dgroup1 DROP FILE '+dgroup1/payroll/compensation.dbf';

Dropping Files and Associated Aliases from a Disk Group: Example 2
In this example the system-generated filename is used to drop the file and any
associated alias:
ALTER DISKGROUP dgroup1
DROP FILE '+dgroup1/sample/datafile/mytable.342.372642';

Managing Disk Group Templates
Templates are used to set redundancy (mirroring) and striping attributes of files
created in an ASM disk group. When a file is created, redundancy and striping
attributes are set for that file based on an explicitly named template or the system
template that is the default template for the file type.
When a disk group is created, ASM creates a set of default templates for that disk
group. The set consists of one template for each file type (data file, control file, redo log
file, and so on) supported by ASM. For example, a template named ONLINELOG
provides the default file redundancy and striping attributes for all redo log files
written to ASM disks. Default template settings depend on the disk group type. For
example, the default template for datafiles for a normal redundancy disk group sets
2-way mirroring, while the corresponding default template in a high redundancy disk
group sets 3-way mirroring. You can modify these default templates. Table 12–5 lists
the default templates and the attributes that they apply to matching files. As the table
shows, the initial redundancy value of each default template depends on the type of
disk group that the template belongs to.
Note: The striping attribute of templates applies to all disk group types (normal redundancy, high redundancy, and external redundancy). However, the mirroring attribute of templates applies only to normal redundancy disk groups, and is ignored for high-redundancy disk groups (where every file is always 3-way mirrored) and external redundancy disk groups (where no files are mirrored by ASM). Nevertheless, each type of disk group gets a full set of templates, and the redundancy value in each template is always set to the proper default for the disk group type.

Using clauses of the ALTER DISKGROUP statement, you can add new templates to a disk group, modify existing ones, or drop templates. The reason to add templates is to create the right combination of attributes to meet unique requirements. You can then reference a template name when creating a file, thereby assigning desired attributes on an individual file basis rather than on the basis of file type. The V$ASM_TEMPLATE view lists all of the templates known to the ASM instance.
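For example, the following query (a sketch) shows the name and attribute settings of each template in the mounted disk groups:

SELECT name, stripe, redundancy
FROM V$ASM_TEMPLATE;
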
Template Attributes
Table 12–3 shows the permitted striping attribute values, and Table 12–4 shows the
permitted redundancy values for ASM templates. These values correspond to the
STRIPE and REDUNDANCY columns of V$ASM_TEMPLATE.
Table 12–3    Permitted Values for ASM Template Striping Attribute

Striping Attribute Value    Description
FINE                        Striping in 128KB chunks.
COARSE                      Striping in 1MB chunks.

Table 12–4    Permitted Values for ASM Template Redundancy Attribute

Redundancy         Resulting Mirroring in    Resulting Mirroring in    Resulting Mirroring in
Attribute          Normal Redundancy         High Redundancy           External Redundancy
Value              Disk Group                Disk Group                Disk Group
MIRROR             2-way mirroring           3-way mirroring           (Not allowed)
HIGH               3-way mirroring           3-way mirroring           (Not allowed)
UNPROTECTED        No mirroring              (Not allowed)             No mirroring

Table 12–5    ASM System Default Templates Attribute Settings

                               Mirroring,           Mirroring,         Mirroring,
                               Normal Redundancy    High Redundancy    External Redundancy
Template Name       Striping   Disk Group           Disk Group         Disk Group
CONTROLFILE         FINE       HIGH                 HIGH               UNPROTECTED
DATAFILE            COARSE     MIRROR               HIGH               UNPROTECTED
ONLINELOG           FINE       MIRROR               HIGH               UNPROTECTED
ARCHIVELOG          COARSE     MIRROR               HIGH               UNPROTECTED
TEMPFILE            COARSE     MIRROR               HIGH               UNPROTECTED
BACKUPSET           COARSE     MIRROR               HIGH               UNPROTECTED
PARAMETERFILE       COARSE     MIRROR               HIGH               UNPROTECTED
DATAGUARDCONFIG     COARSE     MIRROR               HIGH               UNPROTECTED
FLASHBACK           FINE       MIRROR               HIGH               UNPROTECTED
CHANGETRACKING      COARSE     MIRROR               HIGH               UNPROTECTED
DUMPSET             COARSE     MIRROR               HIGH               UNPROTECTED
XTRANSPORT          COARSE     MIRROR               HIGH               UNPROTECTED
AUTOBACKUP          COARSE     MIRROR               HIGH               UNPROTECTED

Adding Templates to a Disk Group
To add a new template for a disk group, you use the ADD TEMPLATE clause of the
ALTER DISKGROUP statement. You specify the name of the template, its redundancy
attribute, and its striping attribute.
Note: If the name of your new template is not one of the names listed in Table 12–5, it is not used as a default template for database file types. To use it, you must reference its name when creating a file. See "About ASM Filenames" on page 12-34 for more information.

The syntax of the ALTER DISKGROUP command for adding a template is as follows:
ALTER DISKGROUP disk_group_name ADD TEMPLATE template_name
ATTRIBUTES ([{MIRROR|HIGH|UNPROTECTED}] [{FINE|COARSE}]);

Both types of attribute are optional. If no redundancy attribute is specified, the value
defaults to MIRROR for a normal redundancy disk group, HIGH for a high redundancy
disk group, and UNPROTECTED for an external redundancy disk group. If no striping
attribute is specified, the value defaults to COARSE.
Adding Templates to a Disk Group: Example 1 The following statement creates a new template named reliable for the normal redundancy disk group dgroup2:
ALTER DISKGROUP dgroup2 ADD TEMPLATE reliable ATTRIBUTES (HIGH FINE);

Adding Templates to a Disk Group: Example 2 This statement creates a new template named unreliable that specifies files are to be unprotected (no mirroring).
(Oracle discourages the use of unprotected files unless hardware mirroring is in place;
this example is presented only to further illustrate how the attributes for templates are
set.)
ALTER DISKGROUP dgroup2 ADD TEMPLATE unreliable ATTRIBUTES (UNPROTECTED);

See Also: Oracle Database SQL Reference for more information on the ALTER DISKGROUP...ADD TEMPLATE command.

Modifying a Disk Group Template
The ALTER TEMPLATE clause of the ALTER DISKGROUP statement enables you to
modify the attribute specifications of an existing system default or user-defined disk
group template. Only specified template properties are changed. Unspecified
properties retain their current value.
When you modify an existing template, only new files created by the template will
reflect the attribute changes. Existing files maintain their attributes.
Modifying a Disk Group Template: Example The following example changes the
striping attribute specification of the reliable template for disk group dgroup2.
ALTER DISKGROUP dgroup2 ALTER TEMPLATE reliable
ATTRIBUTES (COARSE);

Dropping Templates from a Disk Group
Use the DROP TEMPLATE clause of the ALTER DISKGROUP statement to drop one or
more templates from a disk group. You can only drop templates that are user-defined;
you cannot drop system default templates.
Dropping Templates from a Disk Group: Example This example drops the previously created template unreliable from dgroup2:
ALTER DISKGROUP dgroup2 DROP TEMPLATE unreliable;

Using Automatic Storage Management in the Database
This section discusses how you use Automatic Storage Management (ASM) to manage
database files.
Note: This section does not address Real Application Clusters environments. For this information, see Oracle Real Application Clusters Installation and Configuration Guide and Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.

When you use ASM, Oracle database files are stored in ASM disk groups. Your Oracle
Database then benefits from the improved performance, improved resource utilization,
and higher availability provided by ASM. Most ASM files are Oracle-managed files.
See Table 12–7 and Chapter 11, "Using Oracle-Managed Files" for more information.

Note: ASM supports good forward and backward compatibility between 10.x versions of Oracle Database and 10.x versions of ASM. That is, any combination of versions 10.1.x.y and 10.2.x.y for either the ASM instance or the database instance works correctly, with one caveat: for a 10.1.x.y database instance to connect to a 10.2.x.y ASM instance, the database must be version 10.1.0.3 or later.

When mixing software versions, ASM functionality reverts to the functionality of the earliest version in use. For example, a 10.1.0.3 database working with a 10.2.0.0 ASM instance does not exploit new features in ASM 10.2.0.0. Conversely, a 10.2.0.0 database working with a 10.1.0.2 ASM instance behaves as a 10.1.0.2 database instance when interacting with ASM. Two columns in the view V$ASM_CLIENT, SOFTWARE_VERSION and COMPATIBLE_VERSION, display version information. These columns are described in Oracle Database Reference.
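For example, a query along these lines, issued in the ASM instance, shows the software and compatible versions of each connected database instance (a sketch):

SELECT instance_name, software_version, compatible_version
FROM V$ASM_CLIENT;
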
Oracle database files that are stored in disk groups are not visible to the operating
system or its utilities, but are visible to RMAN, ASMCMD, and to the XML DB
repository.
The following topics are contained in this section:
■ What Types of Files Does ASM Support?
■ About ASM Filenames
■ Starting the ASM and Database Instances
■ Creating and Referencing ASM Files in the Database
■ Creating a Database in ASM
■ Creating Tablespaces in ASM
■ Creating Redo Logs in ASM
■ Creating a Control File in ASM
■ Creating Archive Log Files in ASM
■ Recovery Manager (RMAN) and ASM

What Types of Files Does ASM Support?
ASM supports most file types required by the database. However, most administrative
files cannot be stored on an ASM disk group. These include trace files, audit files, alert
logs, backup files, export files, tar files, and core files.
Table 12–6 lists file types, indicates if they are supported, and lists the system default
template that provides the attributes for file creation. Some of the file types shown in
the table are related to specific products or features, and are not discussed in this book.
Table 12–6    File Types Supported by Automatic Storage Management

File Type                                             Supported    Default Templates
Control files                                         yes          CONTROLFILE
Datafiles                                             yes          DATAFILE
Redo log files                                        yes          ONLINELOG
Archive log files                                     yes          ARCHIVELOG
Trace files                                           no           n/a
Temporary files                                       yes          TEMPFILE
Datafile backup pieces                                yes          BACKUPSET
Datafile incremental backup pieces                    yes          BACKUPSET
Archive log backup piece                              yes          BACKUPSET
Datafile copy                                         yes          DATAFILE
Persistent initialization parameter file (SPFILE)     yes          PARAMETERFILE
Disaster recovery configurations                      yes          DATAGUARDCONFIG
Flashback logs                                        yes          FLASHBACK
Change tracking file                                  yes          CHANGETRACKING
Data Pump dumpset                                     yes          DUMPSET
Automatically generated control file backup           yes          AUTOBACKUP
Cross-platform transportable datafiles                yes          XTRANSPORT
Operating system files                                no           n/a

See Also: "Managing Disk Group Templates" on page 12-29 for a
description of the system default templates

About ASM Filenames
Every file created in ASM gets a system-generated filename, otherwise known as a
fully qualified filename. The fully qualified filename represents a complete path
name in the ASM file system. An example of a fully qualified filename is:
+dgroup2/sample/controlfile/Current.256.541956473

You can use the fully qualified filename to reference (read or retrieve) an ASM file. You
can also use other abbreviated filename formats, described later in this section, to
reference an ASM file.
ASM generates a fully qualified filename upon any request to create a file. A creation
request does not—in fact, cannot—specify a fully qualified filename. Instead, it uses a
simpler syntax to specify a file, such as an alias or just a disk group name. ASM then
creates the file, placing it in the correct ASM "path" according to file type, and then
assigns an appropriate fully qualified filename. If you specify an alias in the creation
request, ASM also creates the alias so that it references the fully qualified filename.
ASM file creation requests are either single file creation requests or multiple file
creation requests.
Single File Creation Request This is a request to create a single file, such as a datafile
or a control file. The form of the ASM filename in this type of request is either an alias
(such as +dgroup2/control/ctl.f) or a disk group name preceded by a plus sign.
You use the alias or disk group name where a filename is called for in a statement,
such as CREATE TABLESPACE or CREATE CONTROLFILE.

Note: '/' and '\' are interchangeable in filenames. Filenames are case insensitive, but case retentive.

Multiple File Creation Request This is a request that can occur multiple times to
create an ASM file. For example, if you assign a value to the initialization parameter
DB_CREATE_FILE_DEST, you can issue a CREATE TABLESPACE statement (without
a filename specification) multiple times. Each time, ASM creates a different unique
datafile name.

One form of the ASM filename to use in this type of request is an incomplete
filename, which is just a disk group name preceded by a plus sign. In this case, you set
DB_CREATE_FILE_DEST to an incomplete filename (for example, +dgroup2), and
whenever a command is executed that must create a database file in
DB_CREATE_FILE_DEST, the file is created in the designated disk group and assigned
a unique fully qualified name. You can use an incomplete filename in other *_DEST
initialization parameters.
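For example (a sketch using the disk group dgroup2 and the hypothetical tablespace names ts1 and ts2), each CREATE TABLESPACE statement below creates a different, uniquely named datafile in dgroup2:

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+dgroup2';
CREATE TABLESPACE ts1;
CREATE TABLESPACE ts2;
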
Note: Every time ASM creates a fully qualified name, it writes an alert log message containing the ASM-generated name. You can also find the generated name in database views displaying Oracle file names, such as V$DATAFILE and V$LOGFILE. You can use this name, or an abbreviated form of it, if you later need to reference an ASM file in a SQL statement. Like other Oracle database filenames, ASM filenames are kept in the control file and the RMAN catalog.

The sections that follow provide details on each of the six possible forms of an ASM
filename:
■ Fully Qualified ASM Filename
■ Numeric ASM Filename
■ Alias ASM Filenames
■ Alias ASM Filename with Template
■ Incomplete ASM Filename
■ Incomplete ASM Filename with Template

The following table specifies the valid contexts for each filename form, and if the form
is used for file creation, whether the created file is an Oracle-managed file (OMF).
Table 12–7    Valid Contexts for the ASM Filename Forms

                                     Valid Context                              OMF
Filename Form                        Reference   Single-File    Multiple File   Created as
                                                 Creation       Creation        Oracle-managed?
Fully qualified filename             Yes         No             No              --
Numeric filename                     Yes         No             No              --
Alias filename                       Yes         Yes            No              No
Alias with template filename         No          Yes            No              No
Incomplete filename                  No          Yes            Yes             Yes
Incomplete filename with template    No          Yes            Yes             Yes

Note: Fully qualified and numeric filenames can be used in single-file create if you specify the REUSE keyword, as described in "Using ASM Filenames in SQL Statements" on page 12-41.

Fully Qualified ASM Filename
This form of ASM filename can be used for referencing existing ASM files. It is the
filename that ASM always automatically generates when an ASM file is created.
A fully qualified filename has the following form:
+group/dbname/file_type/file_type_tag.file.incarnation

where:
■ +group is the disk group name preceded by a plus sign. You can think of the plus sign (+) as the root directory of the ASM file system, like the slash (/) in Unix operating systems.
■ dbname is the DB_UNIQUE_NAME of the database to which the file belongs.
■ file_type is the Oracle file type and can be one of the file types shown in Table 12–8.
■ file_type_tag is type specific information about the file and can be one of the tags shown in Table 12–8.
■ file.incarnation is the file/incarnation pair, used to ensure uniqueness.

An example of a fully qualified ASM filename is:
+dgroup2/sample/controlfile/Current.256.541956473

Table 12–8    Oracle File Types and Automatic Storage Management File Type Tags

file_type          Description                      file_type_tag                   Comments
CONTROLFILE        Control files and backup         Current, Backup                 --
                   control files
DATAFILE           Datafiles and datafile copies    tsname                          Tablespace into which the file
                                                                                    is added
ONLINELOG          Online logs                      group_group#                    --
ARCHIVELOG         Archive logs                     thread_thread#_seq_sequence#    --
TEMPFILE           Tempfiles                        tsname                          Tablespace into which the file
                                                                                    is added
BACKUPSET          Datafile and archive log         hasspfile_timestamp             hasspfile can take one of two
                   backup pieces; datafile                                          values: s indicates that the
                   incremental backup pieces                                        backup set includes the spfile;
                                                                                    n indicates that the backup set
                                                                                    does not include the spfile.
PARAMETERFILE      Persistent parameter files       spfile                          --
DATAGUARDCONFIG    Data Guard configuration file    db_unique_name                  Data Guard tries to use the
                                                                                    service provider name if it is
                                                                                    set. Otherwise the tag defaults
                                                                                    to DRCname.
FLASHBACK          Flashback logs                   log_log#                        --
CHANGETRACKING     Block change tracking data       ctf                             Used during incremental backups
DUMPSET            Data Pump dumpset                user_obj#_file#                 Dump set files encode the user
                                                                                    name, the job number that
                                                                                    created the dump set, and the
                                                                                    file number as part of the tag.
XTRANSPORT         Datafile convert                 tsname                          --
AUTOBACKUP         Automatic backup files           hasspfile_timestamp             hasspfile can take one of two
                                                                                    values: s indicates that the
                                                                                    backup set includes the spfile;
                                                                                    n indicates that the backup set
                                                                                    does not include the spfile.

Numeric ASM Filename
The numeric ASM filename can be used for referencing existing files. It is derived from
the fully qualified ASM filename and takes the form:
+group.file.incarnation

Numeric ASM filenames can be used in any interface that requires an existing file
name.
An example of a numeric ASM filename is:
+dgroup2.257.541956473

Alias ASM Filenames
Alias ASM filenames, otherwise known as aliases, can be used both for referencing
existing ASM files and for creating new ASM files. Alias names start with the disk
group name preceded by a plus sign, after which you specify a name string of your
choosing. Alias filenames are implemented using a hierarchical directory structure,
with the slash (/) or backslash (\) character separating name components. You can
create an alias in any system-generated or user-created ASM directory. You cannot
create an alias at the root level (+), however.
When you create an ASM file with an alias filename, the file is created with a fully
qualified name, and the alias filename is additionally created. You can then access the
file with either name.
Alias ASM filenames are distinguished from fully qualified filenames or numeric
filenames because they do not end in a dotted pair of numbers. It is an error to attempt
to create an alias that ends in a dotted pair of numbers. Examples of ASM alias
filenames are:
+dgroup1/myfiles/control_file1
+dgroup2/mydir/second.dbf

Using Automatic Storage Management 12-37

Using Automatic Storage Management in the Database

Oracle Database references database files by their alias filenames, but only if you
create the database files with aliases. If you create database files without aliases and
then add aliases later, the database references the files by their fully qualified
filenames. The following are examples of how the database uses alias filenames:
■ Alias filenames appear in V$ views. For example, if you create a tablespace and use an alias filename for the datafile, the V$DATAFILE view shows the alias filename.
■ When a controlfile points to datafiles and online redo log files, it can use alias filenames.
■ The CONTROL_FILES initialization parameter can use the alias filenames of the controlfiles. The Database Configuration Assistant (DBCA) creates controlfiles with alias filenames.
Note: Files created using an alias filename are not considered Oracle-managed files and may require manual deletion in the future if they are no longer needed.

See Also:
■ "Managing Alias Names for ASM Filenames" on page 12-28
■ The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases and directories in ASM disk groups.

Alias ASM Filename with Template
An alias ASM filename with template is used only for ASM file creation operations. It
has the following format:
+dgroup(template_name)/alias

Alias filenames with template behave identically to alias filenames. The only
difference is that a file created with an alias filename with template receives the
mirroring and striping attributes specified by the named template. The template must
belong to the disk group that the file is being created in.
The creation and maintenance of ASM templates is discussed in "Managing Disk
Group Templates" on page 12-29.
An example of an alias ASM filename with template is:
+dgroup1(my_template)/config1

Explicitly specifying a template name, as in this example, overrides the system default
template for the type of file being created.
Note: Files created using an alias filename with template are not considered Oracle-managed files and may require manual deletion in the future if they are no longer needed.

Incomplete ASM Filename
Incomplete ASM filenames are used only for file creation operations and are used for
both single and multiple file creation. They consist only of the disk group name. ASM
uses a system default template to determine the ASM file mirroring and striping
attributes. The system template that is used is determined by the file type that is being
created. For example, if you are creating a datafile for a tablespace, the datafile
template is used.
An example of using an incomplete ASM filename is setting the
DB_CREATE_FILE_DEST initialization parameter to:
+dgroup1

With this setting, every time you create a tablespace, a datafile is created in the disk
group dgroup1, and each datafile is assigned a different fully qualified name. See
"Creating ASM Files Using a Default Disk Group Specification" on page 12-40 for more
information.

Incomplete ASM Filename with Template
Incomplete ASM filenames with templates are used only for file creation operations
and are used for both single and multiple file creation. They consist of the disk group
name followed by the template name in parentheses. When you explicitly specify a
template in a file name, ASM uses the specified template instead of the default
template for that file type to determine mirroring and striping attributes for the file.
An example of using an incomplete ASM filename with template is setting the
DB_CREATE_FILE_DEST initialization parameter to:
+dgroup1(my_template)

Starting the ASM and Database Instances
Start the ASM and database instances in the following order:
1. Start the ASM instance.

   You start the ASM instance on the same node as the database before you start the database instance. Starting an ASM instance is discussed in "Starting Up an ASM Instance" on page 12-10.

2. Start the database instance.

   Consider the following before you start your database instance:

   ■ When using SQL*Plus to start the database, you must first set the ORACLE_SID environment variable to the database SID. If ASM and the database have different Oracle homes, you must also set the ORACLE_HOME environment variable. Depending on platform, you may have to change other environment variables as well.

   ■ You must have the INSTANCE_TYPE initialization parameter set as follows:

     INSTANCE_TYPE = RDBMS

     This is the default.

   ■ If you want ASM to be the default destination for creating database files, you must specify an incomplete ASM filename in one or more of the following initialization parameters (see "Creating ASM Files Using a Default Disk Group Specification" on page 12-40):

     – DB_CREATE_FILE_DEST
     – DB_CREATE_ONLINE_LOG_DEST_n
     – DB_RECOVERY_FILE_DEST
     – CONTROL_FILES
     – LOG_ARCHIVE_DEST_n
     – LOG_ARCHIVE_DEST
     – STANDBY_ARCHIVE_DEST

   ■ Some additional initialization parameter considerations:

     – LOG_ARCHIVE_FORMAT is ignored if a disk group is specified for LOG_ARCHIVE_DEST (for example, LOG_ARCHIVE_DEST = +dgroup1).
     – DB_BLOCK_SIZE must be one of the standard block sizes (2K, 4K, 8K, 16K or 32K bytes).
     – LARGE_POOL_SIZE must be set to at least 1 MB.

Your database instance is now able to create ASM files. You can keep your database
instance open and running when you reconfigure disk groups. When you add or
remove disks from a disk group, ASM automatically rebalances file data in the
reconfigured disk group to ensure a balanced I/O load, even while the database is
running.

Creating and Referencing ASM Files in the Database
ASM files are Oracle-managed files unless you created the file using an alias. Any
Oracle-managed file is automatically deleted when it is no longer needed. An ASM file
is deleted if the creation fails.

Creating ASM Files Using a Default Disk Group Specification
Using the Oracle-managed files feature for operating system files, you can specify a
directory as the default location for the creation of datafiles, tempfiles, redo log files,
and control files. Using the Oracle-managed files feature for ASM, you can specify a
disk group, in the form of an incomplete ASM filename, as the default location for
creation of these files, and additional types of files, including archived log files. As for
operating system files, the name of the default disk group is stored in an initialization
parameter and is used whenever a file specification (for example, DATAFILE clause) is
not explicitly specified during file creation.
The following initialization parameters accept the multiple file creation context form
of ASM filenames as a destination:
Initialization Parameter         Description

DB_CREATE_FILE_DEST              Specifies the default disk group location in which to create:
                                 ■ Datafiles
                                 ■ Tempfiles
                                 If DB_CREATE_ONLINE_LOG_DEST_n is not specified, then also specifies the default disk group for:
                                 ■ Redo log files
                                 ■ Control file

DB_CREATE_ONLINE_LOG_DEST_n      Specifies the default disk group location in which to create:
                                 ■ Redo log files
                                 ■ Control files

DB_RECOVERY_FILE_DEST            If this parameter is specified and DB_CREATE_ONLINE_LOG_DEST_n and CONTROL_FILES are not specified, then this parameter specifies a default disk group for a flash recovery area that contains a copy of:
                                 ■ Control file
                                 ■ Redo log files
                                 If no local archive destination is specified, then this parameter implicitly sets LOG_ARCHIVE_DEST_10 to the USE_DB_RECOVERY_FILE_DEST value.

CONTROL_FILES                    Specifies a disk group in which to create control files.

The following initialization parameters accept the multiple file creation context form
of the ASM filenames and ASM directory names as a destination:
Initialization Parameter     Description

LOG_ARCHIVE_DEST_n           Specifies a default disk group or ASM directory as destination for archiving redo log files.

LOG_ARCHIVE_DEST             Optional parameter to use to specify a default disk group or ASM directory as destination for archiving redo log files. Use when specifying only one destination.

STANDBY_ARCHIVE_DEST         Relevant only for a standby database in managed recovery mode. It specifies a default disk group or ASM directory that is the location of archive logs arriving from a primary database. Not discussed in this book. See Oracle Data Guard Concepts and Administration.

The following example illustrates how an ASM file, in this case a datafile, might be
created in a default disk group.
Creating a Datafile Using a Default Disk Group: Example Assume the following initialization parameter setting:
DB_CREATE_FILE_DEST = '+dgroup1'

The following statement creates tablespace tspace1.
CREATE TABLESPACE tspace1;

ASM automatically creates and manages the datafile for tspace1 on ASM disks in the
disk group dgroup1. File extents are stored using the attributes defined by the default
template for a datafile.
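To see the fully qualified filename that ASM generated for the new datafile, you can query V$DATAFILE (a sketch):

SELECT file#, name FROM V$DATAFILE;
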

Using ASM Filenames in SQL Statements
You can specify ASM filenames in the file specification clause of your SQL statements.
If you are creating a file for the first time, use the creation form of an ASM filename. If
the ASM file already exists, you must use the reference context form of the filename,
and if you are trying to re-create the file, you must add the REUSE keyword. The space
will be reused for the new file. This usage might occur when, for example, trying to
re-create a control file, as shown in "Creating a Control File in ASM" on page 12-44.
If a reference context form is used with the REUSE keyword and the file does not exist,
an error results.

Partially created files resulting from system errors are automatically deleted.
Using an ASM Filename in a SQL Statement: Example The following is an example of specifying an ASM filename in a SQL statement. In this case, it is used in the file creation context:

CREATE TABLESPACE tspace2 DATAFILE '+dgroup2' SIZE 200M AUTOEXTEND ON;

The tablespace tspace2 is created and consists of one datafile of size 200M contained in the disk group dgroup2. The datafile is set to auto-extensible with an unlimited maximum size. An AUTOEXTEND clause can be used to override this default.

Creating a Database in ASM
The recommended method of creating your database is to use the Database
Configuration Assistant (DBCA). However, if you choose to create your database
manually using the CREATE DATABASE statement, then ASM enables you to create a
database and all of its underlying files with a minimum of input from you.
The following is an example of using the CREATE DATABASE statement, where
database files are created and managed automatically by ASM.
Creating a Database in ASM: Example
This example creates a database with the following ASM files:
■ A SYSTEM tablespace datafile in disk group dgroup1.
■ A SYSAUX tablespace datafile in disk group dgroup1. The tablespace is locally managed with automatic segment-space management.
■ A multiplexed online redo log is created with two online log groups, one member of each in dgroup1 and dgroup2 (flash recovery area).
■ If automatic undo management mode is enabled, then an undo tablespace datafile in disk group dgroup1.
■ If no CONTROL_FILES initialization parameter is specified, then two control files, one in dgroup1 and another in dgroup2 (flash recovery area). The control file in dgroup1 is the primary control file.

The following initialization parameter settings are included in the initialization
parameter file:
DB_CREATE_FILE_DEST = '+dgroup1'
DB_RECOVERY_FILE_DEST = '+dgroup2'
DB_RECOVERY_FILE_DEST_SIZE = 10G

The following statement is issued at the SQL prompt:
SQL> CREATE DATABASE sample;

Creating Tablespaces in ASM
When ASM creates a datafile for a permanent tablespace (or a tempfile for a temporary
tablespace), the datafile is set to auto-extensible with an unlimited maximum size and
100 MB default size. You can use the AUTOEXTEND clause to override this default
extensibility and the SIZE clause to override the default size.
Automatic Storage Management applies attributes to the datafile, as specified in the
system default template for a datafile as shown in the table in "Managing Disk Group
Templates" on page 12-29. You can also create and specify your own template.

A tablespace may contain both ASM files and non-ASM files as a result of the tablespace history. RMAN commands enable non-ASM files to be relocated to an ASM disk group and enable ASM files to be relocated as non-ASM files.
The following are some examples of creating tablespaces using Automatic Storage
Management. The examples assume that disk groups have already been configured.
See Also:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

Creating a Tablespace in ASM: Example 1
This example illustrates "out of the box" usage of Automatic Storage Management. You
let Automatic Storage Management create and manage the tablespace datafile for you,
using Oracle supplied defaults that are adequate for most situations.
Assume the following initialization parameter setting:
DB_CREATE_FILE_DEST = '+dgroup2'

The following statement creates the tablespace and its datafile:
CREATE TABLESPACE tspace2;

Creating a Tablespace in ASM: Example 2
The following statements create a tablespace that uses a user defined template (assume
it has been defined) to specify the redundancy and striping attributes of the datafile:
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+dgroup1(my_template)';
SQL> CREATE TABLESPACE tspace3;

Creating a Tablespace in ASM: Example 3
The following statement creates an undo tablespace with a datafile that has an alias
name, and with attributes that are set by the user defined template
my_undo_template. It assumes a directory has been created in disk group dgroup3
to contain the alias name and that the user defined template exists. Because an alias is
used to create the datafile, the file is not an Oracle-managed file and will not be
automatically deleted when the tablespace is dropped.
CREATE UNDO TABLESPACE myundo
DATAFILE '+dgroup3(my_undo_template)/myfiles/my_undo_ts' SIZE 200M;

The following statement drops the file manually after the tablespace has been
dropped:
ALTER DISKGROUP dgroup3 DROP FILE '+dgroup3/myfiles/my_undo_ts';

Creating Redo Logs in ASM
Online redo logs can be created in multiple disk groups, either implicitly in the
initialization parameter file or explicitly in an ALTER DATABASE...ADD LOGFILE
statement. Each online log should have one log member in multiple disk groups. The
filenames for log file members are automatically generated.
All partially created redo log files, created as a result of a system error, are
automatically deleted.

Adding New Redo Log Files: Example
The following example creates a log file with a member in each of the disk groups
dgroup1 and dgroup2.
The following parameter settings are included in the initialization parameter file:
DB_CREATE_ONLINE_LOG_DEST_1 = '+dgroup1'
DB_CREATE_ONLINE_LOG_DEST_2 = '+dgroup2'

The following statement is issued at the SQL prompt:
ALTER DATABASE ADD LOGFILE;

Creating a Control File in ASM
Control files can be explicitly created in multiple disk groups. The filenames for
control files are automatically generated. If an attempt to create a control file fails,
partially created control files will be automatically be deleted.
There may be times when you need to specify a control file by name. Alias filenames
are provided to allow administrators to reference ASM files with
human-understandable names. The use of an alias in the specification of the control
file during its creation allows the DBA to later refer to the control file with a
human-meaningful name. Furthermore, an alias can be specified as a control file name
in the CONTROL_FILES initialization parameter. Control files created without aliases
can be given alias names at a later time. The ALTER DISKGROUP...CREATE ALIAS
statement is used for this purpose.
When creating a control file, datafiles and log files stored in an ASM disk group
should be given to the CREATE CONTROLFILE command using the file reference
context form of their ASM filenames. However, the use of the RESETLOGS option
requires the use of a file creation context form for the specification of the log files.
Creating a Control File in ASM: Example 1
The following CREATE CONTROLFILE statement is generated by an ALTER DATABASE
BACKUP CONTROLFILE TO TRACE command for a database with datafiles and log files
created on disk groups dgroup1 and dgroup2:
CREATE CONTROLFILE REUSE DATABASE "SAMPLE" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 (
'+DGROUP1/db/onlinelog/group_1.258.541956457',
'+DGROUP2/db/onlinelog/group_1.256.541956473'
) SIZE 100M,
GROUP 2 (
'+DGROUP1/db/onlinelog/group_2.257.541956477',
'+DGROUP2/db/onlinelog/group_2.258.541956487'
) SIZE 100M
DATAFILE
'+DGROUP1/db/datafile/system.260.541956497',
'+DGROUP1/db/datafile/sysaux.259.541956511'
CHARACTER SET US7ASCII
;

Creating a Control File in ASM: Example 2
This example is a CREATE CONTROLFILE statement for a database with datafiles, but
uses a RESETLOGS clause, and thus uses the creation context form for log files:
CREATE CONTROLFILE REUSE DATABASE "SAMPLE" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 (
'+DGROUP1',
'+DGROUP2'
) SIZE 100M,
GROUP 2 (
'+DGROUP1',
'+DGROUP2'
) SIZE 100M
DATAFILE
'+DGROUP1/db/datafile/system.260.541956497',
'+DGROUP1/db/datafile/sysaux.259.541956511'
CHARACTER SET US7ASCII
;

Creating Archive Log Files in ASM
Disk groups can be specified as archive log destinations in the LOG_ARCHIVE_DEST
and LOG_ARCHIVE_DEST_n initialization parameters. When destinations are specified
in this manner, the archive log filename will be unique, even if archived twice. All
partially created archive files, created as a result of a system error, are automatically
deleted.
If LOG_ARCHIVE_DEST is set to a disk group name, LOG_ARCHIVE_FORMAT is
ignored. Unique filenames for archived logs are automatically created by the Oracle
database. If LOG_ARCHIVE_DEST is set to a directory in a disk group,
LOG_ARCHIVE_FORMAT has its normal semantics.
The following sample archive log names might be generated with
DB_RECOVERY_FILE_DEST set to +dgroup2. SAMPLE is the value of the
DB_UNIQUE_NAME parameter:
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_1_seq_38.614.541956473
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_4_seq_35.609.541956477
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_2_seq_34.603.541956487
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_25/thread_3_seq_100.621.541956497
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_25/thread_1_seq_38.614.541956511

Recovery Manager (RMAN) and ASM
RMAN is critical to ASM and is responsible for tracking the ASM filenames and for
deleting obsolete ASM files. Because ASM files cannot be copied through normal
operating system interfaces (other than with the XML DB repository through FTP or
HTTP/WebDAV), RMAN is the preferred means of copying ASM files. RMAN is the
only method for performing backups of a database containing ASM files.
RMAN can also be used for moving databases or files into ASM storage.

See Also:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

Migrating a Database to Automatic Storage Management
With a new installation of Oracle Database and Automatic Storage Management
(ASM), you initially create your database in ASM. If you have an existing Oracle
database that stores database files in the operating system file system or on raw
devices, you can migrate some or all of these database files to ASM.
There are two ways to migrate database files to ASM:
■ Manually, using RMAN. For detailed instructions on migrating a database to ASM using RMAN, see Oracle Database Backup and Recovery Advanced User's Guide. You can also use RMAN to migrate a single tablespace or datafile to ASM.
■ With the Migrate Database To ASM wizard in Enterprise Manager. You access the wizard from the Administration page, under the Change Database heading. Refer to the Enterprise Manager online help for instructions on using the wizard.

Accessing Automatic Storage Management Files with the XML DB Virtual Folder
Automatic Storage Management (ASM) files and directories can be accessed through a
virtual folder in the XML DB repository. The repository path to the virtual folder is
/sys/asm. The folder is virtual because its contents do not actually reside in the
repository; they exist as normal ASM files and directories. /sys/asm provides a
means to access and manipulate the ASM files and directories with programmatic
APIs such as the DBMS_XDB package and with XML DB protocols such as FTP and
HTTP/WebDAV.
A typical use for this capability might be to view /sys/asm as a Web Folder in a
graphical user interface (with the WebDAV protocol), and then copy a Data Pump
dumpset from an ASM disk group to an operating system file system by dragging and
dropping.
You must log in as a user other than SYS and you must have been granted the DBA role
to access /sys/asm with XML DB protocols.
Note: The FTP protocol is initially disabled for a new XML DB installation. To enable it, you must set the FTP port to a non-zero value. The easiest way to do this is with the catxdbdbca.sql script. This script takes two arguments. The first is the FTP port number, and the second is the HTTP/WebDAV port number. The following example configures the FTP port number to 7787, and the HTTP/WebDAV port number to 8080:

SQL> @?/rdbms/admin/catxdbdbca.sql 7787 8080

Another way to set these port numbers is with the XDB Configuration page in Enterprise Manager.

See Also: Oracle XML DB Developer's Guide for information on
Oracle XML DB, including additional ways to configure port numbers
for the XML DB protocol servers, and Oracle Database PL/SQL Packages
and Types Reference for information on the DBMS_XDB package.

Inside /sys/asm
The ASM virtual folder is created by default during XML DB installation. If the
database is not configured to use ASM, the folder is empty and no operations are
permitted on it.
The ASM virtual folder contains folders and subfolders that follow the hierarchy
defined by the structure of an ASM fully qualified file name. Thus, /sys/asm contains
one subfolder for every mounted disk group, and each disk group folder contains one
subfolder for each database that uses the disk group. (In addition, a disk group folder
may contain files and folders corresponding to aliases created by the administrator.)
Continuing the hierarchy, the database folders contain file type folders, which contain
the ASM files. This hierarchy is shown in the following diagram, which, for simplicity, excludes directories created for aliases.

/sys/asm
    DATA (disk group)
        HR (database)
            DATAFILE
            TEMPFILE
            CONTROLFILE
            ONLINELOG
        MFG (database)
            DATAFILE
            TEMPFILE
            CONTROLFILE
            ONLINELOG
    RECOVERY (disk group)
        HR (database)
            CONTROLFILE
            ONLINELOG
            ARCHIVELOG
        MFG (database)
            CONTROLFILE
            ONLINELOG
            ARCHIVELOG

Restrictions
The following are usage restrictions on /sys/asm:
■ You cannot create hard links to existing ASM files or directories with APIs such as DBMS_XDB.LINK.
■ You cannot rename (move) an ASM file to another disk group or to a directory outside ASM.


Sample FTP Session
In the following sample FTP session, the disk groups are DATA and RECOVERY, the
database name is MFG, and dbs is a directory that was created for aliases. All files in
/sys/asm are binary.
ftp> open myhost 7777
ftp> user system
ftp> passwd dba
ftp> cd /sys/asm
ftp> ls
DATA
RECOVERY
ftp> cd DATA
ftp> ls
dbs
MFG
ftp> cd dbs
ftp> ls
t_dbl.f
t_axl.f
ftp> binary
ftp> get t_dbl.f t_axl.f
ftp> put t_db2.f

Viewing Information About Automatic Storage Management
You can use these views to query information about Automatic Storage Management:
View                      Description

V$ASM_DISKGROUP           In an ASM instance, describes a disk group (number, name,
                          size-related info, state, and redundancy type).
                          In a DB instance, contains one row for every ASM disk group
                          mounted by the local ASM instance.
                          This view performs disk discovery every time it is queried.

V$ASM_DISK                In an ASM instance, contains one row for every disk discovered
                          by the ASM instance, including disks that are not part of any
                          disk group.
                          In a DB instance, contains rows only for disks in the disk
                          groups in use by that DB instance.
                          This view performs disk discovery every time it is queried.

V$ASM_DISKGROUP_STAT      Has the same columns as V$ASM_DISKGROUP, but to reduce
                          overhead, does not perform a discovery when it is queried. It
                          therefore does not return information on any disks that are new
                          to the storage system. For the most accurate data, use
                          V$ASM_DISKGROUP instead.

V$ASM_DISK_STAT           Has the same columns as V$ASM_DISK, but to reduce overhead,
                          does not perform a discovery when it is queried. It therefore
                          does not return information on any disks that are new to the
                          storage system. For the most accurate data, use V$ASM_DISK
                          instead.

V$ASM_FILE                In an ASM instance, contains one row for every ASM file in
                          every disk group mounted by the ASM instance.
                          In a DB instance, contains no rows.

V$ASM_TEMPLATE            In an ASM or DB instance, contains one row for every template
                          present in every disk group mounted by the ASM instance.

V$ASM_ALIAS               In an ASM instance, contains one row for every alias present in
                          every disk group mounted by the ASM instance.
                          In a DB instance, contains no rows.

V$ASM_OPERATION           In an ASM instance, contains one row for every active ASM
                          long-running operation executing in the ASM instance.
                          In a DB instance, contains no rows.

V$ASM_CLIENT              In an ASM instance, identifies databases using disk groups
                          managed by the ASM instance.
                          In a DB instance, contains one row for the ASM instance if the
                          database has any open ASM files.
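For example, the following illustrative query, run in an ASM instance, summarizes the state and free space of each mounted disk group (the output depends on your configuration):

SELECT name, state, type, total_mb, free_mb
    FROM V$ASM_DISKGROUP;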

See Also: Oracle Database Reference for details on all of these dynamic performance views


Part IV
Schema Objects
Part IV describes the creation and maintenance of schema objects in the Oracle
Database. It includes the following chapters:
■ Chapter 13, "Managing Schema Objects"
■ Chapter 14, "Managing Space for Schema Objects"
■ Chapter 15, "Managing Tables"
■ Chapter 16, "Managing Indexes"
■ Chapter 17, "Managing Partitioned Tables and Indexes"
■ Chapter 18, "Managing Clusters"
■ Chapter 19, "Managing Hash Clusters"
■ Chapter 20, "Managing Views, Sequences, and Synonyms"
■ Chapter 21, "Using DBMS_REPAIR to Repair Data Block Corruption"

13
Managing Schema Objects
This chapter describes schema object management issues that are common across
multiple types of schema objects. The following topics are presented:
■ Creating Multiple Tables and Views in a Single Operation
■ Analyzing Tables, Indexes, and Clusters
■ Truncating Tables and Clusters
■ Enabling and Disabling Triggers
■ Managing Integrity Constraints
■ Renaming Schema Objects
■ Managing Object Dependencies
■ Managing Object Name Resolution
■ Switching to a Different Schema
■ Displaying Information About Schema Objects

Creating Multiple Tables and Views in a Single Operation
You can create several tables and views and grant privileges in one operation using the CREATE SCHEMA statement. The CREATE SCHEMA statement is useful if you want to guarantee the creation of several tables, views, and grants in one operation. If an individual table, view, or grant fails, the entire statement is rolled back. None of the objects are created, nor are the privileges granted.
Specifically, the CREATE SCHEMA statement can include only CREATE TABLE, CREATE VIEW, and GRANT statements. You must have the privileges necessary to issue the included statements. You are not actually creating a schema; a schema is created when a user is created with the CREATE USER statement. Rather, you are populating the schema.
The following statement creates two tables and a view that joins data from the two
tables:
CREATE SCHEMA AUTHORIZATION scott
    CREATE TABLE dept (
        deptno NUMBER(3,0) PRIMARY KEY,
        dname VARCHAR2(15),
        loc VARCHAR2(25))
    CREATE TABLE emp (
        empno NUMBER(5,0) PRIMARY KEY,
        ename VARCHAR2(15) NOT NULL,
        job VARCHAR2(10),
        mgr NUMBER(5,0),
        hiredate DATE DEFAULT (sysdate),
        sal NUMBER(7,2),
        comm NUMBER(7,2),
        deptno NUMBER(3,0) NOT NULL
        CONSTRAINT dept_fkey REFERENCES dept)
    CREATE VIEW sales_staff AS
        SELECT empno, ename, sal, comm
        FROM emp
        WHERE deptno = 30
        WITH CHECK OPTION CONSTRAINT sales_staff_cnst
    GRANT SELECT ON sales_staff TO human_resources;

The CREATE SCHEMA statement does not support Oracle Database extensions to the
ANSI CREATE TABLE and CREATE VIEW statements, including the STORAGE clause.
See Also: Oracle Database SQL Reference for syntax and other information about the CREATE SCHEMA statement

Analyzing Tables, Indexes, and Clusters
You analyze a schema object (table, index, or cluster) to:
■ Collect and manage statistics for it
■ Verify the validity of its storage format
■ Identify migrated and chained rows of a table or cluster

Note: Do not use the COMPUTE and ESTIMATE clauses of ANALYZE to collect optimizer statistics. These clauses are supported for backward compatibility. Instead, use the DBMS_STATS package, which lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. The cost-based optimizer, which depends upon statistics, will eventually use only statistics that have been collected by DBMS_STATS. See Oracle Database PL/SQL Packages and Types Reference for more information on the DBMS_STATS package.

You must use the ANALYZE statement (rather than DBMS_STATS) for statistics collection not related to the cost-based optimizer, such as:
■ To use the VALIDATE or LIST CHAINED ROWS clauses
■ To collect information on freelist blocks

The following topics are discussed in this section:
■ Using DBMS_STATS to Collect Table and Index Statistics
■ Validating Tables, Indexes, Clusters, and Materialized Views
■ Listing Chained Rows of Tables and Clusters

Using DBMS_STATS to Collect Table and Index Statistics
You can use the DBMS_STATS package or the ANALYZE statement to gather statistics
about the physical storage characteristics of a table, index, or cluster. These statistics
are stored in the data dictionary and can be used by the optimizer to choose the most
efficient execution plan for SQL statements accessing analyzed objects.
Oracle recommends using the more versatile DBMS_STATS package for gathering
optimizer statistics, but you must use the ANALYZE statement to collect statistics
unrelated to the optimizer, such as empty blocks, average space, and so forth.
The DBMS_STATS package allows both the gathering of statistics, including utilizing
parallel execution, and the external manipulation of statistics. Statistics can be stored
in tables outside of the data dictionary, where they can be manipulated without
affecting the optimizer. Statistics can be copied between databases or backup copies
can be made.
The following DBMS_STATS procedures enable the gathering of optimizer statistics:
■ GATHER_INDEX_STATS
■ GATHER_TABLE_STATS
■ GATHER_SCHEMA_STATS
■ GATHER_DATABASE_STATS
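For example, the following call gathers statistics for a single table and, with the cascade option, for its indexes as well; the schema and table names are placeholders:

BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SCOTT',
        tabname => 'EMP',
        cascade => TRUE);  -- also gather statistics for the table's indexes
END;
/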
See Also:
■ Oracle Database Performance Tuning Guide for information about using DBMS_STATS to gather statistics for the optimizer
■ Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_STATS package

Validating Tables, Indexes, Clusters, and Materialized Views
To verify the integrity of the structure of a table, index, cluster, or materialized view,
use the ANALYZE statement with the VALIDATE STRUCTURE option. If the structure is
valid, no error is returned. However, if the structure is corrupt, you receive an error
message.
For example, in rare cases such as hardware or other system failures, an index can
become corrupted and not perform correctly. When validating the index, you can
confirm that every entry in the index points to the correct row of the associated table.
If the index is corrupt, you can drop and re-create it.
If a table, index, or cluster is corrupt, you should drop it and re-create it. If a
materialized view is corrupt, perform a complete refresh and ensure that you have
remedied the problem. If the problem is not corrected, drop and re-create the
materialized view.
The following statement analyzes the emp table:
ANALYZE TABLE emp VALIDATE STRUCTURE;

You can validate an object and all related objects (for example, indexes) by including
the CASCADE option. The following statement validates the emp table and all
associated indexes:
ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;

You can specify that you want to perform structure validation online while DML is
occurring against the object being validated. There can be a slight performance impact
when validating with ongoing DML affecting the object, but this is offset by the
flexibility of being able to perform ANALYZE online. The following statement validates
the emp table and all associated indexes online:
ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE ONLINE;

Listing Chained Rows of Tables and Clusters
You can look at the chained and migrated rows of a table or cluster using the ANALYZE
statement with the LIST CHAINED ROWS clause. The results of this statement are
stored in a specified table created explicitly to accept the information returned by the
LIST CHAINED ROWS clause. These results are useful in determining whether you
have enough room for updates to rows.

Creating a CHAINED_ROWS Table
To create the table to accept data returned by an ANALYZE...LIST CHAINED ROWS
statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script. These scripts are
provided by the database. They create a table named CHAINED_ROWS in the schema of
the user submitting the script.
Note: Your choice of script to execute for creating the CHAINED_ROWS table is dependent upon the compatibility level of your database and the type of table you are analyzing. See the Oracle Database SQL Reference for more information.

After a CHAINED_ROWS table is created, you specify it in the INTO clause of the
ANALYZE statement. For example, the following statement inserts rows containing
information about the chained rows in the emp_dept cluster into the CHAINED_ROWS
table:
ANALYZE CLUSTER emp_dept LIST CHAINED ROWS INTO CHAINED_ROWS;

See Also:
■ Oracle Database Reference for a description of the CHAINED_ROWS table
■ "Using the Segment Advisor" on page 14-16 for information on how the Segment Advisor reports tables with excess row chaining

Eliminating Migrated or Chained Rows in a Table
You can use the information in the CHAINED_ROWS table to reduce or eliminate
migrated and chained rows in an existing table. Use the following procedure.
1.  Use the ANALYZE statement to collect information about migrated and chained rows:
    ANALYZE TABLE order_hist LIST CHAINED ROWS;

2.  Query the output table:
    SELECT *
    FROM CHAINED_ROWS
    WHERE TABLE_NAME = 'ORDER_HIST';

    OWNER_NAME  TABLE_NAME  CLUST...  HEAD_ROWID          TIMESTAMP
    ----------  ----------  ------... ------------------  ---------
    SCOTT       ORDER_HIST  ...       AAAAluAAHAAAAA1AAA  04-MAR-96
    SCOTT       ORDER_HIST  ...       AAAAluAAHAAAAA1AAB  04-MAR-96
    SCOTT       ORDER_HIST  ...       AAAAluAAHAAAAA1AAC  04-MAR-96

    The output lists all rows that are either migrated or chained.
3.  If the output table shows that you have many migrated or chained rows, then you can eliminate migrated rows by continuing through the following steps:

4.  Create an intermediate table with the same columns as the existing table to hold the migrated and chained rows:
    CREATE TABLE int_order_hist
       AS SELECT *
          FROM order_hist
          WHERE ROWID IN
             (SELECT HEAD_ROWID
                FROM CHAINED_ROWS
                WHERE TABLE_NAME = 'ORDER_HIST');

5.  Delete the migrated and chained rows from the existing table:
    DELETE FROM order_hist
    WHERE ROWID IN
       (SELECT HEAD_ROWID
          FROM CHAINED_ROWS
          WHERE TABLE_NAME = 'ORDER_HIST');

6.  Insert the rows of the intermediate table into the existing table:
    INSERT INTO order_hist
       SELECT * FROM int_order_hist;

7.  Drop the intermediate table:
    DROP TABLE int_order_hist;

8.  Delete the information collected in step 1 from the output table:
    DELETE FROM CHAINED_ROWS
    WHERE TABLE_NAME = 'ORDER_HIST';

9.  Use the ANALYZE statement again, and query the output table.

Any rows that appear in the output table are chained. You can eliminate chained rows only by increasing your data block size. It might not be possible to avoid chaining in all situations. Chaining is often unavoidable with tables that have a LONG column or large CHAR or VARCHAR2 columns.

Truncating Tables and Clusters
You can delete all rows of a table or all rows in a group of clustered tables so that the
table (or cluster) still exists, but is completely empty. For example, consider a table that
contains monthly data, and at the end of each month, you need to empty it (delete all
rows) after archiving its data.
To delete all rows from a table, you have the following options:
■ Use the DELETE statement.
■ Use the DROP and CREATE statements.
■ Use the TRUNCATE statement.
These options are discussed in the following sections.


Using DELETE
You can delete the rows of a table using the DELETE statement. For example, the
following statement deletes all rows from the emp table:
DELETE FROM emp;

If there are many rows present in a table or cluster, using the DELETE statement consumes significant system resources as the rows are deleted: for example, CPU time, redo log space, and undo segment space for the table and any associated indexes. Also, as each row is deleted, triggers can be fired. The space previously allocated to the resulting empty table or cluster remains associated with that object. With DELETE you can choose which rows to delete, whereas TRUNCATE and DROP affect the entire object.

See Also: Oracle Database SQL Reference for syntax and other information about the DELETE statement

Using DROP and CREATE
You can drop a table and then re-create the table. For example, the following
statements drop and then re-create the emp table:
DROP TABLE emp;
CREATE TABLE emp ( ... );

When dropping and re-creating a table or cluster, all associated indexes, integrity
constraints, and triggers are also dropped, and all objects that depend on the dropped
table or clustered table are invalidated. Also, all grants for the dropped table or
clustered table are dropped.

Using TRUNCATE
You can delete all rows of the table using the TRUNCATE statement. For example, the
following statement truncates the emp table:
TRUNCATE TABLE emp;

Using the TRUNCATE statement provides a fast, efficient method for deleting all rows from a table or cluster. A TRUNCATE statement does not generate any undo information and it commits immediately. It is a DDL statement and cannot be rolled back. A TRUNCATE statement does not affect any structures associated with the table being truncated (constraints and triggers) or authorizations. A TRUNCATE statement also lets you specify whether space currently allocated for the table is returned to the containing tablespace after truncation.
You can truncate any table or cluster in your own schema. Any user who has the DROP ANY TABLE system privilege can truncate a table or cluster in any schema.
Before truncating a table or clustered table containing a parent key, all referencing foreign keys in different tables must be disabled. A self-referential constraint does not have to be disabled.
When a TRUNCATE statement deletes rows from a table, no triggers associated with the table are fired. Also, a TRUNCATE statement does not generate any audit information corresponding to DELETE statements if auditing is enabled. Instead, a single audit record is generated for the TRUNCATE statement being issued. See the Oracle Database Security Guide for information about auditing.


A hash cluster cannot be truncated, nor can tables within a hash or index cluster be
individually truncated. Truncation of an index cluster deletes all rows from all tables
in the cluster. If all the rows must be deleted from an individual clustered table, use
the DELETE statement or drop and re-create the table.
The REUSE STORAGE or DROP STORAGE options of the TRUNCATE statement control
whether space currently allocated for a table or cluster is returned to the containing
tablespace after truncation. The default option, DROP STORAGE, reduces the number
of extents allocated to the resulting table to the original setting for MINEXTENTS. Freed
extents are then returned to the system and can be used by other objects.
Alternatively, the REUSE STORAGE option specifies that all space currently allocated
for the table or cluster remains allocated to it. For example, the following statement
truncates the emp_dept cluster, leaving all extents previously allocated for the cluster
available for subsequent inserts and deletes:
TRUNCATE CLUSTER emp_dept REUSE STORAGE;

The REUSE or DROP STORAGE option also applies to any associated indexes. When a
table or cluster is truncated, all associated indexes are also truncated. The storage
parameters for a truncated table, cluster, or associated indexes are not changed as a
result of the truncation.
See Also: Oracle Database SQL Reference for syntax and other information about the TRUNCATE TABLE and TRUNCATE CLUSTER statements

Enabling and Disabling Triggers
Database triggers are procedures that are stored in the database and activated ("fired")
when specific conditions occur, such as adding a row to a table. You can use triggers to
supplement the standard capabilities of the database to provide a highly customized
database management system. For example, you can create a trigger to restrict DML
operations against a table, allowing only statements issued during regular business
hours.
Database triggers can be associated with a table, schema, or database. They are implicitly fired when:
■ DML statements are executed (INSERT, UPDATE, DELETE) against an associated table
■ Certain DDL statements are executed (for example: ALTER, CREATE, DROP) on objects within a database or schema
■ A specified database event occurs (for example: STARTUP, SHUTDOWN, SERVERERROR)
This is not a complete list. See the Oracle Database SQL Reference for a full list of statements and database events that cause triggers to fire.
Create triggers with the CREATE TRIGGER statement. They can be defined as firing
BEFORE or AFTER the triggering event, or INSTEAD OF it. The following statement
creates a trigger scott.emp_permit_changes on table scott.emp. The trigger
fires before any of the specified statements are executed.
CREATE TRIGGER scott.emp_permit_changes
    BEFORE
    DELETE OR INSERT OR UPDATE
    ON scott.emp
    .
    .
    .
    pl/sql block
    .
    .
    .
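As an illustration, here is one hypothetical way the elided PL/SQL block could be completed so that the trigger enforces the business-hours restriction mentioned earlier; the hour range and error number shown are assumptions, not part of the original example:

CREATE OR REPLACE TRIGGER scott.emp_permit_changes
    BEFORE DELETE OR INSERT OR UPDATE
    ON scott.emp
BEGIN
    -- Hypothetical rule: permit DML only from 08:00 up to (but not including) 18:00
    IF TO_CHAR(SYSDATE, 'HH24') NOT BETWEEN '08' AND '17' THEN
        RAISE_APPLICATION_ERROR(-20001,
            'Changes to scott.emp are permitted only during business hours');
    END IF;
END;
/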

You can later remove a trigger from the database by issuing the DROP TRIGGER
statement.
A trigger can be in either of two distinct modes:
■ Enabled. An enabled trigger executes its trigger body if a triggering statement is issued and the trigger restriction, if any, evaluates to true. By default, triggers are enabled when first created.
■ Disabled. A disabled trigger does not execute its trigger body, even if a triggering statement is issued and the trigger restriction (if any) evaluates to true.

To enable or disable triggers using the ALTER TABLE statement, you must own the
table, have the ALTER object privilege for the table, or have the ALTER ANY TABLE
system privilege. To enable or disable an individual trigger using the ALTER
TRIGGER statement, you must own the trigger or have the ALTER ANY TRIGGER
system privilege.
See Also:
■ Oracle Database Concepts for a more detailed description of triggers
■ Oracle Database SQL Reference for syntax of the CREATE TRIGGER statement
■ Oracle Database Application Developer's Guide - Fundamentals for information about creating and using triggers

Enabling Triggers
You enable a disabled trigger using the ALTER TRIGGER statement with the ENABLE
option. To enable the disabled trigger named reorder on the inventory table, enter
the following statement:
ALTER TRIGGER reorder ENABLE;

To enable all triggers defined for a specific table, use the ALTER TABLE statement with the ENABLE ALL TRIGGERS option. To enable all triggers defined for the inventory table, enter the following statement:
ALTER TABLE inventory
    ENABLE ALL TRIGGERS;

See Also: Oracle Database SQL Reference for syntax and other information about the ALTER TRIGGER statement

Disabling Triggers
Consider temporarily disabling a trigger if one of the following conditions is true:
■ An object that the trigger references is not available.
■ You must perform a large data load and want it to proceed quickly without firing triggers.
■ You are loading data into the table to which the trigger applies.

You disable a trigger using the ALTER TRIGGER statement with the DISABLE option.
To disable the trigger reorder on the inventory table, enter the following
statement:
ALTER TRIGGER reorder DISABLE;

You can disable all triggers associated with a table at the same time using the ALTER
TABLE statement with the DISABLE ALL TRIGGERS option. For example, to disable
all triggers defined for the inventory table, enter the following statement:
ALTER TABLE inventory
DISABLE ALL TRIGGERS;

Managing Integrity Constraints
Integrity constraints are rules that restrict the values for one or more columns in a
table. Constraint clauses can appear in either CREATE TABLE or ALTER TABLE
statements, and identify the column or columns affected by the constraint and identify
the conditions of the constraint.
This section discusses the concepts of constraints and identifies the SQL statements used to define and manage integrity constraints. The following topics are contained in this section:
■ Integrity Constraint States
■ Setting Integrity Constraints Upon Definition
■ Modifying, Renaming, or Dropping Existing Integrity Constraints
■ Deferring Constraint Checks
■ Reporting Constraint Exceptions
■ Viewing Constraint Information

See Also:
■ Oracle Database Concepts for a more thorough discussion of integrity constraints
■ Oracle Database Application Developer's Guide - Fundamentals for detailed information and examples of using integrity constraints in applications

Integrity Constraint States
You can specify that a constraint is enabled (ENABLE) or disabled (DISABLE). If a
constraint is enabled, data is checked as it is entered or updated in the database, and
data that does not conform to the constraint is prevented from being entered. If a
constraint is disabled, then data that does not conform can be allowed to enter the
database.
Additionally, you can specify that existing data in the table must conform to the
constraint (VALIDATE). Conversely, if you specify NOVALIDATE, you are not ensured
that existing data conforms.
An integrity constraint defined on a table can be in one of the following states:
■ ENABLE, VALIDATE
■ ENABLE, NOVALIDATE
■ DISABLE, VALIDATE
■ DISABLE, NOVALIDATE

For details about the meaning of these states and an understanding of their
consequences, see the Oracle Database SQL Reference. Some of these consequences are
discussed here.

Disabling Constraints
To enforce the rules defined by integrity constraints, the constraints should always be
enabled. However, consider temporarily disabling the integrity constraints of a table
for the following performance reasons:
■ When loading large amounts of data into a table
■ When performing batch operations that make massive changes to a table (for example, changing every employee's number by adding 1000 to the existing number)
■ When importing or exporting one table at a time

In all three cases, temporarily disabling integrity constraints can improve the
performance of the operation, especially in data warehouse configurations.
It is possible to enter data that violates a constraint while that constraint is disabled.
Thus, you should always enable the constraint after completing any of the operations
listed in the preceding bullet list.

Enabling Constraints
While a constraint is enabled, no row violating the constraint can be inserted into the
table. However, while the constraint is disabled such a row can be inserted. This row is
known as an exception to the constraint. If the constraint is in the enable novalidated
state, violations resulting from data entered while the constraint was disabled remain.
The rows that violate the constraint must be either updated or deleted in order for the
constraint to be put in the validated state.
You can identify exceptions to a specific integrity constraint while attempting to enable
the constraint. See "Reporting Constraint Exceptions" on page 13-14. All rows violating
constraints are noted in an EXCEPTIONS table, which you can examine.

Enable Novalidate Constraint State
When a constraint is in the enable novalidate state, all subsequent statements are
checked for conformity to the constraint. However, any existing data in the table is not
checked. A table with enable novalidated constraints can contain invalid data, but it is
not possible to add new invalid data to it. Enabling constraints in the novalidated state
is most useful in data warehouse configurations that are uploading valid OLTP data.
Enabling a constraint does not require validation. Enabling a constraint novalidate is
much faster than enabling and validating a constraint. Also, validating a constraint
that is already enabled does not require any DML locks during validation (unlike
validating a previously disabled constraint). Enforcement guarantees that no
violations are introduced during the validation. Hence, enabling without validating
enables you to reduce the downtime typically associated with enabling a constraint.


Efficient Use of Integrity Constraints: A Procedure
Using integrity constraint states in the following order can ensure the best benefits:
1. Disable state.
2. Perform the operation (load, export, import).
3. Enable novalidate state.
4. Enable state.
Some benefits of using constraints in this order are:
■ No locks are held.
■ All constraints can go to enable state concurrently.
■ Constraint enabling is done in parallel.
■ Concurrent activity on table is permitted.
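A minimal sketch of this sequence for a single constraint follows; the table name sales_data and constraint name sales_amt_ck are assumptions:

ALTER TABLE sales_data DISABLE CONSTRAINT sales_amt_ck;             -- 1. disable

-- 2. perform the load, export, or import here

ALTER TABLE sales_data ENABLE NOVALIDATE CONSTRAINT sales_amt_ck;   -- 3. enable novalidate
ALTER TABLE sales_data MODIFY CONSTRAINT sales_amt_ck VALIDATE;     -- 4. enable (validate)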

Setting Integrity Constraints Upon Definition
When an integrity constraint is defined in a CREATE TABLE or ALTER TABLE statement, it can be enabled or disabled, and validated or not validated, as determined by your specification of the ENABLE/DISABLE clause. If the ENABLE/DISABLE clause is not specified in a constraint definition, the database automatically enables and validates the constraint.

Disabling Constraints Upon Definition
The following CREATE TABLE and ALTER TABLE statements both define and disable
integrity constraints:
CREATE TABLE emp (
    empno NUMBER(5) PRIMARY KEY DISABLE,
    . . . ;

ALTER TABLE emp
    ADD PRIMARY KEY (empno) DISABLE;

An ALTER TABLE statement that defines and disables an integrity constraint never
fails because of rows in the table that violate the integrity constraint. The definition of
the constraint is allowed because its rule is not enforced.

Enabling Constraints Upon Definition
The following CREATE TABLE and ALTER TABLE statements both define and enable
integrity constraints:
CREATE TABLE emp (
    empno NUMBER(5) CONSTRAINT emp_pk PRIMARY KEY,
    . . . ;

ALTER TABLE emp
    ADD CONSTRAINT emp_pk PRIMARY KEY (empno);

An ALTER TABLE statement that defines and attempts to enable an integrity constraint can fail because rows of the table violate the integrity constraint. In this case, the statement is rolled back and the constraint definition is not stored and not enabled.
When you enable a UNIQUE or PRIMARY KEY constraint, an associated index is created.

Note: An efficient procedure for enabling a constraint that can make use of parallelism is described in "Efficient Use of Integrity Constraints: A Procedure" on page 13-11.

See Also: "Creating an Index Associated with a Constraint" on page 16-7

Modifying, Renaming, or Dropping Existing Integrity Constraints
You can use the ALTER TABLE statement to enable, disable, modify, or drop a
constraint. When the database is using a UNIQUE or PRIMARY KEY index to enforce a
constraint, and constraints associated with that index are dropped or disabled, the
index is dropped, unless you specify otherwise.
While enabled foreign keys reference a PRIMARY or UNIQUE key, you cannot disable or
drop the PRIMARY or UNIQUE key constraint or the index.

Disabling Enabled Constraints
The following statements disable integrity constraints. The second statement specifies
that the associated indexes are to be kept.
ALTER TABLE dept
DISABLE CONSTRAINT dname_ukey;
ALTER TABLE dept
DISABLE PRIMARY KEY KEEP INDEX,
DISABLE UNIQUE (dname, loc) KEEP INDEX;

The following statements enable novalidate disabled integrity constraints:
ALTER TABLE dept
ENABLE NOVALIDATE CONSTRAINT dname_ukey;
ALTER TABLE dept
ENABLE NOVALIDATE PRIMARY KEY,
ENABLE NOVALIDATE UNIQUE (dname, loc);

The following statements enable or validate disabled integrity constraints:
ALTER TABLE dept
    MODIFY CONSTRAINT dname_ukey VALIDATE;
ALTER TABLE dept
    MODIFY PRIMARY KEY ENABLE NOVALIDATE;

The following statements enable disabled integrity constraints:
ALTER TABLE dept
ENABLE CONSTRAINT dname_ukey;
ALTER TABLE dept
ENABLE PRIMARY KEY,
ENABLE UNIQUE (dname, loc);

To disable or drop a UNIQUE key or PRIMARY KEY constraint and all dependent
FOREIGN KEY constraints in a single step, use the CASCADE option of the DISABLE or
DROP clauses. For example, the following statement disables a PRIMARY KEY
constraint and any FOREIGN KEY constraints that depend on it:
ALTER TABLE dept
DISABLE PRIMARY KEY CASCADE;

Renaming Constraints
The ALTER TABLE...RENAME CONSTRAINT statement enables you to rename any
currently existing constraint for a table. The new constraint name must not conflict
with any existing constraint names for a user.
The following statement renames the dname_ukey constraint for table dept:
ALTER TABLE dept
RENAME CONSTRAINT dname_ukey TO dname_unikey;

When you rename a constraint, all dependencies on the base table remain valid.
The RENAME CONSTRAINT clause provides a means of renaming system generated
constraint names.

Dropping Constraints
You can drop an integrity constraint if the rule that it enforces is no longer true, or if
the constraint is no longer needed. You can drop the constraint using the ALTER
TABLE statement with one of the following clauses:
■ DROP PRIMARY KEY
■ DROP UNIQUE
■ DROP CONSTRAINT

The following two statements drop integrity constraints. The second statement keeps
the index associated with the PRIMARY KEY constraint:
ALTER TABLE dept
DROP UNIQUE (dname, loc);
ALTER TABLE emp
DROP PRIMARY KEY KEEP INDEX,
DROP CONSTRAINT dept_fkey;

If FOREIGN KEYs reference a UNIQUE or PRIMARY KEY, you must include the
CASCADE CONSTRAINTS clause in the DROP statement, or you cannot drop the
constraint.

Deferring Constraint Checks
When the database checks a constraint, it signals an error if the constraint is not
satisfied. You can defer checking the validity of constraints until the end of a
transaction.
When you issue the SET CONSTRAINTS statement, the SET CONSTRAINTS mode
lasts for the duration of the transaction, or until another SET CONSTRAINTS
statement resets the mode.
Notes:
■ You cannot issue a SET CONSTRAINT statement inside a trigger.
■ Deferrable unique and primary keys must use nonunique indexes.


Set All Constraints Deferred
Within the application being used to manipulate the data, you must set all constraints deferred before you actually begin processing any data. Use the following SQL statement to set all deferrable constraints deferred:
SET CONSTRAINTS ALL DEFERRED;

Note: The SET CONSTRAINTS statement applies only to the
current transaction. The defaults specified when you create a
constraint remain as long as the constraint exists. The ALTER
SESSION SET CONSTRAINTS statement applies for the current
session only.

Check the Commit (Optional)
You can check for constraint violations before committing by issuing the SET
CONSTRAINTS ALL IMMEDIATE statement just before issuing the COMMIT. If there
are any problems with a constraint, this statement fails and the constraint causing the
error is identified. If you commit while constraints are violated, the transaction is
rolled back and you receive an error message.
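For illustration, the following sketch ties these pieces together: it defines a deferrable foreign key, defers checking, and checks again just before the commit. The table and constraint names are assumptions:

ALTER TABLE emp ADD CONSTRAINT emp_dept_fk
    FOREIGN KEY (deptno) REFERENCES dept (deptno)
    DEFERRABLE INITIALLY IMMEDIATE;

SET CONSTRAINTS ALL DEFERRED;
-- ...issue DML that is temporarily inconsistent, for example,
-- inserting child rows before their parent rows...
SET CONSTRAINTS ALL IMMEDIATE;   -- optional early check before the commit
COMMIT;                          -- deferred constraints are checked here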

Reporting Constraint Exceptions
If exceptions exist when a constraint is validated, an error is returned and the integrity
constraint remains novalidated. When a statement is not successfully executed because
integrity constraint exceptions exist, the statement is rolled back. If exceptions exist,
you cannot validate the constraint until all exceptions to the constraint are either
updated or deleted.
To determine which rows violate the integrity constraint, issue the ALTER TABLE
statement with the EXCEPTIONS option in the ENABLE clause. The EXCEPTIONS
option places the rowid, table owner, table name, and constraint name of all exception
rows into a specified table.
You must create an appropriate exceptions report table to accept information from the
EXCEPTIONS option of the ENABLE clause before enabling the constraint. You can
create an exception table by executing the UTLEXCPT.SQL script or the
UTLEXPT1.SQL script.
Note: Your choice of script to execute for creating the EXCEPTIONS table is dependent upon the compatibility level of your database and the type of table you are analyzing. See the Oracle Database SQL Reference for more information.

Both of these scripts create a table named EXCEPTIONS. You can create additional
exceptions tables with different names by modifying and resubmitting the script.
The following statement attempts to validate the PRIMARY KEY of the dept table, and
if exceptions exist, information is inserted into a table named EXCEPTIONS:
ALTER TABLE dept ENABLE PRIMARY KEY EXCEPTIONS INTO EXCEPTIONS;

If duplicate primary key values exist in the dept table and the name of the PRIMARY
KEY constraint on dept is sys_c00610, then the following query will display those
exceptions:


SELECT * FROM EXCEPTIONS;

The following exceptions are shown:

ROW_ID             OWNER      TABLE_NAME  CONSTRAINT
------------------ ---------- ----------- -----------
AAAAZ9AABAAABvqAAB SCOTT      DEPT        SYS_C00610
AAAAZ9AABAAABvqAAG SCOTT      DEPT        SYS_C00610

A more informative query would be to join the rows in an exception report table and
the master table to list the actual rows that violate a specific constraint, as shown in the
following statement and results:
SELECT deptno, dname, loc FROM dept, EXCEPTIONS
    WHERE EXCEPTIONS.constraint = 'SYS_C00610'
    AND dept.rowid = EXCEPTIONS.row_id;

DEPTNO  DNAME           LOC
------  --------------  -----------
    10  ACCOUNTING      NEW YORK
    10  RESEARCH        DALLAS

All rows that violate a constraint must be either updated or deleted from the table
containing the constraint. When updating exceptions, you must change the value
violating the constraint to a value consistent with the constraint or to a null. After the
row in the master table is updated or deleted, the corresponding rows for the
exception in the exception report table should be deleted to avoid confusion with later
exception reports. The statements that update the master table and the exception
report table should be in the same transaction to ensure transaction consistency.
To correct the exceptions in the previous examples, you might issue the following
transaction:
UPDATE dept SET deptno = 20 WHERE dname = 'RESEARCH';
DELETE FROM EXCEPTIONS WHERE constraint = 'SYS_C00610';
COMMIT;

When managing exceptions, the goal is to eliminate all exceptions in your exception
report table.
Note: While you are correcting current exceptions for a table with the constraint disabled, it is possible for other users to issue statements creating new exceptions. You can avoid this by marking the constraint ENABLE NOVALIDATE before you start eliminating exceptions.

See Also: Oracle Database Reference for a description of the EXCEPTIONS table

Viewing Constraint Information
Oracle Database provides the following views that enable you to see constraint
definitions on tables and to identify columns that are specified in constraints:

View                     Description

DBA_CONSTRAINTS          DBA view describes all constraint definitions in the database.
ALL_CONSTRAINTS          ALL view describes constraint definitions accessible to the
USER_CONSTRAINTS         current user. USER view describes constraint definitions
                         owned by the current user.

DBA_CONS_COLUMNS         DBA view describes all columns in the database that are
ALL_CONS_COLUMNS         specified in constraints. ALL view describes only those
USER_CONS_COLUMNS        columns accessible to the current user that are specified in
                         constraints. USER view describes only those columns owned
                         by the current user that are specified in constraints.
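For example, the following illustrative query lists the state of each constraint on a table you own:

SELECT constraint_name, constraint_type, status, validated
    FROM USER_CONSTRAINTS
    WHERE table_name = 'DEPT';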

See Also: Oracle Database Reference contains descriptions of the columns in these views

Renaming Schema Objects
To rename an object, it must be in your schema. You can rename schema objects in
either of the following ways:
■ Drop and re-create the object
■ Rename the object using the RENAME statement

If you drop and re-create an object, all privileges granted for that object are lost.
Privileges must be regranted when the object is re-created.
Alternatively, a table, view, sequence, or a private synonym of a table, view, or
sequence can be renamed using the RENAME statement. When using the RENAME
statement, integrity constraints, indexes, and grants made for the object are carried
forward for the new name. For example, the following statement renames the
sales_staff view:
RENAME sales_staff TO dept_30;

Note: You cannot use RENAME for a stored PL/SQL program unit, public synonym, index, or cluster. To rename such an object, you must drop and re-create it.

Before renaming a schema object, consider the following effects:
■ All views and PL/SQL program units dependent on a renamed object become invalid, and must be recompiled before next use.
■ All synonyms for a renamed object return an error when used.
See Also: Oracle Database SQL Reference for syntax of the RENAME statement

Managing Object Dependencies
This section describes object dependencies, and contains the following topics:
■ About Dependencies Among Schema Objects
■ Manually Recompiling Views
■ Manually Recompiling Procedures and Functions
■ Manually Recompiling Packages

About Dependencies Among Schema Objects
Some types of schema objects can reference other objects as part of their definition. For
example, a view is defined by a query that references tables or other views. A
procedure’s body can include SQL statements that reference other schema objects. An
object that references another object as part of its definition is called a dependent
object, while the object being referenced is a referenced object.
If you alter the definition of a referenced object, dependent objects may become invalid.
The next time that an invalid object is referenced—for example, used in a query or
invoked by a procedure—the database automatically recompiles it before proceeding
with the current operation. In many cases, this restores the validity of the object.
However, some changes to a referenced object cause invalidations to dependent
objects that cannot be remedied by just recompilation. For example, if you drop a
table, any procedure that queries that table will generate errors upon recompilation. In
this case, code changes are required before that procedure can be made valid again.
Even if all currently invalid objects can be made valid again with just recompilation, it
is obvious that a large number of invalid objects can have a negative impact on
application performance. In addition, if a package were to become invalid and then
run by an application, the automatic recompilation of that package could result in an
error ORA-04068 (existing state of package pkg_name has been
discarded) in one or more sessions. As a DBA, you must therefore be aware of object
dependencies before altering the definitions of schema objects.
Oracle Database automatically tracks dependencies among schema objects and tracks
the current state (VALID or INVALID) of every schema object. You can determine the
state of all objects in a schema with the following query:
select object_name, object_type, status from user_objects;

You can force the database to recompile a schema object using the appropriate SQL
statement with the COMPILE clause.
The following are some generalized rules for the invalidation of schema objects:
■ If you change the definition of a schema object, dependent objects are cascade invalidated. This means that if B depends on A and C depends on B, then changing the definition of A invalidates B, which in turn invalidates C.
■ If you revoke privileges from a schema object, dependent objects are cascade invalidated.
■ There are a small number of situations where altering the definition of a schema object does not invalidate dependent objects. For example, if a procedure includes a query on a view and you change only the view's WHERE clause, the procedure remains valid.

Table 13–1 shows how objects are affected by changes to other objects on which they
depend.

Table 13–1    Operations that Affect Object Status

Operation                                     Resulting Status of Dependent Objects
ALTER TABLE (ADD, RENAME, MODIFY columns)     INVALID
RENAME [TABLE|SEQUENCE|SYNONYM|VIEW]          INVALID
CREATE OR REPLACE VIEW                        Either INVALID or No change, depending on the
                                              dependent object type and what is changed in the
                                              new version of the view
CREATE OR REPLACE SYNONYM                     No change if the synonym is repointed in a
                                              compatible¹ way, otherwise INVALID
DROP any object                               INVALID
CREATE OR REPLACE PROCEDURE²                  INVALID
CREATE OR REPLACE PACKAGE                     INVALID
CREATE OR REPLACE PACKAGE BODY                No change
REVOKE object privilege³ ON object            All objects of user that depend on object are
TO|FROM user                                  INVALID³
REVOKE object privilege³ ON object            All objects in the database that depend on object
TO|FROM PUBLIC                                are INVALID³

Notes to Table 13–1:
1. Compatible means that the select list of the new target object is identical to that of the old target, and the new target and old target have identical privileges. An identical select list means that the names, data types, and order of columns are identical.
2. Standalone procedures and functions, packages, and triggers.
3. Only DML object privileges, including SELECT, INSERT, UPDATE, DELETE, and EXECUTE; revalidation does not require recompiling.

See Also: Oracle Database Concepts for more information on dependencies among schema objects

Manually Recompiling Views
To recompile a view manually, you must have the ALTER ANY TABLE system
privilege or the view must be contained in your schema. Use the ALTER VIEW
statement with the COMPILE clause to recompile a view. The following statement
recompiles the view emp_dept contained in your schema:
ALTER VIEW emp_dept COMPILE;

See Also: Oracle Database SQL Reference for syntax and other information about the ALTER VIEW statement

Manually Recompiling Procedures and Functions
To recompile a standalone procedure manually, you must have the ALTER ANY
PROCEDURE system privilege or the procedure must be contained in your schema. Use
the ALTER PROCEDURE/FUNCTION statement with the COMPILE clause to recompile
a standalone procedure or function. The following statement recompiles the stored
procedure update_salary contained in your schema:
ALTER PROCEDURE update_salary COMPILE;

See Also: Oracle Database SQL Reference for syntax and other information about the ALTER PROCEDURE statement

Manually Recompiling Packages
To recompile a package manually, you must have the ALTER ANY PROCEDURE
system privilege or the package must be contained in your schema. Use the ALTER
PACKAGE statement with the COMPILE clause to recompile either a package body or
both a package specification and body. The following statement recompiles just the
body of the package acct_mgmt:
ALTER PACKAGE acct_mgmt COMPILE BODY;

The next statement compiles both the body and specification of the package
acct_mgmt:
ALTER PACKAGE acct_mgmt COMPILE PACKAGE;

See Also: Oracle Database SQL Reference for syntax and other information about the ALTER PACKAGE statement

Managing Object Name Resolution
Object names referenced in SQL statements can consist of several pieces, separated by
periods. The following describes how the database resolves an object name.
1. Oracle Database attempts to qualify the first piece of the name referenced in the SQL statement. For example, in scott.emp, scott is the first piece. If there is only one piece, the one piece is considered the first piece.
   a. In the current schema, the database searches for an object whose name matches the first piece of the object name. If it does not find such an object, it continues with step b.
   b. The database searches for a public synonym that matches the first piece of the name. If it does not find one, it continues with step c.
   c. The database searches for a schema whose name matches the first piece of the object name. If it finds one, it returns to step a, now using the second piece of the name as the object to find in the qualified schema. If the second piece does not correspond to an object in the previously qualified schema or there is not a second piece, the database returns an error.
   If no schema is found in step c, the object cannot be qualified and the database returns an error.


2. A schema object has been qualified. Any remaining pieces of the name must match a valid part of the found object. For example, if scott.emp.deptno is the name, scott is qualified as a schema, emp is qualified as a table, and deptno must correspond to a column (because emp is a table). If emp is qualified as a package, deptno must correspond to a public constant, variable, procedure, or function of that package.

When global object names are used in a distributed database, either explicitly or indirectly within a synonym, the local database resolves the reference locally. For example, it resolves a synonym to the global object name of a remote table. The partially resolved statement is shipped to the remote database, and the remote database completes the resolution of the object as described here.
Because of how the database resolves references, it is possible for an object to depend
on the nonexistence of other objects. This situation occurs when the dependent object
uses a reference that would be interpreted differently were another object present. For
example, assume the following:
■ At the current point in time, the company schema contains a table named emp.
■ A PUBLIC synonym named emp is created for company.emp and the SELECT privilege for company.emp is granted to the PUBLIC role.
■ The jward schema does not contain a table or private synonym named emp.
■ The user jward creates a view in his schema with the following statement:
  CREATE VIEW dept_salaries AS
      SELECT deptno, MIN(sal) min_sal, AVG(sal) avg_sal, MAX(sal) max_sal
      FROM emp
      GROUP BY deptno
      ORDER BY deptno;

When jward creates the dept_salaries view, the reference to emp is resolved by
first looking for jward.emp as a table, view, or private synonym, none of which is
found, and then as a public synonym named emp, which is found. As a result, the
database notes that jward.dept_salaries depends on the nonexistence of
jward.emp and on the existence of public.emp.
Now assume that jward decides to create a new view named emp in his schema using
the following statement:
CREATE VIEW emp AS
SELECT empno, ename, mgr, deptno
FROM company.emp;

Notice that jward.emp does not have the same structure as company.emp.
As it attempts to resolve references in object definitions, the database internally makes
note of dependencies that the new dependent object has on "nonexistent"
objects--schema objects that, if they existed, would change the interpretation of the
object's definition. Such dependencies must be noted in case a nonexistent object is
later created. If a nonexistent object is created, all dependent objects must be
invalidated so that dependent objects can be recompiled and verified and all
dependent function-based indexes must be marked unusable.
Therefore, in the previous example, as jward.emp is created,
jward.dept_salaries is invalidated because it depends on jward.emp. Then
when jward.dept_salaries is used, the database attempts to recompile the view.
As the database resolves the reference to emp, it finds jward.emp (public.emp is no
longer the referenced object). Because jward.emp does not have a sal column, the
database finds errors when replacing the view, leaving it invalid.


In summary, you must manage dependencies on nonexistent objects checked during
object resolution in case the nonexistent object is later created.
See Also: "Schema Objects and Database Links" on page 29-14 for
information about name resolution in a distributed database

Switching to a Different Schema
The following statement sets the schema of the current session to the schema name
specified in the statement.
ALTER SESSION SET CURRENT_SCHEMA = <schema name>

In subsequent SQL statements, Oracle Database uses this schema name as the schema
qualifier when the qualifier is omitted. In addition, the database uses the temporary
tablespace of the specified schema for sorts, joins, and storage of temporary database
objects. The session retains its original privileges and does not acquire any extra
privileges by the preceding ALTER SESSION statement.
For example:
CONNECT scott/tiger
ALTER SESSION SET CURRENT_SCHEMA = joe;
SELECT * FROM emp;

Because emp is not schema-qualified, the table name is resolved under schema joe.
But if scott does not have select privilege on table joe.emp, then scott cannot
execute the SELECT statement.

Displaying Information About Schema Objects
Oracle Database provides a PL/SQL package that enables you to determine the DDL
that created an object and data dictionary views that you can use to display
information about schema objects. Packages and views that are unique to specific
types of schema objects are described in the associated chapters. This section describes
views and packages that are generic in nature and apply to multiple schema objects.

Using a PL/SQL Package to Display Information About Schema Objects
The Oracle-supplied PL/SQL function DBMS_METADATA.GET_DDL lets you obtain metadata (in the form of the DDL used to create the object) about a schema object.
See Also: Oracle Database PL/SQL Packages and Types Reference for
a description of PL/SQL packages

Example: Using the DBMS_METADATA Package
The DBMS_METADATA package is a powerful tool for obtaining the complete definition
of a schema object. It enables you to obtain all of the attributes of an object in one pass.
The object is described as DDL that can be used to (re)create it.
In the following statements the GET_DDL function is used to fetch the DDL for all
tables in the current schema, filtering out nested tables and overflow segments. The
SET_TRANSFORM_PARAM (with the handle value equal to
DBMS_METADATA.SESSION_TRANSFORM meaning "for the current session") is used to
specify that storage clauses are not to be returned in the SQL DDL. Afterwards, the
session-level transform parameters are reset to their defaults. Once set, transform
parameter values remain in effect until specifically reset to their defaults.

Managing Schema Objects

13-21

Displaying Information About Schema Objects

EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(
DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name)
FROM USER_ALL_TABLES u
WHERE u.nested='NO'
AND (u.iot_type is null or u.iot_type='IOT');
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(
DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT');

The output from DBMS_METADATA.GET_DDL is a LONG datatype. When using
SQL*Plus, your output may be truncated by default. Issue the following SQL*Plus
command before issuing the DBMS_METADATA.GET_DDL statement to ensure that
your output is not truncated:
SQL> SET LONG 9999

See Also: Oracle XML Developer's Kit Programmer's Guide for
detailed information and further examples relating to the use of the
DBMS_METADATA package

Using Views to Display Information About Schema Objects
These views display general information about schema objects:
View                     Description

DBA_OBJECTS              DBA view describes all schema objects in the database. ALL
ALL_OBJECTS              view describes objects accessible to the current user. USER
USER_OBJECTS             view describes objects owned by the current user.

DBA_CATALOG              List the name, type, and owner (USER view does not display
ALL_CATALOG              owner) for all tables, views, synonyms, and sequences in the
USER_CATALOG             database.

DBA_DEPENDENCIES         Describe all dependencies between procedures, packages,
ALL_DEPENDENCIES         functions, package bodies, and triggers, including
USER_DEPENDENCIES        dependencies on views without any database links.

The following sections contain examples of using some of these views.
See Also: Oracle Database Reference for a complete description of data dictionary views

Example 1: Displaying Schema Objects By Type
The following query lists all of the objects owned by the user issuing the query:
SELECT OBJECT_NAME, OBJECT_TYPE
FROM USER_OBJECTS;

The following is the query output:
OBJECT_NAME               OBJECT_TYPE
------------------------- -------------------
EMP_DEPT                  CLUSTER
EMP                       TABLE
DEPT                      TABLE
EMP_DEPT_INDEX            INDEX
PUBLIC_EMP                SYNONYM
EMP_MGR                   VIEW

Example 2: Displaying Dependencies of Views and Synonyms
When you create a view or a synonym, the view or synonym is based on its
underlying base object. The ALL_DEPENDENCIES, USER_DEPENDENCIES, and
DBA_DEPENDENCIES data dictionary views can be used to reveal the dependencies for
a view. The ALL_SYNONYMS, USER_SYNONYMS, and DBA_SYNONYMS data dictionary
views can be used to list the base object of a synonym. For example, the following
query lists the base objects for the synonyms created by user jward:
SELECT TABLE_OWNER, TABLE_NAME, SYNONYM_NAME
FROM DBA_SYNONYMS
WHERE OWNER = 'JWARD';

The following is the query output:
TABLE_OWNER            TABLE_NAME   SYNONYM_NAME
---------------------- ------------ -----------------
SCOTT                  DEPT         DEPT
SCOTT                  EMP          EMP


14
Managing Space for Schema Objects
This chapter offers guidelines for managing space for schema objects. You should
familiarize yourself with the concepts in this chapter before attempting to manage
specific schema objects as described in later chapters.
This chapter contains the following topics:
■ Managing Tablespace Alerts
■ Managing Space in Data Blocks
■ Managing Storage Parameters
■ Managing Resumable Space Allocation
■ Reclaiming Wasted Space
■ Understanding Space Usage of Datatypes
■ Displaying Information About Space Usage for Schema Objects
■ Capacity Planning for Database Objects
Managing Tablespace Alerts
Oracle Database provides proactive help in managing disk space for tablespaces by
alerting you when available space is running low. Two alert thresholds are defined by
default: warning and critical. The warning threshold is the limit at which space is
beginning to run low. The critical threshold is a serious limit that warrants your
immediate attention. The database issues alerts at both thresholds.
There are two ways to specify alert thresholds for both locally managed and dictionary
managed tablespaces:
■ By percent full
  For both warning and critical thresholds, when space used becomes greater
  than or equal to a percent of total space, an alert is issued.
■ By free space remaining (in kilobytes (KB))
  For both warning and critical thresholds, when remaining space falls below
  an amount in KB, an alert is issued. Free-space-remaining thresholds are
  more useful for very large tablespaces.

Alerts for locally managed tablespaces are server-generated. For dictionary managed
tablespaces, Enterprise Manager provides this functionality. See "Server-Generated
Alerts" on page 4-18 for more information.
New tablespaces are assigned alert thresholds as follows:

■ Locally managed tablespace—When you create a new locally managed
  tablespace, it is assigned the default threshold values defined for the
  database. A newly created database has a default of 85% full for the
  warning threshold and 97% full for the critical threshold. Defaults for
  free space remaining thresholds for a new database are both zero
  (disabled). You can change these database defaults, as described later in
  this section.
■ Dictionary managed tablespace—When you create a new dictionary managed
  tablespace, it is assigned the threshold values that Enterprise Manager
  lists for "All others" in the metrics categories "Tablespace Free Space
  (MB) (dictionary managed)" and "Tablespace Space Used (%) (dictionary
  managed)." You change these values on the Metric and Policy Settings page.

Note: In a database that is upgraded from version 9.x or earlier to 10.x,
database defaults for all locally managed tablespace alert thresholds are
set to zero. This setting effectively disables the alert mechanism to avoid
excessive alerts in a newly migrated database.

Setting Alert Thresholds
For each tablespace, you can set just percent-full thresholds, just free-space-remaining
thresholds, or both types of thresholds simultaneously. Setting either type of threshold
to zero disables it.
The ideal setting for the warning threshold is one that issues an alert early
enough for you to resolve the problem before it becomes critical. The critical
threshold should issue an alert early enough that you can still take immediate
action to avoid loss of service.
To set alert threshold values:
■ For locally managed tablespaces, use Enterprise Manager (see Oracle
  Database 2 Day DBA for instructions) or the
  DBMS_SERVER_ALERT.SET_THRESHOLD package procedure (see Oracle Database
  PL/SQL Packages and Types Reference for usage details).
■ For dictionary managed tablespaces, use Enterprise Manager. See Oracle
  Database 2 Day DBA for instructions.

Example—Locally Managed Tablespace
The following example sets the free-space-remaining thresholds in the USERS
tablespace to 10 MB (warning) and 2 MB (critical), and disables the percent-full
thresholds.
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_BYT_FREE,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_LE,
    warning_value           => '10240',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_LE,
    critical_value          => '2048',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');

  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GT,
    warning_value           => '0',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GT,
    critical_value          => '0',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/

Note: When setting non-zero values for percent-full thresholds, use the
greater-than-or-equal-to operator, OPERATOR_GE.

Restoring a Tablespace to Database Default Thresholds
After explicitly setting values for locally managed tablespace alert thresholds, you can
cause the values to revert to the database defaults by setting them to NULL with
DBMS_SERVER_ALERT.SET_THRESHOLD.
Modifying Database Default Thresholds
To modify database default thresholds for locally managed tablespaces, invoke
DBMS_SERVER_ALERT.SET_THRESHOLD as shown in the previous example, but set
object_name to NULL. All tablespaces that use the database default are then
switched to the new default.
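For example, the following is a minimal sketch (the threshold values 80 and 95
are assumed for illustration, not Oracle defaults) of changing the database
default percent-full thresholds. Setting object_name to NULL applies the
values as the new database defaults:

BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '80',  -- assumed new default warning threshold
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '95',  -- assumed new default critical threshold
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => NULL); -- NULL: set the database default
END;
/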

Viewing Alerts
You view alerts by accessing the home page of Enterprise Manager Database Control.

You can also view alerts for locally managed tablespaces with the
DBA_OUTSTANDING_ALERTS view. See "Viewing Alert Data" on page 4-21 for more
information.
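For example, the following query (a minimal sketch; the object_type filter
assumes tablespace alerts are recorded with that object type) lists
outstanding tablespace alerts and the reason for each:

SELECT object_name, object_type, reason
  FROM DBA_OUTSTANDING_ALERTS
 WHERE object_type = 'TABLESPACE';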

Limitations
Threshold-based alerts have the following limitations:
■ Alerts are not issued for locally managed tablespaces that are offline or
  in read-only mode. However, the database reactivates the alert system for
  such tablespaces after they become read/write or available.
■ When you take a tablespace offline or put it in read-only mode, you should
  disable the alerts for the tablespace by setting the thresholds to zero.
  You can then reenable the alerts by resetting the thresholds when the
  tablespace is once again online and in read/write mode.
See Also:
■ "Server-Generated Alerts" on page 4-18 for additional information on
  server-generated alerts in general
■ Oracle Database PL/SQL Packages and Types Reference for information on the
  procedures of the DBMS_SERVER_ALERT package and how to use them
■ Oracle Database Performance Tuning Guide for information on using the
  Automatic Workload Repository to gather statistics on space usage
■ "Reclaiming Wasted Space" on page 14-15 for various ways to reclaim space
  that is no longer being used in the tablespace
■ "Purging Objects in the Recycle Bin" on page 15-40 for information on
  reclaiming recycle bin space

Managing Space in Data Blocks
The following topics are contained in this section:
■ Specifying the INITRANS Parameter

See Also:
■ Oracle Database Concepts for more information on data blocks
■ Oracle Database SQL Reference for syntax and other details of the INITRANS
  physical attributes parameter

Specifying the INITRANS Parameter
INITRANS specifies the number of update transaction entries for which space is
initially reserved in the data block header. Space is reserved in the headers of all data
blocks in the associated segment.
As multiple transactions concurrently access the rows of the same data block, space is
allocated for each update transaction entry in the block. Once the space reserved by
INITRANS is depleted, space for additional transaction entries is allocated out of the
free space in a block, if available. Once allocated, this space effectively becomes a
permanent part of the block header.
Note: In earlier releases of Oracle Database, the MAXTRANS parameter limited
the number of transaction entries that could concurrently use data in a data
block. This parameter has been deprecated. Oracle Database now automatically
allows up to 255 concurrent update transactions for any data block, depending
on the available space in the block.

The database ignores any MAXTRANS value specified by users, but only for new
objects created when the COMPATIBLE initialization parameter is set to 10.0
or greater.


You should consider the following when setting the INITRANS parameter for a
schema object:
■ The space you would like to reserve for transaction entries compared to the
  space you would reserve for database data
■ The number of concurrent transactions that are likely to touch the same
  data blocks at any given time

For example, if a table is very large and only a small number of users
simultaneously access the table, the chances of multiple concurrent
transactions requiring access to the same data block are low. Therefore,
INITRANS can be set low, especially if space is at a premium in the database.
Alternatively, assume that a table is usually accessed by many users at the same time.
In this case, you might consider preallocating transaction entry space by using a high
INITRANS. This eliminates the overhead of having to allocate transaction entry space,
as required when the object is in use.
In general, Oracle recommends that you not change the value of INITRANS from its
default.
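If you do choose to override the default, the following is a minimal sketch
of the syntax (the table name is hypothetical):

CREATE TABLE hr.order_lines (
  line_id NUMBER,
  qty     NUMBER)
  INITRANS 4;  -- reserve space for 4 transaction entries in each block header

-- For an existing table; applies to blocks used from this point on:
ALTER TABLE hr.order_lines INITRANS 8;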

Managing Storage Parameters
This section describes the storage parameters that you can specify for schema
object segments to tell the database how to store the object in the database.
Schema objects include tables, indexes, partitions, clusters, materialized
views, and materialized view logs.
The following topics are contained in this section:
■ Identifying the Storage Parameters
■ Specifying Storage Parameters at Object Creation
■ Setting Storage Parameters for Clusters
■ Setting Storage Parameters for Partitioned Tables
■ Setting Storage Parameters for Index Segments
■ Setting Storage Parameters for LOBs, Varrays, and Nested Tables
■ Changing Values of Storage Parameters
■ Understanding Precedence in Storage Parameters

Identifying the Storage Parameters
Storage parameters determine space allocation for objects when their segments are
created in a tablespace. Not all storage parameters can be specified for every type of
database object, and not all storage parameters can be specified in both the CREATE
and ALTER statements. Storage parameters for objects in locally managed tablespaces
are supported mainly for backward compatibility.
The Oracle Database server manages extents for locally managed tablespaces.
If you specified the UNIFORM clause when the tablespace was created, then the
database creates all extents of a uniform size that you specified (or a
default size) for any objects created in the tablespace. If you specified the
AUTOALLOCATE clause, then the database determines the extent sizing policy
for the tablespace. So, for example, if you specify the INITIAL clause when
you create an object in a locally managed tablespace, you are telling the
database to preallocate at least that much space. The database then
determines the appropriate number of extents needed to allocate that much
space.
Table 14–1 contains a brief description of each storage parameter. For a complete
description of these parameters, including their default, minimum, and maximum
settings, see the Oracle Database SQL Reference.
Table 14–1  Object Storage Parameters

INITIAL      In a tablespace that is specified as EXTENT MANAGEMENT LOCAL,
             the database uses the value of INITIAL with the extent size for
             the tablespace to determine the initial amount of space to
             reserve for the object. For example, in a uniform locally
             managed tablespace with 5M extents, if you specify an INITIAL
             value of 1M, then the database must allocate one 5M extent. If
             the extent size of the tablespace is smaller than the value of
             INITIAL, then the initial amount of space allocated will in
             fact be more than one extent.

MINEXTENTS   In a tablespace that is specified as EXTENT MANAGEMENT LOCAL,
             MINEXTENTS is used to compute the initial amount of space that
             is allocated, which is equal to INITIAL * MINEXTENTS.
             Thereafter MINEXTENTS is set to 1 (as seen in the DBA_SEGMENTS
             view).

BUFFER POOL  Defines a default buffer pool (cache) for a schema object. For
             information on the use of this parameter, see Oracle Database
             Performance Tuning Guide.

Specifying Storage Parameters at Object Creation
At object creation, you can specify storage parameters for each individual schema
object. These parameter settings override any default storage settings. Use the
STORAGE clause of the CREATE or ALTER statement for specifying storage parameters
for the individual object.
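For example, the following sketch (the table name and tablespace are
hypothetical) specifies a storage parameter in a CREATE TABLE statement:

CREATE TABLE hr.sales_history (
  sale_date DATE,
  amount    NUMBER)
  TABLESPACE users
  STORAGE (INITIAL 10M);  -- preallocate at least 10 MB for the table segment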

Setting Storage Parameters for Clusters
Use the STORAGE clause of the CREATE TABLE or ALTER TABLE statement to set the
storage parameters for non-clustered tables.
In contrast, set the storage parameters for the data segments of a cluster using the
STORAGE clause of the CREATE CLUSTER or ALTER CLUSTER statement, rather than
the individual CREATE or ALTER statements that put tables into the cluster. Storage
parameters specified when creating or altering a clustered table are ignored. The
storage parameters set for the cluster override the table storage parameters.
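For example, a minimal sketch (names and sizes are hypothetical):

CREATE CLUSTER hr.emp_dept_clu (deptno NUMBER(2))
  SIZE 600
  TABLESPACE users
  STORAGE (INITIAL 500K);  -- storage is set on the cluster, not on its tables

CREATE INDEX hr.emp_dept_clu_idx ON CLUSTER hr.emp_dept_clu;

CREATE TABLE hr.emp_clu (
  empno  NUMBER(4),
  deptno NUMBER(2))
  CLUSTER hr.emp_dept_clu (deptno);  -- a STORAGE clause here would be ignored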

Setting Storage Parameters for Partitioned Tables
With partitioned tables, you can set default storage parameters at the table level. When
creating a new partition of the table, the default storage parameters are inherited from
the table level (unless you specify them for the individual partition). If no storage
parameters are specified at the table level, then they are inherited from the tablespace.
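The following is a minimal sketch (hypothetical table and dates) of a
table-level default with a partition-level override:

CREATE TABLE hr.sales_by_qtr (
  sale_date DATE,
  amount    NUMBER)
  STORAGE (INITIAL 4M)  -- table-level default, inherited by new partitions
  PARTITION BY RANGE (sale_date) (
    PARTITION sales_q1 VALUES LESS THAN (TO_DATE('01-04-2006','DD-MM-YYYY'))
      STORAGE (INITIAL 8M),  -- explicit setting for this partition only
    PARTITION sales_q2 VALUES LESS THAN (TO_DATE('01-07-2006','DD-MM-YYYY')));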

Setting Storage Parameters for Index Segments
Storage parameters for an index segment created for a table index can be set using the
STORAGE clause of the CREATE INDEX or ALTER INDEX statement.


Storage parameters of an index segment created for the index used to enforce a
primary key or unique key constraint can be set in either of the following ways:
■ In the ENABLE ... USING INDEX clause of the CREATE TABLE or ALTER TABLE
  statement
■ In the STORAGE clause of the ALTER INDEX statement
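For example, a minimal sketch of supplying index storage through the USING
INDEX clause (the table is hypothetical; the ENABLE keyword can be omitted
for constraints that are enabled by default):

CREATE TABLE hr.docs (
  doc_id NUMBER,
  title  VARCHAR2(200),
  CONSTRAINT docs_pk PRIMARY KEY (doc_id)
    USING INDEX TABLESPACE users STORAGE (INITIAL 2M));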

Setting Storage Parameters for LOBs, Varrays, and Nested Tables
A table or materialized view can contain LOB, varray, or nested table column types.
These entities can be stored in their own segments. LOBs and varrays are stored in LOB
segments, while a nested table is stored in a storage table. You can specify a STORAGE
clause for these segments that will override storage parameters specified at the table
level.
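For example, a minimal sketch (hypothetical table) of a LOB column stored in
its own segment with explicit storage:

CREATE TABLE hr.resumes (
  id     NUMBER,
  resume CLOB)
  LOB (resume) STORE AS resume_seg (
    TABLESPACE users
    STORAGE (INITIAL 10M));  -- overrides table-level storage for the LOB segment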
See Also:
■ Oracle Database Application Developer's Guide - Large Objects for more
  information about LOBs
■ Oracle Database Application Developer's Guide - Object-Relational Features
  for more information about varrays and nested tables

Changing Values of Storage Parameters
You can alter default storage parameters for tablespaces and specific storage
parameters for individual objects. Default storage parameters can be reset
for a tablespace; however, the changes affect only new objects created in the
tablespace or new extents allocated for a segment. As discussed previously,
you cannot specify default storage parameters for locally managed
tablespaces, so this discussion of tablespace defaults applies only to
dictionary managed tablespaces.
The INITIAL and MINEXTENTS storage parameters cannot be altered for an
existing table, cluster, or index. If only NEXT is altered for a segment, the
next incremental extent is the size of the new NEXT, and subsequent extents
can grow by PCTINCREASE as usual.
If both NEXT and PCTINCREASE are altered for a segment, the next extent is the new
value of NEXT, and from that point forward, NEXT is calculated using PCTINCREASE
as usual.
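For example, a minimal sketch (hypothetical table) of altering storage
parameters for extents not yet allocated:

ALTER TABLE hr.sales_history
  STORAGE (NEXT 4M PCTINCREASE 0);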

Understanding Precedence in Storage Parameters
Starting with default values, the storage parameters in effect for a database object at a
given time are determined by the following, listed in order of precedence (where
higher numbers take precedence over lower numbers):
1. Oracle Database default values
2. DEFAULT STORAGE clause of CREATE TABLESPACE statement
3. DEFAULT STORAGE clause of ALTER TABLESPACE statement
4. STORAGE clause of CREATE [TABLE | CLUSTER | MATERIALIZED VIEW |
   MATERIALIZED VIEW LOG | INDEX] statement
5. STORAGE clause of ALTER [TABLE | CLUSTER | MATERIALIZED VIEW |
   MATERIALIZED VIEW LOG | INDEX] statement

Any storage parameter specified at the object level overrides the
corresponding option set at the tablespace level. When storage parameters are
not explicitly set at the object level, they default to those at the
tablespace level. When storage parameters are not set at the tablespace
level, Oracle Database system defaults apply. If storage parameters are
altered, the new options apply only to the extents not yet allocated.

Note: The storage parameters for temporary segments always use the default
storage parameters set for the associated tablespace.

Managing Resumable Space Allocation
Oracle Database provides a means for suspending, and later resuming, the execution
of large database operations in the event of space allocation failures. This enables you
to take corrective action instead of the Oracle Database server returning an error to the
user. After the error condition is corrected, the suspended operation automatically
resumes. This feature is called resumable space allocation. The statements that are
affected are called resumable statements.
This section contains the following topics:
■ Resumable Space Allocation Overview
■ Enabling and Disabling Resumable Space Allocation
■ Detecting Suspended Statements
■ Operation-Suspended Alert
■ Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger

Resumable Space Allocation Overview
This section provides an overview of resumable space allocation. It describes how
resumable space allocation works, and specifically defines qualifying statements and
error conditions.

How Resumable Space Allocation Works
The following is an overview of how resumable space allocation works. Details are
contained in later sections.
1. A statement executes in a resumable mode only if its session has been
   enabled for resumable space allocation by one of the following actions:
   ■ The RESUMABLE_TIMEOUT initialization parameter is set to a nonzero
     value.
   ■ The ALTER SESSION ENABLE RESUMABLE statement is issued.
2. A resumable statement is suspended when one of the following conditions
   occurs (these conditions result in corresponding errors being signalled
   for non-resumable statements):
   ■ Out of space condition
   ■ Maximum extents reached condition
   ■ Space quota exceeded condition
3. When the execution of a resumable statement is suspended, there are
   mechanisms to perform user supplied operations, log errors, and query the
   status of the statement execution. When a resumable statement is
   suspended, the following actions are taken:
   ■ The error is reported in the alert log.
   ■ The system issues the Resumable Session Suspended alert.
   ■ If the user registered a trigger on the AFTER SUSPEND system event, the
     user trigger is executed. A user supplied PL/SQL procedure can access
     the error message data using the DBMS_RESUMABLE package and the DBA_ or
     USER_RESUMABLE view.
4. Suspending a statement automatically results in suspending the
   transaction. Thus all transactional resources are held through a
   statement suspend and resume.
5. When the error condition is resolved (for example, as a result of user
   intervention or perhaps sort space released by other queries), the
   suspended statement automatically resumes execution and the Resumable
   Session Suspended alert is cleared.
6. A suspended statement can be forced to throw the exception by using the
   DBMS_RESUMABLE.ABORT() procedure. This procedure can be called by a DBA,
   or by the user who issued the statement.
7. A suspension timeout interval is associated with resumable statements. A
   resumable statement that is suspended for the timeout interval (the
   default is two hours) wakes up and returns the exception to the user.
8. A resumable statement can be suspended and resumed multiple times during
   execution.

What Operations are Resumable?
The following operations are resumable:
■ Queries
  SELECT statements that run out of temporary space (for sort areas) are
  candidates for resumable execution. When using OCI, the calls
  OCIStmtExecute() and OCIStmtFetch() are candidates.
■ DML
  INSERT, UPDATE, and DELETE statements are candidates. The interface used
  to execute them does not matter; it can be OCI, SQLJ, PL/SQL, or another
  interface. Also, INSERT INTO ... SELECT from external tables can be
  resumable.
■ Import/Export
  As for SQL*Loader, a command line parameter controls whether statements
  are resumable after recoverable errors.
■ DDL
  The following statements are candidates for resumable execution:
  – CREATE TABLE ... AS SELECT
  – CREATE INDEX
  – ALTER INDEX ... REBUILD
  – ALTER TABLE ... MOVE PARTITION
  – ALTER TABLE ... SPLIT PARTITION
  – ALTER INDEX ... REBUILD PARTITION
  – ALTER INDEX ... SPLIT PARTITION
  – CREATE MATERIALIZED VIEW
  – CREATE MATERIALIZED VIEW LOG

What Errors are Correctable?
There are three classes of correctable errors:
■ Out of space condition
  The operation cannot acquire any more extents for a table/index/temporary
  segment/undo segment/cluster/LOB/table partition/index partition in a
  tablespace. For example, the following errors fall in this category:
  ORA-1653 unable to extend table ... in tablespace ...
  ORA-1654 unable to extend index ... in tablespace ...
■ Maximum extents reached condition
  The number of extents in a table/index/temporary segment/undo
  segment/cluster/LOB/table partition/index partition equals the maximum
  extents defined on the object. For example, the following errors fall in
  this category:
  ORA-1631 max # extents ... reached in table ...
  ORA-1632 max # extents ... reached in index ...
■ Space quota exceeded condition
  The user has exceeded his assigned space quota in the tablespace.
  Specifically, this is noted by the following error:
  ORA-1536 space quota exceeded for tablespace string

Resumable Space Allocation and Distributed Operations
In a distributed environment, if a user enables or disables resumable space allocation,
or if you, as a DBA, alter the RESUMABLE_TIMEOUT initialization parameter, only the
local instance is affected. In a distributed transaction, sessions or remote instances are
suspended only if RESUMABLE has been enabled in the remote instance.

Parallel Execution and Resumable Space Allocation
In parallel execution, if one of the parallel execution server processes encounters a
correctable error, that server process suspends its execution. Other parallel execution
server processes will continue executing their respective tasks, until either they
encounter an error or are blocked (directly or indirectly) by the suspended server
process. When the correctable error is resolved, the suspended process resumes
execution and the parallel operation continues execution. If the suspended operation is
terminated, the parallel operation aborts, throwing the error to the user.
Different parallel execution server processes may encounter one or more correctable
errors. This may result in firing an AFTER SUSPEND trigger multiple times, in parallel.
Also, if a parallel execution server process encounters a non-correctable error while
another parallel execution server process is suspended, the suspended statement is
immediately aborted.
For parallel execution, every parallel execution coordinator and server process has its
own entry in the DBA_ or USER_RESUMABLE view.

Enabling and Disabling Resumable Space Allocation
Resumable space allocation is only possible when statements are executed
within a session that has resumable mode enabled. There are two means of
enabling and disabling resumable space allocation. You can control it at the
system level with the RESUMABLE_TIMEOUT initialization parameter, or users
can enable it at the session level using clauses of the ALTER SESSION
statement.

Note: Because suspended statements can hold up some system resources, users
must be granted the RESUMABLE system privilege before they are allowed to
enable resumable space allocation and execute resumable statements.

Setting the RESUMABLE_TIMEOUT Initialization Parameter
You can enable resumable space allocation system wide and specify a timeout interval
by setting the RESUMABLE_TIMEOUT initialization parameter. For example, the
following setting of the RESUMABLE_TIMEOUT parameter in the initialization
parameter file causes all sessions to initially be enabled for resumable space allocation
and sets the timeout period to 1 hour:
RESUMABLE_TIMEOUT = 3600

If this parameter is set to 0, then resumable space allocation is disabled initially for all
sessions. This is the default.
You can use the ALTER SYSTEM SET statement to change the value of this parameter
at the system level. For example, the following statement will disable resumable space
allocation for all sessions:
ALTER SYSTEM SET RESUMABLE_TIMEOUT=0;

Within a session, a user can issue the ALTER SESSION SET statement to set the
RESUMABLE_TIMEOUT initialization parameter and enable resumable space
allocation, change a timeout value, or disable resumable mode.

Using ALTER SESSION to Enable and Disable Resumable Space Allocation
A user can enable resumable mode for a session, using the following SQL statement:
ALTER SESSION ENABLE RESUMABLE;

To disable resumable mode, a user issues the following statement:
ALTER SESSION DISABLE RESUMABLE;

The default for a new session is resumable mode disabled, unless the
RESUMABLE_TIMEOUT initialization parameter is set to a nonzero value.
The user can also specify a timeout interval, and can provide a name used to identify a
resumable statement. These are discussed separately in following sections.
See Also: "Using a LOGON Trigger to Set Default Resumable
Mode" on page 14-12

Specifying a Timeout Interval A timeout period, after which a suspended statement will
error if no intervention has taken place, can be specified when resumable mode is
enabled. The following statement specifies that resumable transactions will time out
and error after 3600 seconds:
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600;

The value of TIMEOUT remains in effect until it is changed by another ALTER
SESSION ENABLE RESUMABLE statement, it is changed by another means, or the

Managing Space for Schema Objects

14-11

Managing Resumable Space Allocation

session ends. The default timeout interval when using the ENABLE RESUMABLE
TIMEOUT clause to enable resumable mode is 7200 seconds.
See Also: "Setting the RESUMABLE_TIMEOUT Initialization
Parameter" on page 14-11 for other methods of changing the
timeout interval for resumable space allocation

Naming Resumable Statements Resumable statements can be identified by name. The
following statement assigns a name to resumable statements:
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'insert into table';

The NAME value remains in effect until it is changed by another ALTER SESSION
ENABLE RESUMABLE statement, or the session ends. The default value for NAME is
'User username(userid), Session sessionid, Instance instanceid'.
The name of the statement is used to identify the resumable statement in the
DBA_RESUMABLE and USER_RESUMABLE views.

Using a LOGON Trigger to Set Default Resumable Mode
Another method of setting default resumable mode, other than setting the
RESUMABLE_TIMEOUT initialization parameter, is to register a database-level
LOGON trigger that alters a user's session to enable resumable mode and set
a timeout interval.
Note: If there are multiple triggers registered that change default mode and
timeout for resumable statements, the result will be unspecified because
Oracle Database does not guarantee the order of trigger invocation.
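The following is a minimal sketch of such a trigger (the username and
timeout value are hypothetical):

CREATE OR REPLACE TRIGGER set_resumable_mode
AFTER LOGON ON DATABASE
BEGIN
  -- BATCH_USER is a hypothetical account that runs large load jobs
  IF USER = 'BATCH_USER' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION ENABLE RESUMABLE TIMEOUT 7200 NAME ''batch load''';
  END IF;
END;
/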

Detecting Suspended Statements
When a resumable statement is suspended, the error is not raised to the client. In order
for corrective action to be taken, Oracle Database provides alternative methods for
notifying users of the error and for providing information about the circumstances.

Notifying Users: The AFTER SUSPEND System Event and Trigger
When a resumable statement encounters a correctable error, the system
internally generates the AFTER SUSPEND system event. Users can register
triggers for this event at both the database and schema level. If a user
registers a trigger to handle this system event, the trigger is executed
after a SQL statement has been suspended.
SQL statements executed within an AFTER SUSPEND trigger are always
non-resumable and are always autonomous. Transactions started within the
trigger use the SYSTEM rollback segment. These conditions are imposed to
overcome deadlocks and reduce the chance of the trigger experiencing the
same error condition as the statement.
Users can use the USER_RESUMABLE or DBA_RESUMABLE views, or the
DBMS_RESUMABLE.SPACE_ERROR_INFO function, within triggers to get information
about the resumable statements.
Triggers can also call the DBMS_RESUMABLE package to terminate suspended
statements and modify resumable timeout values. In the following example, the
default system timeout is changed by creating a system wide AFTER SUSPEND trigger
that calls DBMS_RESUMABLE to set the timeout to 3 hours:

14-12 Oracle Database Administrator’s Guide

Managing Resumable Space Allocation

CREATE OR REPLACE TRIGGER resumable_default_timeout
AFTER SUSPEND
ON DATABASE
BEGIN
  DBMS_RESUMABLE.SET_TIMEOUT(10800);
END;
/

See Also: Oracle Database Application Developer's Guide - Fundamentals for
information about system events, triggers, and attribute functions

Using Views to Obtain Information About Suspended Statements
The following views can be queried to obtain information about the status of
resumable statements:
DBA_RESUMABLE, USER_RESUMABLE
    These views contain rows for all currently executing or suspended
    resumable statements. They can be used by a DBA, AFTER SUSPEND trigger,
    or another session to monitor the progress of, or obtain specific
    information about, resumable statements.

V$SESSION_WAIT
    When a statement is suspended, the session invoking the statement is put
    into a wait state. A row is inserted into this view for the session with
    the EVENT column containing "statement suspended, wait error to be
    cleared".

See Also: Oracle Database Reference for specific information about the
columns contained in these views

Using the DBMS_RESUMABLE Package
The DBMS_RESUMABLE package helps control resumable space allocation. The
following procedures can be invoked:

ABORT(sessionID)
    This procedure aborts a suspended resumable statement. The parameter
    sessionID is the session ID in which the statement is executing. For
    parallel DML/DDL, sessionID is any session ID which participates in the
    parallel DML/DDL. Oracle Database guarantees that the ABORT operation
    always succeeds. It may be called either inside or outside of the AFTER
    SUSPEND trigger. The caller of ABORT must be the owner of the session
    with sessionID, have ALTER SYSTEM privilege, or have DBA privileges.

GET_SESSION_TIMEOUT(sessionID)
    This function returns the current timeout value of resumable space
    allocation for the session with sessionID. The returned timeout is in
    seconds. If the session does not exist, this function returns -1.

SET_SESSION_TIMEOUT(sessionID, timeout)
    This procedure sets the timeout interval of resumable space allocation
    for the session with sessionID. The parameter timeout is in seconds. The
    new timeout setting applies to the session immediately. If the session
    does not exist, no action is taken.

GET_TIMEOUT()
    This function returns the current timeout value of resumable space
    allocation for the current session. The returned value is in seconds.

SET_TIMEOUT(timeout)
    This procedure sets a timeout value for resumable space allocation for
    the current session. The parameter timeout is in seconds. The new timeout
    setting applies to the session immediately.

See Also: Oracle Database PL/SQL Packages and Types Reference
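For example, a minimal sketch of reading and raising the current session's
timeout:

DECLARE
  t NUMBER;
BEGIN
  t := DBMS_RESUMABLE.GET_TIMEOUT();   -- current value, in seconds
  IF t < 14400 THEN
    DBMS_RESUMABLE.SET_TIMEOUT(14400); -- raise to 4 hours
  END IF;
END;
/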

Operation-Suspended Alert
When a resumable session is suspended, an operation-suspended alert is issued
on the object that needs additional resources allocated for the operation to
complete. Once the resource is allocated and the operation completes, the
operation-suspended alert is cleared. See "Managing Tablespace Alerts" on
page 14-1 for more information on server-generated alerts.

Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger
In the following example, a system wide AFTER SUSPEND trigger is created and
registered as user SYS at the database level. Whenever a resumable statement is
suspended in any session, this trigger can have either of two effects:
■ If an undo segment has reached its space limit, then a message is sent to
  the DBA and the statement is aborted.
■ If any other recoverable error has occurred, the timeout interval is reset
  to 8 hours.

Here are the statements for this example:
CREATE OR REPLACE TRIGGER resumable_default
AFTER SUSPEND
ON DATABASE
DECLARE
   /* declaring the transaction in this trigger as autonomous is not
      required, because transactions within a trigger are always autonomous */
   PRAGMA AUTONOMOUS_TRANSACTION;
   cur_sid           NUMBER;
   cur_inst          NUMBER;
   errno             NUMBER;
   -- VARCHAR2 lengths below are assumed; PL/SQL requires a length
   err_type          VARCHAR2(64);
   object_owner      VARCHAR2(64);
   object_type       VARCHAR2(64);
   table_space_name  VARCHAR2(64);
   object_name       VARCHAR2(256);
   sub_object_name   VARCHAR2(256);
   error_txt         VARCHAR2(4000);
   msg_body          VARCHAR2(4000);
   ret_value         BOOLEAN;
   mail_conn         UTL_SMTP.CONNECTION;
BEGIN
   -- Get session ID
   SELECT DISTINCT(SID) INTO cur_sid FROM V$MYSTAT;
   -- Get instance number
   cur_inst := userenv('instance');
   -- Get space error information
   ret_value :=
     DBMS_RESUMABLE.SPACE_ERROR_INFO(err_type, object_type, object_owner,
        table_space_name, object_name, sub_object_name);
   /*
   -- If the error is related to undo segments, log the error, send email
   -- to the DBA, and abort the statement. Otherwise, set timeout to 8 hours.
   --
   -- sys.rbs_error is a table which is to be created by a DBA manually and
   -- defined as (sql_text VARCHAR2(1000), error_msg VARCHAR2(4000),
   -- suspend_time DATE)
   */
   IF object_type = 'UNDO SEGMENT' THEN
      /* LOG ERROR */
      INSERT INTO sys.rbs_error (
         SELECT SQL_TEXT, ERROR_MSG, SUSPEND_TIME
         FROM DBA_RESUMABLE
         WHERE SESSION_ID = cur_sid AND INSTANCE_ID = cur_inst
      );
      SELECT ERROR_MSG INTO error_txt FROM DBA_RESUMABLE
         WHERE SESSION_ID = cur_sid AND INSTANCE_ID = cur_inst;
      -- Send email to recipient through the UTL_SMTP package
      msg_body := 'Subject: Space Error Occurred

          Space limit reached for undo segment ' || object_name ||
          ' on ' || TO_CHAR(SYSDATE, 'Month dd, YYYY, HH:MIam') ||
          '. Error message was ' || error_txt;
      mail_conn := UTL_SMTP.OPEN_CONNECTION('localhost', 25);
      UTL_SMTP.HELO(mail_conn, 'localhost');
      UTL_SMTP.MAIL(mail_conn, 'sender@localhost');
      UTL_SMTP.RCPT(mail_conn, 'recipient@localhost');
      UTL_SMTP.DATA(mail_conn, msg_body);
      UTL_SMTP.QUIT(mail_conn);
      -- Abort the statement
      DBMS_RESUMABLE.ABORT(cur_sid);
   ELSE
      -- Set timeout to 8 hours
      DBMS_RESUMABLE.SET_TIMEOUT(28800);
   END IF;
   /* commit the autonomous transaction */
   COMMIT;
END;
/

Reclaiming Wasted Space
This section explains how to reclaim wasted space, and also introduces the Segment
Advisor, which is the Oracle Database component that identifies segments that have
space available for reclamation.
In This Section
■ Understanding Reclaimable Unused Space
■ Using the Segment Advisor
■ Shrinking Database Segments Online
■ Deallocating Unused Space

Understanding Reclaimable Unused Space
Over time, updates and deletes on objects within a tablespace can create pockets of
empty space that individually are not large enough to be reused for new data. This
type of empty space is referred to as fragmented free space.
Objects with fragmented free space can result in much wasted space, and can impact
database performance. The preferred way to defragment and reclaim this space is to
perform an online segment shrink. This process consolidates fragmented free space
below the high water mark and compacts the segment. After compaction, the high
water mark is moved, resulting in new free space above the high water mark. That
space above the high water mark is then deallocated. The segment remains available
for queries and DML during most of the operation, and no extra disk space need be
allocated.
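For example, a minimal sketch of an online segment shrink (the table is
hypothetical; shrink requires row movement to be enabled on the table):

ALTER TABLE hr.emp_history ENABLE ROW MOVEMENT;
ALTER TABLE hr.emp_history SHRINK SPACE;  -- compact and move the high water mark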
You use the Segment Advisor to identify segments that would benefit from online
segment shrink. Only segments in locally managed tablespaces with automatic
segment space management (ASSM) are eligible. Other restrictions on segment type
exist. For more information, see "Shrinking Database Segments Online" on page 14-29.
If a table with reclaimable space is not eligible for online segment shrink, or if you
want to make changes to logical or physical attributes of the table while reclaiming
space, you can use online table redefinition as an alternative to segment shrink.
Online redefinition is also referred to as reorganization. Unlike online segment shrink,
it requires extra disk space to be allocated. See "Redefining Tables Online" on
page 15-21 for more information.

Using the Segment Advisor
The Segment Advisor identifies segments that have space available for reclamation. It
performs its analysis by examining usage and growth statistics in the Automatic
Workload Repository (AWR), and by sampling the data in the segment. It is configured
to run automatically at regular intervals, and you can also run it on demand
(manually). The regularly scheduled Segment Advisor run is known as the Automatic
Segment Advisor.
The Segment Advisor generates the following types of advice:
■ If the Segment Advisor determines that an object has a significant amount
  of free space, it recommends online segment shrink. If the object is a
  table that is not eligible for shrinking, as in the case of a table in a
  tablespace without automatic segment space management, the Segment Advisor
  recommends online table redefinition.
■ If the Segment Advisor encounters a table with row chaining above a
  certain threshold, it records the fact that the table has an excess of
  chained rows.

Note: The Segment Advisor flags only the type of row chaining that results
from updates that increase row length.

If you receive a space management alert, or if you decide that you want to
reclaim space, you should start with the Segment Advisor.


To use the Segment Advisor:
1. Check the results of the Automatic Segment Advisor.
   To understand the Automatic Segment Advisor, see "Automatic Segment
   Advisor", later in this section. For details on how to view results, see
   "Viewing Segment Advisor Results" on page 14-21.
2. (Optional) Obtain updated results on individual segments by rerunning the
   Segment Advisor manually.
   See "Running the Segment Advisor Manually", later in this section.

Automatic Segment Advisor
The Automatic Segment Advisor is started by a Scheduler job that is configured to run
during the default maintenance window. The default maintenance window is specified
in the Scheduler, and is initially defined as follows:
■ Weeknights, Monday through Friday, from 10:00 p.m. to 6:00 a.m. (8 hours
  each night)
■ Weekends, from Saturday morning at 12:00 a.m. to Monday morning at
  12:00 a.m. (for a total of 48 hours)

The Automatic Segment Advisor does not analyze every database object.
Instead, it examines database statistics, samples segment data, and then
selects the following objects to analyze:
■ Tablespaces that have exceeded a critical or warning space threshold
■ Segments that have the most activity
■ Segments that have the highest growth rate

If an object is selected for analysis but the maintenance window expires before the
Segment Advisor can process the object, the object is included in the next Automatic
Segment Advisor run.
You cannot change the set of tablespaces and segments that the Automatic Segment
Advisor selects for analysis. You can, however, enable or disable the Automatic
Segment Advisor job, change the times during which the Automatic Segment Advisor
is scheduled to run, or adjust Automatic Segment Advisor system resource utilization.
See "Configuring the Automatic Segment Advisor Job" on page 14-27 for more
information.
See Also:
■ "Viewing Segment Advisor Results" on page 14-21
■ Chapter 27, "Using the Scheduler"
■ Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window"

Running the Segment Advisor Manually
You can manually run the Segment Advisor at any time with Enterprise Manager or
with PL/SQL package procedure calls. Reasons to manually run the Segment Advisor
include the following:
■ You want to analyze a tablespace or segment that was not selected by the
  Automatic Segment Advisor.
■ You want to repeat the analysis of an individual tablespace or segment to
  get more up-to-date recommendations.

You can request advice from the Segment Advisor at three levels:
■ Segment level—Advice is generated for a single segment, such as an
  unpartitioned table, a partition or subpartition of a partitioned table,
  an index, or a LOB column.
■ Object level—Advice is generated for an entire object, such as a table or
  index. If the object is partitioned, advice is generated on all the
  partitions of the object. In addition, if you run the Segment Advisor
  manually from Enterprise Manager, you can request advice on the object's
  dependent objects, such as indexes and LOB segments for a table.
■ Tablespace level—Advice is generated for every segment in a tablespace.

The OBJECT_TYPE column of Table 14–3 on page 14-20 shows the types of objects
for which you can request advice.
To run the Segment Advisor, you must have ADVISOR and CREATE JOB or CREATE
ANY JOB privileges.
Running the Segment Advisor Manually with Enterprise Manager There are two ways to run
the Segment Advisor manually with Enterprise Manager:
■ Using the Segment Advisor Wizard
  This method enables you to request advice at the tablespace level or
  object level. At the object level, you can request advice on tables,
  indexes, table partitions, and index partitions. Dependent objects such as
  LOB segments cannot be included in the analysis.
■ Using the Run Segment Advisor command on a schema object display page
  For example, if you display a list of tables on the Tables page (in the
  Administration section of Enterprise Manager Database Control), you can
  select a table and then select the Run Segment Advisor command from the
  Actions menu.

  Figure 14–1 Tables page

  This method enables you to include the schema object's dependent objects
  in the Segment Advisor run. For example, if you select a table and select
  the Run Segment Advisor command, Enterprise Manager displays the table's
  dependent objects, such as partitions, index segments, LOB segments, and
  so on. You can then select dependent objects to include in the run.
In both cases, Enterprise Manager creates the Segment Advisor task as an
Oracle Database Scheduler job. You can schedule the job to run immediately,
or you can take advantage of advanced scheduling features offered by the
Scheduler.
To run the Segment Advisor manually with the Segment Advisor Wizard:
1. From the database Home page, under Related Links, click Advisor Central.
   The Advisor Central page appears. (See Figure 14–2.)
2. Under Advisors, click Segment Advisor.
   The first page of the Segment Advisor wizard appears.
3. Follow the wizard steps to schedule the Segment Advisor job, and then
   click Submit on the final wizard page.
   The Advisor Central page reappears, with the new Segment Advisor job at
   the top of the list under the Results heading. The job status is
   SCHEDULED. (If you don't see your job, use the search fields above the
   list to display it.)
4. Check the status of the job. If it is not COMPLETED, click the Refresh
   button at the top of the page repeatedly. (Do not use your browser's
   Refresh icon.)
   When the job status changes to COMPLETED, select the job by clicking in
   the Select column, and then click View Result.

Figure 14–2 Advisor Central page

See Also: Chapter 27, "Using the Scheduler" for more information about the
advanced scheduling features of the Scheduler

Running the Segment Advisor Manually with PL/SQL You can also run the Segment Advisor
with the DBMS_ADVISOR package. You use package procedures to create a Segment
Advisor task, set task arguments, and then execute the task. Table 14–2 shows the
procedures that are relevant for the Segment Advisor. Please refer to Oracle Database
PL/SQL Packages and Types Reference for more details on these procedures.


Table 14–2  DBMS_ADVISOR package procedures relevant to the Segment Advisor

CREATE_TASK
    Use this procedure to create the Segment Advisor task. Specify 'Segment
    Advisor' as the value of the ADVISOR_NAME parameter.

CREATE_OBJECT
    Use this procedure to identify the target object for segment space
    advice. The parameter values of this procedure depend upon the object
    type. Table 14–3 lists the parameter values for each type of object.
    Note: To request advice on an IOT overflow segment, use an object type
    of TABLE, TABLE PARTITION, or TABLE SUBPARTITION. Use the following
    query to find the overflow segment for an IOT and to determine the
    overflow segment table name to use with CREATE_OBJECT:
    select table_name, iot_name, iot_type from dba_tables;

SET_TASK_PARAMETER
    Use this procedure to describe the segment advice that you need.
    Table 14–4 shows the relevant input parameters of this procedure.
    Parameters not listed here are not used by the Segment Advisor.

EXECUTE_TASK
    Use this procedure to execute the Segment Advisor task.

Table 14–3  Input for DBMS_ADVISOR.CREATE_OBJECT

OBJECT_TYPE         ATTR1            ATTR2         ATTR3                    ATTR4
TABLESPACE          tablespace name  NULL          NULL                     Unused. Specify NULL.
TABLE               schema name      table name    NULL                     Unused. Specify NULL.
INDEX               schema name      index name    NULL                     Unused. Specify NULL.
TABLE PARTITION     schema name      table name    table partition name     Unused. Specify NULL.
INDEX PARTITION     schema name      index name    index partition name     Unused. Specify NULL.
TABLE SUBPARTITION  schema name      table name    table subpartition name  Unused. Specify NULL.
INDEX SUBPARTITION  schema name      index name    index subpartition name  Unused. Specify NULL.
LOB                 schema name      segment name  NULL                     Unused. Specify NULL.
LOB PARTITION       schema name      segment name  lob partition name       Unused. Specify NULL.
LOB SUBPARTITION    schema name      segment name  lob subpartition name    Unused. Specify NULL.

Table 14–4  Input for DBMS_ADVISOR.SET_TASK_PARAMETER

time_limit
    Description: The time limit for the Segment Advisor run, specified in
    seconds.
    Possible values: Any number of seconds.
    Default value: UNLIMITED

recommend_all
    Description: Whether the Segment Advisor should generate findings for
    all segments.
    Possible values: TRUE: Findings are generated on all segments specified,
    whether or not space reclamation is recommended. FALSE: Findings are
    generated only for those objects that generate recommendations for
    space reclamation.
    Default value: TRUE


Example The example that follows shows how to use the DBMS_ADVISOR
procedures to run the Segment Advisor for the sample table hr.employees. The user
executing these package procedures must have the EXECUTE object privilege on the
package or the ADVISOR system privilege.

Note that passing an object type of TABLE to DBMS_ADVISOR.CREATE_OBJECT
amounts to an object level request. If the table is not partitioned, the table segment is
analyzed (without any dependent segments like index or LOB segments). If the table is
partitioned, the Segment Advisor analyzes all table partitions and generates separate
findings and recommendations for each.
variable id number;
begin
  declare
    name   varchar2(100);
    descr  varchar2(500);
    obj_id number;
  begin
    name  := 'Manual_Employees';
    descr := 'Segment Advisor Example';
    dbms_advisor.create_task (
      advisor_name => 'Segment Advisor',
      task_id      => :id,
      task_name    => name,
      task_desc    => descr);
    dbms_advisor.create_object (
      task_name    => name,
      object_type  => 'TABLE',
      attr1        => 'HR',
      attr2        => 'EMPLOYEES',
      attr3        => NULL,
      attr4        => NULL,
      attr5        => NULL,
      object_id    => obj_id);
    dbms_advisor.set_task_parameter(
      task_name    => name,
      parameter    => 'recommend_all',
      value        => 'TRUE');
    dbms_advisor.execute_task(name);
  end;
end;
/

Viewing Segment Advisor Results
The Segment Advisor creates several types of results: recommendations, findings,
actions, and objects. You can view results in the following ways:
■ With Enterprise Manager
■ By querying the DBA_ADVISOR_* views
■ By calling the DBMS_SPACE.ASA_RECOMMENDATIONS procedure

Table 14–5 describes the various result types and their associated
DBA_ADVISOR_* views.

Table 14–5  Segment Advisor Result Types

Recommendations (DBA_ADVISOR_RECOMMENDATIONS)
    If a segment would benefit from a segment shrink or reorganization, the
    Segment Advisor generates a recommendation for the segment. Table 14–6
    shows examples of generated findings and recommendations.

Findings (DBA_ADVISOR_FINDINGS)
    Findings are a report of what the Segment Advisor observed in analyzed
    segments. Findings include space used and free space statistics for each
    analyzed segment. Not all findings result in a recommendation. (There may
    be only a few recommendations, but there could be many findings.) When
    running the Segment Advisor manually with PL/SQL, if you specify 'TRUE'
    for recommend_all in the SET_TASK_PARAMETER procedure, then the Segment
    Advisor generates a finding for each segment that qualifies for analysis,
    whether or not a recommendation is made for that segment. For row
    chaining advice, the Automatic Segment Advisor generates findings only,
    and not recommendations. If the Automatic Segment Advisor has no space
    reclamation recommendations to make, it does not generate findings.

Actions (DBA_ADVISOR_ACTIONS)
    Every recommendation is associated with a suggested action to perform:
    either segment shrink or online redefinition (reorganization). The
    DBA_ADVISOR_ACTIONS view provides either the SQL that you need to
    perform a segment shrink, or a suggestion to reorganize the object.

Objects (DBA_ADVISOR_OBJECTS)
    All findings, recommendations, and actions are associated with an
    object. If the Segment Advisor analyzes more than one segment, as with a
    tablespace or partitioned table, then one entry is created in the
    DBA_ADVISOR_OBJECTS view for each analyzed segment. Table 14–3 defines
    the columns in this view to query for information on the analyzed
    segments. You can correlate the objects in this view with the objects in
    the findings, recommendations, and actions views.

See Also:
■ Oracle Database Reference for details on the DBA_ADVISOR_* views
■ Oracle Database PL/SQL Packages and Types Reference for details on the
  DBMS_SPACE.ASA_RECOMMENDATIONS function

Viewing Segment Advisor Results with Enterprise Manager
With Enterprise Manager (EM), you can view Segment Advisor results for both
Automatic Segment Advisor runs and manual Segment Advisor runs. You can view
the following types of results:
■ All recommendations (multiple automatic and manual Segment Advisor runs)
■ Recommendations from the last Automatic Segment Advisor run
■ Recommendations from a specific run
■ Row chaining findings

You can also view a list of the segments that were analyzed by the last
Automatic Segment Advisor run.
To view Segment Advisor results with EM—All runs:
1. On the database Home page, under the Space Summary heading, click the
   numeric link next to the title Segment Advisor Recommendations.
   The Segment Advisor Recommendations page appears. Recommendations are
   organized by tablespace.

   Figure 14–3 Segment Advisor Recommendations page

2. If any recommendations are present, click in the Select column to select
   a tablespace, and then click Recommendation Details.
   The Recommendation Details page appears. You can initiate the recommended
   activity from this page (shrink or reorganize).

   Figure 14–4 Recommendation Details page

Tip: The list entries are sorted in descending order by reclaimable space.
You can click column headings to change the sort order or to change from
ascending to descending order.

To view Segment Advisor results with EM—Last Automatic Segment Advisor run:
1. On the database Home page, under the Space Summary heading, click the
   numeric link next to the title Segment Advisor Recommendations.
   The Segment Advisor Recommendations page appears. (See Figure 14–3.)
2. In the View drop-down list, select Recommendations from Last Automatic
   Run.
3. If any recommendations are present, click in the Select column to select
   a tablespace, and then click Recommendation Details.
   The Recommendation Details page appears. (See Figure 14–4.) You can
   initiate the recommended activity from this page (shrink or reorganize).

To view Segment Advisor results with EM—Specific run:
1. Start at the Advisor Central page.
   If you ran the Segment Advisor with the Enterprise Manager wizard, the
   Advisor Central page appears after you submit the Segment Advisor task.
   Otherwise, to get to this page, on the database Home page, under Related
   Links, click Advisor Central.
2. Check that your task appears in the list under the Results heading. If it
   does not, complete these steps (see Figure 14–2 on page 14-19):
   a. In the Search section of the page, under Advisor Tasks, select Segment
      Advisor in the Advisory Type list.
   b. Enter the task name. Or, in the Advisor Runs list, select Last Run.
   c. Click Go.
      Your Segment Advisor task appears in the Results section.
3. Check the status of the job. If it is not COMPLETED, click the Refresh
   button at the top of the page until your task status shows COMPLETED. (Do
   not use your browser's refresh icon.)
4. Click the task name.
   The Segment Advisor Task page appears, with recommendations organized by
   tablespace.
5. Select a tablespace in the list, and then click Recommendation Details.
   The Recommendation Details page appears. (See Figure 14–4.) You can
   initiate the recommended activity from this page (shrink or reorganize).

To view row chaining findings:
1.  On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.
    The Segment Advisor Recommendations page appears. (See Figure 14–3.)
2.  Under the Related Links heading, click Chained Row Analysis.
    The Chained Row Analysis page appears, showing all segments that have chained rows, with a chained rows percentage for each.

To view the list of segments that were analyzed by the last Automatic Segment Advisor run:
1.  On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.
    The Segment Advisor Recommendations page appears.
2.  Under the Related Links heading, click Automatic Segment Advisor Job.
    The Automatic Segment Advisor Job page appears.
3.  Under the Last Run heading, click the View Processed Segments link.
    The Segments Processed In Last Run page appears. Use the search fields above the list to limit the segments displayed.

Viewing Segment Advisor Results by Querying the DBA_ADVISOR_* Views
The headings of Table 14–6 show the columns in the DBA_ADVISOR_* views that
contain output from the Segment Advisor. Refer to Oracle Database Reference for a
description of these views. The table contents summarize the possible outcomes. In
addition, Table 14–3 on page 14-20 defines the columns in the
DBA_ADVISOR_OBJECTS view that contain information on the analyzed segments.
Before querying the DBA_ADVISOR_* views, you can check that the Segment Advisor
task is complete by querying the STATUS column in DBA_ADVISOR_TASKS.
select task_name, status from dba_advisor_tasks
where owner = 'STEVE' and advisor_name = 'Segment Advisor';
TASK_NAME                      STATUS
------------------------------ -----------
Manual Employees               COMPLETED

The following example shows how to query the DBA_ADVISOR_* views to retrieve
findings from all Segment Advisor runs submitted by user STEVE:
select af.task_name, ao.attr2 segname, ao.attr3 partition, ao.type, af.message
from dba_advisor_findings af, dba_advisor_objects ao
where ao.task_id = af.task_id
and ao.object_id = af.object_id
and ao.owner = 'STEVE';

TASK_NAME          SEGNAME      PARTITION       TYPE             MESSAGE
------------------ ------------ --------------- ---------------- --------------------------
Manual_Employees   EMPLOYEES                    TABLE            The free space in the
                                                                 object is less than 10MB.

Manual_Salestable4 SALESTABLE4  SALESTABLE4_P1  TABLE PARTITION  Perform shrink, estimated
                                                                 savings is 74444154 bytes.

Manual_Salestable4 SALESTABLE4  SALESTABLE4_P2  TABLE PARTITION  The free space in the
                                                                 object is less than 10MB.


Table 14–6    Segment Advisor Outcomes: Summary

MESSAGE column of             MORE_INFO column of           BENEFIT_TYPE column of         ATTR1 column of
DBA_ADVISOR_FINDINGS          DBA_ADVISOR_FINDINGS          DBA_ADVISOR_RECOMMENDATIONS    DBA_ADVISOR_ACTIONS
----------------------------  ----------------------------  -----------------------------  --------------------------
Insufficient information to   None                          None                           None
make a recommendation.

The free space in the object  Allocated Space:xxx: Used     None                           None
is less than 10MB.            Space:xxx: Reclaimable
                              Space:xxx

The object has some free      Allocated Space:xxx: Used     None                           None
space but cannot be shrunk    Space:xxx: Reclaimable
because...                    Space:xxx

The free space in the object  Allocated Space:xxx: Used     None                           None
is less than the size of the  Space:xxx: Reclaimable
last extent.                  Space:xxx

Perform shrink, estimated     Allocated Space:xxx: Used     Perform shrink, estimated      The command to execute.
savings is xxx bytes.         Space:xxx: Reclaimable        savings is xxx bytes.          For example: ALTER object
                              Space:xxx                                                    SHRINK SPACE;

Enable row movement of the    Allocated Space:xxx: Used     Enable row movement of the     The command to execute.
table schema.table and        Space:xxx: Reclaimable        table schema.table and         For example: ALTER object
perform shrink, estimated     Space:xxx                     perform shrink, estimated      SHRINK SPACE;
savings is xxx bytes.                                       savings is xxx bytes.

Perform re-org on the object  Allocated Space:xxx: Used     Perform re-org on the object   Perform reorg
object, estimated savings is  Space:xxx: Reclaimable        object, estimated savings is
xxx bytes. (Note: This        Space:xxx                     xxx bytes.
finding is for objects with
reclaimable space that are
not eligible for online
segment shrink.)

The object has chained rows   xx percent chained rows can   None                           None
that can be removed by        be removed by re-org.
re-org.

Viewing Segment Advisor Results with DBMS_SPACE.ASA_RECOMMENDATIONS
The ASA_RECOMMENDATIONS procedure in the DBMS_SPACE package returns a nested
table object that contains findings or recommendations for Automatic Segment
Advisor runs and, optionally, manual Segment Advisor runs. Calling this procedure
may be easier than working with the DBA_ADVISOR_* views, because the procedure
performs all the required joins for you and returns information in an easily
consumable format.
The following query returns the recommendations made by the most recent run of the Automatic Segment Advisor, along with the suggested command for carrying out each recommendation:
select tablespace_name, segment_name, segment_type, partition_name,
recommendations, c1 from
table(dbms_space.asa_recommendations('FALSE', 'FALSE', 'FALSE'));

TABLESPACE_NAME                SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ------------------------------ --------------
PARTITION_NAME
------------------------------
RECOMMENDATIONS
-----------------------------------------------------------------------------
C1
-----------------------------------------------------------------------------
TVMDS_ASSM                     ORDERS1                        TABLE PARTITION
ORDERS1_P2
Perform shrink, estimated savings is 57666422 bytes.
alter table "STEVE"."ORDERS1" modify partition "ORDERS1_P2" shrink space

TVMDS_ASSM                     ORDERS1                        TABLE PARTITION
ORDERS1_P1
Perform shrink, estimated savings is 45083514 bytes.
alter table "STEVE"."ORDERS1" modify partition "ORDERS1_P1" shrink space

TVMDS_ASSM_NEW                 ORDERS_NEW                     TABLE

Perform shrink, estimated savings is 155398992 bytes.
alter table "STEVE"."ORDERS_NEW" shrink space

TVMDS_ASSM_NEW                 ORDERS_NEW_INDEX               INDEX

Perform shrink, estimated savings is 102759445 bytes.
alter index "STEVE"."ORDERS_NEW_INDEX" shrink space
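The three arguments are Boolean values passed as strings. Assuming the documented parameter order (all_runs, show_manual, show_findings), a variation such as the following sketch would also return recommendations from manual Segment Advisor runs, not just the most recent automatic run:

select tablespace_name, segment_name, segment_type, partition_name,
       recommendations, c1
from table(dbms_space.asa_recommendations('TRUE', 'TRUE', 'FALSE'));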

See Oracle Database PL/SQL Packages and Types Reference for details on
DBMS_SPACE.ASA_RECOMMENDATIONS.

Configuring the Automatic Segment Advisor Job
The Automatic Segment Advisor is run by a Scheduler job. As such, you can use
Enterprise Manager or PL/SQL package procedure calls to modify job attributes to
suit your needs. The following are examples of modifications that you can make:
■ Disable or enable the job
■ Change the job schedule
■ Adjust system resources consumed by the job

You can call DBMS_SCHEDULER package procedures to make these changes, but the easier way is to use Enterprise Manager.
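As a minimal sketch of the PL/SQL route, the following calls disable and later re-enable the job named in the procedure below (you must hold the privileges listed there):

BEGIN
  -- Stop the Automatic Segment Advisor from running in maintenance windows
  DBMS_SCHEDULER.DISABLE('SYS.AUTO_SPACE_ADVISOR_JOB');
END;
/

BEGIN
  -- Re-enable the job when automatic analysis is wanted again
  DBMS_SCHEDULER.ENABLE('SYS.AUTO_SPACE_ADVISOR_JOB');
END;
/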

To configure the Automatic Segment Advisor job with Enterprise Manager:
1.  Log in to Enterprise Manager as user SYS or as a user with the following privileges:
    ■ ALTER privilege on the Automatic Segment Advisor job SYS.AUTO_SPACE_ADVISOR_JOB
    ■ MANAGE SCHEDULER system privilege
2.  On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.
    The Segment Advisor Recommendations page appears.
3.  Under the Related Links heading, click the link entitled Automatic Segment Advisor Job.
    The Automatic Segment Advisor Job page appears.
4.  Click Configure.
    The Edit Job page appears. This is the generic Scheduler page that enables you to modify job attributes.
5.  Modify job attributes as needed, including enabling or disabling the job. Click the Help link at the top of the page for information on the Scheduler and on modifying job attributes.
6.  Modify the job schedule, job resource consumption, or other job attributes using the generic Scheduler pages in Enterprise Manager.
    ■ To adjust the job schedule, modify the window group SYS.MAINTENANCE_WINDOW_GROUP or its member windows.
    ■ To adjust system resources consumed by the job, either modify the job class AUTO_TASKS_JOB_CLASS, associating it with a different resource consumer group, or modify the resource consumer group AUTO_TASK_CONSUMER_GROUP.
See Also:
■ Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window"
■ Chapter 26, "Scheduler Concepts" and Chapter 27, "Using the Scheduler"
■ "How to Manage Scheduler Privileges" on page 28-14 for information on the privileges required to administer Scheduler objects

Viewing Automatic Segment Advisor Information
The following views display information specific to the Automatic Segment Advisor.
For details, see Oracle Database Reference.

View                      Description
------------------------  ------------------------------------------------------
DBA_AUTO_SEGADV_SUMMARY   Each row of this view summarizes one Automatic
                          Segment Advisor run. Fields include number of
                          tablespaces and segments processed, and number of
                          recommendations made.

DBA_AUTO_SEGADV_CTL       Contains control information that the Automatic
                          Segment Advisor uses to select and process segments.
                          Each row contains information on a single object
                          (tablespace or segment), including whether the object
                          has been processed, and if so, the task ID under which
                          it was processed and the reason for selecting it.

Shrinking Database Segments Online
You use online segment shrink to reclaim fragmented free space below the high water
mark in an Oracle Database segment. The benefits of segment shrink are these:
■ Compaction of data leads to better cache utilization, which in turn leads to better online transaction processing (OLTP) performance.
■ The compacted data requires fewer blocks to be scanned in full table scans, which in turn leads to better decision support system (DSS) performance.

Segment shrink is an online, in-place operation. DML operations and queries can be
issued during the data movement phase of segment shrink. Concurrent DML
operations are blocked for a short time at the end of the shrink operation, when the
space is deallocated. Indexes are maintained during the shrink operation and remain
usable after the operation is complete. Segment shrink does not require extra disk
space to be allocated.
Segment shrink reclaims unused space both above and below the high water mark. In
contrast, space deallocation reclaims unused space only above the high water mark. In
shrink operations, by default, the database compacts the segment, adjusts the high
water mark, and releases the reclaimed space.
Segment shrink requires that rows be moved to new locations. Therefore, you must
first enable row movement in the object you want to shrink and disable any
rowid-based triggers defined on the object. You enable row movement in a table with
the ALTER TABLE ... ENABLE ROW MOVEMENT command.
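For example, to enable row movement for the employees table used in the examples later in this section:

ALTER TABLE employees ENABLE ROW MOVEMENT;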
Shrink operations can be performed only on segments in locally managed tablespaces
with automatic segment space management (ASSM). Within an ASSM tablespace, all
segment types are eligible for online segment shrink except these:
■ IOT mapping tables
■ Tables with rowid based materialized views
■ Tables with function-based indexes

See Also: Oracle Database SQL Reference for more information on the ALTER TABLE command

Invoking Online Segment Shrink
Before invoking online segment shrink, view the findings and recommendations of the
Segment Advisor. For more information, see "Using the Segment Advisor" on
page 14-16.


You invoke online segment shrink with Enterprise Manager (EM) or with SQL
commands in SQL*Plus. The remainder of this section discusses the command line
method.
Note: You can invoke segment shrink directly from the Recommendation Details page in EM. (See Figure 14–4.) Or, to invoke segment shrink for an individual table in EM, display the table on the Tables page, select the table, and then click Shrink Segment in the Actions list. (See Figure 14–1.) Perform a similar operation in EM to shrink indexes, materialized views, and so on.

You can shrink space in a table, index-organized table, index, partition, subpartition, materialized view, or materialized view log. You do this with the SHRINK SPACE clause of the ALTER TABLE, ALTER INDEX, ALTER MATERIALIZED VIEW, or ALTER MATERIALIZED VIEW LOG statement. Refer to Oracle Database SQL Reference for the syntax and additional information on shrinking a database object, including restrictions.
Two optional clauses let you control how the shrink operation proceeds:
■ The COMPACT clause lets you divide the shrink segment operation into two phases. When you specify COMPACT, Oracle Database defragments the segment space and compacts the table rows but postpones the resetting of the high water mark and the deallocation of the space until a future time. This option is useful if you have long-running queries that might span the operation and attempt to read from blocks that have been reclaimed. The defragmentation and compaction results are saved to disk, so the data movement does not have to be redone during the second phase. You can reissue the SHRINK SPACE clause without the COMPACT clause during off-peak hours to complete the second phase.
■ The CASCADE clause extends the segment shrink operation to all dependent segments of the object. For example, if you specify CASCADE when shrinking a table segment, all indexes of the table will also be shrunk. (You need not specify CASCADE to shrink the partitions of a partitioned table.) To see a list of dependent segments of a given object, you can run the OBJECT_DEPENDENT_SEGMENTS procedure of the DBMS_SPACE package.

As with other DDL operations, segment shrink causes subsequent SQL statements to
be reparsed because of invalidation of cursors unless you specify the COMPACT clause.
Examples
Shrink a table and all of its dependent segments (including LOB segments):
ALTER TABLE employees SHRINK SPACE CASCADE;

Shrink a LOB segment only:
ALTER TABLE employees MODIFY LOB (perf_review) (SHRINK SPACE);

Shrink a single partition of a partitioned table:
ALTER TABLE customers MODIFY PARTITION cust_P1 SHRINK SPACE;

Shrink an IOT index segment and the overflow segment:
ALTER TABLE cities SHRINK SPACE CASCADE;

Shrink an IOT overflow segment only:
ALTER TABLE cities OVERFLOW SHRINK SPACE;
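Shrink a table in two phases with the COMPACT clause, deferring the resetting of the high water mark and the release of space (for example, to off-peak hours), as described above:

ALTER TABLE employees SHRINK SPACE COMPACT;
ALTER TABLE employees SHRINK SPACE;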


Deallocating Unused Space
When you deallocate unused space, the database frees the unused space at the unused
(high water mark) end of the database segment and makes the space available for
other segments in the tablespace.
Prior to deallocation, you can run the UNUSED_SPACE procedure of the DBMS_SPACE
package, which returns information about the position of the high water mark and the
amount of unused space in a segment. For segments in locally managed tablespaces
with automatic segment space management, use the SPACE_USAGE procedure for
more accurate information on unused space.
See Also: Oracle Database PL/SQL Packages and Types Reference
contains the description of the DBMS_SPACE package

The following statements deallocate unused space in a segment (table, index or
cluster):
ALTER TABLE table DEALLOCATE UNUSED KEEP integer;
ALTER INDEX index DEALLOCATE UNUSED KEEP integer;
ALTER CLUSTER cluster DEALLOCATE UNUSED KEEP integer;

The KEEP clause is optional and lets you specify the amount of space retained in the
segment. You can verify that the deallocated space is freed by examining the
DBA_FREE_SPACE view.
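For example, a minimal sketch that keeps 100K of space in the employees table and then checks the free extents in its tablespace (the USERS tablespace name is illustrative):

ALTER TABLE employees DEALLOCATE UNUSED KEEP 100K;

SELECT SUM(BYTES) AS FREE_BYTES
  FROM DBA_FREE_SPACE
 WHERE TABLESPACE_NAME = 'USERS';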
See Also:
■ Oracle Database SQL Reference for details on the syntax and semantics of deallocating unused space
■ Oracle Database Reference for more information about the DBA_FREE_SPACE view

Understanding Space Usage of Datatypes
When creating tables and other data structures, you need to know how much space
they will require. Each datatype has different space requirements. The Oracle Database
PL/SQL User's Guide and Reference and Oracle Database SQL Reference contain extensive
descriptions of datatypes and their space requirements.

Displaying Information About Space Usage for Schema Objects
Oracle Database provides data dictionary views and PL/SQL packages that allow you
to display information about the space usage of schema objects. Views and packages
that are unique to a particular schema object are described in the chapter of this book
associated with that object. This section describes views and packages that are generic
in nature and apply to multiple schema objects.

Using PL/SQL Packages to Display Information About Schema Object Space Usage
These Oracle-supplied PL/SQL packages provide information about schema objects:
Package and Procedure/Function  Description
------------------------------  --------------------------------------------------
DBMS_SPACE.UNUSED_SPACE         Returns information about unused space in an
                                object (table, index, or cluster).

DBMS_SPACE.FREE_BLOCKS          Returns information about free data blocks in an
                                object (table, index, or cluster) whose segment
                                free space is managed by free lists (segment
                                space management is MANUAL).

DBMS_SPACE.SPACE_USAGE          Returns information about free data blocks in an
                                object (table, index, or cluster) whose segment
                                space management is AUTO.

See Also: Oracle Database PL/SQL Packages and Types Reference for
a description of PL/SQL packages

Example: Using DBMS_SPACE.UNUSED_SPACE
The following SQL*Plus example uses the DBMS_SPACE package to obtain unused
space information.
SQL> VARIABLE total_blocks NUMBER
SQL> VARIABLE total_bytes NUMBER
SQL> VARIABLE unused_blocks NUMBER
SQL> VARIABLE unused_bytes NUMBER
SQL> VARIABLE lastextf NUMBER
SQL> VARIABLE last_extb NUMBER
SQL> VARIABLE lastusedblock NUMBER
SQL> exec DBMS_SPACE.UNUSED_SPACE('SCOTT', 'EMP', 'TABLE', :total_blocks, -
>    :total_bytes, :unused_blocks, :unused_bytes, :lastextf, :last_extb, -
>    :lastusedblock);

PL/SQL procedure successfully completed.

SQL> PRINT

TOTAL_BLOCKS
------------
           5

TOTAL_BYTES
-----------
      10240

...

LASTUSEDBLOCK
-------------
            3

Using Views to Display Information About Space Usage in Schema Objects
These views display information about space usage in schema objects:
View              Description
----------------  --------------------------------------------------------------
DBA_SEGMENTS      DBA view describes storage allocated for all database
USER_SEGMENTS     segments. User view describes storage allocated for segments
                  for the current user.

DBA_EXTENTS       DBA view describes extents comprising all segments in the
USER_EXTENTS      database. User view describes extents comprising segments for
                  the current user.

DBA_FREE_SPACE    DBA view lists free extents in all tablespaces. User view
USER_FREE_SPACE   shows free space information for tablespaces for which the
                  user has quota.

The following sections contain examples of using some of these views.

See Also: Oracle Database Reference for a complete description of data dictionary views

Example 1: Displaying Segment Information
The following query returns the name and size of each index segment in schema hr:
SELECT SEGMENT_NAME, TABLESPACE_NAME, BYTES, BLOCKS, EXTENTS
FROM DBA_SEGMENTS
WHERE SEGMENT_TYPE = 'INDEX'
AND OWNER='HR'
ORDER BY SEGMENT_NAME;

The query output is:

SEGMENT_NAME              TABLESPACE_NAME    BYTES BLOCKS EXTENTS
------------------------- --------------- -------- ------ -------
COUNTRY_C_ID_PK           EXAMPLE            65536     32       1
DEPT_ID_PK                EXAMPLE            65536     32       1
DEPT_LOCATION_IX          EXAMPLE            65536     32       1
EMP_DEPARTMENT_IX         EXAMPLE            65536     32       1
EMP_EMAIL_UK              EXAMPLE            65536     32       1
EMP_EMP_ID_PK             EXAMPLE            65536     32       1
EMP_JOB_IX                EXAMPLE            65536     32       1
EMP_MANAGER_IX            EXAMPLE            65536     32       1
EMP_NAME_IX               EXAMPLE            65536     32       1
JHIST_DEPARTMENT_IX       EXAMPLE            65536     32       1
JHIST_EMPLOYEE_IX         EXAMPLE            65536     32       1
JHIST_EMP_ID_ST_DATE_PK   EXAMPLE            65536     32       1
JHIST_JOB_IX              EXAMPLE            65536     32       1
JOB_ID_PK                 EXAMPLE            65536     32       1
LOC_CITY_IX               EXAMPLE            65536     32       1
LOC_COUNTRY_IX            EXAMPLE            65536     32       1
LOC_ID_PK                 EXAMPLE            65536     32       1
LOC_STATE_PROVINCE_IX     EXAMPLE            65536     32       1
REG_ID_PK                 EXAMPLE            65536     32       1

19 rows selected.

Example 2: Displaying Extent Information
Information about the currently allocated extents in a database is stored in the
DBA_EXTENTS data dictionary view. For example, the following query identifies the
extents allocated to each index segment in the hr schema and the size of each of those
extents:
SELECT SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, EXTENT_ID, BYTES, BLOCKS
FROM DBA_EXTENTS
WHERE SEGMENT_TYPE = 'INDEX'
AND OWNER='HR'
ORDER BY SEGMENT_NAME;

The query output is:

SEGMENT_NAME              SEGMENT_TYPE TABLESPACE_NAME EXTENT_ID    BYTES BLOCKS
------------------------- ------------ --------------- --------- -------- ------
COUNTRY_C_ID_PK           INDEX        EXAMPLE                 0    65536     32
DEPT_ID_PK                INDEX        EXAMPLE                 0    65536     32
DEPT_LOCATION_IX          INDEX        EXAMPLE                 0    65536     32
EMP_DEPARTMENT_IX         INDEX        EXAMPLE                 0    65536     32
EMP_EMAIL_UK              INDEX        EXAMPLE                 0    65536     32
EMP_EMP_ID_PK             INDEX        EXAMPLE                 0    65536     32
EMP_JOB_IX                INDEX        EXAMPLE                 0    65536     32
EMP_MANAGER_IX            INDEX        EXAMPLE                 0    65536     32
EMP_NAME_IX               INDEX        EXAMPLE                 0    65536     32
JHIST_DEPARTMENT_IX       INDEX        EXAMPLE                 0    65536     32
JHIST_EMPLOYEE_IX         INDEX        EXAMPLE                 0    65536     32
JHIST_EMP_ID_ST_DATE_PK   INDEX        EXAMPLE                 0    65536     32
JHIST_JOB_IX              INDEX        EXAMPLE                 0    65536     32
JOB_ID_PK                 INDEX        EXAMPLE                 0    65536     32
LOC_CITY_IX               INDEX        EXAMPLE                 0    65536     32
LOC_COUNTRY_IX            INDEX        EXAMPLE                 0    65536     32
LOC_ID_PK                 INDEX        EXAMPLE                 0    65536     32
LOC_STATE_PROVINCE_IX     INDEX        EXAMPLE                 0    65536     32
REG_ID_PK                 INDEX        EXAMPLE                 0    65536     32

19 rows selected.

For the hr schema, no segment has more than one extent allocated to it.

Example 3: Displaying the Free Space (Extents) in a Tablespace
Information about the free extents (extents not allocated to any segment) in a database
is stored in the DBA_FREE_SPACE data dictionary view. For example, the following
query reveals the amount of free space available as free extents in the SMUNDO
tablespace:
SELECT TABLESPACE_NAME, FILE_ID, BYTES, BLOCKS
FROM DBA_FREE_SPACE
WHERE TABLESPACE_NAME='SMUNDO';

The query output is:

TABLESPACE_NAME  FILE_ID    BYTES BLOCKS
--------------- -------- -------- ------
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3   131072     64
SMUNDO                 3   131072     64
SMUNDO                 3    65536     32
SMUNDO                 3  3407872   1664

10 rows selected.

Example 4: Displaying Segments that Cannot Allocate Additional Extents
It is possible for a segment to be unable to allocate additional extents for any of the following reasons:
■ The tablespace containing the segment does not have enough room for the next extent.
■ The segment has the maximum number of extents.
■ The segment has the maximum number of extents allowed by the data block size, which is operating system specific.

The following query returns the names, owners, and tablespaces of all segments that
satisfy any of these criteria:
SELECT a.SEGMENT_NAME, a.SEGMENT_TYPE, a.TABLESPACE_NAME, a.OWNER
FROM DBA_SEGMENTS a
WHERE a.NEXT_EXTENT >= (SELECT MAX(b.BYTES)
FROM DBA_FREE_SPACE b
WHERE b.TABLESPACE_NAME = a.TABLESPACE_NAME)
OR a.EXTENTS = a.MAX_EXTENTS
OR a.EXTENTS = 'data_block_size' ;

Note: When you use this query, replace data_block_size with
the data block size for your system.

Once you have identified a segment that cannot allocate additional extents, you can solve the problem in either of two ways, depending on its cause:
■ If the tablespace is full, add a datafile to the tablespace or extend the existing datafile.
■ If the segment has too many extents, and you cannot increase MAXEXTENTS for the segment, perform the following steps:
  1.  Export the data in the segment.
  2.  Drop and re-create the segment, giving it a larger INITIAL storage parameter setting so that it does not need to allocate so many extents. Alternatively, you can adjust the PCTINCREASE and NEXT storage parameters to allow for more space in the segment.
  3.  Import the data back into the segment.

Capacity Planning for Database Objects
Oracle Database provides two ways to plan capacity for database objects:
■ With Enterprise Manager
■ With the DBMS_SPACE PL/SQL package

This section discusses the PL/SQL method. Refer to Enterprise Manager online help
and Oracle Database 2 Day DBA for details on capacity planning with Enterprise
Manager.
Three procedures in the DBMS_SPACE package enable you to predict the size of new
objects and monitor the size of existing database objects. This section discusses those
procedures and contains the following sections:
■ Estimating the Space Use of a Table
■ Estimating the Space Use of an Index
■ Obtaining Object Growth Trends


Estimating the Space Use of a Table
The size of a database table can vary greatly depending on tablespace storage
attributes, tablespace block size, and many other factors. The CREATE_TABLE_COST
procedure of the DBMS_SPACE package lets you estimate the space use cost of creating
a table. Please refer to Oracle Database PL/SQL Packages and Types Reference for details on
the parameters of this procedure.
The procedure has two variants. The first variant uses average row size to estimate
size. The second variant uses column information to estimate table size. Both variants
require as input the following values:
■ TABLESPACE_NAME: The tablespace in which the object will be created. The default is the SYSTEM tablespace.
■ ROW_COUNT: The anticipated number of rows in the table.
■ PCT_FREE: The percentage of free space you want to reserve in each block for future expansion of existing rows due to updates.

In addition, the first variant also requires as input a value for AVG_ROW_SIZE, which
is the anticipated average row size in bytes.
The second variant instead requires a COLINFOS value describing each anticipated column; this is an object type comprising the attributes COL_TYPE (the datatype of the column) and COL_SIZE (the number of characters or bytes in the column).
The procedure returns two values:
■ USED_BYTES: The actual bytes used by the data, including overhead for block metadata, PCT_FREE space, and so forth.
■ ALLOC_BYTES: The amount of space anticipated to be allocated for the object taking into account the tablespace extent characteristics.
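Here is a minimal sketch using the first (average row size) variant; the USERS tablespace name and the row figures are placeholders for your own values:

SET SERVEROUTPUT ON
DECLARE
  l_used_bytes  NUMBER;
  l_alloc_bytes NUMBER;
BEGIN
  -- Estimate space for 1,000,000 rows averaging 120 bytes,
  -- reserving 10 percent free space in each block
  DBMS_SPACE.CREATE_TABLE_COST(
    tablespace_name => 'USERS',
    avg_row_size    => 120,
    row_count       => 1000000,
    pct_free        => 10,
    used_bytes      => l_used_bytes,
    alloc_bytes     => l_alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Used bytes:      ' || l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Allocated bytes: ' || l_alloc_bytes);
END;
/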

Estimating the Space Use of an Index
The CREATE_INDEX_COST procedure of the DBMS_SPACE package lets you estimate
the space use cost of creating an index on an existing table.
The procedure requires as input the following values:
■ DDL: The CREATE INDEX statement that would create the index. The table specified in this DDL statement must be an existing table.
■ [Optional] PLAN_TABLE: The name of the plan table to use. The default is NULL.

The results returned by this procedure depend on statistics gathered on the segment.
Therefore, be sure to obtain statistics shortly before executing this procedure. In the
absence of recent statistics, the procedure does not issue an error, but it may return
inappropriate results. The procedure returns the following values:
■ USED_BYTES: The number of bytes representing the actual index data.
■ ALLOC_BYTES: The amount of space allocated for the index in the tablespace.
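The following sketch estimates the cost of an index on the hire_date column of the hr.employees sample table; the index name in the DDL string is hypothetical:

SET SERVEROUTPUT ON
DECLARE
  l_used_bytes  NUMBER;
  l_alloc_bytes NUMBER;
BEGIN
  -- The table named in the DDL must exist and should have fresh statistics
  DBMS_SPACE.CREATE_INDEX_COST(
    ddl         => 'CREATE INDEX hr.emp_hiredate_ix ON hr.employees (hire_date)',
    used_bytes  => l_used_bytes,
    alloc_bytes => l_alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Index data bytes: ' || l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Allocated bytes:  ' || l_alloc_bytes);
END;
/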

Obtaining Object Growth Trends
The OBJECT_GROWTH_TREND procedure of the DBMS_SPACE package produces a table of one or more rows, where each row describes the space use of the object at a specific time. The procedure retrieves the space use totals from the Automatic Workload Repository or computes current space use and combines it with historic space use changes retrieved from the Automatic Workload Repository. Please refer to Oracle Database PL/SQL Packages and Types Reference for detailed information on the parameters of this procedure.
The procedure requires as input the following values:
■ OBJECT_OWNER: The owner of the object.
■ OBJECT_NAME: The name of the object.
■ PARTITION_NAME: The name of the table or index partition, if relevant. Specify NULL otherwise.
■ OBJECT_TYPE: The type of the object.
■ START_TIME: A TIMESTAMP value indicating the beginning of the growth trend analysis.
■ END_TIME: A TIMESTAMP value indicating the end of the growth trend analysis. The default is "NOW".
■ INTERVAL: The length in minutes of the reporting interval during which the procedure should retrieve space use information.
■ SKIP_INTERPOLATED: Determines whether the procedure should omit values based on recorded statistics before and after the INTERVAL ('YES') or not ('NO'). This setting is useful when the result table will be displayed as a table rather than a chart, because you can see more clearly how the actual recording interval relates to the requested reporting interval.

The procedure returns a table, each row of which provides space use information on the object for one interval. If the return table is very large, the results are pipelined so that another application can consume the information as it is being produced. The output table has the following columns:
■ TIMEPOINT: A TIMESTAMP value indicating the time of the reporting interval. Records are not produced for values of TIME that precede the oldest recorded statistics for the object.
■ SPACE_USAGE: The number of bytes actually being used by the object data.
■ SPACE_ALLOC: The number of bytes allocated to the object in the tablespace at that time.
■ QUALITY: A value indicating how well the requested reporting interval matches the actual recording of statistics. This information is useful because there is no guaranteed reporting interval for object size use statistics, and the actual reporting interval varies over time and from object to object.
  The values of the QUALITY column are:
  –  GOOD: The value whenever the value of TIME is based on recorded statistics with a recorded timestamp within 10% of the INTERVAL specified in the input parameters.
  –  INTERPOLATED: The value did not meet the criteria for GOOD, but was based on recorded statistics before and after the value of TIME. Current in-memory statistics can be collected across all instances in a cluster and treated as the "recorded" value for the present time.
  –  PROJECTION: The value of TIME is in the future as of the time the table was produced. In a Real Application Clusters environment, the rules for recording statistics allow each instance to choose independently which objects will be selected.
  The output returned by this procedure is an aggregation of values recorded across all instances in a RAC environment. Each value can be computed from a combination of GOOD and INTERPOLATED values. The aggregate value returned is marked GOOD if at least 80% of that value was derived from GOOD instance values.
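As a sketch, a query along the following lines returns the growth trend of the hr.employees sample table; it assumes the routine is queried as a pipelined table function with the parameter names described above:

SELECT timepoint, space_usage, space_alloc, quality
FROM TABLE(DBMS_SPACE.OBJECT_GROWTH_TREND(
       object_owner => 'HR',
       object_name  => 'EMPLOYEES',
       object_type  => 'TABLE'));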


15
Managing Tables
This chapter describes the various aspects of managing tables, and includes the following topics:
■ About Tables
■ Guidelines for Managing Tables
■ Creating Tables
■ Loading Tables
■ Automatically Collecting Statistics on Tables
■ Altering Tables
■ Redefining Tables Online
■ Auditing Table Changes Using Flashback Transaction Query
■ Recovering Tables Using the Flashback Table Feature
■ Dropping Tables
■ Using Flashback Drop and Managing the Recycle Bin
■ Managing Index-Organized Tables
■ Managing External Tables
■ Viewing Information About Tables

About Tables
Tables are the basic unit of data storage in an Oracle Database. Data is stored in rows
and columns. You define a table with a table name, such as employees, and a set of
columns. You give each column a column name, such as employee_id, last_name,
and job_id; a datatype, such as VARCHAR2, DATE, or NUMBER; and a width. The
width can be predetermined by the datatype, as in DATE. If columns are of the NUMBER
datatype, define precision and scale instead of width. A row is a collection of column
information corresponding to a single record.
You can specify rules for each column of a table. These rules are called integrity
constraints. One example is a NOT NULL integrity constraint. This constraint forces the
column to contain a value in every row.
You can also invoke transparent data encryption to encrypt data before storing it in the
datafile. Then, if users attempt to circumvent the database access control mechanisms
by looking inside datafiles directly with operating system tools, encryption prevents
these users from viewing sensitive data.


After you create a table, insert rows of data using SQL statements. Table data can then
be queried, deleted, or updated using SQL.
See Also:
■ Oracle Database Concepts for a more detailed description of tables
■ Chapter 14, "Managing Space for Schema Objects" for guidelines for managing space for tables
■ Chapter 13, "Managing Schema Objects" for information on additional aspects of managing tables, such as specifying integrity constraints and analyzing tables
■ Oracle Database Security Guide for a discussion of transparent data encryption

Guidelines for Managing Tables
This section describes guidelines to follow when managing tables. Following these
guidelines can make the management of your tables easier and can improve
performance when creating the table, as well as when loading, updating, and querying
the table data.
The following topics are discussed:
■ Design Tables Before Creating Them
■ Consider Your Options for the Type of Table to Create
■ Specify the Location of Each Table
■ Consider Parallelizing Table Creation
■ Consider Using NOLOGGING When Creating Tables
■ Consider Using Table Compression when Creating Tables
■ Estimate Table Size and Plan Accordingly
■ Restrictions to Consider When Creating Tables

Design Tables Before Creating Them
Usually, the application developer is responsible for designing the elements of an
application, including the tables. Database administrators are responsible for
establishing the attributes of the underlying tablespace that will hold the application
tables. Either the DBA or the applications developer, or both working jointly, can be
responsible for the actual creation of the tables, depending upon the practices for a
site.
Working with the application developer, consider the following guidelines when designing tables:
■ Use descriptive names for tables, columns, indexes, and clusters.
■ Be consistent in abbreviations and in the use of singular and plural forms of table names and columns.
■ Document the meaning of each table and its columns with the COMMENT command.
■ Normalize each table.
■ Select the appropriate datatype for each column.
■ Consider invoking transparent data encryption to encrypt columns that will contain sensitive data.
■ Define columns that allow nulls last, to conserve storage space.
■ Cluster tables whenever appropriate, to conserve storage space and optimize performance of SQL statements.

Before creating a table, you should also determine whether to use integrity constraints.
Integrity constraints can be defined on the columns of a table to enforce the business
rules of your database automatically.
See Also: Oracle Database Application Developer's Guide - Fundamentals for information about designing tables, and Oracle Database Advanced Security Administrator's Guide for information on transparent data encryption

Consider Your Options for the Type of Table to Create
What types of tables can you create? Here are some choices:

Type of Table              Description
-------------------------  -----------------------------------------------------
Ordinary (heap-organized)  This is the basic, general purpose type of table which
table                      is the primary subject of this chapter. Its data is
                           stored as an unordered collection (heap).

Clustered table            A clustered table is a table that is part of a
                           cluster. A cluster is a group of tables that share the
                           same data blocks because they share common columns and
                           are often used together.
                           Clusters and clustered tables are discussed in
                           Chapter 18, "Managing Clusters".

Index-organized table      Unlike an ordinary (heap-organized) table, data for an
                           index-organized table is stored in a B-tree index
                           structure in a primary key sorted manner. Besides
                           storing the primary key column values of an
                           index-organized table row, each index entry in the
                           B-tree stores the nonkey column values as well.
                           Index-organized tables are discussed in "Managing
                           Index-Organized Tables" on page 15-42.

Partitioned table          Partitioned tables allow your data to be broken down
                           into smaller, more manageable pieces called partitions,
                           or even subpartitions. Each partition can be managed
                           individually, and can operate independently of the
                           other partitions, thus providing a structure that can
                           be better tuned for availability and performance.
                           Partitioned tables are discussed in Chapter 17,
                           "Managing Partitioned Tables and Indexes".

Specify the Location of Each Table
It is advisable to specify the TABLESPACE clause in a CREATE TABLE statement to
identify the tablespace that is to store the new table. Ensure that you have the
appropriate privileges and quota on any tablespaces that you use. If you do not specify
a tablespace in a CREATE TABLE statement, the table is created in your default
tablespace.
When specifying the tablespace to contain a new table, ensure that you understand
implications of your selection. By properly specifying a tablespace during the creation
of each table, you can increase the performance of the database system and decrease
the time needed for database administration.


The following situations illustrate how not specifying a tablespace, or specifying an inappropriate one, can affect performance:
■ If users' objects are created in the SYSTEM tablespace, the performance of the database can suffer, since both data dictionary objects and user objects must contend for the same datafiles. Users' objects should not be stored in the SYSTEM tablespace. To avoid this, ensure that all users are assigned default tablespaces when they are created in the database.
■ If application-associated tables are arbitrarily stored in various tablespaces, the time necessary to complete administrative operations (such as backup and recovery) for the data of that application can be increased.

Consider Parallelizing Table Creation
You can utilize parallel execution when creating tables using a subquery (AS SELECT)
in the CREATE TABLE statement. Because multiple processes work together to create
the table, performance of the table creation operation is improved.
Parallelizing table creation is discussed in the section "Parallelizing Table Creation" on
page 15-8.

Consider Using NOLOGGING When Creating Tables
To create a table most efficiently, use the NOLOGGING clause in the CREATE TABLE...AS SELECT statement. The NOLOGGING clause causes minimal redo information to be generated during the table creation. This has the following benefits:
■ Space is saved in the redo log files.
■ The time it takes to create the table is decreased.
■ Performance improves for parallel creation of large tables.

The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader
and direct load INSERT operations are not logged. Subsequent DML statements
(UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING
attribute of the table and generate redo.
If you cannot afford to lose the table after you have created it (for example, you will no
longer have access to the data used to create the table) you should take a backup
immediately after the table is created. In some situations, such as for tables that are
created for temporary use, this precaution may not be necessary.
In general, the relative performance improvement of specifying NOLOGGING is greater
for larger tables than for smaller tables. For small tables, NOLOGGING has little effect on
the time it takes to create a table. However, for larger tables the performance
improvement can be significant, especially when you are also parallelizing the table
creation.

Consider Using Table Compression when Creating Tables
The Oracle Database table compression feature compresses data by eliminating
duplicate values in a database block. Duplicate values in all the rows and columns in a
block are stored once at the beginning of the block, in what is called a symbol table for
that block. All occurrences of such values are replaced with a short reference to the
symbol table.


Using table compression reduces disk use and memory use in the buffer cache, often resulting in better scale-up for read-only operations. Table compression can also speed up query execution. There is, however, a slight cost in CPU overhead.
Compression occurs when data is inserted with a bulk (direct-path) insert operation. A table can consist of compressed and uncompressed blocks transparently. Any DML operation can be applied to a table storing compressed blocks. However, conventional DML operations cause records to be stored uncompressed after the operation, and the operations themselves are subject to a small performance overhead due to the nature of the table compression.
Consider using table compression when your data is mostly read only. Do not use table compression for tables that are updated frequently.
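For example, a compressed copy of a read-mostly table might be created as follows; the table names are illustrative:

CREATE TABLE sales_history COMPRESS
  AS SELECT * FROM sales;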

Estimate Table Size and Plan Accordingly
Estimate the sizes of tables before creating them. Preferably, do this as part of database
planning. Knowing the sizes, and uses, for database tables is an important part of
database planning.
You can use the combined estimated size of tables, along with estimates for indexes,
undo space, and redo log files, to determine the amount of disk space that is required
to hold an intended database. From these estimates, you can make correct hardware
purchases.
You can use the estimated size and growth rate of an individual table to better
determine the attributes of a tablespace and its underlying datafiles that are best suited
for the table. This can enable you to more easily manage the table disk space and
improve I/O performance of applications that use the table.

Restrictions to Consider When Creating Tables
Here are some restrictions that may affect your table planning and usage:
■ Tables containing object types cannot be imported into a pre-Oracle8 database.
■ You cannot merge an exported table into a preexisting table having the same name in a different schema.
■ You cannot move types and extent tables to a different schema when the original data still exists in the database.
■ Oracle Database has a limit on the total number of columns that a table (or attributes that an object type) can have. See Oracle Database Reference for this limit. Further, when you create a table that contains user-defined type data, the database maps columns of user-defined type to relational columns for storing the user-defined type data. This causes additional relational columns to be created. This results in "hidden" relational columns that are not visible in a DESCRIBE table statement and are not returned by a SELECT * statement. Therefore, when you create an object table, or a relational table with columns of REF, varray, nested table, or object type, be aware that the total number of columns that the database actually creates for the table can be more than those you specify.

See Also: Oracle Database Application Developer's Guide - Object-Relational Features for more information about user-defined types


Creating Tables
To create a new table in your schema, you must have the CREATE TABLE system
privilege. To create a table in another user's schema, you must have the CREATE ANY
TABLE system privilege. Additionally, the owner of the table must have a quota for the
tablespace that contains the table, or the UNLIMITED TABLESPACE system privilege.
Create tables using the SQL statement CREATE TABLE.
This section contains the following topics:
■ Creating a Table
■ Creating a Temporary Table
■ Parallelizing Table Creation

See Also: Oracle Database SQL Reference for exact syntax of the CREATE TABLE and other SQL statements discussed in this chapter

Creating a Table
When you issue the following statement, you create a table named admin_emp in the
hr schema and store it in the admin_tbs tablespace with an initial extent size of 50K:
CREATE TABLE hr.admin_emp (
     empno      NUMBER(5) PRIMARY KEY,
     ename      VARCHAR2(15) NOT NULL,
     ssn        NUMBER(9) ENCRYPT,
     job        VARCHAR2(10),
     mgr        NUMBER(5),
     hiredate   DATE DEFAULT (sysdate),
     sal        NUMBER(7,2),
     comm       NUMBER(7,2),
     deptno     NUMBER(3) NOT NULL
                CONSTRAINT admin_dept_fkey REFERENCES hr.departments
                (department_id))
  TABLESPACE admin_tbs
  STORAGE ( INITIAL 50K);

In this CREATE TABLE statement, integrity constraints are defined on several columns
of the table, and transparent data encryption is defined on one (ssn). Integrity
constraints are discussed in "Managing Integrity Constraints" on page 13-9, and
transparent data encryption is discussed in Oracle Database Security Guide.
See Also: Oracle Database SQL Reference for a description of the datatypes that can be specified for columns

Creating a Temporary Table
It is also possible to create a temporary table. The definition of a temporary table is visible to all sessions, but the data in a temporary table is visible only to the session that inserts the data into the table. Use the CREATE GLOBAL TEMPORARY TABLE statement to create a temporary table. The ON COMMIT clause indicates whether the data in the table is transaction-specific (the default) or session-specific, the implications of which are as follows:


ON COMMIT Setting  Implications
-----------------  -------------------------------------------------------------
DELETE ROWS        This creates a temporary table that is transaction specific.
                   A session becomes bound to the temporary table with a
                   transaction's first insert into the table. The binding goes
                   away at the end of the transaction. The database truncates
                   the table (deletes all rows) after each commit.

PRESERVE ROWS      This creates a temporary table that is session specific. A
                   session gets bound to the temporary table with the first
                   insert into the table in the session. This binding goes away
                   at the end of the session or by issuing a TRUNCATE of the
                   table in the session. The database truncates the table when
                   you terminate the session.

Temporary tables are useful in applications where a result set is to be buffered,
perhaps because it is constructed by running multiple DML operations. For example,
consider the following:
A Web-based airlines reservations application allows a customer to create several
optional itineraries. Each itinerary is represented by a row in a temporary table. The
application updates the rows to reflect changes in the itineraries. When the customer
decides which itinerary she wants to use, the application moves the row for that
itinerary to a persistent table.
During the session, the itinerary data is private. At the end of the session, the optional
itineraries are dropped.
This statement creates a temporary table that is transaction specific:
CREATE GLOBAL TEMPORARY TABLE admin_work_area
(startdate DATE,
enddate DATE,
class CHAR(20))
ON COMMIT DELETE ROWS;

Indexes can be created on temporary tables. They are also temporary and the data in
the index has the same session or transaction scope as the data in the underlying table.
Unlike permanent tables, temporary tables and their indexes do not automatically
allocate a segment when they are created. Instead, segments are allocated when the
first INSERT (or CREATE TABLE AS SELECT) is performed. This means that if a
SELECT, UPDATE, or DELETE is performed before the first INSERT, the table appears
to be empty.
DDL operations (except TRUNCATE) are allowed on an existing temporary table only if
no session is currently bound to that temporary table.
If you roll back a transaction, the data you entered is lost, although the table definition persists.
A transaction-specific temporary table allows only one transaction at a time. If there are several autonomous transactions in a single transaction scope, each autonomous transaction can use the table only after the previous one commits.
Because the data in a temporary table is, by definition, temporary, backup and
recovery of temporary table data is not available in the event of a system failure. To
prepare for such a failure, you should develop alternative methods for preserving
temporary table data.


Parallelizing Table Creation
When you specify the AS SELECT clause to create a table and populate it with data
from another table, you can utilize parallel execution. The CREATE TABLE...AS
SELECT statement contains two parts: a CREATE part (DDL) and a SELECT part
(query). Oracle Database can parallelize both parts of the statement. The CREATE part
is parallelized if one of the following is true:
■ A PARALLEL clause is included in the CREATE TABLE...AS SELECT statement
■ An ALTER SESSION FORCE PARALLEL DDL statement is specified

The query part is parallelized if all of the following are true:
■ The query includes a parallel hint specification (PARALLEL or PARALLEL_INDEX) or the CREATE part includes the PARALLEL clause or the schema objects referred to in the query have a PARALLEL declaration associated with them.
■ At least one of the tables specified in the query requires either a full table scan or an index range scan spanning multiple partitions.

If you parallelize the creation of a table, that table then has a parallel declaration (the
PARALLEL clause) associated with it. Any subsequent DML or queries on the table, for
which parallelization is possible, will attempt to use parallel execution.
The following simple statement parallelizes the creation of a table and stores the result
in a compressed format, using table compression:
CREATE TABLE hr.admin_emp_dept
PARALLEL COMPRESS
AS SELECT * FROM hr.employees
WHERE department_id = 10;

In this case, the PARALLEL clause tells the database to select an optimum number of
parallel execution servers when creating the table.
See Also:
■ Oracle Database Concepts for more information about parallel execution
■ Oracle Database Data Warehousing Guide for a detailed discussion about using parallel execution
■ "Managing Processes for Parallel SQL Execution" on page 4-14

Loading Tables
There are several means of inserting or initially loading data into your tables. Most
commonly used are the following:
Method                      Description
--------------------------  -----------------------------------------------------
SQL*Loader                  This Oracle utility program loads data from external
                            files into tables of an Oracle Database.
                            For information about SQL*Loader, see Oracle Database
                            Utilities.

CREATE TABLE ... AS SELECT  Using this SQL statement you can create a table and
statement (CTAS)            populate it with data selected from another existing
                            table.

INSERT statement            The INSERT statement enables you to add rows to a
                            table, either by specifying the column values or by
                            specifying a subquery that selects data from another
                            existing table.

MERGE statement             The MERGE statement enables you to insert rows into
                            or update rows of a table, by selecting rows from
                            another existing table. If a row in the new data
                            corresponds to an item that already exists in the
                            table, an UPDATE is performed; otherwise, an INSERT
                            is performed.

See Oracle Database SQL Reference for details on the CREATE TABLE ... AS SELECT,
INSERT, and MERGE statements.
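For instance, here is a minimal MERGE sketch that upserts rows from the hr.employees sample table into a hypothetical bonuses table (both the bonuses table and the 10 percent figure are illustrative):

MERGE INTO bonuses b
USING (SELECT employee_id, salary
       FROM hr.employees
       WHERE department_id = 80) e
ON (b.employee_id = e.employee_id)
WHEN MATCHED THEN
  UPDATE SET b.bonus = e.salary * 0.10
WHEN NOT MATCHED THEN
  INSERT (b.employee_id, b.bonus)
  VALUES (e.employee_id, e.salary * 0.10);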

Inserting Data with DML Error Logging
When you load a table using an INSERT statement with subquery, if an error occurs,
the statement is terminated and rolled back in its entirety. This can be wasteful of time
and system resources. For such INSERT statements, you can avoid this situation by
using the DML error logging feature.
To use DML error logging, you add a statement clause that specifies the name of an
error logging table into which the database records errors encountered during DML
operations. When you add this error logging clause to the INSERT statement, certain
types of errors no longer terminate and roll back the statement. Instead, each error is
logged and the statement continues. You then take corrective action on the erroneous
rows at a later time.
DML error logging works with INSERT, UPDATE, MERGE, and DELETE statements.
This section focuses on INSERT statements.
To insert data with DML error logging:
1.  Create an error logging table. (Optional)
    You can create the table manually or use the DBMS_ERRLOG package to automatically create it for you. See "Creating an Error Logging Table" on page 15-11 for details.
2.  Execute an INSERT statement and include an error logging clause. This clause:
    ■ Optionally references the error logging table that you created. If you do not provide an error logging table name, the database logs to an error logging table with a default name. The default error logging table name is ERR$_ followed by the first 25 characters of the name of the table that is being inserted into.
    ■ Optionally includes a tag (a numeric or string literal in parentheses) that gets added to the error log to help identify the statement that caused the errors. If the tag is omitted, a NULL value is used.
    ■ Optionally includes a REJECT LIMIT subclause. This subclause indicates the maximum number of errors that can be encountered before the INSERT statement terminates and rolls back. You can also specify UNLIMITED. The default reject limit is zero, which means that upon encountering the first error, the error is logged and the statement rolls back. For parallel DML operations, the reject limit is applied to each parallel server.

    Note: If the statement exceeds the reject limit and rolls back, the error logging table retains the log entries recorded so far.

    See Oracle Database SQL Reference for error logging clause syntax information.
3.  Query the error logging table and take corrective action for the rows that generated errors.
    See "Error Logging Table Format", later in this section, for details on the error logging table structure.

Example  The following statement inserts rows into the DW_EMPL table and logs errors to the ERR_EMPL table. The tag 'daily_load' is copied to each log entry. The statement terminates and rolls back if the number of errors exceeds 25.

INSERT INTO dw_empl
SELECT employee_id, first_name, last_name, hire_date, salary, department_id
FROM employees
WHERE hire_date > sysdate - 7
LOG ERRORS INTO err_empl ('daily_load') REJECT LIMIT 25

For more examples, see Oracle Database SQL Reference and Oracle Database Data
Warehousing Guide.
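As an illustration of step 3, a query against the error logging table from this example might look like the following; the employee_id column is present because the error logging table was created from DW_EMPL, which contains that column:

SELECT ORA_ERR_NUMBER$, ORA_ERR_MESG$, ORA_ERR_TAG$, employee_id
FROM err_empl;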

Error Logging Table Format
The error logging table consists of two parts:
■

A mandatory set of columns that describe the error. For example, one column
contains the Oracle error number.
Table 15–1 lists these error description columns.

■

An optional set of columns that contain data from the row that caused the error.
The column names match the column names from the table being inserted into
(the "DML table").
The number of columns in this part of the error logging table can be zero, one, or
more, up to the number of columns in the DML table. If a column exists in the
error logging table that has the same name as a column in the DML table, the
corresponding data from the offending row being inserted is written to this error
logging table column. If a DML table column does not have a corresponding
column in the error logging table, the column is not logged. If the error logging
table contains a column with a name that does not match a DML table column, the
column is ignored.
Because type conversion errors are one type of error that might occur, the data
types of the optional columns in the error logging table must be types that can
capture any value without data loss or conversion errors. (If the optional log
columns were of the same types as the DML table columns, capturing the
problematic data into the log could suffer the same data conversion problem that
caused the error.) The database makes a best effort to log a meaningful value for
data that causes conversion errors. If a value cannot be derived, NULL is logged for
the column. An error on insertion into the error logging table causes the statement
to terminate.
Table 15–2 lists the recommended error logging table column data types to use for
each data type from the DML table. These recommended data types are used when
you create the error logging table automatically with the DBMS_ERRLOG package.
Table 15–1    Mandatory Error Description Columns

Column Name       Data Type        Description
----------------  ---------------  ----------------------------------------------
ORA_ERR_NUMBER$   NUMBER           Oracle error number
ORA_ERR_MESG$     VARCHAR2(2000)   Oracle error message text
ORA_ERR_ROWID$    ROWID            Rowid of the row in error (for update and
                                   delete)
ORA_ERR_OPTYP$    VARCHAR2(2)      Type of operation: insert (I), update (U),
                                   delete (D)
                                   Note: Errors from the update clause and insert
                                   clause of a MERGE operation are distinguished
                                   by the U and I values.
ORA_ERR_TAG$      VARCHAR2(2000)   Value of the tag supplied by the user in the
                                   error logging clause

Table 15–2    Error Logging Table Column Data Types

                       Error Logging Table
DML Table Column Type  Column Type          Notes
---------------------  -------------------  --------------------------------------
NUMBER                 VARCHAR2(4000)       Able to log conversion errors
CHAR/VARCHAR2(n)       VARCHAR2(4000)       Logs any value without information
                                            loss
NCHAR/NVARCHAR2(n)     NVARCHAR2(4000)      Logs any value without information
                                            loss
DATE/TIMESTAMP         VARCHAR2(4000)       Logs any value without information
                                            loss. Converts to character format
                                            with the default date/time format mask
RAW                    RAW(2000)            Logs any value without information
                                            loss
ROWID                  UROWID               Logs any rowid type
LONG/LOB               Not supported
User-defined types     Not supported

Creating an Error Logging Table
You can create an error logging table manually, or you can use a PL/SQL package to
automatically create one for you.
Creating an Error Logging Table Automatically You use the DBMS_ERRLOG package to
automatically create an error logging table. The CREATE_ERROR_LOG procedure
creates an error logging table with all of the mandatory error description columns plus
all of the columns from the named DML table, and performs the data type mappings
shown in Table 15–2.
The following statement creates the error logging table used in the previous example.
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('DW_EMPL', 'ERR_EMPL');

See Oracle Database PL/SQL Packages and Types Reference for details on DBMS_ERRLOG.
Creating an Error Logging Table Manually You use standard DDL to manually create the
error logging table. See "Error Logging Table Format" on page 15-10 for table
structure requirements. You must include all mandatory error description columns.
They can be in any order, but must be the first columns in the table.
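
For example, the following is a sketch of a manually created error logging table
equivalent to the ERR_EMPL table generated earlier for DW_EMPL; the optional column
list is assumed from the columns inserted in the earlier example and uses the
recommended data types from Table 15–2.

-- Sketch: manual equivalent of DBMS_ERRLOG.CREATE_ERROR_LOG('DW_EMPL','ERR_EMPL').
-- The mandatory error description columns come first; the optional columns
-- (assumed from the earlier DW_EMPL example) use the Table 15-2 mappings.
CREATE TABLE err_empl (
  ora_err_number$  NUMBER,
  ora_err_mesg$    VARCHAR2(2000),
  ora_err_rowid$   ROWID,
  ora_err_optyp$   VARCHAR2(2),
  ora_err_tag$     VARCHAR2(2000),
  employee_id      VARCHAR2(4000),
  first_name       VARCHAR2(4000),
  last_name        VARCHAR2(4000),
  hire_date        VARCHAR2(4000),
  salary           VARCHAR2(4000),
  department_id    VARCHAR2(4000));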

Error Logging Restrictions and Caveats
Oracle Database logs the following errors during DML operations:
■   Column values that are too large
■   Constraint violations (NOT NULL, unique, referential, and check constraints)
■   Errors raised during trigger execution
■   Errors resulting from type conversion between a column in a subquery and the
    corresponding column of the table
■   Partition mapping errors
■   Certain MERGE operation errors (ORA-30926: Unable to get a stable set of rows
    for MERGE operation.)

Some errors are not logged, and cause the DML operation to terminate and roll back.
For a list of these errors and for other DML logging restrictions, see the discussion of
the error_logging_clause in the INSERT section of Oracle Database SQL Reference.
Space Considerations Consider space requirements before using DML error logging.
Sufficient free space is required not only for the table being inserted into, but
also for the error logging table.
Security The user who issues the INSERT statement with DML error logging must
have INSERT privileges on the error logging table.
See Also: Oracle Database SQL Reference and Oracle Database Data
Warehousing Guide for DML error logging examples.

Inserting Data Into Tables Using Direct-Path INSERT
Oracle Database inserts data into a table in one of two ways:
■   During conventional INSERT operations, the database reuses free space in the
    table, interleaving newly inserted data with existing data. During such operations,
    the database also maintains referential integrity constraints.
■   During direct-path INSERT operations, the database appends the inserted data
    after existing data in the table. Data is written directly into datafiles, bypassing
    the buffer cache. Free space in the existing data is not reused, and referential
    integrity constraints are ignored. Together, these behaviors can enhance
    performance.

Further, the data can be inserted either in serial mode, where one process executes
the statement, or in parallel mode, where multiple processes work together
simultaneously to run a single SQL statement. The latter is referred to as parallel
execution.
This section discusses one aspect of inserting data into tables: using the
direct-path form of the INSERT statement. It contains the following topics:
■   Advantages of Using Direct-Path INSERT
■   Enabling Direct-Path INSERT
■   How Direct-Path INSERT Works
■   Specifying the Logging Mode for Direct-Path INSERT
■   Additional Considerations for Direct-Path INSERT

Note: Only a few details and examples of inserting data into tables are included
in this book. Oracle documentation specific to data warehousing and application
development provides more extensive information about inserting and manipulating
data in tables. For example:
■   Oracle Database Data Warehousing Guide
■   Oracle Database Application Developer's Guide - Fundamentals
■   Oracle Database Application Developer's Guide - Large Objects

Advantages of Using Direct-Path INSERT
The following are performance benefits of direct-path INSERT:
■   During direct-path INSERT, you can disable the logging of redo and undo entries.
    Conventional insert operations, in contrast, must always log such entries, because
    those operations reuse free space and maintain referential integrity.
■   To create a new table with data from an existing table, you have the choice of
    creating the new table and then inserting into it, or executing a CREATE TABLE ...
    AS SELECT statement. By creating the table and then using direct-path INSERT
    operations, you update any indexes defined on the target table during the insert
    operation. The table resulting from a CREATE TABLE ... AS SELECT statement, in
    contrast, does not have any indexes defined on it; you must define them later.
■   Direct-path INSERT operations ensure atomicity of the transaction, even when run
    in parallel mode. Atomicity cannot be guaranteed during parallel direct-path loads
    (using SQL*Loader).
■   If errors occur during parallel direct-path loads, some indexes could be marked
    UNUSABLE at the end of the load. Parallel direct-path INSERT, in contrast, rolls
    back the statement if errors occur during index update.
■   Direct-path INSERT must be used if you want to store the data in compressed
    form using table compression.

Enabling Direct-Path INSERT
You can implement direct-path INSERT operations by using direct-path INSERT
statements, inserting data in parallel mode, or by using the Oracle SQL*Loader utility
in direct-path mode. Direct-path inserts can be done in either serial or parallel mode.
To activate direct-path INSERT in serial mode, you must specify the APPEND hint in
each INSERT statement, either immediately after the INSERT keyword, or
immediately after the SELECT keyword in the subquery of the INSERT statement.
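
For example, the following is a minimal sketch of a serial direct-path insert; the
sales and sales_history tables are illustrative and are not part of this guide's
examples:

-- Sketch: serial direct-path INSERT using the APPEND hint.
-- SALES_HISTORY and SALES are hypothetical tables used for illustration.
INSERT /*+ APPEND */ INTO sales_history
  SELECT * FROM sales
  WHERE sale_date < ADD_MONTHS(SYSDATE, -12);
COMMIT;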
When you are inserting in parallel DML mode, direct-path INSERT is the default. In
order to run in parallel DML mode, the following requirements must be met:
■   You must have Oracle Enterprise Edition installed.
■   You must enable parallel DML in your session. To do this, run the following
    statement:
    ALTER SESSION { ENABLE | FORCE } PARALLEL DML;
■   You must specify the parallel attribute for the target table, either at create time or
    subsequently, or you must specify the PARALLEL hint for each insert operation.


To disable direct-path INSERT, specify the NOAPPEND hint in each INSERT statement.
Doing so overrides parallel DML mode.
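
The following sketch combines these requirements; the table names and the degree of
parallelism of 4 are illustrative assumptions:

-- Sketch: parallel direct-path INSERT. With parallel DML enabled and a
-- PARALLEL hint, direct-path is the default insert path.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(sales_history, 4) */ INTO sales_history
  SELECT /*+ PARALLEL(sales, 4) */ * FROM sales;
COMMIT;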
Notes:
■   Direct-path INSERT supports only the subquery syntax of the INSERT statement,
    not the VALUES clause. For more information on the subquery syntax of INSERT
    statements, see Oracle Database SQL Reference.
■   There are some additional restrictions for using direct-path INSERT. These are
    listed in the Oracle Database SQL Reference.

See Also: Oracle Database Performance Tuning Guide for more
information on using hints

How Direct-Path INSERT Works
You can use direct-path INSERT on both partitioned and non-partitioned tables.
Serial Direct-Path INSERT into Partitioned or Non-partitioned Tables The single process inserts
data beyond the current high-water mark of the table segment or of each partition
segment. (The high-water mark is the level at which blocks have never been formatted
to receive data.) When a COMMIT runs, the high-water mark is updated to the new
value, making the data visible to users.
Parallel Direct-Path INSERT into Partitioned Tables This situation is analogous to serial
direct-path INSERT. Each parallel execution server is assigned one or more partitions,
with no more than one process working on a single partition. Each parallel execution
server inserts data beyond the current high-water mark of its assigned partition
segment(s). When a COMMIT runs, the high-water mark of each partition segment is
updated to its new value, making the data visible to users.
Parallel Direct-Path INSERT into Non-partitioned Tables Each parallel execution server
allocates a new temporary segment and inserts data into that temporary segment.
When a COMMIT runs, the parallel execution coordinator merges the new temporary
segments into the primary table segment, where the data is visible to users.

Specifying the Logging Mode for Direct-Path INSERT
Direct-path INSERT lets you choose whether to log redo and undo information during
the insert operation.
■   You can specify logging mode for a table, partition, index, or LOB storage at create
    time (in a CREATE statement) or subsequently (in an ALTER statement).
■   If you do not specify either LOGGING or NOLOGGING at these times:
    –   The logging attribute of a partition defaults to the logging attribute of its
        table.
    –   The logging attribute of a table or index defaults to the logging attribute of
        the tablespace in which it resides.
    –   The logging attribute of LOB storage defaults to LOGGING if you specify
        CACHE for LOB storage. If you do not specify CACHE, then the logging
        attribute defaults to that of the tablespace in which the LOB values reside.
■   You set the logging attribute of a tablespace in a CREATE TABLESPACE or ALTER
    TABLESPACE statement.

Note: If the database or tablespace is in FORCE LOGGING mode, then direct-path
INSERT always logs, regardless of the logging setting.

Direct-Path INSERT with Logging In this mode, Oracle Database performs full redo
logging for instance and media recovery. If the database is in ARCHIVELOG mode, then
you can archive redo logs to tape. If the database is in NOARCHIVELOG mode, then you
can recover instance crashes but not disk failures.
Direct-Path INSERT without Logging In this mode, Oracle Database inserts data without
redo or undo logging. (Some minimal logging is done to mark new extents invalid,
and data dictionary changes are always logged.) This mode improves performance.
However, if you subsequently must perform media recovery, the extent invalidation
records mark a range of blocks as logically corrupt, because no redo data was logged
for them. Therefore, it is important that you back up the data after such an insert
operation.
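
As an illustrative sketch (the table names are assumed, not from this guide), a
common pattern is to disable logging, perform the direct-path insert, and then
restore logging and back up the data:

-- Sketch: direct-path INSERT without redo logging (illustrative names).
ALTER TABLE sales_history NOLOGGING;

INSERT /*+ APPEND */ INTO sales_history
  SELECT * FROM sales;
COMMIT;

-- Restore logging, then back up the affected data to preserve recoverability.
ALTER TABLE sales_history LOGGING;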

Additional Considerations for Direct-Path INSERT
The following are some additional considerations when using direct-path INSERT.
Index Maintenance with Direct-Path INSERT Oracle Database performs index maintenance
at the end of direct-path INSERT operations on tables (partitioned or non-partitioned)
that have indexes. This index maintenance is performed by the parallel execution
servers for parallel direct-path INSERT or by the single process for serial direct-path
INSERT. You can avoid the performance impact of index maintenance by dropping the
index before the INSERT operation and then rebuilding it afterward.
Space Considerations with Direct-Path INSERT Direct-path INSERT requires more space
than conventional-path INSERT.
All serial direct-path INSERT operations, as well as parallel direct-path INSERT into
partitioned tables, insert data above the high-water mark of the affected segment. This
requires some additional space.
Parallel direct-path INSERT into non-partitioned tables requires even more space,
because it creates a temporary segment for each degree of parallelism. If the
non-partitioned table is not in a locally managed tablespace in automatic
segment-space management mode, you can modify the values of the NEXT and
PCTINCREASE storage parameters and the MINIMUM EXTENT tablespace parameter to
provide sufficient (but not excess) storage for the temporary segments. Choose values
for these parameters so that:
■   The size of each extent is not too small (no less than 1 MB). This setting affects
    the total number of extents in the object.
■   The size of each extent is not so large that the parallel INSERT results in wasted
    space on segments that are larger than necessary.

After the direct-path INSERT operation is complete, you can reset these parameters to
settings more appropriate for serial operations.
Locking Considerations with Direct-Path INSERT During direct-path INSERT, the database
obtains exclusive locks on the table (or on all partitions of a partitioned table). As a
result, users cannot perform any concurrent insert, update, or delete operations on the
table, and concurrent index creation and build operations are not permitted.

Concurrent queries, however, are supported, but the query will return only the
information before the insert operation.

Automatically Collecting Statistics on Tables
The PL/SQL package DBMS_STATS lets you generate and manage statistics for
cost-based optimization. You can use this package to gather, modify, view, export,
import, and delete statistics. You can also use this package to identify or name
statistics that have been gathered.
Formerly, you enabled DBMS_STATS to automatically gather statistics for a table by
specifying the MONITORING keyword in the CREATE (or ALTER) TABLE statement.
Starting with Oracle Database 10g, the MONITORING and NOMONITORING keywords
have been deprecated and statistics are collected automatically. If you do specify these
keywords, they are ignored.
Monitoring tracks the approximate number of INSERT, UPDATE, and DELETE
operations for the table since the last time statistics were gathered. Information about
how many rows are affected is maintained in the SGA, until periodically (about every
three hours) SMON incorporates the data into the data dictionary. This data dictionary
information is made visible through the DBA_TAB_MODIFICATIONS,
ALL_TAB_MODIFICATIONS, and USER_TAB_MODIFICATIONS views. The database uses
these views to identify tables with stale statistics.
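
For example, a query such as the following (a sketch using the hr.admin_emp table
from this chapter) reports the approximate DML activity recorded since statistics
were last gathered:

-- Sketch: approximate DML activity on a table, as seen by the HR user.
SELECT table_name, inserts, updates, deletes
FROM   user_tab_modifications
WHERE  table_name = 'ADMIN_EMP';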
To disable monitoring of a table, set the STATISTICS_LEVEL initialization parameter
to BASIC. Its default is TYPICAL, which enables automatic statistics collection.
Automatic statistics collection and the DBMS_STATS package enable the optimizer to
generate accurate execution plans.
See Also:
■   Oracle Database Reference for detailed information on the STATISTICS_LEVEL
    initialization parameter
■   Oracle Database Performance Tuning Guide for information on managing
    optimizer statistics
■   Oracle Database PL/SQL Packages and Types Reference for information about
    using the DBMS_STATS package
■   "Automatic Statistics Collection Job" on page 23-2 for information on using the
    Scheduler to collect statistics automatically

Altering Tables
You alter a table using the ALTER TABLE statement. To alter a table, the table must be
contained in your schema, or you must have either the ALTER object privilege for the
table or the ALTER ANY TABLE system privilege.
Many of the usages of the ALTER TABLE statement are presented in the following
sections:
■   Reasons for Using the ALTER TABLE Statement
■   Altering Physical Attributes of a Table
■   Moving a Table to a New Segment or Tablespace
■   Manually Allocating Storage for a Table
■   Modifying an Existing Column Definition
■   Adding Table Columns
■   Renaming Table Columns
■   Dropping Table Columns

Caution: Before altering a table, familiarize yourself with the
consequences of doing so. The Oracle Database SQL Reference lists
many of these consequences in the descriptions of the ALTER
TABLE clauses.

If a view, materialized view, trigger, domain index, function-based
index, check constraint, function, procedure, or package depends on
a base table, the alteration of the base table or its columns can affect
the dependent object. See "Managing Object Dependencies" on
page 13-16 for information about how the database manages
dependencies.

Reasons for Using the ALTER TABLE Statement
You can use the ALTER TABLE statement to perform any of the following actions that
affect a table:
■   Modify physical characteristics (INITRANS or storage parameters)
■   Move the table to a new segment or tablespace
■   Explicitly allocate an extent or deallocate unused space
■   Add, drop, or rename columns, or modify an existing column definition (datatype,
    length, default value, NOT NULL integrity constraint, and encryption)
■   Modify the logging attributes of the table
■   Modify the CACHE/NOCACHE attributes
■   Add, modify, or drop integrity constraints associated with the table
■   Enable or disable integrity constraints or triggers associated with the table
■   Modify the degree of parallelism for the table
■   Rename a table
■   Add or modify index-organized table characteristics
■   Alter the characteristics of an external table
■   Add or modify LOB columns
■   Add or modify object type, nested table, or varray columns

Many of these operations are discussed in succeeding sections.

Altering Physical Attributes of a Table
When altering the transaction entry setting INITRANS of a table, note that a new
setting for INITRANS applies only to data blocks subsequently allocated for the table.
To better understand this transaction entry setting parameter, see "Specifying the
INITRANS Parameter" on page 14-4.


The storage parameters INITIAL and MINEXTENTS cannot be altered. All new
settings for the other storage parameters (for example, NEXT, PCTINCREASE) affect
only extents subsequently allocated for the table. The size of the next extent allocated
is determined by the current values of NEXT and PCTINCREASE, and is not based on
previous values of these parameters. Storage parameters are discussed in "Managing
Storage Parameters" on page 14-5.

Moving a Table to a New Segment or Tablespace
The ALTER TABLE...MOVE statement enables you to relocate data of a
non-partitioned table or of a partition of a partitioned table into a new segment, and
optionally into a different tablespace for which you have quota. This statement also
lets you modify any of the storage attributes of the table or partition, including those
which cannot be modified using ALTER TABLE. You can also use the ALTER
TABLE...MOVE statement with the COMPRESS keyword to store the new segment
using table compression.
Note: The ALTER TABLE...MOVE statement does not permit DML
against the table while the statement is executing. If you want to leave
the table available for DML while moving it, see "Redefining Tables
Online" on page 15-21.

The following statement moves the hr.admin_emp table to a new segment,
specifying new storage parameters:
ALTER TABLE hr.admin_emp MOVE
STORAGE ( INITIAL 20K
NEXT 40K
MINEXTENTS 2
MAXEXTENTS 20
PCTINCREASE 0 );

Moving a table changes the rowids of the rows in the table. This causes indexes on the
table to be marked UNUSABLE, and DML accessing the table using these indexes will
receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt.
Likewise, any statistics for the table become invalid and new statistics should be
collected after moving the table.
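For example, the following sketch rebuilds one affected index; the index name is
hypothetical:

-- Sketch: rebuild an index marked UNUSABLE by the move (name is illustrative).
ALTER INDEX hr.admin_emp_email_idx REBUILD;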
If the table includes LOB columns, you can use this statement to also move the LOB
data and LOB index segments associated with the table, provided that you explicitly
specify them. If they are not specified, the default is to not move the LOB data and
LOB index segments.

Manually Allocating Storage for a Table
Oracle Database dynamically allocates additional extents for the data segment of a
table, as required. However, perhaps you want to allocate an additional extent for a
table explicitly. For example, in an Oracle Real Application Clusters environment, an
extent of a table can be allocated explicitly for a specific instance.
A new extent can be allocated for a table using the ALTER TABLE...ALLOCATE
EXTENT clause.
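For example, the following is a minimal sketch; the extent size is illustrative:

-- Sketch: explicitly allocate a new extent for the hr.admin_emp table.
ALTER TABLE hr.admin_emp ALLOCATE EXTENT (SIZE 5M);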
You can also explicitly deallocate unused space using the DEALLOCATE UNUSED
clause of ALTER TABLE. This is described in "Reclaiming Wasted Space" on
page 14-15.


See Also: Oracle Database Oracle Clusterware and Oracle Real
Application Clusters Administration and Deployment Guide for
information about using the ALLOCATE EXTENT clause in an
Oracle Real Application Clusters environment

Modifying an Existing Column Definition
Use the ALTER TABLE...MODIFY statement to modify an existing column definition.
You can modify column datatype, default value, column constraint, and column
encryption.
You can increase the length of an existing column, or decrease it, if all existing data
satisfies the new length. You can change a column from byte semantics to CHAR
semantics or vice versa. You must set the initialization parameter
BLANK_TRIMMING=TRUE to decrease the length of a non-empty CHAR column.
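
For example, the following sketch increases the length of the ename column used in
this chapter's examples; the new length is illustrative:

-- Sketch: increase the length of an existing VARCHAR2 column.
ALTER TABLE hr.admin_emp MODIFY (ename VARCHAR2(30));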
If you are modifying a table to increase the length of a column of datatype CHAR,
realize that this can be a time consuming operation and can require substantial
additional storage, especially if the table contains many rows. This is because the CHAR
value in each row must be blank-padded to satisfy the new column length.
See Also: Oracle Database SQL Reference for additional information
about modifying table columns and additional restrictions

Adding Table Columns
To add a column to an existing table, use the ALTER TABLE...ADD statement.
The following statement alters the hr.admin_emp table to add a new column named
bonus:
ALTER TABLE hr.admin_emp
ADD (bonus NUMBER (7,2));

If a new column is added to a table, the column is initially NULL unless you specify the
DEFAULT clause. When you specify a default value, the database updates each row in
the new column with the value specified. Specifying a DEFAULT value is not
supported for tables using table compression.
You can add a column with a NOT NULL constraint to a table only if the table does not
contain any rows, or you specify a default value.
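
For example, the following sketch adds a NOT NULL column to a populated table by
supplying a default; the column name and values are hypothetical:

-- Sketch: add a NOT NULL column with a default to a table that contains rows.
ALTER TABLE hr.admin_emp ADD (status VARCHAR2(10) DEFAULT 'ACTIVE' NOT NULL);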
See Also: Oracle Database SQL Reference for additional
information about adding table columns and additional restrictions

Renaming Table Columns
Oracle Database lets you rename existing columns in a table. Use the RENAME
COLUMN clause of the ALTER TABLE statement to rename a column. The new name
must not conflict with the name of any existing column in the table. No other clauses
are allowed in conjunction with the RENAME COLUMN clause.
The following statement renames the comm column of the hr.admin_emp table.
ALTER TABLE hr.admin_emp
RENAME COLUMN comm TO commission;

As noted earlier, altering a table column can invalidate dependent objects. However,
when you rename a column, the database updates associated data dictionary tables to
ensure that function-based indexes and check constraints remain valid.


Oracle Database also lets you rename column constraints. This is discussed in
"Renaming Constraints" on page 13-13.
Note: The RENAME TO clause of ALTER TABLE appears similar in
syntax to the RENAME COLUMN clause, but is used for renaming the
table itself.
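
For example, the following sketch renames the table itself; the new table name is
illustrative and is not used elsewhere in this chapter:

-- Sketch: rename the table itself, as opposed to renaming a column.
ALTER TABLE hr.admin_emp RENAME TO admin_employees;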

Dropping Table Columns
You can drop columns that are no longer needed from a table, including an
index-organized table. This provides a convenient means to free space in a database,
and avoids having to export and import data and then re-create indexes and
constraints.
You cannot drop all columns from a table, nor can you drop columns from a table
owned by SYS. Any attempt to do so results in an error.
See Also: Oracle Database SQL Reference for information about
additional restrictions and options for dropping columns from a
table

Removing Columns from Tables
When you issue an ALTER TABLE...DROP COLUMN statement, the column
descriptor and the data associated with the target column are removed from each row
in the table. You can drop multiple columns with one statement. The ALTER
TABLE...DROP COLUMN statement is not supported for tables using table
compression.
The following statements are examples of dropping columns from the hr.admin_emp
table. The first statement drops only the sal column:
ALTER TABLE hr.admin_emp DROP COLUMN sal;

The next statement drops both the bonus and commission columns:
ALTER TABLE hr.admin_emp DROP (bonus, commission);

Marking Columns Unused
If you are concerned about the length of time it could take to drop column data from
all of the rows in a large table, you can use the ALTER TABLE...SET UNUSED
statement. This statement marks one or more columns as unused, but does not actually
remove the target column data or restore the disk space occupied by these columns.
However, a column that is marked as unused is not displayed in queries or data
dictionary views, and its name is removed so that a new column can reuse that name.
All constraints, indexes, and statistics defined on the column are also removed.
To mark the hiredate and mgr columns as unused, execute the following statement:
ALTER TABLE hr.admin_emp SET UNUSED (hiredate, mgr);

You can later remove columns that are marked as unused by issuing an ALTER
TABLE...DROP UNUSED COLUMNS statement. Unused columns are also removed
from the target table whenever an explicit drop of any particular column or columns of
the table is issued.
The data dictionary views USER_UNUSED_COL_TABS, ALL_UNUSED_COL_TABS, or
DBA_UNUSED_COL_TABS can be used to list all tables containing unused columns. The
COUNT field shows the number of unused columns in the table.

SELECT * FROM DBA_UNUSED_COL_TABS;

OWNER                       TABLE_NAME                  COUNT
--------------------------- --------------------------- -----
HR                          ADMIN_EMP                       2

Removing Unused Columns
The ALTER TABLE...DROP UNUSED COLUMNS statement is the only action allowed
on unused columns. It physically removes unused columns from the table and
reclaims disk space.
In the ALTER TABLE statement that follows, the optional clause CHECKPOINT is
specified. This clause causes a checkpoint to be applied after processing the specified
number of rows, in this case 250. Checkpointing cuts down on the amount of undo
logs accumulated during the drop column operation to avoid a potential exhaustion of
undo space.
ALTER TABLE hr.admin_emp DROP UNUSED COLUMNS CHECKPOINT 250;

Redefining Tables Online
In any database system, it is occasionally necessary to modify the logical or physical
structure of a table to:
■   Improve the performance of queries or DML
■   Accommodate application changes
■   Manage storage

Oracle Database provides a mechanism to make table structure modifications without
significantly affecting the availability of the table. The mechanism is called online
table redefinition. Redefining tables online provides a substantial increase in
availability compared to traditional methods of redefining tables.
When a table is redefined online, it is accessible to both queries and DML during much
of the redefinition process. The table is locked in the exclusive mode only during a
very small window that is independent of the size of the table and complexity of the
redefinition, and that is completely transparent to users.
Online table redefinition requires an amount of free space that is approximately
equivalent to the space used by the table being redefined. More space may be required
if new columns are added.
You can perform online table redefinition with the Enterprise Manager Reorganize
Objects wizard or with the DBMS_REDEFINITION package.
Note: To invoke the Reorganize Objects wizard:
1.  On the Tables page of Enterprise Manager, click in the Select column to
    select the table to redefine.
2.  In the Actions list, select Reorganize.
3.  Click Go.

This section describes online redefinition with the DBMS_REDEFINITION package. It
contains the following topics:
■   Features of Online Table Redefinition
■   Performing Online Redefinition with DBMS_REDEFINITION
■   Results of the Redefinition Process
■   Performing Intermediate Synchronization
■   Aborting Online Table Redefinition and Cleaning Up After Errors
■   Restrictions for Online Redefinition of Tables
■   Online Redefinition of a Single Partition
■   Online Table Redefinition Examples
■   Privileges Required for the DBMS_REDEFINITION Package

See Also: Oracle Database PL/SQL Packages and Types Reference for
a description of the DBMS_REDEFINITION package

Features of Online Table Redefinition
Online table redefinition enables you to:
■   Modify the storage parameters of a table or cluster
■   Move a table or cluster to a different tablespace in the same schema
    Note: If it is not important to keep a table available for DML when
    moving it to another tablespace, you can use the simpler ALTER
    TABLE MOVE command. See "Moving a Table to a New Segment or
    Tablespace" on page 15-18.
■   Add, modify, or drop one or more columns in a table or cluster
■   Add or drop partitioning support (non-clustered tables only)
■   Change partition structure
■   Change physical properties of a single table partition, including moving it to a
    different tablespace in the same schema
■   Change physical properties of a materialized view log or an Oracle Streams
    Advanced Queueing queue table
■   Add support for parallel queries
■   Re-create a table or cluster to reduce fragmentation
    Note: In many cases, online segment shrink is an easier way to reduce
    fragmentation. See "Reclaiming Wasted Space" on page 14-15.
■   Change the organization of a normal table (heap organized) to an index-organized
    table, or do the reverse
■   Convert a relational table into a table with object columns, or do the reverse
■   Convert an object table into a relational table or a table with object columns, or
    do the reverse

Performing Online Redefinition with DBMS_REDEFINITION
To perform online redefinition of a table with the DBMS_REDEFINITION package:
1.  Choose the redefinition method: by key or by rowid.
    By key: Select a primary key or pseudo-primary key to use for the redefinition.
    Pseudo-primary keys are unique keys with all component columns having NOT
    NULL constraints. For this method, the versions of the tables before and after
    redefinition should have the same primary key columns. This is the preferred and
    default method of redefinition.
    By rowid: Use this method if no key is available. In this method, a hidden
    column named M_ROW$$ is added to the post-redefined version of the table. It is
    recommended that this column be dropped or marked as unused after the
    redefinition is complete. You cannot use this method on index-organized tables.
2.  Verify that the table can be redefined online by invoking the CAN_REDEF_TABLE
    procedure. If the table is not a candidate for online redefinition, then this
    procedure raises an error indicating why the table cannot be redefined online.
3.  Create an empty interim table (in the same schema as the table to be redefined)
    with all of the desired logical and physical attributes. If columns are to be
    dropped, do not include them in the definition of the interim table. If a column is
    to be added, then add the column definition to the interim table. If a column is to
    be modified, create it in the interim table with the properties that you want.
    It is not necessary to create the interim table with all the indexes, constraints,
    grants, and triggers of the table being redefined, because these will be defined in
    step 6 when you copy dependent objects.
4.  (Optional) If you are redefining a large table and want to improve the performance
    of the next step by running it in parallel, issue the following statements:
    alter session force parallel dml parallel degree-of-parallelism;
    alter session force parallel query parallel degree-of-parallelism;
5.  Start the redefinition process by calling START_REDEF_TABLE, providing the
    following:
    ■   The schema and table name of the table to be redefined
    ■   The interim table name
    ■   A column mapping string that maps the columns of the table to be redefined
        to the columns of the interim table
        See "Constructing a Column Mapping String" on page 15-24 for details.
    ■   The redefinition method
        If not specified, the default method of redefinition (using keys) is assumed.
    ■   Optionally, the columns to be used in ordering rows
    ■   If redefining only a single partition of a partitioned table, the partition name
    Because this process involves copying data, it may take a while. The table being
    redefined remains available for queries and DML during the entire process.
    Note: If START_REDEF_TABLE fails for any reason, you must call
    ABORT_REDEF_TABLE, otherwise subsequent attempts to redefine the
    table will fail.
6.  Copy dependent objects (such as triggers, indexes, grants, and constraints) and
    statistics from the table being redefined to the interim table, using one of the
    following two methods. Method 1 is the preferred method because it is more
    automatic, but there may be times that you would choose to use method 2.
    Method 1 also enables you to copy table statistics to the interim table.
    ■   Method 1: Automatically Creating Dependent Objects
        Use the COPY_TABLE_DEPENDENTS procedure to automatically create
        dependent objects on the interim table. This procedure also registers the
        dependent objects. Registering the dependent objects enables the identities of
        these objects and their copied counterparts to be automatically swapped later
        as part of the redefinition completion process. The result is that when the
        redefinition is completed, the names of the dependent objects will be the same
        as the names of the original dependent objects.
        For more information, see "Creating Dependent Objects Automatically" on
        page 15-25.
    ■   Method 2: Manually Creating Dependent Objects
        You can manually create dependent objects on the interim table and then
        register them. For more information, see "Creating Dependent Objects
        Manually" on page 15-26.
        Note: In Oracle Database Release 9i, you were required to manually
        create the triggers, indexes, grants, and constraints on the interim
        table, and there may still be situations where you want to or must do
        so. In such cases, any referential constraints involving the interim
        table (that is, the interim table is either a parent or a child table of the
        referential constraint) must be created disabled. When online
        redefinition completes, the referential constraint is automatically
        enabled. In addition, until the redefinition process is either completed
        or aborted, any trigger defined on the interim table does not execute.

7.  Execute the FINISH_REDEF_TABLE procedure to complete the redefinition of the
    table. During this procedure, the original table is locked in exclusive mode for a
    very short time, independent of the amount of data in the original table. However,
    FINISH_REDEF_TABLE will wait for all pending DML to commit before
    completing the redefinition.
8.  If you used rowids for the redefinition and your COMPATIBLE initialization
    parameter is set to 10.1 or lower, set to UNUSED the hidden column M_ROW$$ that
    is now in the redefined table:
    ALTER TABLE table_name SET UNUSED (M_ROW$$);
    If COMPATIBLE is 10.2 or higher, this hidden column is automatically set to
    UNUSED for you when redefinition completes.
    You can also drop the column if you prefer.

Constructing a Column Mapping String
The column mapping string that you pass as an argument to START_REDEF_TABLE
contains a comma-separated list of column mapping pairs, where each pair has the
following syntax:

[expression]  column_name

The column_name term indicates a column in the interim table. The optional
expression can include columns from the table being redefined, constants,
operators, function or method calls, and so on, in accordance with the rules for
expressions in a SQL SELECT statement. However, only simple deterministic
subexpressions (that is, subexpressions whose results do not vary between one
evaluation and the next), plus sequences and SYSDATE, can be used. No subqueries
are permitted. In the simplest case, the expression consists of just a column name
from the table being redefined.
If an expression is present, its value is placed in the designated interim table column
during redefinition. If the expression is omitted, it is assumed that both the table being
redefined and the interim table have a column named column_name, and the value of
that column in the table being redefined is placed in the same column in the interim
table.
For example, if the override column in the table being redefined is to be renamed to
override_commission, and every override commission is to be raised by 2%, the
correct column mapping pair is:

override*1.02  override_commission

If you supply '*' or NULL as the column mapping string, it is assumed that all the
columns (with their names unchanged) are to be included in the interim table.
Otherwise, only those columns specified explicitly in the string are considered. The
order of the column mapping pairs is unimportant.
For examples of column mapping strings, see "Online Table Redefinition Examples" on
page 15-30.
Data Conversions When mapping columns, you can convert data types, with some
restrictions.
If you provide '*' or NULL as the column mapping string, only the implicit conversions
permitted by SQL are supported. For example, you can convert from CHAR to
VARCHAR2, from INTEGER to NUMBER, and so on.
If you want to perform other data type conversions, including converting from one
object type to another or one collection type to another, you must provide a column
mapping pair with an expression that performs the conversion. The expression can
include the CAST function, built-in functions like TO_NUMBER, conversion functions
that you create, and so on.

Creating Dependent Objects Automatically
You use the COPY_TABLE_DEPENDENTS procedure to automatically create dependent
objects on the interim table.
You can discover if errors occurred while copying dependent objects by checking the
num_errors output argument. If the ignore_errors argument is set to TRUE, the
COPY_TABLE_DEPENDENTS procedure continues copying dependent objects even if
an error is encountered when creating an object. You can view these errors by querying
the DBA_REDEFINITION_ERRORS view.
Reasons for errors include:
■   A lack of system resources
■   A change in the logical structure of the table that would require recoding the
    dependent object.
    See Example 3 in "Online Table Redefinition Examples" on page 15-30 for a
    discussion of this type of error.
If ignore_errors is set to FALSE, the COPY_TABLE_DEPENDENTS procedure stops
copying objects as soon as any error is encountered.
After you correct any errors you can again attempt to copy the dependent objects by
reexecuting the COPY_TABLE_DEPENDENTS procedure. Optionally you can create the
objects manually and then register them as explained in "Creating Dependent Objects
Manually". The COPY_TABLE_DEPENDENTS procedure can be used multiple times as
necessary. If an object has already been successfully copied, it is not copied again.

Creating Dependent Objects Manually
If you manually create dependent objects on the interim table with SQL*Plus or
Enterprise Manager, you must then use the REGISTER_DEPENDENT_OBJECT
procedure to register the dependent objects. Registering dependent objects enables the
redefinition completion process to restore dependent object names to what they were
before redefinition.
You would also use the REGISTER_DEPENDENT_OBJECT procedure if the
COPY_TABLE_DEPENDENTS procedure failed to copy a dependent object and manual
intervention is required.
You can query the DBA_REDEFINITION_OBJECTS view to determine which
dependent objects are registered. This view shows dependent objects that were
registered explicitly with the REGISTER_DEPENDENT_OBJECT procedure or implicitly
with the COPY_TABLE_DEPENDENTS procedure. Only current information is shown in
the view.
The UNREGISTER_DEPENDENT_OBJECT procedure can be used to unregister a
dependent object on the table being redefined and on the interim table.

Results of the Redefinition Process
The following are the end results of the redefinition process:
■   The original table is redefined with the columns, indexes, constraints, grants,
    triggers, and statistics of the interim table.
■   Dependent objects that were registered, either explicitly using
    REGISTER_DEPENDENT_OBJECT or implicitly using COPY_TABLE_DEPENDENTS,
    are renamed automatically so that dependent object names on the redefined table
    are the same as before redefinition.
    Note: If no registration is done or no automatic copying is done, then
    you must manually rename the dependent objects.
■   The referential constraints involving the interim table now involve the redefined
    table and are enabled.
■   Any indexes, triggers, grants, and constraints defined on the original table (prior
    to redefinition) are transferred to the interim table and are dropped when the user
    drops the interim table. Any referential constraints involving the original table
    before the redefinition now involve the interim table and are disabled.
■   Any PL/SQL procedures and cursors defined on the original table (prior to
    redefinition) are invalidated. They are automatically revalidated whenever they
    are used next.
    Note: The revalidation can fail if the logical structure of the table was
    changed as a result of the redefinition process.

Performing Intermediate Synchronization
After the redefinition process has been started by calling START_REDEF_TABLE and
before FINISH_REDEF_TABLE has been called, it is possible that a large number of
DML statements have been executed on the original table. If you know that this is the
case, it is recommended that you periodically synchronize the interim table with the
original table. This is done by calling the SYNC_INTERIM_TABLE procedure. Calling
this procedure reduces the time taken by FINISH_REDEF_TABLE to complete the
redefinition process. There is no limit to the number of times that you can call
SYNC_INTERIM_TABLE.
The small amount of time that the original table is locked during
FINISH_REDEF_TABLE is independent of whether SYNC_INTERIM_TABLE has been
called.

Aborting Online Table Redefinition and Cleaning Up After Errors
In the event that an error is raised during the redefinition process, or if you choose to
terminate the redefinition process, call ABORT_REDEF_TABLE. This procedure drops
temporary logs and tables associated with the redefinition process. After this
procedure is called, you can drop the interim table and its dependent objects.
If the online redefinition process must be restarted and you do not first call
ABORT_REDEF_TABLE, subsequent attempts to redefine the table will fail.

Restrictions for Online Redefinition of Tables
The following restrictions apply to the online redefinition of tables:
■   If the table is to be redefined using primary key or pseudo-primary keys (unique
    keys or constraints with all component columns having not null constraints), then
    the post-redefinition table must have the same primary key or pseudo-primary
    key columns. If the table is to be redefined using rowids, then the table must not
    be an index-organized table.
■   Tables that are replicated in an n-way master configuration can be redefined, but
    horizontal subsetting (subset of rows in the table), vertical subsetting (subset of
    columns in the table), and column transformations are not allowed.
■   The overflow table of an index-organized table cannot be redefined online
    independently.
■   Tables with fine-grained access control (row-level security) cannot be redefined
    online.
■   Tables with BFILE columns cannot be redefined online.
■   Tables with LONG columns can be redefined online, but those columns must be
    converted to CLOBs. Also, LONG RAW columns must be converted to BLOBs.
    Tables with LOB columns are acceptable.
■   On a system with sufficient resources for parallel execution, and in the case where
    the interim table is not partitioned, redefinition of a LONG column to a LOB column
    can be executed in parallel, provided that:
    –   The segment used to store the LOB column in the interim table belongs to a
        locally managed tablespace with Automatic Segment Space Management
        (ASSM) enabled.
    –   There is a simple mapping from one LONG column to one LOB column, and the
        interim table has only one LOB column.
    In the case where the interim table is partitioned, the normal methods for parallel
    execution for partitioning apply.
■   Tables in the SYS and SYSTEM schema cannot be redefined online.
■   Temporary tables cannot be redefined.
■   A subset of rows in the table cannot be redefined.
■   Only simple deterministic expressions, sequences, and SYSDATE can be used
    when mapping the columns in the interim table to those of the original table. For
    example, subqueries are not allowed.
■   If new columns are being added as part of the redefinition and there are no
    column mappings for these columns, then they must not be declared NOT NULL
    until the redefinition is complete.
■   There cannot be any referential constraints between the table being redefined and
    the interim table.
■   Table redefinition cannot be done NOLOGGING.
■   For materialized view logs and queue tables, online redefinition is restricted to
    changes in physical properties. No horizontal or vertical subsetting is permitted,
    nor are any column transformations. The only valid value for the column mapping
    string is NULL.
■   Tables with materialized view logs defined on them cannot be redefined online.
■   You can convert a VARRAY to a nested table with the CAST operator in the column
    mapping. However, you cannot convert a nested table to a VARRAY.

Online Redefinition of a Single Partition
Beginning with Oracle Database 10g Release 2, you can redefine online a single
partition of a table. This is useful if, for example, you want to move a partition to a
different tablespace and keep the partition available for DML during the operation.
Another use for this capability is redefining online an entire table, but doing it one
partition at a time to reduce resource requirements. For example, if you want to move
a table to a different tablespace, you can move it one partition at a time to minimize
the free space and undo space required to complete the move.
Redefining a single partition differs from redefining a table in the following ways:
■   There is no need to copy dependent objects. It is not valid to use the
    COPY_TABLE_DEPENDENTS procedure when redefining a single partition.
■   You must manually create any local indexes on the interim table.
■   The column mapping string for START_REDEF_TABLE must be NULL.

Note: If it is not important to keep a partition available for DML when
moving it to another tablespace, you can use the simpler ALTER
TABLE...MOVE PARTITION command.

See also:
■   "Moving Partitions" on page 17-42
■   Oracle Database SQL Reference

Rules for Online Redefinition of a Single Partition
The underlying mechanism for redefinition of a single partition is the exchange
partition capability of the database (ALTER TABLE...EXCHANGE PARTITION). Rules
and restrictions for online redefinition of a single partition are therefore governed by
this mechanism. Here are some general restrictions:
■   No logical changes (such as adding or dropping a column) are permitted.
■   No changes to the partitioning method (such as changing from range partitioning
    to hash partitioning) are permitted.
■   If a global index is present, it is marked as UNUSABLE when redefinition of any
    table partition is complete.

Here are the rules for defining the interim table:
■   If the partition being redefined is a range, hash, or list partition, the interim
    table must be non-partitioned.
■   If the partition being redefined is a range partition of a composite range-hash
    partitioned table, the interim table must be a hash partitioned table. In addition,
    the partitioning key of the interim table must be identical to the subpartitioning
    key of the range-hash partitioned table, and the number of partitions in the
    interim table must be identical to the number of subpartitions in the range
    partition being redefined.
■   If the partition being redefined is a range partition of a composite range-list
    partitioned table, the interim table must be a list partitioned table. In addition,
    the partitioning key of the interim table must be identical to the subpartitioning
    key of the range-list partitioned table, and the values lists of the interim table's
    list partitions must exactly match the values lists of the list subpartitions in the
    range partition being redefined.

These additional rules apply if the table being redefined is a partitioned
index-organized table:
■   The interim table must also be index-organized.
■   The original and interim tables must have primary keys on the same columns, in
    the same order.
■   If key compression is enabled, it must be enabled for both the original and interim
    tables, with the same prefix length.
■   Both the original and interim tables must have overflow segments, or neither can
    have them. Likewise for mapping tables.
■   Both the original and interim tables must have identical storage attributes for any
    LOB columns.

See Also: "Exchanging Partitions" on page 17-32

Online Table Redefinition Examples
For the following examples, see Oracle Database PL/SQL Packages and Types Reference for
descriptions of all DBMS_REDEFINITION subprograms.
Example      Description
-----------  -------------------------------------------------------------------
Example 1    Redefines a table by adding new columns and adding partitioning.
Example 2    Demonstrates redefinition with object datatypes.
Example 3    Demonstrates redefinition with manually registered dependent objects.
Example 4    Redefines a single table partition, moving it to a different
             tablespace.

Example 1
This example illustrates online redefinition of the previously created table
hr.admin_emp, which at this point only contains columns: empno, ename, job,
deptno. The table is redefined as follows:
■   New columns mgr, hiredate, sal, and bonus are added. (These existed in the
    original table but were dropped in previous examples.)
■   The new column bonus is initialized to 0.
■   The column deptno has its value increased by 10.
■   The redefined table is partitioned by range on empno.

The steps in this redefinition are illustrated below.
1.  Verify that the table is a candidate for online redefinition. In this case you
    specify that the redefinition is to be done using primary keys or pseudo-primary
    keys.
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('hr', 'admin_emp',
        DBMS_REDEFINITION.CONS_USE_PK);
    END;
    /

2.  Create an interim table hr.int_admin_emp.
    CREATE TABLE hr.int_admin_emp
      (empno      NUMBER(5) PRIMARY KEY,
       ename      VARCHAR2(15) NOT NULL,
       job        VARCHAR2(10),
       mgr        NUMBER(5),
       hiredate   DATE DEFAULT (sysdate),
       sal        NUMBER(7,2),
       deptno     NUMBER(3) NOT NULL,
       bonus      NUMBER(7,2) DEFAULT(1000))
    PARTITION BY RANGE(empno)
      (PARTITION emp1000 VALUES LESS THAN (1000) TABLESPACE admin_tbs,
       PARTITION emp2000 VALUES LESS THAN (2000) TABLESPACE admin_tbs2);

3.  Start the redefinition process.
    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp',
        'empno empno, ename ename, job job, deptno+10 deptno, 0 bonus',
        dbms_redefinition.cons_use_pk);
    END;
    /

4.  Copy dependent objects. (Automatically create any triggers, indexes, grants, and
    constraints on hr.int_admin_emp.)
    DECLARE
      num_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('hr', 'admin_emp', 'int_admin_emp',
        DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
    END;
    /
    Note that the ignore_errors argument is set to TRUE for this call. The reason is
    that the interim table was created with a primary key constraint, and when
    COPY_TABLE_DEPENDENTS attempts to copy the primary key constraint and
    index from the original table, errors occur. You can ignore these errors, but you
    must run the query shown in the next step to see if there are other errors.
5.  Query the DBA_REDEFINITION_ERRORS view to check for errors.
    SQL> select object_name, base_table_name, ddl_txt from
         DBA_REDEFINITION_ERRORS;

    OBJECT_NAME   BASE_TABLE_NAME  DDL_TXT
    ------------- ---------------- ------------------------------
    SYS_C005836   ADMIN_EMP        CREATE UNIQUE INDEX "HR"."TMP$
                                   $_SYS_C0058360" ON "HR"."INT_A
                                   DMIN_EMP" ("EMPNO")

    SYS_C005836   ADMIN_EMP        ALTER TABLE "HR"."INT_ADMIN_EM
                                   P" ADD CONSTRAINT "TMP$$_SYS_C
                                   0058360" PRIMARY KEY

    These errors are caused by the existing primary key constraint on the interim
    table and can be ignored. Note that with this approach, the names of the primary
    key constraint and index on the post-redefined table are changed. An alternate
    approach, one that avoids errors and name changes, would be to define the
    interim table without a primary key constraint. In this case, the primary key
    constraint and index are copied from the original table.
    Note: The best approach is to define the interim table with a primary
    key constraint, use REGISTER_DEPENDENT_OBJECT to register the
    primary key constraint and index, and then copy the remaining
    dependent objects with COPY_TABLE_DEPENDENTS. This approach
    avoids errors and ensures that the redefined table always has a
    primary key and that the dependent object names do not change.

6.  Optionally, synchronize the interim table hr.int_admin_emp.
    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE('hr', 'admin_emp', 'int_admin_emp');
    END;
    /
7.  Complete the redefinition.
    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp');
    END;
    /
    The table hr.admin_emp is locked in the exclusive mode only for a small
    window toward the end of this step. After this call the table hr.admin_emp is
    redefined such that it has all the attributes of the hr.int_admin_emp table.
8.  Drop the interim table.

Example 2
This example redefines a table to change columns into object attributes. The redefined
table gets a new column that is an object type.
The original table, named CUSTOMER, is defined as follows:
Name             Type
---------------  --------------
CID              NUMBER          <- Primary key
NAME             VARCHAR2(30)
STREET           VARCHAR2(100)
CITY             VARCHAR2(30)
STATE            VARCHAR2(2)
ZIP              NUMBER(5)

The type definition for the new object is:
CREATE TYPE ADDR_T AS OBJECT (
street VARCHAR2(100),
city VARCHAR2(30),
state VARCHAR2(2),
zip NUMBER(5, 0) );

Here are the steps for this redefinition:
1.  Verify that the table is a candidate for online redefinition. Specify that the
    redefinition is to be done using primary keys or pseudo-primary keys.
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('STEVE', 'CUSTOMER',
        DBMS_REDEFINITION.CONS_USE_PK);
    END;
    /

2.  Create the interim table int_customer.
    CREATE TABLE INT_CUSTOMER(
      CID  NUMBER,
      NAME VARCHAR2(30),
      ADDR ADDR_T);
    Note that no primary key is defined on the interim table. When dependent objects
    are copied in step 5, the primary key constraint and index are copied.
3.  Because CUSTOMER is a very large table, specify parallel operations for the next
    step.
    alter session force parallel dml parallel 4;
    alter session force parallel query parallel 4;

4. Start the redefinition process using primary keys.
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE(
uname       => 'STEVE',
orig_table  => 'CUSTOMER',
int_table   => 'INT_CUSTOMER',
col_mapping => 'cid cid, name name,
   addr_t(street, city, state, zip) addr');
END;
/

Note that addr_t(street, city, state, zip) is a call to the object
constructor.
5. Copy dependent objects.
DECLARE
num_errors PLS_INTEGER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
'STEVE','CUSTOMER','INT_CUSTOMER',DBMS_REDEFINITION.CONS_ORIG_PARAMS,
TRUE, TRUE, TRUE, FALSE, num_errors, TRUE);
END;
/

Note that for this call, the final argument indicates that table statistics are to be
copied to the interim table.
6. Optionally synchronize the interim table.
BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE('STEVE', 'CUSTOMER', 'INT_CUSTOMER');
END;
/

7. Complete the redefinition.
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE('STEVE', 'CUSTOMER', 'INT_CUSTOMER');
END;
/

8. Drop the interim table.

Example 3
This example addresses the situation where a dependent object must be manually
created and registered.
Consider the case where a table T1 has a column named C1, and where this column
becomes C2 after the redefinition. Assume that there is an index Index1 on C1. In this
case, COPY_TABLE_DEPENDENTS tries to create an index on the interim table
corresponding to Index1, and tries to create it on a column C1, which does not exist
on the interim table. This results in an error. You must therefore manually create the
index on column C2 and register it. Here are the steps:
1. Create the interim table INT_T1 and create an index Int_Index1 on column C2.
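A minimal sketch of this step, with a hypothetical column definition (the full definition of T1 is not shown in this example):

CREATE TABLE int_t1 (c2 NUMBER);
CREATE INDEX int_index1 ON int_t1 (c2);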

2. Ensure that T1 is a candidate for online redefinition with CAN_REDEF_TABLE, and
then begin the redefinition process with START_REDEF_TABLE.

3. Register the original (Index1) and interim (Int_Index1) dependent objects.
BEGIN
DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT(
uname         => 'STEVE',
orig_table    => 'T1',
int_table     => 'INT_T1',
dep_type      => DBMS_REDEFINITION.CONS_INDEX,
dep_owner     => 'STEVE',
dep_orig_name => 'Index1',
dep_int_name  => 'Int_Index1');
END;
/
4. Use COPY_TABLE_DEPENDENTS to copy the remaining dependent objects, as
sketched below.
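A sketch modeled on the COPY_TABLE_DEPENDENTS calls shown earlier; the registered index Index1 is skipped automatically and is not copied again:

DECLARE
num_errors PLS_INTEGER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
'STEVE', 'T1', 'INT_T1', DBMS_REDEFINITION.CONS_ORIG_PARAMS,
TRUE, TRUE, TRUE, TRUE, num_errors);
END;
/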

5. Optionally synchronize the interim table.

6. Complete the redefinition and drop the interim table.

Example 4
This example demonstrates redefining a single partition. It moves the oldest partition
of a range-partitioned sales table to a tablespace named TBS_LOW_FREQ. The table
containing the partition to be redefined is defined as follows:
CREATE TABLE salestable
(s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
TABLESPACE users
PARTITION BY RANGE(s_saledate)
(PARTITION sal03q1 VALUES LESS THAN (TO_DATE('01-APR-2003', 'DD-MON-YYYY')),
PARTITION sal03q2 VALUES LESS THAN (TO_DATE('01-JUL-2003', 'DD-MON-YYYY')),
PARTITION sal03q3 VALUES LESS THAN (TO_DATE('01-OCT-2003', 'DD-MON-YYYY')),
PARTITION sal03q4 VALUES LESS THAN (TO_DATE('01-JAN-2004', 'DD-MON-YYYY')));

The table has a local partitioned index that is defined as follows:
CREATE INDEX sales_index ON salestable
(s_saledate, s_productid, s_custid) LOCAL;

Here are the steps. In the following procedure calls, note the extra argument: partition
name (part_name).
1. Ensure that salestable is a candidate for redefinition.
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE(
uname        => 'STEVE',
tname        => 'SALESTABLE',
options_flag => DBMS_REDEFINITION.CONS_USE_ROWID,
part_name    => 'sal03q1');
END;
/

2. Create the interim table in the TBS_LOW_FREQ tablespace. Because this is a
redefinition of a range partition, the interim table is non-partitioned.
CREATE TABLE int_salestable
(s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
TABLESPACE tbs_low_freq;

3. Start the redefinition process using rowid.
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE(
uname        => 'STEVE',
orig_table   => 'salestable',
int_table    => 'int_salestable',
col_mapping  => NULL,
options_flag => DBMS_REDEFINITION.CONS_USE_ROWID,
part_name    => 'sal03q1');
END;
/

4. Manually create any local indexes on the interim table.
CREATE INDEX int_sales_index ON int_salestable
(s_saledate, s_productid, s_custid)
TABLESPACE tbs_low_freq;

5. Optionally synchronize the interim table.
BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
uname      => 'STEVE',
orig_table => 'salestable',
int_table => 'int_salestable',
part_name => 'sal03q1');
END;
/

6. Complete the redefinition.
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE(
uname      => 'STEVE',
orig_table => 'salestable',
int_table => 'int_salestable',
part_name => 'sal03q1');
END;
/

7. Drop the interim table.

The following query shows that the oldest partition has been moved to the new
tablespace:
select partition_name, tablespace_name from user_tab_partitions
where table_name = 'SALESTABLE';

PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------
SAL03Q1                        TBS_LOW_FREQ
SAL03Q2                        USERS
SAL03Q3                        USERS
SAL03Q4                        USERS

4 rows selected.

Privileges Required for the DBMS_REDEFINITION Package
Execute privileges on the DBMS_REDEFINITION package are granted to
EXECUTE_CATALOG_ROLE. In addition to having execute privileges on this package,
you must be granted the following privileges:
■ CREATE ANY TABLE
■ ALTER ANY TABLE
■ DROP ANY TABLE
■ LOCK ANY TABLE
■ SELECT ANY TABLE

The following additional privileges are required to execute
COPY_TABLE_DEPENDENTS:
■ CREATE ANY TRIGGER
■ CREATE ANY INDEX

Auditing Table Changes Using Flashback Transaction Query
Note: You must be using automatic undo management to use the
Flashback Transaction Query feature. It is based on undo
information stored in an undo tablespace.

To understand how to configure your database for the Flashback
Transaction Query feature, see "Viewing Information About Undo"
on page 10-10.
You may discover that somehow data in a table has been inappropriately changed. To
research this change, you can use multiple flashback queries to view row data at
specific points in time. More efficiently, you can use the Flashback Version Query
feature to view all changes to a row over a period of time. That feature lets you append
a VERSIONS clause to a SELECT statement that specifies an SCN or timestamp range
between which you want to view changes to row values. The query also can return
associated metadata, such as the transaction responsible for the change.
Further, once you identify an erroneous transaction, you can then use the Flashback
Transaction Query feature to identify other changes that were done by the transaction,
and to request the undo SQL to reverse those changes. But using the undo SQL is only
one means of recovering your data. You also have the option of using the Flashback
Table feature, described in "Recovering Tables Using the Flashback Table Feature" on
page 15-36, to restore the table to a state before the changes were made.
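As a hedged illustration (the table, the one-hour window, and the transaction ID are hypothetical), a Flashback Version Query followed by a Flashback Transaction Query might look like this:

SELECT versions_xid, versions_operation, versions_starttime
FROM hr.admin_emp
VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR
AND SYSTIMESTAMP
WHERE empno = 100;

-- Feed an XID returned above into FLASHBACK_TRANSACTION_QUERY to see all
-- changes made by that transaction and the undo SQL that reverses them:
SELECT operation, table_name, undo_sql
FROM flashback_transaction_query
WHERE xid = HEXTORAW('0200030055000000');  -- hypothetical transaction ID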
See Also: Oracle Database Application Developer's Guide -
Fundamentals for directions for using Oracle Flashback Query and
Flashback Version Query.

Recovering Tables Using the Flashback Table Feature
Note: You must be using automatic undo management to use the
Flashback Table feature. It is based on undo information stored in
an undo tablespace.

To understand how to configure your undo tablespace for the
Flashback Table feature, see "Viewing Information About Undo" on
page 10-10.
The FLASHBACK TABLE statement enables users to recover a table to a previous point
in time. It provides a fast, online solution for recovering a table that has been
accidentally modified or deleted by a user or application. In many cases, this
Flashback Table feature alleviates the need for you, as the administrator, to perform
more complicated point in time recovery operations.
The functionality of the Flashback Table feature can be summarized as follows:
■ Restores all data in a specified table to a previous point in time described by a
timestamp or SCN.
■ Performs the restore operation online.
■ Automatically maintains all of the table attributes, such as indexes, triggers, and
constraints that are necessary for an application to function with the flashed-back
table.
■ Maintains any remote state in a distributed environment. For example, all of the
table modifications required by replication if a replicated table is flashed back.
■ Maintains data integrity as specified by constraints. Tables are flashed back
provided none of the table constraints are violated. This includes any referential
integrity constraints specified between a table included in the FLASHBACK TABLE
statement and another table that is not included in the FLASHBACK TABLE
statement.
■ Even after a flashback operation, the data in the original table is not lost. You can
later revert to the original state.
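A minimal sketch (the timestamp is hypothetical; row movement must be enabled on a table before it can be flashed back in this way):

ALTER TABLE hr.admin_emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE hr.admin_emp TO TIMESTAMP
TO_TIMESTAMP('2006-02-05 21:00:00', 'YYYY-MM-DD HH24:MI:SS');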
See Also: Oracle Database Backup and Recovery Advanced User's
Guide for more information about the FLASHBACK TABLE
statement.

Dropping Tables
To drop a table that you no longer need, use the DROP TABLE statement. The table
must be contained in your schema or you must have the DROP ANY TABLE system
privilege.
Caution: Before dropping a table, familiarize yourself with the
consequences of doing so:
■ Dropping a table removes the table definition from the data
dictionary. All rows of the table are no longer accessible.
■ All indexes and triggers associated with a table are dropped.
■ All views and PL/SQL program units dependent on a dropped
table remain, yet become invalid (not usable). See "Managing
Object Dependencies" on page 13-16 for information about how
the database manages dependencies.
■ All synonyms for a dropped table remain, but return an error
when used.
■ All extents allocated for a table that is dropped are returned to
the free space of the tablespace and can be used by any other
object requiring new extents or new objects. All rows
corresponding to a clustered table are deleted from the blocks
of the cluster. Clustered tables are the subject of Chapter 18,
"Managing Clusters".


The following statement drops the hr.int_admin_emp table:
DROP TABLE hr.int_admin_emp;

If the table to be dropped contains any primary or unique keys referenced by foreign
keys of other tables and you intend to drop the FOREIGN KEY constraints of the child
tables, then include the CASCADE clause in the DROP TABLE statement, as shown
below:
DROP TABLE hr.admin_emp CASCADE CONSTRAINTS;

When you drop a table, normally the database does not immediately release the space
associated with the table. Rather, the database renames the table and places it in a
recycle bin, where it can later be recovered with the FLASHBACK TABLE statement if
you find that you dropped the table in error. If you want to immediately
release the space associated with the table at the time you issue the DROP TABLE
statement, include the PURGE clause as shown in the following statement:
DROP TABLE hr.admin_emp PURGE;

Perhaps instead of dropping a table, you want to truncate it. The TRUNCATE statement
provides a fast, efficient method for deleting all rows from a table, but it does not affect
any structures associated with the table being truncated (column definitions,
constraints, triggers, and so forth) or authorizations. The TRUNCATE statement is
discussed in "Truncating Tables and Clusters" on page 13-5.
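For example, the following statement removes all rows from the table while leaving its structure in place:

TRUNCATE TABLE hr.admin_emp;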

Using Flashback Drop and Managing the Recycle Bin
When you drop a table, the database does not immediately remove the space
associated with the table. The database renames the table and places it and any
associated objects in a recycle bin, where, in case the table was dropped in error, it can
be recovered at a later time. This feature is called Flashback Drop, and the FLASHBACK
TABLE statement is used to restore the table. Before discussing the use of the
FLASHBACK TABLE statement for this purpose, it is important to understand how the
recycle bin works, and how you manage its contents.
This section contains the following topics:
■ What Is the Recycle Bin?
■ Viewing and Querying Objects in the Recycle Bin
■ Purging Objects in the Recycle Bin
■ Restoring Tables from the Recycle Bin

What Is the Recycle Bin?
The recycle bin is actually a data dictionary table containing information about
dropped objects. Dropped tables and any associated objects such as indexes,
constraints, nested tables, and the like are not removed and still occupy space. They
continue to count against user space quotas until they are specifically purged from
the recycle bin, or until the unlikely situation arises where the database must purge
them because of tablespace space constraints.
Each user can be thought of as having his own recycle bin, since unless a user has the
SYSDBA privilege, the only objects that the user has access to in the recycle bin are
those that the user owns. A user can view his objects in the recycle bin using the
following statement:


SELECT * FROM RECYCLEBIN;

When you drop a tablespace including its contents, the objects in the tablespace are not
placed in the recycle bin and the database purges any entries in the recycle bin for
objects located in the tablespace. The database also purges any recycle bin entries for
objects in a tablespace when you drop the tablespace, not including contents, and the
tablespace is otherwise empty. Likewise:
■ When you drop a user, any objects belonging to the user are not placed in the
recycle bin and any objects in the recycle bin are purged.
■ When you drop a cluster, its member tables are not placed in the recycle bin and
any former member tables in the recycle bin are purged.
■ When you drop a type, any dependent objects such as subtypes are not placed in
the recycle bin and any former dependent objects in the recycle bin are purged.

Object Naming in the Recycle Bin
When a dropped table is moved to the recycle bin, the table and its associated objects
are given system-generated names. This is necessary to avoid name conflicts that may
arise if multiple tables have the same name. This could occur under the following
circumstances:
■ A user drops a table, re-creates it with the same name, then drops it again.
■ Two users have tables with the same name, and both users drop their tables.
The renaming convention is as follows:
BIN$unique_id$version
where:
■ unique_id is a 26-character globally unique identifier for this object, which
makes the recycle bin name unique across all databases
■ version is a version number assigned by the database

Enabling and Disabling the Recycle Bin
You can enable and disable the recycle bin with the recyclebin initialization
parameter. When the recycle bin is enabled, dropped tables and their dependent
objects are placed in the recycle bin. When the recycle bin is disabled, dropped tables
and their dependent objects are not placed in the recycle bin; they are just dropped,
and you must use other means to recover them (such as recovering from backup).
The recycle bin is enabled by default.
To disable the recycle bin:
■ Issue one of the following statements:
ALTER SESSION SET recyclebin = OFF;
ALTER SYSTEM SET recyclebin = OFF;

To enable the recycle bin:
■ Issue one of the following statements:
ALTER SESSION SET recyclebin = ON;
ALTER SYSTEM SET recyclebin = ON;


Enabling and disabling the recycle bin with an ALTER SYSTEM or ALTER SESSION
statement takes effect immediately. Disabling the recycle bin does not purge or
otherwise affect objects already in the recycle bin.
Like any other initialization parameter, you can set the initial value of the
recyclebin parameter in the text initialization file initSID.ora:
recyclebin=on

"Understanding Initialization Parameters" on page 2-19
for more information on initialization parameters.

See Also:

Viewing and Querying Objects in the Recycle Bin
Oracle Database provides two views for obtaining information about objects in the
recycle bin:
View               Description
-----------------  ------------------------------------------------------------
USER_RECYCLEBIN    This view can be used by users to see their own dropped
                   objects in the recycle bin. It has a synonym RECYCLEBIN, for
                   ease of use.
DBA_RECYCLEBIN     This view gives administrators visibility to all dropped
                   objects in the recycle bin.

One use for these views is to identify the name that the database has assigned to a
dropped object, as shown in the following example:
SELECT object_name, original_name FROM dba_recyclebin
WHERE owner = 'HR';

OBJECT_NAME                    ORIGINAL_NAME
------------------------------ --------------------------------
BIN$yrMKlZaLMhfgNAgAIMenRA==$0 EMPLOYEES

You can also view the contents of the recycle bin using the SQL*Plus command SHOW
RECYCLEBIN.
SQL> show recyclebin
ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
EMPLOYEES        BIN$yrMKlZaVMhfgNAgAIMenRA==$0 TABLE        2003-10-27:14:00:19

You can query objects that are in the recycle bin, just as you can query other objects.
However, you must specify the name of the object as it is identified in the recycle bin.
For example:
SELECT * FROM "BIN$yrMKlZaVMhfgNAgAIMenRA==$0";

Purging Objects in the Recycle Bin
If you decide that you are never going to restore an item from the recycle bin, you can
use the PURGE statement to remove the items and their associated objects from the
recycle bin and release their storage space. You need the same privileges as if you were
dropping the item.
When you use the PURGE statement to purge a table, you can use the name that the
table is known by in the recycle bin or the original name of the table. The recycle bin

name can be obtained from either the DBA_ or USER_RECYCLEBIN view as shown in
"Viewing and Querying Objects in the Recycle Bin" on page 15-40. The following
hypothetical example purges the table hr.int_admin_emp, which was renamed to
BIN$jsleilx392mk2=293$0 when it was placed in the recycle bin:
PURGE TABLE BIN$jsleilx392mk2=293$0;

You can achieve the same result with the following statement:
PURGE TABLE int_admin_emp;

You can use the PURGE statement to purge all the objects in the recycle bin that are
from a specified tablespace or only the tablespace objects belonging to a specified user,
as shown in the following examples:
PURGE TABLESPACE example;
PURGE TABLESPACE example USER oe;

Users can purge the recycle bin of their own objects, and release space for objects, by
using the following statement:
PURGE RECYCLEBIN;

If you have the SYSDBA privilege, then you can purge the entire recycle bin by
specifying DBA_RECYCLEBIN, instead of RECYCLEBIN in the previous statement.
You can also use the PURGE statement to purge an index from the recycle bin.
See Also: Oracle Database SQL Reference for more information on
the PURGE statement

Restoring Tables from the Recycle Bin
Use the FLASHBACK TABLE ... TO BEFORE DROP statement to recover objects from the
recycle bin. You can specify either the name of the table in the recycle bin or the
original table name. An optional RENAME TO clause lets you rename the table as you
recover it. The recycle bin name can be obtained from either the DBA_ or
USER_RECYCLEBIN view as shown in "Viewing and Querying Objects in the Recycle
Bin" on page 15-40. To use the FLASHBACK TABLE ... TO BEFORE DROP statement, you
need the same privileges you need to drop the table.
The following example restores the int_admin_emp table and assigns it a new name:
FLASHBACK TABLE int_admin_emp TO BEFORE DROP
RENAME TO int2_admin_emp;

The system-generated recycle bin name is very useful if you have dropped a table
multiple times. For example, suppose you have three versions of the
int2_admin_emp table in the recycle bin and you want to recover the second version.
You can do this by issuing two FLASHBACK TABLE statements, or you can query the
recycle bin and then flashback to the appropriate system-generated name, as shown in
the following example. Including the create time in the query can help you verify that
you are restoring the correct table.
SELECT object_name, original_name, createtime FROM recyclebin;

OBJECT_NAME                    ORIGINAL_NAME   CREATETIME
------------------------------ --------------- -------------------
BIN$yrMKlZaLMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP  2006-02-05:21:05:52
BIN$yrMKlZaVMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP  2006-02-05:21:25:13
BIN$yrMKlZaQMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP  2006-02-05:22:05:53

FLASHBACK TABLE BIN$yrMKlZaVMhfgNAgAIMenRA==$0 TO BEFORE DROP;

Restoring Dependent Objects
When you restore a table from the recycle bin, dependent objects such as indexes do
not get their original names back; they retain their system-generated recycle bin
names. You must manually rename dependent objects if you want to restore their
original names. If you plan to manually restore original names for dependent objects,
ensure that you make note of each dependent object’s system-generated recycle bin
name before you restore the table.
The following is an example of restoring the original names of some of the indexes of
the dropped table JOB_HISTORY, from the HR sample schema. The example assumes
that you are logged in as the HR user.
1. After dropping JOB_HISTORY and before restoring it from the recycle bin, run the
following query:
SELECT OBJECT_NAME, ORIGINAL_NAME, TYPE FROM RECYCLEBIN;

OBJECT_NAME                    ORIGINAL_NAME             TYPE
------------------------------ ------------------------- --------
BIN$DBo9UChtZSbgQFeMiAdCcQ==$0 JHIST_JOB_IX              INDEX
BIN$DBo9UChuZSbgQFeMiAdCcQ==$0 JHIST_EMPLOYEE_IX         INDEX
BIN$DBo9UChvZSbgQFeMiAdCcQ==$0 JHIST_DEPARTMENT_IX       INDEX
BIN$DBo9UChwZSbgQFeMiAdCcQ==$0 JHIST_EMP_ID_ST_DATE_PK   INDEX
BIN$DBo9UChxZSbgQFeMiAdCcQ==$0 JOB_HISTORY               TABLE

2. Restore the table with the following command:
FLASHBACK TABLE JOB_HISTORY TO BEFORE DROP;

3. Run the following query to verify that all JOB_HISTORY indexes retained their
system-generated recycle bin names:
SELECT INDEX_NAME FROM USER_INDEXES WHERE TABLE_NAME = 'JOB_HISTORY';

INDEX_NAME
------------------------------
BIN$DBo9UChwZSbgQFeMiAdCcQ==$0
BIN$DBo9UChtZSbgQFeMiAdCcQ==$0
BIN$DBo9UChuZSbgQFeMiAdCcQ==$0
BIN$DBo9UChvZSbgQFeMiAdCcQ==$0

4. Restore the original names of the first two indexes as follows:
ALTER INDEX "BIN$DBo9UChtZSbgQFeMiAdCcQ==$0" RENAME TO JHIST_JOB_IX;
ALTER INDEX "BIN$DBo9UChuZSbgQFeMiAdCcQ==$0" RENAME TO JHIST_EMPLOYEE_IX;

Note that double quotes are required around the system-generated names.

Managing Index-Organized Tables
This section describes aspects of managing index-organized tables, and contains the
following topics:
■ What Are Index-Organized Tables?
■ Creating Index-Organized Tables
■ Maintaining Index-Organized Tables
■ Creating Secondary Indexes on Index-Organized Tables
■ Analyzing Index-Organized Tables
■ Using the ORDER BY Clause with Index-Organized Tables
■ Converting Index-Organized Tables to Regular Tables

What Are Index-Organized Tables?
An index-organized table has a storage organization that is a variant of a primary
B-tree. Unlike an ordinary (heap-organized) table whose data is stored as an
unordered collection (heap), data for an index-organized table is stored in a B-tree
index structure in a primary key sorted manner. Each leaf block in the index structure
stores both the key and nonkey columns.
The structure of an index-organized table provides the following benefits:
■ Fast random access on the primary key because an index-only scan is sufficient.
And, because there is no separate table storage area, changes to the table data
(such as adding new rows, updating rows, or deleting rows) result only in
updating the index structure.
■ Fast range access on the primary key because the rows are clustered in primary
key order.
■ Lower storage requirements because duplication of primary keys is avoided. They
are not stored both in the index and underlying table, as is true with
heap-organized tables.

Index-organized tables have full table functionality. They support features such as
constraints, triggers, LOB and object columns, partitioning, parallel operations, online
reorganization, and replication. And, they offer these additional features:
■ Key compression
■ Overflow storage area and specific column placement
■ Secondary indexes, including bitmap indexes

Index-organized tables are ideal for OLTP applications, which require fast primary key
access and high availability. For example, queries and DML on an orders table used in
electronic order processing are predominantly primary-key based, and heavy volume
causes fragmentation, resulting in a frequent need to reorganize. Because an
index-organized table can be reorganized online without invalidating its secondary
indexes, the window of unavailability is greatly reduced or eliminated.
Index-organized tables are suitable for modeling application-specific index structures.
For example, content-based information retrieval applications containing text, image,
and audio data require inverted indexes that can be effectively modeled using
index-organized tables; a fundamental component of an internet search engine is such
an inverted index.
These are but a few of the applications for index-organized tables.
See Also:
■ Oracle Database Concepts for a more thorough description of
index-organized tables
■ Chapter 17, "Managing Partitioned Tables and Indexes" contains
information about partitioning index-organized tables

Creating Index-Organized Tables
You use the CREATE TABLE statement to create index-organized tables, but you must
provide additional information:
■ An ORGANIZATION INDEX qualifier, which indicates that this is an
index-organized table
■ A primary key, specified through a column constraint clause (for a single column
primary key) or a table constraint clause (for a multiple-column primary key)
Optionally, you can specify the following:
■ An OVERFLOW clause, which preserves dense clustering of the B-tree index by
storing the row column values exceeding a specified threshold in a separate
overflow data segment.
■ A PCTTHRESHOLD value, which defines the percentage of space reserved in the
index block for an index-organized table. Any portion of the row that exceeds the
specified threshold is stored in the overflow segment. In other words, the row is
broken at a column boundary into two pieces, a head piece and tail piece. The
head piece fits in the specified threshold and is stored along with the key in the
index leaf block. The tail piece is stored in the overflow area as one or more row
pieces. Thus, the index entry contains the key value, the nonkey column values
that fit the specified threshold, and a pointer to the rest of the row.
■ An INCLUDING clause, which can be used to specify nonkey columns that are to
be stored in the overflow data segment.

Creating an Index-Organized Table
The following statement creates an index-organized table:
CREATE TABLE admin_docindex(
token char(20),
doc_id NUMBER,
token_frequency NUMBER,
token_offsets VARCHAR2(512),
CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
OVERFLOW TABLESPACE admin_tbs2;

Specifying ORGANIZATION INDEX causes the creation of an index-organized table,
admin_docindex, where the key columns and nonkey columns reside in an index
defined on columns that designate the primary key for the table. In this case, the
primary key columns are token and doc_id. An overflow segment is specified and is
discussed in "Using the Overflow Clause" on page 15-45.

Creating Index-Organized Tables that Contain Object Types
Index-organized tables can store object types. The following example creates object
type admin_typ, then creates an index-organized table containing a column of object
type admin_typ:
CREATE OR REPLACE TYPE admin_typ AS OBJECT
(col1 NUMBER, col2 VARCHAR2(6));
CREATE TABLE admin_iot (c1 NUMBER primary key, c2 admin_typ)
ORGANIZATION INDEX;

You can also create an index-organized table of object types. For example:
15-44 Oracle Database Administrator’s Guide

Managing Index-Organized Tables

CREATE TABLE admin_iot2 OF admin_typ (col1 PRIMARY KEY)
ORGANIZATION INDEX;

The following example shows that index-organized tables store nested tables
efficiently. For a nested table column, the database internally creates a storage table to
hold all the nested table rows.
CREATE TYPE project_t AS OBJECT(pno NUMBER, pname VARCHAR2(80));
/
CREATE TYPE project_set AS TABLE OF project_t;
/
CREATE TABLE proj_tab (eno NUMBER, projects PROJECT_SET)
NESTED TABLE projects STORE AS emp_project_tab
((PRIMARY KEY(nested_table_id, pno))
ORGANIZATION INDEX)
RETURN AS LOCATOR;

The rows belonging to a single nested table instance are identified by a
nested_table_id column. If an ordinary table is used to store nested table columns,
the nested table rows typically get de-clustered. But when you use an index-organized
table, the nested table rows can be clustered based on the nested_table_id column.
See Also:
■ Oracle Database SQL Reference for details of the syntax used for
creating index-organized tables
■ "Creating Partitioned Index-Organized Tables" on page 17-18
for information about creating partitioned index-organized
tables
■ Oracle Database Application Developer's Guide - Object-Relational
Features for information about object types

Using the Overflow Clause
The overflow clause specified in the statement shown in "Creating an Index-Organized
Table" on page 15-44 indicates that any nonkey columns of rows exceeding 20% of the
block size are placed in a data segment stored in the admin_tbs2 tablespace. The key
columns should fit the specified threshold.
If an update of a nonkey column causes the row to decrease in size, the database
identifies the row piece (head or tail) to which the update is applicable and rewrites
that piece.
If an update of a nonkey column causes the row to increase in size, the database
identifies the piece (head or tail) to which the update is applicable and rewrites that
row piece. If the target of the update turns out to be the head piece, note that this piece
can again be broken into two to keep the row size below the specified threshold.
The nonkey columns that fit in the index leaf block are stored as a row head-piece that
contains a rowid field linking it to the next row piece stored in the overflow data
segment. The only columns that are stored in the overflow area are those that do not
fit.

Choosing and Monitoring a Threshold Value
You should choose a threshold value that can accommodate your key columns, as well
as the first few nonkey columns (if they are frequently accessed).


After choosing a threshold value, you can monitor tables to verify that the value you
specified is appropriate. You can use the ANALYZE TABLE ... LIST CHAINED ROWS
statement to determine the number and identity of rows exceeding the threshold
value.
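A hedged sketch using the admin_docindex table created earlier (the ANALYZE statement writes to a CHAINED_ROWS table, which must already exist; see the references below for the special considerations that apply to index-organized tables):

ANALYZE TABLE admin_docindex LIST CHAINED ROWS;
SELECT count(*) FROM chained_rows WHERE table_name = 'ADMIN_DOCINDEX';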
See Also:
■ "Listing Chained Rows of Tables and Clusters" on page 13-4 for
more information about chained rows
■ Oracle Database SQL Reference for syntax of the ANALYZE
statement

Using the INCLUDING Clause
In addition to specifying PCTTHRESHOLD, you can use the INCLUDING clause to
control which nonkey columns are stored with the key columns. The database
accommodates all nonkey columns up to the column specified in the INCLUDING
clause in the index leaf block, provided it does not exceed the specified threshold. All
nonkey columns beyond the column specified in the INCLUDING clause are stored in
the overflow area.
Note: Oracle Database moves all primary key columns of an
index-organized table to the beginning of the table (in their key
order), in order to provide efficient primary key based access. As an
example:

CREATE TABLE admin_iot4(a INT, b INT, c INT, d INT,
primary key(c,b))
ORGANIZATION INDEX;

The stored column order is: c b a d (instead of: a b c d). The
last primary key column is b, based on the stored column order.
The INCLUDING column can be the last primary key column (b in
this example), or any nonkey column (that is, any column after b in
the stored column order).
The following CREATE TABLE statement is similar to the one shown earlier in
"Creating an Index-Organized Table" on page 15-44 but is modified to create an
index-organized table where the token_offsets column value is always stored in
the overflow area:
CREATE TABLE admin_docindex2(
token CHAR(20),
doc_id NUMBER,
token_frequency NUMBER,
token_offsets VARCHAR2(512),
CONSTRAINT pk_admin_docindex2 PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
INCLUDING token_frequency
OVERFLOW TABLESPACE admin_tbs2;

Here, only nonkey columns prior to token_offsets (in this case a single column
only) are stored with the key column values in the index leaf block.


Parallelizing Index-Organized Table Creation
The CREATE TABLE...AS SELECT statement enables you to create an
index-organized table and load data from an existing table into it. By including the
PARALLEL clause, the load can be done in parallel.
The following statement creates an index-organized table in parallel by selecting rows
from the conventional table hr.jobs:
CREATE TABLE admin_iot3(i PRIMARY KEY, j, k, l)
ORGANIZATION INDEX
PARALLEL
AS SELECT * FROM hr.jobs;

This statement provides an alternative to parallel bulk-load using SQL*Loader.

Using Key Compression
Creating an index-organized table using key compression enables you to eliminate
repeated occurrences of key column prefix values.
Key compression breaks an index key into a prefix and a suffix entry. Compression is
achieved by sharing the prefix entries among all the suffix entries in an index block.
This sharing can lead to huge savings in space, allowing you to store more keys in
each index block while improving performance.
You can enable key compression using the COMPRESS clause while:
■

Creating an index-organized table

■

Moving an index-organized table

You can also specify the prefix length (as the number of key columns), which identifies
how the key columns are broken into a prefix and suffix entry.
CREATE TABLE admin_iot5(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k))
ORGANIZATION INDEX COMPRESS;

The preceding statement is equivalent to the following statement:
CREATE TABLE admin_iot6(i INT, j INT, k INT, l INT, PRIMARY KEY(i, j, k))
ORGANIZATION INDEX COMPRESS 2;

For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4) the repeated
occurrences of (1,2), (1,3) are compressed away.
You can also override the default prefix length used for compression as follows:
CREATE TABLE admin_iot7(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k))
ORGANIZATION INDEX COMPRESS 1;

For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4), the repeated
occurrences of 1 are compressed away.
You can disable compression as follows:
ALTER TABLE admin_iot5 MOVE NOCOMPRESS;

One application of key compression is in a time-series application that uses a set of
time-stamped rows belonging to a single item, such as a stock price. Index-organized
tables are attractive for such applications because of the ability to cluster rows based
on the primary key. By defining an index-organized table with primary key (stock
symbol, time stamp), you can store and manipulate time-series data efficiently. You
can achieve more storage savings by compressing repeated occurrences of the item
identifier (for example, the stock symbol) in a time series by using an index-organized
table with key compression.
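A minimal sketch of such a design (the stock_prices table and its columns are hypothetical):

CREATE TABLE stock_prices(
symbol VARCHAR2(10),
price_time DATE,
price NUMBER,
PRIMARY KEY (symbol, price_time))
ORGANIZATION INDEX COMPRESS 1;

Here COMPRESS 1 shares each repeated symbol prefix among all of its time-stamped suffix entries.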
See Also: Oracle Database Concepts for more information about
key compression

Maintaining Index-Organized Tables
Index-organized tables differ from ordinary tables only in physical organization.
Logically, they are manipulated in the same manner as ordinary tables. You can specify
an index-organized table just as you would specify a regular table in INSERT, SELECT,
DELETE, and UPDATE statements.

Altering Index-Organized Tables
All of the alter options available for ordinary tables are available for index-organized
tables. This includes ADD, MODIFY, and DROP COLUMNS and CONSTRAINTS. However,
the primary key constraint for an index-organized table cannot be dropped, deferred,
or disabled.
You can use the ALTER TABLE statement to modify physical and storage attributes
for both primary key index and overflow data segments. All the attributes specified
prior to the OVERFLOW keyword are applicable to the primary key index segment. All
attributes specified after the OVERFLOW keyword are applicable to the overflow data
segment. For example, you can set the INITRANS of the primary key index segment to
4 and the INITRANS of the overflow data segment to 6 as follows:
ALTER TABLE admin_docindex INITRANS 4 OVERFLOW INITRANS 6;

You can also alter PCTTHRESHOLD and INCLUDING column values. A new setting is
used to break the row into head and overflow tail pieces during subsequent
operations. For example, the PCTTHRESHOLD and INCLUDING column values can be
altered for the admin_docindex table as follows:
ALTER TABLE admin_docindex PCTTHRESHOLD 15 INCLUDING doc_id;

By setting the INCLUDING column to doc_id, all the columns that follow doc_id,
namely token_frequency and token_offsets, are stored in the overflow data segment.
For index-organized tables created without an overflow data segment, you can add an
overflow data segment by using the ADD OVERFLOW clause. For example, you can add
an overflow segment to table admin_iot3 as follows:
ALTER TABLE admin_iot3 ADD OVERFLOW TABLESPACE admin_tbs2;

Moving (Rebuilding) Index-Organized Tables
Because index-organized tables are primarily stored in a B-tree index, you can
encounter fragmentation as a consequence of incremental updates. However, you can
use the ALTER TABLE...MOVE statement to rebuild the index and reduce this
fragmentation.
The following statement rebuilds the index-organized table admin_docindex:
ALTER TABLE admin_docindex MOVE;

You can rebuild index-organized tables online using the ONLINE keyword. The
overflow data segment, if present, is rebuilt when the OVERFLOW keyword is specified.
For example, to rebuild the admin_docindex table but not the overflow data
segment, perform a move online as follows:


ALTER TABLE admin_docindex MOVE ONLINE;

To rebuild the admin_docindex table along with its overflow data segment perform
the move operation as shown in the following statement. This statement also
illustrates moving both the table and overflow data segment to new tablespaces.
ALTER TABLE admin_docindex MOVE TABLESPACE admin_tbs2
OVERFLOW TABLESPACE admin_tbs3;

In the following example, an index-organized table with a LOB column (CLOB) is
created. Later, the table is moved, with the LOB index and data segment being rebuilt
and moved to a new tablespace.
CREATE TABLE admin_iot_lob
(c1 number (6) primary key,
admin_lob CLOB)
ORGANIZATION INDEX
LOB (admin_lob) STORE AS (TABLESPACE admin_tbs2);
.
.
.
ALTER TABLE admin_iot_lob MOVE LOB (admin_lob) STORE AS (TABLESPACE admin_tbs3);

See Also: Oracle Database Application Developer's Guide - Large
Objects contains information about LOBs in index-organized tables

Creating Secondary Indexes on Index-Organized Tables
You can create secondary indexes on an index-organized table to provide multiple
access paths. Secondary indexes on index-organized tables differ from indexes on
ordinary tables in two ways:
■ They store logical rowids instead of physical rowids. This is necessary because the
inherent movability of rows in a B-tree index results in the rows having no
permanent physical addresses. If the physical location of a row changes, its logical
rowid remains valid. One effect of this is that a table maintenance operation, such
as ALTER TABLE ... MOVE, does not make the secondary index unusable.
■ The logical rowid also includes a physical guess which identifies the database
block address at which the row is likely to be found. If the physical guess is
correct, a secondary index scan would incur a single additional I/O once the
secondary key is found. The performance would be similar to that of a secondary
index scan on an ordinary table.
Unique and non-unique secondary indexes, function-based secondary indexes, and
bitmap indexes are supported as secondary indexes on index-organized tables.

Creating a Secondary Index on an Index-Organized Table
The following statement shows the creation of a secondary index on the docindex
index-organized table where doc_id and token are the key columns:
CREATE INDEX Doc_id_index on Docindex(Doc_id, Token);

This secondary index allows the database to efficiently process a query, such as the
following, that involves a predicate on doc_id:
SELECT Token FROM Docindex WHERE Doc_id = 1;


Maintaining Physical Guesses in Logical Rowids
A logical rowid can include a guess, which identifies the block location of a row at the
time the guess is made. Instead of doing a full key search, the database uses the guess
to search the block directly. However, as new rows are inserted, guesses can become
stale. The indexes are still usable through the primary key-component of the logical
rowid, but access to rows is slower.
Collect index statistics with the DBMS_STATS package to monitor the staleness of
guesses. The database checks whether the existing guesses are still valid and records
the percentage of rows with valid guesses in the data dictionary. This statistic is stored
in the PCT_DIRECT_ACCESS column of the DBA_INDEXES view (and related views).
To obtain fresh guesses, you can rebuild the secondary index. Note that rebuilding a
secondary index on an index-organized table involves reading the base table, unlike
rebuilding an index on an ordinary table. A quicker, more lightweight means of fixing
the guesses is to use the ALTER INDEX ... UPDATE BLOCK REFERENCES statement. This
statement is performed online, while DML is still allowed on the underlying
index-organized table.
After you rebuild a secondary index, or otherwise update the block references in the
guesses, collect index statistics again.
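A hedged sketch, assuming the doc_id_index secondary index shown earlier resides in the current schema:

ALTER INDEX doc_id_index UPDATE BLOCK REFERENCES;
EXECUTE DBMS_STATS.GATHER_INDEX_STATS(USER, 'DOC_ID_INDEX');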

Bitmap Indexes
Bitmap indexes on index-organized tables are supported, provided the
index-organized table is created with a mapping table. This is done by specifying the
MAPPING TABLE clause in the CREATE TABLE statement that you use to create the
index-organized table, or in an ALTER TABLE statement to add the mapping table
later.
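For instance, a hedged sketch with hypothetical table and index names:

CREATE TABLE admin_iot_map (i NUMBER PRIMARY KEY, j VARCHAR2(10))
ORGANIZATION INDEX
MAPPING TABLE;

CREATE BITMAP INDEX admin_iot_map_bix ON admin_iot_map (j);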
See Also: Oracle Database Concepts for a description of mapping
tables

Analyzing Index-Organized Tables
Just like ordinary tables, index-organized tables are analyzed using the DBMS_STATS
package, or the ANALYZE statement.

Collecting Optimizer Statistics for Index-Organized Tables
To collect optimizer statistics, use the DBMS_STATS package.
For example, the following statement gathers statistics for the index-organized
countries table in the hr schema:
EXECUTE DBMS_STATS.GATHER_TABLE_STATS ('HR','COUNTRIES');

The DBMS_STATS package analyzes both the primary key index segment and the
overflow data segment, and computes logical as well as physical statistics for the table.
■ The logical statistics can be queried using USER_TABLES, ALL_TABLES, or
DBA_TABLES.
■ You can query the physical statistics of the primary key index segment using
USER_INDEXES, ALL_INDEXES, or DBA_INDEXES (and using the primary key
index name). For example, you can obtain the primary key index segment physical
statistics for the table admin_docindex as follows:
SELECT LAST_ANALYZED, BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS
FROM DBA_INDEXES WHERE INDEX_NAME = 'PK_ADMIN_DOCINDEX';


■ You can query the physical statistics for the overflow data segment using
USER_TABLES, ALL_TABLES, or DBA_TABLES. You can identify the overflow
entry by searching for IOT_TYPE = 'IOT_OVERFLOW'. For example, you can
obtain overflow data segment physical attributes associated with the
admin_docindex table as follows:
SELECT LAST_ANALYZED, NUM_ROWS, BLOCKS, EMPTY_BLOCKS
FROM DBA_TABLES WHERE IOT_TYPE='IOT_OVERFLOW'
AND IOT_NAME= 'ADMIN_DOCINDEX';

See Also:
■ Oracle Database Performance Tuning Guide for more information
about collecting optimizer statistics
■ Oracle Database PL/SQL Packages and Types Reference for more
information about the DBMS_STATS package

Validating the Structure of Index-Organized Tables
Use the ANALYZE statement if you want to validate the structure of your
index-organized table or to list any chained rows. These operations are discussed in
the following sections located elsewhere in this book:
■ "Validating Tables, Indexes, Clusters, and Materialized Views" on page 13-3
■ "Listing Chained Rows of Tables and Clusters" on page 13-4

Note: There are special considerations when listing chained rows
for index-organized tables. These are discussed in the Oracle
Database SQL Reference.

Using the ORDER BY Clause with Index-Organized Tables
If an ORDER BY clause only references the primary key column or a prefix of it, then
the optimizer avoids the sorting overhead, as the rows are returned sorted on the
primary key columns.
The following queries avoid sorting overhead because the data is already sorted on the
primary key:
SELECT * FROM admin_docindex2 ORDER BY token, doc_id;
SELECT * FROM admin_docindex2 ORDER BY token;

If, however, you have an ORDER BY clause on a suffix of the primary key column or
non-primary-key columns, additional sorting is required (assuming no other
secondary indexes are defined).
SELECT * FROM admin_docindex2 ORDER BY doc_id;
SELECT * FROM admin_docindex2 ORDER BY token_frequency;

Converting Index-Organized Tables to Regular Tables
You can convert index-organized tables to regular tables using the Oracle import or
export utilities, or the CREATE TABLE...AS SELECT statement.
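For instance, a one-statement sketch of the CREATE TABLE...AS SELECT route, using a hypothetical heap table name:

CREATE TABLE admin_docindex_heap AS SELECT * FROM admin_docindex;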
To convert an index-organized table to a regular table using export and import:
■ Export the index-organized table data using conventional path.
■ Create a regular table with the same definition.
■ Import the index-organized table data, making sure IGNORE=y (ensures that
the object exists error is ignored).
Note: Before converting an index-organized table to a regular
table, be aware that index-organized tables cannot be exported
using pre-Oracle8 versions of the Export utility.

See Also: Oracle Database Utilities for more details about using the
IMPORT and EXPORT utilities

Managing External Tables
Oracle Database allows you read-only access to data in external tables. External tables
are defined as tables that do not reside in the database, and can be in any format for
which an access driver is provided. By providing the database with metadata
describing an external table, the database is able to expose the data in the external
table as if it were data residing in a regular database table. The external data can be
queried directly and in parallel using SQL.
You can, for example, select, join, or sort external table data. You can also create views
and synonyms for external tables. However, no DML operations (UPDATE, INSERT, or
DELETE) are possible, and no indexes can be created, on external tables.
External tables also provide a framework to unload the result of an arbitrary SELECT
statement into a platform-independent Oracle-proprietary format that can be used by
Oracle Data Pump.
Note: The DBMS_STATS package can be used for gathering
statistics for external tables. The ANALYZE statement is not
supported for gathering statistics for external tables.
For information about using the DBMS_STATS package, see Oracle
Database Performance Tuning Guide.
The means of defining the metadata for external tables is through the CREATE
TABLE...ORGANIZATION EXTERNAL statement. This external table definition can be
thought of as a view that allows running any SQL query against external data without
requiring that the external data first be loaded into the database. An access driver is
the actual mechanism used to read the external data in the table. When you use
external tables to unload data, the metadata is automatically created based on the
datatypes in the SELECT statement (sometimes referred to as the shape of the query).
Oracle Database provides two access drivers for external tables. The default access
driver is ORACLE_LOADER, which allows the reading of data from external files using
the Oracle loader technology. The ORACLE_LOADER access driver provides data
mapping capabilities which are a subset of the control file syntax of SQL*Loader
utility. The second access driver, ORACLE_DATAPUMP, lets you unload data--that is,
read data from the database and insert it into an external table, represented by one or
more external files--and then reload it into an Oracle Database.
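A hedged sketch of such an unload (the table and file names are illustrative; the admin_dat_dir directory object is created in the example later in this section):

CREATE TABLE emp_unload
ORGANIZATION EXTERNAL
(TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY admin_dat_dir
LOCATION ('emp_unload.dmp'))
AS SELECT * FROM hr.employees;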
The Oracle Database external tables feature provides a valuable means for performing
basic extraction, transformation, and loading (ETL) tasks that are common for data
warehousing.


The following sections discuss the DDL statements that are supported for external
tables. Only DDL statements discussed are supported, and not all clauses of these
statements are supported.
■ Creating External Tables
■ Altering External Tables
■ Dropping External Tables
■ System and Object Privileges for External Tables

See Also:
■ Oracle Database Utilities contains more information about
external tables and describes the access drivers and their access
parameters
■ Oracle Database Data Warehousing Guide for information about
using external tables in a data warehousing environment

Creating External Tables
You create external tables using the ORGANIZATION EXTERNAL clause of the CREATE
TABLE statement. You are not in fact creating a table; that is, an external table does not
have any extents associated with it. Rather, you are creating metadata in the data
dictionary that enables you to access external data.
The following example creates an external table and then uploads the data to a
database table. Alternatively, you can unload data through the external table
framework by specifying the AS subquery clause of the CREATE TABLE statement.
External table data pump unload can use only the ORACLE_DATAPUMP access driver.

EXAMPLE: Creating an External Table and Loading Data
The file empxt1.dat contains the following sample data:
360,Jane,Janus,ST_CLERK,121,17-MAY-2001,3000,0,50,jjanus
361,Mark,Jasper,SA_REP,145,17-MAY-2001,8000,.1,80,mjasper
362,Brenda,Starr,AD_ASST,200,17-MAY-2001,5500,0,10,bstarr
363,Alex,Alda,AC_MGR,145,17-MAY-2001,9000,.15,80,aalda

The file empxt2.dat contains the following sample data:
401,Jesse,Cromwell,HR_REP,203,17-MAY-2001,7000,0,40,jcromwel
402,Abby,Applegate,IT_PROG,103,17-MAY-2001,9000,.2,60,aapplega
403,Carol,Cousins,AD_VP,100,17-MAY-2001,27000,.3,90,ccousins
404,John,Richardson,AC_ACCOUNT,205,17-MAY-2001,5000,0,110,jrichard

The following hypothetical SQL statements create an external table in the hr schema
named admin_ext_employees and load its data into the hr.employees table.
CONNECT / AS SYSDBA;
-- Set up directories and grant access to hr
CREATE OR REPLACE DIRECTORY admin_dat_dir
AS '/flatfiles/data';
CREATE OR REPLACE DIRECTORY admin_log_dir
AS '/flatfiles/log';
CREATE OR REPLACE DIRECTORY admin_bad_dir
AS '/flatfiles/bad';
GRANT READ ON DIRECTORY admin_dat_dir TO hr;
GRANT WRITE ON DIRECTORY admin_log_dir TO hr;
GRANT WRITE ON DIRECTORY admin_bad_dir TO hr;


-- hr connects
CONNECT hr/hr
-- create the external table
CREATE TABLE admin_ext_employees
(employee_id       NUMBER(4),
 first_name        VARCHAR2(20),
 last_name         VARCHAR2(25),
 job_id            VARCHAR2(10),
 manager_id        NUMBER(4),
 hire_date         DATE,
 salary            NUMBER(8,2),
 commission_pct    NUMBER(2,2),
 department_id     NUMBER(4),
 email             VARCHAR2(25)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY admin_dat_dir
ACCESS PARAMETERS
(
records delimited by newline
badfile admin_bad_dir:'empxt%a_%p.bad'
logfile admin_log_dir:'empxt%a_%p.log'
fields terminated by ','
missing field values are null
( employee_id, first_name, last_name, job_id, manager_id,
hire_date char date_format date mask "dd-mon-yyyy",
salary, commission_pct, department_id, email
)
)
LOCATION ('empxt1.dat', 'empxt2.dat')
)
PARALLEL
REJECT LIMIT UNLIMITED;
-- enable parallel for loading (good if lots of data to load)
ALTER SESSION ENABLE PARALLEL DML;
-- load the data in hr employees table
INSERT INTO employees (employee_id, first_name, last_name, job_id, manager_id,
hire_date, salary, commission_pct, department_id, email)
SELECT * FROM admin_ext_employees;

The following paragraphs contain descriptive information about this example.
The first few statements in this example create the directory objects for the operating
system directories that contain the data sources, and for the bad record and log files
specified in the access parameters. You must also grant READ or WRITE directory
object privileges, as appropriate.
Note: When creating a directory object or BFILEs, ensure that the
following conditions are met:
■ The operating system file must not be a symbolic or hard link.
■ The operating system directory path named in the Oracle
Database directory object must be an existing OS directory
path.
■ The operating system directory path named in the directory
object should not contain any symbolic links in its components.


The TYPE specification indicates the access driver of the external table. The access
driver is the API that interprets the external data for the database. Oracle Database
provides two access drivers: ORACLE_LOADER and ORACLE_DATAPUMP. If you omit
the TYPE specification, ORACLE_LOADER is the default access driver. You must specify
the ORACLE_DATAPUMP access driver if you specify the AS subquery clause to
unload data from one Oracle Database and reload it into the same or a different Oracle
Database.
The access parameters, specified in the ACCESS PARAMETERS clause, are opaque to
the database. These access parameters are defined by the access driver, and are
provided to the access driver by the database when the external table is accessed. See
Oracle Database Utilities for a description of the ORACLE_LOADER access parameters.
The PARALLEL clause enables parallel query on the data sources. The granule of
parallelism is by default a data source, but parallel access within a data source is
implemented whenever possible. For example, if PARALLEL=3 were specified, then
more than one parallel execution server could be working on a data source. But,
parallel access within a data source is provided by the access driver only if all of the
following conditions are met:
■ The media allows random positioning within a data source
■ It is possible to find a record boundary from a random position
■ The data files are large enough to make it worthwhile to break up into multiple
chunks
Note: Specifying a PARALLEL clause is of value only when dealing
with large amounts of data. Otherwise, it is not advisable to specify
a PARALLEL clause, and doing so can be detrimental.

The REJECT LIMIT clause specifies that there is no limit on the number of errors that
can occur during a query of the external data. For parallel access, this limit applies to
each parallel execution server independently. For example, if a REJECT LIMIT of 10 is
specified, each parallel query process is allowed 10 rejections. Hence, the only
precisely enforced values for REJECT LIMIT on parallel query are 0 and UNLIMITED.
In this example, the INSERT INTO TABLE statement generates a dataflow from the
external data source to the Oracle Database SQL engine where data is processed. As
data is parsed by the access driver from the external table sources and provided to the
external table interface, the external data is converted from its external representation
to its Oracle Database internal datatype.
See Also: Oracle Database SQL Reference provides details of the
syntax of the CREATE TABLE statement for creating external tables
and specifies restrictions on the use of clauses.

Altering External Tables
You can use any of the ALTER TABLE clauses shown in Table 15–3 to change the
characteristics of an external table. No other clauses are permitted.


Table 15–3 ALTER TABLE Clauses for External Tables

REJECT LIMIT
  Description: Changes the reject limit.
  Example:
    ALTER TABLE admin_ext_employees REJECT LIMIT 100;

PROJECT COLUMN
  Description: Determines how the access driver validates rows in subsequent
  queries:
  ■ PROJECT COLUMN REFERENCED: the access driver processes only the columns in
    the select list of the query. This setting may not provide a consistent set
    of rows when querying a different column list from the same external table.
    This is the default.
  ■ PROJECT COLUMN ALL: the access driver processes all of the columns defined
    on the external table. This setting always provides a consistent set of rows
    when querying an external table.
  Examples:
    ALTER TABLE admin_ext_employees PROJECT COLUMN REFERENCED;
    ALTER TABLE admin_ext_employees PROJECT COLUMN ALL;

DEFAULT DIRECTORY
  Description: Changes the default directory specification.
  Example:
    ALTER TABLE admin_ext_employees DEFAULT DIRECTORY admin_dat2_dir;

ACCESS PARAMETERS
  Description: Allows access parameters to be changed without dropping and
  re-creating the external table metadata.
  Example:
    ALTER TABLE admin_ext_employees ACCESS PARAMETERS (FIELDS TERMINATED BY ';');

LOCATION
  Description: Allows data sources to be changed without dropping and
  re-creating the external table metadata.
  Example:
    ALTER TABLE admin_ext_employees LOCATION ('empxt3.txt', 'empxt4.txt');

PARALLEL
  Description: No difference from regular tables. Allows the degree of
  parallelism to be changed.
  Example: No new syntax

ADD COLUMN
  Description: No difference from regular tables. Allows a column to be added to
  an external table.
  Example: No new syntax

MODIFY COLUMN
  Description: No difference from regular tables. Allows an external table column
  to be modified.
  Example: No new syntax

DROP COLUMN
  Description: No difference from regular tables. Allows an external table column
  to be dropped.
  Example: No new syntax

RENAME TO
  Description: No difference from regular tables. Allows an external table to be
  renamed.
  Example: No new syntax

Dropping External Tables
For an external table, the DROP TABLE statement removes only the table metadata in
the database. It has no effect on the actual data, which resides outside of the database.

System and Object Privileges for External Tables
System and object privileges for external tables are a subset of those for regular
tables. Only the following system privileges are applicable to external tables:
■ CREATE ANY TABLE
■ ALTER ANY TABLE
■ DROP ANY TABLE
■ SELECT ANY TABLE


Only the following object privileges are applicable to external tables:
■ ALTER
■ SELECT
However, object privileges associated with a directory are:
■ READ
■ WRITE

For external tables, READ privileges are required on directory objects that contain data
sources, while WRITE privileges are required for directory objects containing bad, log,
or discard files.

Viewing Information About Tables
The following views allow you to access information about tables.
DBA_TABLES
ALL_TABLES
USER_TABLES
  DBA view describes all relational tables in the database. ALL view describes all
  tables accessible to the user. USER view is restricted to tables owned by the
  user. Some columns in these views contain statistics that are generated by the
  DBMS_STATS package or ANALYZE statement.

DBA_TAB_COLUMNS
ALL_TAB_COLUMNS
USER_TAB_COLUMNS
  These views describe the columns of tables, views, and clusters in the database.
  Some columns in these views contain statistics that are generated by the
  DBMS_STATS package or ANALYZE statement.

DBA_ALL_TABLES
ALL_ALL_TABLES
USER_ALL_TABLES
  These views describe all relational and object tables in the database. Object
  tables are not specifically discussed in this book.

DBA_TAB_COMMENTS
ALL_TAB_COMMENTS
USER_TAB_COMMENTS
  These views display comments for tables and views. Comments are entered using
  the COMMENT statement.

DBA_COL_COMMENTS
ALL_COL_COMMENTS
USER_COL_COMMENTS
  These views display comments for table and view columns. Comments are entered
  using the COMMENT statement.

DBA_EXTERNAL_TABLES
ALL_EXTERNAL_TABLES
USER_EXTERNAL_TABLES
  These views list the specific attributes of external tables in the database.

DBA_EXTERNAL_LOCATIONS
ALL_EXTERNAL_LOCATIONS
USER_EXTERNAL_LOCATIONS
  These views list the data sources for external tables.

DBA_TAB_HISTOGRAMS
ALL_TAB_HISTOGRAMS
USER_TAB_HISTOGRAMS
  These views describe histograms on tables and views.

DBA_TAB_STATISTICS
ALL_TAB_STATISTICS
USER_TAB_STATISTICS
  These views contain optimizer statistics for tables.

DBA_TAB_COL_STATISTICS
ALL_TAB_COL_STATISTICS
USER_TAB_COL_STATISTICS
  These views provide column statistics and histogram information extracted from
  the related TAB_COLUMNS views.

DBA_TAB_MODIFICATIONS
ALL_TAB_MODIFICATIONS
USER_TAB_MODIFICATIONS
  These views describe tables that have been modified since the last time table
  statistics were gathered on them. They are not populated immediately, but after
  a time lapse (usually 3 hours).

DBA_ENCRYPTED_COLUMNS
ALL_ENCRYPTED_COLUMNS
USER_ENCRYPTED_COLUMNS
  These views list table columns that are encrypted, and for each column, list the
  encryption algorithm in use.

DBA_UNUSED_COL_TABS
ALL_UNUSED_COL_TABS
USER_UNUSED_COL_TABS
  These views list tables with unused columns, as marked by the ALTER
  TABLE ... SET UNUSED statement.

DBA_PARTIAL_DROP_TABS
ALL_PARTIAL_DROP_TABS
USER_PARTIAL_DROP_TABS
  These views list tables that have partially completed DROP COLUMN operations.
  These operations could be incomplete because the operation was interrupted by
  the user or a system failure.

Example: Displaying Column Information
Column information, such as name, datatype, length, precision, scale, and default data
values can be listed using one of the views ending with the _COLUMNS suffix. For
example, the following query lists the datatype, length, and last-analyzed date for
each column of the tables owned by the hr schema:
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_LENGTH, LAST_ANALYZED
FROM DBA_TAB_COLUMNS
WHERE OWNER = 'HR'
ORDER BY TABLE_NAME;

The following is the output from the query:
TABLE_NAME           COLUMN_NAME          DATA_TYPE  DATA_LENGTH LAST_ANALYZED
-------------------- -------------------- ---------- ----------- -------------
COUNTRIES            COUNTRY_ID           CHAR                 2 05-FEB-03
COUNTRIES            COUNTRY_NAME         VARCHAR2            40 05-FEB-03
COUNTRIES            REGION_ID            NUMBER              22 05-FEB-03
DEPARTMENTS          DEPARTMENT_ID        NUMBER              22 05-FEB-03
DEPARTMENTS          DEPARTMENT_NAME      VARCHAR2            30 05-FEB-03
DEPARTMENTS          MANAGER_ID           NUMBER              22 05-FEB-03
DEPARTMENTS          LOCATION_ID          NUMBER              22 05-FEB-03
EMPLOYEES            EMPLOYEE_ID          NUMBER              22 05-FEB-03
EMPLOYEES            FIRST_NAME           VARCHAR2            20 05-FEB-03
EMPLOYEES            LAST_NAME            VARCHAR2            25 05-FEB-03
EMPLOYEES            EMAIL                VARCHAR2            25 05-FEB-03
.
.
.
LOCATIONS            COUNTRY_ID           CHAR                 2 05-FEB-03
REGIONS              REGION_ID            NUMBER              22 05-FEB-03
REGIONS              REGION_NAME          VARCHAR2            25 05-FEB-03

51 rows selected.


See Also:
■ Oracle Database Reference for complete descriptions of these
  views
■ Oracle Database Application Developer's Guide - Object-Relational
  Features for information about object tables
■ Oracle Database Performance Tuning Guide for information about
  histograms and generating statistics for tables
■ "Analyzing Tables, Indexes, and Clusters" on page 13-2


16
Managing Indexes
This chapter discusses the management of indexes, and contains the following topics:
■ About Indexes
■ Guidelines for Managing Indexes
■ Creating Indexes
■ Altering Indexes
■ Monitoring Space Use of Indexes
■ Dropping Indexes
■ Viewing Index Information

About Indexes
Indexes are optional structures associated with tables and clusters that allow SQL
statements to execute more quickly against a table. Just as the index in this manual
helps you locate information faster than if there were no index, an Oracle Database
index provides a faster access path to table data. You can use indexes without
rewriting any queries. Your results are the same, but you see them more quickly.
Oracle Database provides several indexing schemes that provide complementary
performance functionality. These are:
■ B-tree indexes: the default and the most common
■ B-tree cluster indexes: defined specifically for a cluster
■ Hash cluster indexes: defined specifically for a hash cluster
■ Global and local indexes: relate to partitioned tables and indexes
■ Reverse key indexes: most useful for Oracle Real Application Clusters applications
■ Bitmap indexes: compact; work best for columns with a small set of values
■ Function-based indexes: contain the precomputed value of a function/expression
■ Domain indexes: specific to an application or cartridge

Indexes are logically and physically independent of the data in the associated table.
Being independent structures, they require storage space. You can create or drop an
index without affecting the base tables, database applications, or other indexes. The
database automatically maintains indexes when you insert, update, and delete rows of
the associated table. If you drop an index, all applications continue to work. However,
access to previously indexed data might be slower.


See Also: Chapter 14, "Managing Space for Schema Objects" is
recommended reading before attempting tasks described in this
chapter.

Guidelines for Managing Indexes
This section discusses guidelines for managing indexes and contains the following
topics:
■ Create Indexes After Inserting Table Data
■ Index the Correct Tables and Columns
■ Order Index Columns for Performance
■ Limit the Number of Indexes for Each Table
■ Drop Indexes That Are No Longer Required
■ Estimate Index Size and Set Storage Parameters
■ Specify the Tablespace for Each Index
■ Consider Parallelizing Index Creation
■ Consider Creating Indexes with NOLOGGING
■ Consider Costs and Benefits of Coalescing or Rebuilding Indexes
■ Consider Cost Before Disabling or Dropping Constraints
See Also:
■ Oracle Database Concepts for conceptual information about
  indexes and indexing, including descriptions of the various
  indexing schemes offered by Oracle
■ Oracle Database Performance Tuning Guide and Oracle Database
  Data Warehousing Guide for information about bitmap indexes
■ Oracle Database Data Cartridge Developer's Guide for information
  about defining domain-specific operators and indexing
  schemes and integrating them into the Oracle Database server

Create Indexes After Inserting Table Data
Data is often inserted or loaded into a table using either SQL*Loader or an import
utility. It is more efficient to create an index for a table after inserting or loading the
data. If you create one or more indexes before loading data, the database then must
update every index as each row is inserted.
Creating an index on a table that already has data requires sort space. Some sort space
comes from memory allocated for the index creator. The amount for each user is
determined by the initialization parameter SORT_AREA_SIZE. The database also
swaps sort information to and from temporary segments that are only allocated during
the index creation in the user's temporary tablespace.
Under certain conditions, data can be loaded into a table with SQL*Loader direct-path
load and an index can be created as data is loaded.
See Also: Oracle Database Utilities for information about using
SQL*Loader for direct-path load


Index the Correct Tables and Columns
Use the following guidelines for determining when to create an index:
■ Create an index if you frequently want to retrieve less than 15% of the rows in a
  large table. The percentage varies greatly according to the relative speed of a table
  scan and how the row data is distributed in relation to the index key. The
  faster the table scan, the lower the percentage; the more clustered the row data, the
  higher the percentage.
■ To improve performance on joins of multiple tables, index columns used for joins.

  Note: Primary and unique keys automatically have indexes, but
  you might want to create an index on a foreign key.

■ Small tables do not require indexes. If a query is taking too long, then the table
  might have grown from small to large.

Some columns are strong candidates for indexing. Columns with one or more of the
following characteristics are candidates for indexing:
■ Values are relatively unique in the column.
■ There is a wide range of values (good for regular indexes).
■ There is a small range of values (good for bitmap indexes).
■ The column contains many nulls, but queries often select all rows having a value.
  In this case, use the following phrase:
  WHERE COL_X > -9.99 * power(10,125)
  Using the preceding phrase is preferable to:
  WHERE COL_X IS NOT NULL
  This is because the first uses an index on COL_X (assuming that COL_X is a
  numeric column).
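As a sketch of the bitmap case noted above (the orders table and its status column are hypothetical):

-- status holds only a handful of distinct values, so a bitmap index stays compact
CREATE BITMAP INDEX orders_status_bx ON orders (status);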
Columns with the following characteristics are less suitable for indexing:
■ There are many nulls in the column and you do not search on the not null values.
LONG and LONG RAW columns cannot be indexed.
The size of a single index entry cannot exceed roughly one-half (minus some
overhead) of the available space in the data block.

Order Index Columns for Performance
The order of columns in the CREATE INDEX statement can affect query performance.
In general, specify the most frequently used columns first.
If you create a single index across columns to speed up queries that access, for
example, col1, col2, and col3, then queries that access just col1, or that access just
col1 and col2, are also sped up. But a query that accesses just col2, just col3,
or just col2 and col3 does not use the index.
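The following sketch illustrates this behavior with a hypothetical table t:

-- Composite index; col1 is the leading column
CREATE INDEX t_c123_ix ON t (col1, col2, col3);

-- These queries can use the index (the leading column col1 is referenced):
SELECT * FROM t WHERE col1 = 10;
SELECT * FROM t WHERE col1 = 10 AND col2 = 20;

-- This query does not use the index, as described above (col1 is absent):
SELECT * FROM t WHERE col3 = 30;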

Limit the Number of Indexes for Each Table
A table can have any number of indexes. However, the more indexes there are, the
more overhead is incurred as the table is modified. Specifically, when rows are
inserted or deleted, all indexes on the table must be updated as well. Also, when a
column is updated, all indexes that contain the column must be updated.
Thus, there is a trade-off between the speed of retrieving data from a table and the
speed of updating the table. For example, if a table is primarily read-only, having more
indexes can be useful; but if a table is heavily updated, having fewer indexes could be
preferable.

Drop Indexes That Are No Longer Required
Consider dropping an index if:
■ It does not speed up queries. The table could be very small, or there could be
  many rows in the table but very few index entries.
■ The queries in your applications do not use the index.
■ The index must be dropped before being rebuilt.
See Also: "Monitoring Index Usage" on page 16-13

Estimate Index Size and Set Storage Parameters
Estimating the size of an index before creating one can facilitate better disk space
planning and management. You can use the combined estimated size of indexes, along
with estimates for tables, the undo tablespace, and redo log files, to determine the
amount of disk space that is required to hold an intended database. From these
estimates, you can make correct hardware purchases and other decisions.
Use the estimated size of an individual index to better manage the disk space that the
index uses. When an index is created, you can set appropriate storage parameters and
improve I/O performance of applications that use the index. For example, assume that
you estimate the maximum size of an index before creating it. If you then set the
storage parameters when you create the index, fewer extents are allocated for the
index data segment, and all of the index data is stored in a relatively contiguous section of
disk space. This decreases the time necessary for disk I/O operations involving this
index.
The maximum size of a single index entry is approximately one-half the data block
size.
"Managing Storage Parameters" on page 14-5 for
specific information about storage parameters

See Also:

Specify the Tablespace for Each Index
Indexes can be created in any tablespace. An index can be created in the same or
different tablespace as the table it indexes. If you use the same tablespace for a table
and its index, it can be more convenient to perform database maintenance (such as
tablespace or file backup) or to ensure application availability. All the related data is
always online together.
Using different tablespaces (on different disks) for a table and its index produces better
performance than storing the table and index in the same tablespace. Disk contention
is reduced. But, if you use different tablespaces for a table and its index and one
tablespace is offline (containing either data or index), then the statements referencing
that table are not guaranteed to work.


Consider Parallelizing Index Creation
You can parallelize index creation, much the same as you can parallelize table creation.
Because multiple processes work together to create the index, the database can create
the index more quickly than if a single server process created the index sequentially.
When creating an index in parallel, storage parameters are used separately by each
query server process. Therefore, an index created with an INITIAL value of 5M and a
parallel degree of 12 consumes at least 60M of storage during index creation.
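For example, the combination described above could be written as follows (a sketch; the table, column, and index names are hypothetical):

-- Each of the 12 parallel execution servers allocates its own INITIAL
-- extent of 5M, so creation consumes at least 60M of storage.
CREATE INDEX orders_cust_ix ON orders (customer_id)
  STORAGE (INITIAL 5M)
  PARALLEL 12;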
See Also:
■ Oracle Database Concepts for more information about parallel
  execution
■ Oracle Database Data Warehousing Guide for information about
  utilizing parallel execution in a data warehousing environment

Consider Creating Indexes with NOLOGGING
You can create an index and generate minimal redo log records by specifying
NOLOGGING in the CREATE INDEX statement.
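For example, a minimal sketch using the emp table that appears in other examples in this chapter:

-- Generate minimal redo while building the index
CREATE INDEX emp_ename ON emp(ename)
  NOLOGGING;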
Note: Because indexes created using NOLOGGING are not
archived, perform a backup after you create the index.
Creating an index with NOLOGGING has the following benefits:
■ Space is saved in the redo log files.
■ The time it takes to create the index is decreased.
■ Performance improves for parallel creation of large indexes.

In general, the relative performance improvement is greater for larger indexes created
with NOLOGGING than for smaller ones. Creating small indexes with NOLOGGING has
little effect on the time it takes to create an index. However, for larger indexes the
performance improvement can be significant, especially when you are also
parallelizing the index creation.

Consider Costs and Benefits of Coalescing or Rebuilding Indexes
Improper sizing or increased growth can produce index fragmentation. To eliminate or
reduce fragmentation, you can rebuild or coalesce the index. But before you perform
either task weigh the costs and benefits of each option and choose the one that works
best for your situation. Table 16–1 is a comparison of the costs and benefits associated
with rebuilding and coalescing indexes.
Table 16–1 To Rebuild or Coalesce ... That Is the Question

Rebuild Index                                 Coalesce Index
--------------------------------------------  ---------------------------------------------
Quickly moves index to another tablespace     Cannot move index to another tablespace
Higher costs: requires more disk space        Lower costs: does not require more disk space
Creates new tree, shrinks height if           Coalesces leaf blocks within same branch
applicable                                    of tree
Enables you to quickly change storage and     Quickly frees up index leaf blocks for use
tablespace parameters without having to
drop the original index


In situations where you have B-tree index leaf blocks that can be freed up for reuse,
you can merge those leaf blocks using the following statement:
ALTER INDEX vmoore COALESCE;

Figure 16–1 illustrates the effect of an ALTER INDEX COALESCE on the index
vmoore. Before performing the operation, the first two leaf blocks are 50% full. This
means you have an opportunity to reduce fragmentation and completely fill the first
block, while freeing up the second.
Figure 16–1 Coalescing Indexes (shows the B-tree index before and after
ALTER INDEX vmoore COALESCE;)

Consider Cost Before Disabling or Dropping Constraints
Because unique and primary keys have associated indexes, you should factor in the
cost of dropping and creating indexes when considering whether to disable or drop a
UNIQUE or PRIMARY KEY constraint. If the associated index for a UNIQUE key or
PRIMARY KEY constraint is extremely large, you can save time by leaving the
constraint enabled rather than dropping and re-creating the large index. You also have
the option of explicitly specifying that you want to keep or drop the index when
dropping or disabling a UNIQUE or PRIMARY KEY constraint.
See Also: "Managing Integrity Constraints" on page 13-9

Creating Indexes
This section describes how to create indexes. To create an index in your own schema,
at least one of the following conditions must be true:
■ The table or cluster to be indexed is in your own schema.
■ You have INDEX privilege on the table to be indexed.
■ You have CREATE ANY INDEX system privilege.
To create an index in another schema, all of the following conditions must be true:
■ You have CREATE ANY INDEX system privilege.
■ The owner of the other schema has a quota for the tablespaces to contain the index
  or index partitions, or UNLIMITED TABLESPACE system privilege.

This section contains the following topics:
■ Creating an Index Explicitly
■ Creating a Unique Index Explicitly
■ Creating an Index Associated with a Constraint
■ Collecting Incidental Statistics when Creating an Index
■ Creating a Large Index
■ Creating an Index Online
■ Creating a Function-Based Index
■ Creating a Key-Compressed Index

Creating an Index Explicitly
You can create indexes explicitly (outside of integrity constraints) using the SQL
statement CREATE INDEX. The following statement creates an index named
emp_ename for the ename column of the emp table:
CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
STORAGE (INITIAL 20K
NEXT 20K
PCTINCREASE 75);

Notice that several storage settings and a tablespace are explicitly specified for the
index. If you do not specify storage options (such as INITIAL and NEXT) for an index,
the default storage options of the default or specified tablespace are automatically
used.
See Also: Oracle Database SQL Reference for syntax and restrictions
on the use of the CREATE INDEX statement

Creating a Unique Index Explicitly
Indexes can be unique or non-unique. Unique indexes guarantee that no two rows of a
table have duplicate values in the key column (or columns). Non-unique indexes do
not impose this restriction on the column values.
Use the CREATE UNIQUE INDEX statement to create a unique index. The following
example creates a unique index:
CREATE UNIQUE INDEX dept_unique_index ON dept (dname)
TABLESPACE indx;

Alternatively, you can define UNIQUE integrity constraints on the desired columns.
The database enforces UNIQUE integrity constraints by automatically defining a
unique index on the unique key. This is discussed in the following section. However, it
is advisable that any index that exists for query performance, including unique
indexes, be created explicitly.
See Also: Oracle Database Performance Tuning Guide for more
information about creating an index for performance

Creating an Index Associated with a Constraint
Oracle Database enforces a UNIQUE key or PRIMARY KEY integrity constraint on a
table by creating a unique index on the unique key or primary key. This index is
automatically created by the database when the constraint is enabled. No action is
required by you when you issue the CREATE TABLE or ALTER TABLE statement to
create the index, but you can optionally specify a USING INDEX clause to exercise
control over its creation. This includes both when a constraint is defined and enabled,
and when a defined but disabled constraint is enabled.
To enable a UNIQUE or PRIMARY KEY constraint, thus creating an associated index,
the owner of the table must have a quota for the tablespace intended to contain the
index, or the UNLIMITED TABLESPACE system privilege. The index associated with a
constraint always takes the name of the constraint, unless you optionally specify
otherwise.
Note: An efficient procedure for enabling a constraint that can
make use of parallelism is described in "Efficient Use of Integrity
Constraints: A Procedure" on page 13-11.

Specifying Storage Options for an Index Associated with a Constraint
You can set the storage options for the indexes associated with UNIQUE and PRIMARY
KEY constraints using the USING INDEX clause. The following CREATE TABLE
statement enables a PRIMARY KEY constraint and specifies the storage options of the
associated index:
CREATE TABLE emp (
empno NUMBER(5) PRIMARY KEY, age INTEGER)
ENABLE PRIMARY KEY USING INDEX
TABLESPACE users;

Specifying the Index Associated with a Constraint
If you require more explicit control over the indexes associated with UNIQUE and
PRIMARY KEY constraints, the database lets you:
■ Specify an existing index that the database is to use to enforce the constraint
■ Specify a CREATE INDEX statement that the database is to use to create the index
  and enforce the constraint

These options are specified using the USING INDEX clause. The following statements
present some examples.
Example 1:
CREATE TABLE a (
a1 INT PRIMARY KEY USING INDEX (create index ai on a (a1)));

Example 2:
CREATE TABLE b(
b1 INT,
b2 INT,
CONSTRAINT bu1 UNIQUE (b1, b2)
USING INDEX (create unique index bi on b(b1, b2)),
CONSTRAINT bu2 UNIQUE (b2, b1) USING INDEX bi);

Example 3:
CREATE TABLE c(c1 INT, c2 INT);
CREATE INDEX ci ON c (c1, c2);
ALTER TABLE c ADD CONSTRAINT cpk PRIMARY KEY (c1) USING INDEX ci;


If a single statement creates an index with one constraint and also uses that index for
another constraint, the system will attempt to rearrange the clauses to create the index
before reusing it.
See Also: "Managing Integrity Constraints" on page 13-9

Collecting Incidental Statistics when Creating an Index
Oracle Database provides you with the opportunity to collect statistics at very little
resource cost during the creation or rebuilding of an index. These statistics are stored
in the data dictionary for ongoing use by the optimizer in choosing a plan for the
execution of SQL statements. The following statement computes index, table, and
column statistics while building index emp_ename on column ename of table emp:
CREATE INDEX emp_ename ON emp(ename)
COMPUTE STATISTICS;

See Also:
■ Oracle Database Performance Tuning Guide for information about
  collecting statistics and their use by the optimizer
■ "Analyzing Tables, Indexes, and Clusters" on page 13-2

Creating a Large Index
When creating an extremely large index, consider allocating a larger temporary
tablespace for the index creation using the following procedure:
1. Create a new temporary tablespace using the CREATE TABLESPACE or CREATE
   TEMPORARY TABLESPACE statement.
2. Use the TEMPORARY TABLESPACE option of the ALTER USER statement to make
   this your new temporary tablespace.
3. Create the index using the CREATE INDEX statement.
4. Drop this tablespace using the DROP TABLESPACE statement. Then use the ALTER
   USER statement to reset your temporary tablespace to your original temporary
   tablespace.

Using this procedure can avoid the problem of expanding your usual, and usually
shared, temporary tablespace to an unreasonably large size that might affect future
performance.
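A sketch of these four steps (the tablespace name, file path, sizes, user, and index names are all illustrative):

-- 1. Create a dedicated temporary tablespace for the build.
CREATE TEMPORARY TABLESPACE tbs_temp_big
  TEMPFILE '/u02/oracle/data/tbs_temp_big01.dbf' SIZE 10G;

-- 2. Make it the index creator's temporary tablespace.
ALTER USER hr TEMPORARY TABLESPACE tbs_temp_big;

-- 3. Create the large index.
CREATE INDEX big_table_key_ix ON big_table (key_col);

-- 4. Reset the original temporary tablespace, then drop the dedicated one.
ALTER USER hr TEMPORARY TABLESPACE temp;
DROP TABLESPACE tbs_temp_big INCLUDING CONTENTS AND DATAFILES;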

Creating an Index Online
You can create and rebuild indexes online. This enables you to update base tables at
the same time you are building or rebuilding indexes on that table. You can perform
DML operations while the index build is taking place, but DDL operations are not
allowed. Parallel execution is not supported when creating or rebuilding an index
online.
The following statement illustrates an online index build operation:
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;


Note: While you can perform DML operations during an online
index build, Oracle recommends that you do not perform
major/large DML operations during this procedure. This is because
while the DML on the base table is taking place it holds a lock on
that resource. The DDL to build the index cannot proceed until the
transaction acting on the base table commits or rolls back, thus
releasing the lock.
For example, if you want to load rows that total up to 30% of the
size of an existing table, you should perform this load before the
online index build.

See Also: "Rebuilding an Existing Index" on page 16-12

Creating a Function-Based Index
Function-based indexes facilitate queries that qualify a value returned by a function
or expression. The value of the function or expression is precomputed and stored in
the index.
To create a function-based index, you must have the COMPATIBLE parameter set to
8.1.0.0.0 or higher. In addition to the prerequisites for creating a conventional index, if
the index is based on user-defined functions, then those functions must be marked
DETERMINISTIC. Also, you must have the EXECUTE object privilege on any
user-defined function(s) used in the function-based index if those functions are owned
by another user.
Additionally, to use a function-based index:
■ The table must be analyzed after the index is created.
■ The query must be guaranteed not to need any NULL values from the indexed
  expression, since NULL values are not stored in indexes.
Note: CREATE INDEX stores the timestamp of the most recent
function used in the function-based index. This timestamp is
updated when the index is validated. When performing tablespace
point-in-time recovery of a function-based index, if the timestamp
on the most recent function used in the index is newer than the
timestamp stored in the index, then the index is marked invalid.
You must use the ANALYZE INDEX...VALIDATE STRUCTURE
statement to validate this index.

To illustrate a function-based index, consider the following statement that defines a
function-based index (area_index) defined on the function area(geo):
CREATE INDEX area_index ON rivers (area(geo));

In the following SQL statement, when area(geo) is referenced in the WHERE clause,
the optimizer considers using the index area_index.
SELECT id, geo, area(geo), desc
FROM rivers
WHERE area(geo) > 5000;


Table owners should have EXECUTE privileges on the functions used in function-based
indexes.
Because a function-based index depends upon any function it is using, it can be
invalidated when a function changes. If the function is valid, you can use an ALTER
INDEX...ENABLE statement to enable a function-based index that has been disabled.
The ALTER INDEX...DISABLE statement lets you disable the use of a function-based
index. Consider doing this if you are working on the body of the function.
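For example, using the area_index index from the example above:

-- Disable the function-based index while the area function is being changed.
ALTER INDEX area_index DISABLE;

-- Re-enable it after the function is valid again.
ALTER INDEX area_index ENABLE;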
See Also:
■ Oracle Database Concepts for more information about
  function-based indexes
■ Oracle Database Application Developer's Guide - Fundamentals for
  information about using function-based indexes in applications
  and examples of their use

Creating a Key-Compressed Index
Creating an index using key compression enables you to eliminate repeated
occurrences of key column prefix values.
Key compression breaks an index key into a prefix and a suffix entry. Compression is
achieved by sharing the prefix entries among all the suffix entries in an index block.
This sharing can lead to huge savings in space, allowing you to store more keys for
each index block while improving performance.
Key compression can be useful in the following situations:
■ You have a non-unique index where ROWID is appended to make the key unique.
  If you use key compression here, the duplicate key is stored as a prefix entry on
  the index block without the ROWID. The remaining rows become suffix entries
  consisting of only the ROWID.
■ You have a unique multicolumn index.

You enable key compression using the COMPRESS clause. The prefix length (as the
number of key columns) can also be specified to identify how the key columns are
broken into a prefix and suffix entry. For example, the following statement compresses
duplicate occurrences of a key in the index leaf block:
CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
COMPRESS 1;

The COMPRESS clause can also be specified during rebuild. For example, during
rebuild you can disable compression as follows:
ALTER INDEX emp_ename REBUILD NOCOMPRESS;

See Also: Oracle Database Concepts for a more detailed discussion
of key compression

Altering Indexes
To alter an index, your schema must contain the index or you must have the ALTER
ANY INDEX system privilege. Among the actions allowed by the ALTER INDEX
statement are:
■ Rebuild or coalesce an existing index
■ Deallocate unused space or allocate a new extent
■ Specify parallel execution (or not) and alter the degree of parallelism
■ Alter storage parameters or physical attributes
■ Specify LOGGING or NOLOGGING
■ Enable or disable key compression
■ Mark the index unusable
■ Start or stop the monitoring of index usage

You cannot alter index column structure.
More detailed discussions of some of these operations are contained in the following
sections:
■ Altering Storage Characteristics of an Index
■ Rebuilding an Existing Index
■ Monitoring Index Usage

Altering Storage Characteristics of an Index
Alter the storage parameters of any index, including those created by the database to
enforce primary and unique key integrity constraints, using the ALTER INDEX
statement. For example, the following statement alters the emp_ename index:
ALTER INDEX emp_ename
STORAGE (PCTINCREASE 50);

The storage parameters INITIAL and MINEXTENTS cannot be altered. All new
settings for the other storage parameters affect only extents subsequently allocated for
the index.
For indexes that implement integrity constraints, you can adjust storage parameters by
issuing an ALTER TABLE statement that includes the USING INDEX subclause of the
ENABLE clause. For example, the following statement changes the storage options of
the index created on table emp to enforce the primary key constraint:
ALTER TABLE emp
ENABLE PRIMARY KEY USING INDEX;

See Also: Oracle Database SQL Reference for syntax and restrictions
on the use of the ALTER INDEX statement

Rebuilding an Existing Index
Before rebuilding an existing index, compare the costs and benefits associated with
rebuilding to those associated with coalescing indexes as described in Table 16–1 on
page 16-5.
When you rebuild an index, you use an existing index as the data source. Creating an
index in this manner enables you to change storage characteristics or move to a new
tablespace. Rebuilding an index based on an existing data source removes intra-block
fragmentation. Compared to dropping the index and using the CREATE INDEX
statement, re-creating an existing index offers better performance.
The following statement rebuilds the existing index emp_name:
ALTER INDEX emp_name REBUILD;


The REBUILD clause must immediately follow the index name, and precede any other
options. It cannot be used in conjunction with the DEALLOCATE UNUSED clause.
You have the option of rebuilding the index online. Rebuilding online enables you to
update base tables at the same time that you are rebuilding. The following statement
rebuilds the emp_name index online:
ALTER INDEX emp_name REBUILD ONLINE;

Note: Online index rebuilding has stricter limitations on the
maximum key length that can be handled, compared to other methods
of rebuilding an index. If an ORA-1450 (maximum key length
exceeded) error occurs when rebuilding online, try rebuilding offline,
coalescing, or dropping and recreating the index.

If you do not have the space required to rebuild an index, you can choose instead to
coalesce the index. Coalescing an index is an online operation.
See Also:
■ "Creating an Index Online" on page 16-9
■ "Monitoring Space Use of Indexes" on page 16-13

Monitoring Index Usage
Oracle Database provides a means of monitoring indexes to determine whether they
are being used. If an index is not being used, then it can be dropped, eliminating
unnecessary statement overhead.
To start monitoring the usage of an index, issue this statement:
ALTER INDEX index MONITORING USAGE;

Later, issue the following statement to stop the monitoring:
ALTER INDEX index NOMONITORING USAGE;

The view V$OBJECT_USAGE can be queried for the index being monitored to see if the
index has been used. The view contains a USED column whose value is YES or NO,
depending on whether the index has been used within the time period being monitored.
The view also contains the start and stop times of the monitoring period, and a
MONITORING column (YES/NO) to indicate if usage monitoring is currently active.
Each time that you specify MONITORING USAGE, the V$OBJECT_USAGE view is reset
for the specified index. The previous usage information is cleared or reset, and a new
start time is recorded. When you specify NOMONITORING USAGE, no further
monitoring is performed, and the end time is recorded for the monitoring period. Until
the next ALTER INDEX...MONITORING USAGE statement is issued, the view
information is left unchanged.
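For example, a query along the following lines reports the monitoring state and usage of an index owned by the current user (the index name is illustrative):

SELECT index_name, table_name, monitoring, used,
       start_monitoring, end_monitoring
FROM v$object_usage
WHERE index_name = 'EMP_ENAME';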

Monitoring Space Use of Indexes
If key values in an index are inserted, updated, and deleted frequently, the index can
lose its acquired space efficiency over time. Monitor index efficiency of space usage at
regular intervals by first analyzing the index structure, using the ANALYZE
INDEX...VALIDATE STRUCTURE statement, and then querying the INDEX_STATS
view:
SELECT PCT_USED FROM INDEX_STATS WHERE NAME = 'index';
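For example, for the emp_ename index used elsewhere in this chapter, the sequence might be as follows. Both statements must run in the same session, because INDEX_STATS holds only the result of the most recent VALIDATE STRUCTURE:

ANALYZE INDEX emp_ename VALIDATE STRUCTURE;
SELECT PCT_USED FROM INDEX_STATS WHERE NAME = 'EMP_ENAME';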

The percentage of index space usage varies according to how often index keys are
inserted, updated, or deleted. Develop a history of average efficiency of space usage
for an index by performing the following sequence of operations several times:
■ Analyzing statistics
■ Validating the index
■ Checking PCT_USED
■ Dropping and rebuilding (or coalescing) the index

When you find that index space usage drops below its average, you can condense the
index space by dropping the index and rebuilding it, or coalescing it.
See Also: "Analyzing Tables, Indexes, and Clusters" on page 13-2

Dropping Indexes
To drop an index, the index must be contained in your schema, or you must have the
DROP ANY INDEX system privilege.
Some reasons for dropping an index include:
■ The index is no longer required.
■ The index is not providing anticipated performance improvements for queries
  issued against the associated table. For example, the table might be very small, or
  there might be many rows in the table but very few index entries.
■ Applications do not use the index to query the data.
■ The index has become invalid and must be dropped before being rebuilt.
■ The index has become too fragmented and must be dropped before being rebuilt.

When you drop an index, all extents of the index segment are returned to the
containing tablespace and become available for other objects in the tablespace.
How you drop an index depends on whether you created the index explicitly with a
CREATE INDEX statement, or implicitly by defining a key constraint on a table. If you
created the index explicitly with the CREATE INDEX statement, then you can drop the
index with the DROP INDEX statement. The following statement drops the
emp_ename index:
DROP INDEX emp_ename;

You cannot drop only the index associated with an enabled UNIQUE key or PRIMARY
KEY constraint. To drop a constraint's associated index, you must disable or drop the
constraint itself.
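As a sketch using the emp table from earlier examples, the KEEP INDEX and DROP INDEX clauses of ALTER TABLE make this choice explicit:

-- Drop the constraint and its index together:
ALTER TABLE emp DROP PRIMARY KEY DROP INDEX;

-- Or drop the constraint but keep the index for query performance:
ALTER TABLE emp DROP PRIMARY KEY KEEP INDEX;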
Note: If a table is dropped, all associated indexes are dropped
automatically.


See Also:
■ Oracle Database SQL Reference for syntax and restrictions on the
  use of the DROP INDEX statement
■ "Managing Integrity Constraints" on page 13-9

Viewing Index Information
The following views display information about indexes:
DBA_INDEXES
ALL_INDEXES
USER_INDEXES
  DBA view describes indexes on all tables in the database. ALL view describes
  indexes on all tables accessible to the user. USER view is restricted to indexes
  owned by the user. Some columns in these views contain statistics that are
  generated by the DBMS_STATS package or ANALYZE statement.

DBA_IND_COLUMNS
ALL_IND_COLUMNS
USER_IND_COLUMNS
  These views describe the columns of indexes on tables. Some columns in these
  views contain statistics that are generated by the DBMS_STATS package or
  ANALYZE statement.

DBA_IND_EXPRESSIONS
ALL_IND_EXPRESSIONS
USER_IND_EXPRESSIONS
  These views describe the expressions of function-based indexes on tables.

DBA_IND_STATISTICS
ALL_IND_STATISTICS
USER_IND_STATISTICS
  These views contain optimizer statistics for indexes.

INDEX_STATS
  Stores information from the last ANALYZE INDEX...VALIDATE STRUCTURE statement.

INDEX_HISTOGRAM
  Stores information from the last ANALYZE INDEX...VALIDATE STRUCTURE statement.

V$OBJECT_USAGE
  Contains index usage information produced by the ALTER INDEX...MONITORING
  USAGE functionality.

See Also: Oracle Database Reference for a complete description of
these views


17
Managing Partitioned Tables and Indexes
This chapter describes various aspects of managing partitioned tables and indexes,
and contains the following topics:
■ About Partitioned Tables and Indexes
■ Partitioning Methods
■ Creating Partitioned Tables
■ Maintaining Partitioned Tables
■ Dropping Partitioned Tables
■ Partitioned Tables and Indexes Example
■ Viewing Information About Partitioned Tables and Indexes

About Partitioned Tables and Indexes
Modern enterprises frequently run mission-critical databases containing upwards of
several hundred gigabytes and, in many cases, several terabytes of data. These
enterprises are challenged by the support and maintenance requirements of very large
databases (VLDB), and must devise methods to meet those challenges.
One way to meet VLDB demands is to create and use partitioned tables and indexes.
Partitioned tables allow your data to be broken down into smaller, more manageable
pieces called partitions. Partitioning can also bring better performance, because many
queries can prune (ignore) partitions that, according to the WHERE clause, won't have
the requested rows, thereby reducing the amount of data to be scanned to produce a
result set. Partitions can be further broken down into subpartitions for finer levels of
manageability and improved performance. Indexes can be partitioned in similar
fashion.
Each partition is stored in its own segment and can be managed individually. It can
function independently of the other partitions, thus providing a structure that can be
better tuned for availability and performance.
If you are using parallel execution, partitions provide another means of parallelization.
Operations on partitioned tables and indexes are performed in parallel by assigning
different parallel execution servers to different partitions of the table or index.
Partitions and subpartitions of a table or index all share the same logical attributes. For
example, all partitions (or subpartitions) in a table share the same column and
constraint definitions, and all partitions (or subpartitions) of an index share the same
index options. They can, however, have different physical attributes (such as
TABLESPACE).


Although you are not required to keep each table or index partition (or subpartition) in
a separate tablespace, it is to your advantage to do so. Storing partitions in separate
tablespaces enables you to:
■ Reduce the possibility of data corruption in multiple partitions
■ Back up and recover each partition independently
■ Control the mapping of partitions to disk drives (important for balancing I/O
  load)
■ Improve manageability, availability, and performance

Partitioning is transparent to existing applications and standard DML statements run
against partitioned tables. However, an application can be programmed to take
advantage of partitioning by using partition-extended table or index names in DML.
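For example (a sketch using the sales table created in "When to Use Range Partitioning" later in this chapter), a partition-extended table name restricts a query to a single partition:

-- Read only the first-quarter partition
SELECT * FROM sales PARTITION (sales_q1);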
The maximum number of partitions or subpartitions that a table may have is 1024K-1.
You can use SQL*Loader and the import and export utilities to load or unload data
stored in partitioned tables. These utilities are all partition and subpartition aware.
See Also:
■ Chapter 14, "Managing Space for Schema Objects" is
  recommended reading before attempting tasks described in this
  chapter.
■ Oracle Database Concepts contains more information about
  partitioning. Before the first time you attempt to create a
  partitioned table or index, or perform maintenance operations
  on any partitioned table, it is recommended that you review the
  information contained in that book.
■ Oracle Database Data Warehousing Guide and Oracle Database
  Concepts contain information about parallel execution.
■ Oracle Database Utilities describes SQL*Loader and the import
  and export utilities.

Partitioning Methods
There are several partitioning methods offered by Oracle Database:
■ Range partitioning
■ Hash partitioning
■ List partitioning
■ Composite range-hash partitioning
■ Composite range-list partitioning

Indexes, as well as tables, can be partitioned. A global index can be partitioned by the
range or hash method, and it can be defined on any type of partitioned, or
non-partitioned, table. It can require more maintenance than a local index.
A local index is constructed so that it reflects the structure of the underlying table. It is
equipartitioned with the underlying table, meaning that it is partitioned on the same
columns as the underlying table, creates the same number of partitions or
subpartitions, and gives them the same partition bounds as corresponding partitions
of the underlying table. For local indexes, index partitioning is maintained
automatically when partitions are affected by maintenance activity. This ensures that
the index remains equipartitioned with the underlying table.
The following sections can help you decide on a partitioning method appropriate for
your needs:
■ When to Use Range Partitioning
■ When to Use Hash Partitioning
■ When to Use List Partitioning
■ When to Use Composite Range-Hash Partitioning
■ When to Use Composite Range-List Partitioning

When to Use Range Partitioning
Use range partitioning to map rows to partitions based on ranges of column values.
This type of partitioning is useful when dealing with data that has logical ranges into
which it can be distributed; for example, months of the year. Performance is best when
the data evenly distributes across the range. If partitioning by range causes partitions
to vary dramatically in size because of unequal distribution, you may want to consider
one of the other methods of partitioning.
When creating range partitions, you must specify:
■ Partitioning method: range
■ Partitioning column(s)
■ Partition descriptions identifying partition bounds

The example below creates a table of four partitions, one for each quarter of sales. The
columns sale_year, sale_month, and sale_day are the partitioning columns,
while their values constitute the partitioning key of a specific row. The VALUES LESS
THAN clause determines the partition bound: rows with partitioning key values that
compare less than the ordered list of values specified by the clause are stored in the
partition. Each partition is given a name (sales_q1, sales_q2, ...), and each
partition is contained in a separate tablespace (tsa, tsb, ...).
CREATE TABLE sales
( invoice_no NUMBER,
sale_year INT NOT NULL,
sale_month INT NOT NULL,
sale_day INT NOT NULL )
PARTITION BY RANGE (sale_year, sale_month, sale_day)
( PARTITION sales_q1 VALUES LESS THAN (1999, 04, 01)
TABLESPACE tsa,
PARTITION sales_q2 VALUES LESS THAN (1999, 07, 01)
TABLESPACE tsb,
PARTITION sales_q3 VALUES LESS THAN (1999, 10, 01)
TABLESPACE tsc,
PARTITION sales_q4 VALUES LESS THAN (2000, 01, 01)
TABLESPACE tsd );

A row with sale_year=1999, sale_month=8, and sale_day=1 has a partitioning
key of (1999, 8, 1) and would be stored in partition sales_q3.
See Also: "Using Multicolumn Partitioning Keys" on page 17-15


When to Use Hash Partitioning
Use hash partitioning if your data does not easily lend itself to range partitioning, but
you would like to partition for performance and manageability reasons. Hash
partitioning provides a method of evenly distributing data across a specified number
of partitions. Rows are mapped into partitions based on a hash value of the
partitioning key. Creating and using hash partitions gives you a highly tunable
method of data placement, because you can influence availability and performance by
spreading these evenly sized partitions across I/O devices (striping).
To create hash partitions you specify the following:
■ Partitioning method: hash
■ Partitioning column(s)
■ Number of partitions or individual partition descriptions

The following example creates a hash-partitioned table. The partitioning column is id,
four partitions are created and assigned system generated names, and they are placed
in four named tablespaces (gear1, gear2, ...).
CREATE TABLE scubagear
(id NUMBER,
name VARCHAR2 (60))
PARTITION BY HASH (id)
PARTITIONS 4
STORE IN (gear1, gear2, gear3, gear4);

See Also: "Using Multicolumn Partitioning Keys" on page 17-15

When to Use List Partitioning
Use list partitioning when you require explicit control over how rows map to
partitions. You can specify a list of discrete values for the partitioning column in the
description for each partition. This is different from range partitioning, where a range
of values is associated with a partition, and from hash partitioning, where the user has
no control of the row to partition mapping.
The list partitioning method is specifically designed for modeling data distributions
that follow discrete values. This cannot be easily done by range or hash partitioning
because:
■ Range partitioning assumes a natural range of values for the partitioning column.
  It is not possible to group together out-of-range values into partitions.
■ Hash partitioning allows no control over the distribution of data because the data
  is distributed over the various partitions using the system hash function. Again,
  this makes it impossible to logically group together discrete values for the
  partitioning columns into partitions.

Further, list partitioning allows unordered and unrelated sets of data to be grouped
and organized together very naturally.
Unlike the range and hash partitioning methods, multicolumn partitioning is not
supported for list partitioning. If a table is partitioned by list, the partitioning key can
consist only of a single column of the table. Otherwise all columns that can be
partitioned by the range or hash methods can be partitioned by the list partitioning
method.
When creating list partitions, you must specify:
■ Partitioning method: list
■ Partitioning column
■ Partition descriptions, each specifying a list of literal values (a value list), which
  are the discrete values of the partitioning column that qualify a row to be included
  in the partition

The following example creates a list-partitioned table. It creates table
q1_sales_by_region which is partitioned by regions consisting of groups of states.
CREATE TABLE q1_sales_by_region
(deptno number,
deptname varchar2(20),
quarterly_sales number(10, 2),
state varchar2(2))
PARTITION BY LIST (state)
(PARTITION q1_northwest VALUES ('OR', 'WA'),
PARTITION q1_southwest VALUES ('AZ', 'UT', 'NM'),
PARTITION q1_northeast VALUES ('NY', 'VM', 'NJ'),
PARTITION q1_southeast VALUES ('FL', 'GA'),
PARTITION q1_northcentral VALUES ('SD', 'WI'),
PARTITION q1_southcentral VALUES ('OK', 'TX'));

A row is mapped to a partition by checking whether the value of the partitioning
column for a row matches a value in the value list that describes the partition.
For example, some sample rows are inserted as follows:
■ (10, 'accounting', 100, 'WA') maps to partition q1_northwest
■ (20, 'R&D', 150, 'OR') maps to partition q1_northwest
■ (30, 'sales', 100, 'FL') maps to partition q1_southeast
■ (40, 'HR', 10, 'TX') maps to partition q1_southcentral
■ (50, 'systems engineering', 10, 'CA') does not map to any partition in the table and
  raises an error

Unlike range partitioning, with list partitioning, there is no apparent sense of order
between partitions. You can also specify a default partition into which rows that do
not map to any other partition are mapped. If a default partition were specified in the
preceding example, the state CA would map to that partition.
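A minimal sketch of a default list partition (the table and partition names here are illustrative):

CREATE TABLE sales_by_region
  (deptno NUMBER,
   state  VARCHAR2(2))
PARTITION BY LIST (state)
  (PARTITION region_west  VALUES ('OR', 'WA'),
   PARTITION region_other VALUES (DEFAULT));

With this definition, a row with state = 'CA' maps to partition region_other instead of raising an error.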

When to Use Composite Range-Hash Partitioning
Range-hash partitioning partitions data using the range method, and within each
partition, subpartitions it using the hash method. These composite partitions are ideal
for both historical data and striping, and provide improved manageability of range
partitioning and data placement, as well as the parallelism advantages of hash
partitioning.
When creating range-hash partitions, you specify the following:
■ Partitioning method: range
■ Partitioning column(s)
■ Partition descriptions identifying partition bounds
■ Subpartitioning method: hash
■ Subpartitioning column(s)
■ Number of subpartitions for each partition or descriptions of subpartitions


The following statement creates a range-hash partitioned table. In this example, three
range partitions are created, each containing eight subpartitions. Because the
subpartitions are not named, system generated names are assigned, but the STORE IN
clause distributes them across the 4 specified tablespaces (ts1, ..., ts4).
CREATE TABLE scubagear (equipno NUMBER, equipname VARCHAR(32), price NUMBER)
PARTITION BY RANGE (equipno) SUBPARTITION BY HASH(equipname)
SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4)
(PARTITION p1 VALUES LESS THAN (1000),
PARTITION p2 VALUES LESS THAN (2000),
PARTITION p3 VALUES LESS THAN (MAXVALUE));

The partitions of a range-hash partitioned table are logical structures only, as their data
is stored in the segments of their subpartitions. As with partitions, these subpartitions
share the same logical attributes. Unlike range partitions in a range-partitioned table,
the subpartitions cannot have different physical attributes from the owning partition,
although they are not required to reside in the same tablespace.

When to Use Composite Range-List Partitioning
Like the composite range-hash partitioning method, the composite range-list
partitioning method provides for partitioning based on a two level hierarchy. The first
level of partitioning is based on a range of values, as for range partitioning; the second
level is based on discrete values, as for list partitioning. This form of composite
partitioning is well suited for historical data, but lets you further group the rows of
data based on unordered or unrelated column values.
When creating range-list partitions, you specify the following:
■ Partitioning method: range
■ Partitioning column(s)
■ Partition descriptions identifying partition bounds
■ Subpartitioning method: list
■ Subpartitioning column
■ Subpartition descriptions, each specifying a list of literal values (a value list),
  which are the discrete values of the subpartitioning column that qualify a row to
  be included in the subpartition

The following example illustrates how range-list partitioning might be used. The
example tracks sales data of products by quarters and within each quarter, groups it
by specified states.
CREATE TABLE quarterly_regional_sales
(deptno number, item_no varchar2(20),
txn_date date, txn_amount number, state varchar2(2))
TABLESPACE ts4
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (state)
(PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))
(SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q1_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q1_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))


(SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY'))
(SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q3_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q3_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q3_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q3_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
(SUBPARTITION q4_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q4_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q4_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q4_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q4_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q4_1999_southcentral VALUES ('OK', 'TX')
)
);

A row is mapped to a partition by checking whether the value of the partitioning
column for a row falls within a specific partition range. The row is then mapped to a
subpartition within that partition by identifying the subpartition whose descriptor
value list contains a value matching the subpartition column value.
For example, some sample rows are inserted as follows:
■  (10, 4532130, '23-Jan-1999', 8934.10, 'WA') maps to subpartition
   q1_1999_northwest
■  (20, 5671621, '15-May-1999', 49021.21, 'OR') maps to subpartition
   q2_1999_northwest
■  (30, 9977612, '07-Sep-1999', 30987.90, 'FL') maps to subpartition
   q3_1999_southeast
■  (40, 9977612, '29-Nov-1999', 67891.45, 'TX') maps to subpartition
   q4_1999_southcentral
■  (40, 4532130, '5-Jan-2000', 897231.55, 'TX') does not map to any partition in the
   table and raises an error
■  (50, 5671621, '17-Dec-1999', 76123.35, 'CA') does not map to any subpartition in the
   table and raises an error

The partitions of a range-list partitioned table are logical structures only, as their data
is stored in the segments of their subpartitions. The list subpartitions have the same
characteristics as list partitions. You can specify a default subpartition, just as you
specify a default partition for list partitioning.

Creating Partitioned Tables
Creating a partitioned table or index is very similar to creating a non-partitioned table
or index (as described in Chapter 15, "Managing Tables"), but you include a
partitioning clause. The partitioning clause, and subclauses, that you include depend
upon the type of partitioning you want to achieve.


You can partition both regular (heap organized) tables and index-organized tables,
except for those containing LONG or LONG RAW columns. You can create
non-partitioned global indexes, range or hash-partitioned global indexes, and local
indexes on partitioned tables.
When you create (or alter) a partitioned table, you can specify a row movement clause,
either ENABLE ROW MOVEMENT or DISABLE ROW MOVEMENT. This clause enables or
disables the migration of a row to a new partition if its partitioning key is updated.
The default is DISABLE ROW MOVEMENT.
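As a minimal sketch, row movement can also be switched on for an existing
partitioned table; the table name orders here is a hypothetical placeholder.
REM Hypothetical table; lets updates to the partitioning key move rows between partitions
ALTER TABLE orders ENABLE ROW MOVEMENT;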
The following sections present details and examples of creating partitions for the
various types of partitioned tables and indexes:
■  Creating Range-Partitioned Tables and Global Indexes
■  Creating Hash-Partitioned Tables and Global Indexes
■  Creating List-Partitioned Tables
■  Creating Composite Range-Hash Partitioned Tables
■  Creating Composite Range-List Partitioned Tables
■  Using Subpartition Templates to Describe Composite Partitioned Tables
■  Using Multicolumn Partitioning Keys
■  Creating Partitioned Index-Organized Tables
■  Partitioning Restrictions for Multiple Block Sizes
See Also:
■  Oracle Database SQL Reference for the exact syntax of the
   partitioning clauses for creating and altering partitioned tables
   and indexes, any restrictions on their use, and specific
   privileges required for creating and altering tables
■  Oracle Database Application Developer's Guide - Large Objects for
   information specific to creating partitioned tables containing
   columns with LOBs or other objects stored as LOBs
■  Oracle Database Application Developer's Guide - Object-Relational
   Features for information specific to creating tables with object
   types, nested tables, or VARRAYs

Creating Range-Partitioned Tables and Global Indexes
The PARTITION BY RANGE clause of the CREATE TABLE statement specifies that the
table or index is to be range-partitioned. The PARTITION clauses identify the
individual partition ranges, and optional subclauses of a PARTITION clause can
specify physical and other attributes specific to a partition segment. If not overridden
at the partition level, partitions inherit the attributes of their underlying table.

Creating a Range Partitioned Table
In this example, more complexity is added to the example presented earlier for a
range-partitioned table. Storage parameters and a LOGGING attribute are specified at
the table level. These replace the corresponding defaults inherited from the tablespace
level for the table itself, and are inherited by the range partitions. However, because
there was little business in the first quarter, the storage attributes for partition
sales_q1 are made smaller. The ENABLE ROW MOVEMENT clause is specified to allow

17-8 Oracle Database Administrator’s Guide

Creating Partitioned Tables

the migration of a row to a new partition if an update to a key value is made that
would place the row in a different partition.
CREATE TABLE sales
  ( invoice_no NUMBER,
    sale_year  INT NOT NULL,
    sale_month INT NOT NULL,
    sale_day   INT NOT NULL )
  STORAGE (INITIAL 100K NEXT 50K) LOGGING
  PARTITION BY RANGE ( sale_year, sale_month, sale_day)
    ( PARTITION sales_q1 VALUES LESS THAN ( 1999, 04, 01 )
        TABLESPACE tsa STORAGE (INITIAL 20K NEXT 10K),
      PARTITION sales_q2 VALUES LESS THAN ( 1999, 07, 01 )
        TABLESPACE tsb,
      PARTITION sales_q3 VALUES LESS THAN ( 1999, 10, 01 )
        TABLESPACE tsc,
      PARTITION sales_q4 VALUES LESS THAN ( 2000, 01, 01 )
        TABLESPACE tsd )
  ENABLE ROW MOVEMENT;

Creating a Range-Partitioned Global Index
The rules for creating range-partitioned global indexes are similar to those for creating
range-partitioned tables. The following is an example of creating a range-partitioned
global index on sale_month for the table created in the preceding example. Each
index partition is named but is stored in the default tablespace for the index.
CREATE INDEX month_ix ON sales(sale_month)
   GLOBAL PARTITION BY RANGE(sale_month)
      (PARTITION pm1_ix VALUES LESS THAN (2),
       PARTITION pm2_ix VALUES LESS THAN (3),
       PARTITION pm3_ix VALUES LESS THAN (4),
       PARTITION pm4_ix VALUES LESS THAN (5),
       PARTITION pm5_ix VALUES LESS THAN (6),
       PARTITION pm6_ix VALUES LESS THAN (7),
       PARTITION pm7_ix VALUES LESS THAN (8),
       PARTITION pm8_ix VALUES LESS THAN (9),
       PARTITION pm9_ix VALUES LESS THAN (10),
       PARTITION pm10_ix VALUES LESS THAN (11),
       PARTITION pm11_ix VALUES LESS THAN (12),
       PARTITION pm12_ix VALUES LESS THAN (MAXVALUE));

Note: If your enterprise has or will have databases using different
character sets, use caution when partitioning on character columns,
because the sort sequence of characters is not identical in all
character sets. For more information, see Oracle Database
Globalization Support Guide.

Creating Hash-Partitioned Tables and Global Indexes
The PARTITION BY HASH clause of the CREATE TABLE statement identifies that the
table is to be hash-partitioned. The PARTITIONS clause can then be used to specify the
number of partitions to create, and optionally, the tablespaces to store them in.
Alternatively, you can use PARTITION clauses to name the individual partitions and
their tablespaces.
The only attribute you can specify for hash partitions is TABLESPACE. All of the hash
partitions of a table must share the same segment attributes (except TABLESPACE),
which are inherited from the table level.

Creating a Hash Partitioned Table
The following examples illustrate two methods of creating a hash-partitioned table
named dept. In the first example the number of partitions is specified, but system
generated names are assigned to them and they are stored in the default tablespace of
the table.
CREATE TABLE dept (deptno NUMBER, deptname VARCHAR(32))
PARTITION BY HASH(deptno) PARTITIONS 16;

In this second example, names of individual partitions, and tablespaces in which they
are to reside, are specified. The initial extent size for each hash partition (segment) is
also explicitly stated at the table level, and all partitions inherit this attribute.
CREATE TABLE dept (deptno NUMBER, deptname VARCHAR(32))
STORAGE (INITIAL 10K)
PARTITION BY HASH(deptno)
(PARTITION p1 TABLESPACE ts1, PARTITION p2 TABLESPACE ts2,
PARTITION p3 TABLESPACE ts1, PARTITION p4 TABLESPACE ts3);

If you create a local index for this table, the database constructs the index so that it is
equipartitioned with the underlying table. The database also ensures that the index is
maintained automatically when maintenance operations are performed on the
underlying table. The following is an example of creating a local index on the table
dept:
CREATE INDEX loc_dept_ix ON dept(deptno) LOCAL;

You can optionally name the hash partitions and tablespaces into which the local index
partitions are to be stored, but if you do not do so, the database uses the name of the
corresponding base partition as the index partition name, and stores the index
partition in the same tablespace as the table partition.
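For illustration, a hedged sketch of naming the index partitions and their tablespaces
explicitly. The index name loc_dept_ix2, the index partition names, and the
tablespace assignments are all assumptions; a local index must define exactly one
partition per table partition (here, the four-partition dept table above).
CREATE INDEX loc_dept_ix2 ON dept(deptno) LOCAL
  (PARTITION p1_ix TABLESPACE ts1,
   PARTITION p2_ix TABLESPACE ts2,
   PARTITION p3_ix TABLESPACE ts1,
   PARTITION p4_ix TABLESPACE ts3);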

Creating a Hash-Partitioned Global Index
Hash partitioned global indexes can improve the performance of indexes where a
small number of leaf blocks in the index have high contention in multiuser OLTP
environments. Queries involving the equality and IN predicates on the index
partitioning key can efficiently use hash-partitioned global indexes.
The syntax for creating a hash partitioned global index is similar to that used for a
hash partitioned table. For example, the following statement creates a hash-partitioned
global index:
CREATE INDEX hgidx ON tab (c1,c2,c3) GLOBAL
PARTITION BY HASH (c1,c2)
(PARTITION p1 TABLESPACE tbs_1,
PARTITION p2 TABLESPACE tbs_2,
PARTITION p3 TABLESPACE tbs_3,
PARTITION p4 TABLESPACE tbs_4);

Creating List-Partitioned Tables
The semantics for creating list partitions are very similar to those for creating range
partitions. However, to create list partitions, you specify a PARTITION BY LIST
clause in the CREATE TABLE statement, and the PARTITION clauses specify lists of
literal values, which are the discrete values of the partitioning columns that qualify
rows to be included in the partition. For list partitioning, the partitioning key can only
be a single column name from the table.


Available only with list partitioning, you can use the keyword DEFAULT to describe
the value list for a partition. This identifies a partition that will accommodate rows
that do not map into any of the other partitions.
As for range partitions, optional subclauses of a PARTITION clause can specify
physical and other attributes specific to a partition segment. If not overridden at the
partition level, partitions inherit the attributes of their parent table.
The following example creates table sales_by_region and partitions it using the list
method. The first two PARTITION clauses specify physical attributes, which override
the table-level defaults. The remaining PARTITION clauses do not specify attributes
and those partitions inherit their physical attributes from table-level defaults. A
default partition is specified.
CREATE TABLE sales_by_region (item# INTEGER, qty INTEGER,
store_name VARCHAR(30), state_code VARCHAR(2),
sale_date DATE)
STORAGE(INITIAL 10K NEXT 20K) TABLESPACE tbs5
PARTITION BY LIST (state_code)
(
PARTITION region_east
VALUES ('MA','NY','CT','NH','ME','MD','VA','PA','NJ')
STORAGE (INITIAL 20K NEXT 40K PCTINCREASE 50)
TABLESPACE tbs8,
PARTITION region_west
VALUES ('CA','AZ','NM','OR','WA','UT','NV','CO')
NOLOGGING,
PARTITION region_south
VALUES ('TX','KY','TN','LA','MS','AR','AL','GA'),
PARTITION region_central
VALUES ('OH','ND','SD','MO','IL','MI','IA'),
PARTITION region_null
VALUES (NULL),
PARTITION region_unknown
VALUES (DEFAULT)
);

Creating Composite Range-Hash Partitioned Tables
To create a range-hash partitioned table, you start by using the PARTITION BY
RANGE clause of a CREATE TABLE statement. Next, you specify a SUBPARTITION BY
HASH clause that follows similar syntax and rules as the PARTITION BY HASH clause.
The individual PARTITION and SUBPARTITION or SUBPARTITIONS clauses, and
optionally a SUBPARTITION TEMPLATE clause, follow.
Attributes specified for a range partition apply to all subpartitions of that partition.
You can specify different attributes for each range partition, and you can specify a
STORE IN clause at the partition level if the list of tablespaces across which the
subpartitions of that partition should be spread is different from those of other
partitions. All of this is illustrated in the following example.
CREATE TABLE emp (deptno NUMBER, empname VARCHAR(32), grade NUMBER)
PARTITION BY RANGE(deptno) SUBPARTITION BY HASH(empname)
SUBPARTITIONS 8 STORE IN (ts1, ts3, ts5, ts7)
(PARTITION p1 VALUES LESS THAN (1000),
PARTITION p2 VALUES LESS THAN (2000)
STORE IN (ts2, ts4, ts6, ts8),
PARTITION p3 VALUES LESS THAN (MAXVALUE)
(SUBPARTITION p3_s1 TABLESPACE ts4,
SUBPARTITION p3_s2 TABLESPACE ts5));


To learn how using a subpartition template can simplify the specification of a
composite partitioned table, see "Using Subpartition Templates to Describe Composite
Partitioned Tables" on page 17-13.
The following statement is an example of creating a local index on the emp table where
the index segments are spread across tablespaces ts7, ts8, and ts9.
CREATE INDEX emp_ix ON emp(deptno)
LOCAL STORE IN (ts7, ts8, ts9);

This local index is equipartitioned with the base table as follows (a query sketch after
this list shows one way to verify this):
■  It consists of as many partitions as the base table.
■  Each index partition consists of as many subpartitions as the corresponding base
   table partition.
■  Index entries for rows in a given subpartition of the base table are stored in the
   corresponding subpartition of the index.
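One hedged way to observe this equipartitioning is to query the data dictionary. The
following sketch assumes the emp table and emp_ix index created above;
USER_IND_SUBPARTITIONS is the dictionary view that lists index subpartitions.
SELECT partition_name, subpartition_name, tablespace_name
  FROM user_ind_subpartitions
 WHERE index_name = 'EMP_IX'
 ORDER BY partition_name, subpartition_name;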

Creating Composite Range-List Partitioned Tables
The concept of range-list partitioning is similar to that of the other composite
partitioning method, range-hash, but this time you specify that the subpartitions are to
be list rather than hash. Specifically, after the CREATE TABLE...PARTITION BY
RANGE clause, you include a SUBPARTITION BY LIST clause that follows similar
syntax and rules as the PARTITION BY LIST clause. The individual PARTITION and
SUBPARTITION clauses, and optionally a SUBPARTITION TEMPLATE clause, follow.
The range partitions of the composite partitioned table are described as for
non-composite range-partitioned tables. This means that optional subclauses of a
PARTITION clause can specify physical and other attributes, including tablespace,
specific to a partition segment. If not overridden at the partition level, partitions
inherit the attributes of their underlying table.
The list subpartition descriptions, in the SUBPARTITION clauses, are described as for
non-composite list partitions, except the only physical attribute that can be specified is
a tablespace (optional). Subpartitions inherit all other physical attributes from the
partition description.
The following example creates a table that specifies a tablespace at the partition and
subpartition levels. The number of subpartitions within each partition varies, and
default subpartitions are specified.
CREATE TABLE sample_regional_sales
(deptno number, item_no varchar2(20),
txn_date date, txn_amount number, state varchar2(2))
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (state)
(PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))
TABLESPACE tbs_1
(SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q1_others VALUES (DEFAULT) TABLESPACE tbs_4
),
PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))
TABLESPACE tbs_2
(SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),


SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY'))
TABLESPACE tbs_3
(SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q3_others VALUES (DEFAULT) TABLESPACE tbs_4
),
PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
TABLESPACE tbs_4
);

This example results in the following subpartition descriptions:
■  All subpartitions inherit their physical attributes, other than tablespace, from
   tablespace level defaults. This is because the only physical attribute that has been
   specified for partitions or subpartitions is tablespace. There are no table level
   physical attributes specified, thus tablespace level defaults are inherited at all
   levels.
■  The four named subpartitions of partition q1_1999 are contained in tbs_1; the
   default subpartition q1_others is stored in tbs_4 and contains all rows that do
   not map to any of the other subpartitions.
■  The six subpartitions of partition q2_1999 are all stored in tbs_2.
■  The two named subpartitions of partition q3_1999 are contained in tbs_3; the
   default subpartition q3_others is stored in tbs_4 and contains all rows that do
   not map to any of the other subpartitions.
■  There is no subpartition description for partition q4_1999. This results in one
   default subpartition being created and stored in tbs_4. The subpartition name is
   system generated in the form SYS_SUBPn.

To learn how using a subpartition template can simplify the specification of a
composite partitioned table, see "Using Subpartition Templates to Describe Composite
Partitioned Tables".

Using Subpartition Templates to Describe Composite Partitioned Tables
You can create subpartitions in a composite partitioned table using a subpartition
template. A subpartition template simplifies the specification of subpartitions by not
requiring that a subpartition descriptor be specified for every partition in the table.
Instead, you describe subpartitions only once in a template, then apply that
subpartition template to every partition in the table.
The subpartition template is used whenever a subpartition descriptor is not specified
for a partition. If a subpartition descriptor is specified, then it is used instead of the
subpartition template for that partition. If no subpartition template is specified, and no
subpartition descriptor is supplied for a partition, then a single default subpartition is
created.

Specifying a Subpartition Template for a Range-Hash Partitioned Table
In the case of range-hash partitioned tables, the subpartition template can describe the
subpartitions in detail, or it can specify just the number of hash subpartitions.


The following example creates a range-hash partitioned table using a subpartition
template:
CREATE TABLE emp_sub_template (deptno NUMBER, empname VARCHAR(32), grade NUMBER)
PARTITION BY RANGE(deptno) SUBPARTITION BY HASH(empname)
SUBPARTITION TEMPLATE
(SUBPARTITION a TABLESPACE ts1,
SUBPARTITION b TABLESPACE ts2,
SUBPARTITION c TABLESPACE ts3,
SUBPARTITION d TABLESPACE ts4
)
(PARTITION p1 VALUES LESS THAN (1000),
PARTITION p2 VALUES LESS THAN (2000),
PARTITION p3 VALUES LESS THAN (MAXVALUE)
);

This example produces the following table description:
■  Every partition has four subpartitions as described in the subpartition template.
■  Each subpartition has a tablespace specified. It is required that if a tablespace is
   specified for one subpartition in a subpartition template, then one must be
   specified for all.
■  The names of the subpartitions are generated by concatenating the partition name
   with the subpartition name in the form:
   partition name_subpartition name

The following query displays the subpartition names and tablespaces:
SQL> SELECT TABLESPACE_NAME, PARTITION_NAME, SUBPARTITION_NAME
2 FROM DBA_TAB_SUBPARTITIONS WHERE TABLE_NAME='EMP_SUB_TEMPLATE'
3 ORDER BY TABLESPACE_NAME;
TABLESPACE_NAME PARTITION_NAME  SUBPARTITION_NAME
--------------- --------------- ------------------
TS1             P1              P1_A
TS1             P2              P2_A
TS1             P3              P3_A
TS2             P1              P1_B
TS2             P2              P2_B
TS2             P3              P3_B
TS3             P1              P1_C
TS3             P2              P2_C
TS3             P3              P3_C
TS4             P1              P1_D
TS4             P2              P2_D
TS4             P3              P3_D

12 rows selected.

Specifying a Subpartition Template for a Range-List Partitioned Table
The following example, for a range-list partitioned table, illustrates how using a
subpartition template can help you stripe data across tablespaces. In this example a
table is created where the table subpartitions are vertically striped, meaning that
subpartition n from every partition is in the same tablespace.
CREATE TABLE stripe_regional_sales
( deptno number, item_no varchar2(20),
txn_date date, txn_amount number, state varchar2(2))
PARTITION BY RANGE (txn_date)


SUBPARTITION BY LIST (state)
SUBPARTITION TEMPLATE
(SUBPARTITION northwest VALUES ('OR', 'WA') TABLESPACE tbs_1,
SUBPARTITION southwest VALUES ('AZ', 'UT', 'NM') TABLESPACE tbs_2,
SUBPARTITION northeast VALUES ('NY', 'VM', 'NJ') TABLESPACE tbs_3,
SUBPARTITION southeast VALUES ('FL', 'GA') TABLESPACE tbs_4,
SUBPARTITION midwest VALUES ('SD', 'WI') TABLESPACE tbs_5,
SUBPARTITION south VALUES ('AL', 'AK') TABLESPACE tbs_6,
SUBPARTITION others VALUES (DEFAULT ) TABLESPACE tbs_7
)
(PARTITION q1_1999 VALUES LESS THAN ( TO_DATE('01-APR-1999','DD-MON-YYYY')),
PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('01-JUL-1999','DD-MON-YYYY')),
PARTITION q3_1999 VALUES LESS THAN ( TO_DATE('01-OCT-1999','DD-MON-YYYY')),
PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
);

If you specified the tablespaces at the partition level (for example, tbs_1 for partition
q1_1999, tbs_2 for partition q2_1999, tbs_3 for partition q3_1999, and tbs_4 for
partition q4_1999) and not in the subpartition template, then the table would be
horizontally striped. All subpartitions would be in the tablespace of the owning
partition, as in the following sketch.
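The following is a minimal sketch of that horizontally striped variant. The table name
is an assumption, and the template is abbreviated to three subpartitions for brevity.
CREATE TABLE stripe_regional_sales_h
  ( deptno number, item_no varchar2(20),
    txn_date date, txn_amount number, state varchar2(2))
  PARTITION BY RANGE (txn_date)
  SUBPARTITION BY LIST (state)
  SUBPARTITION TEMPLATE
    (SUBPARTITION northwest VALUES ('OR', 'WA'),
     SUBPARTITION southwest VALUES ('AZ', 'UT', 'NM'),
     SUBPARTITION others VALUES (DEFAULT))
  (PARTITION q1_1999 VALUES LESS THAN (TO_DATE('01-APR-1999','DD-MON-YYYY'))
     TABLESPACE tbs_1,
   PARTITION q2_1999 VALUES LESS THAN (TO_DATE('01-JUL-1999','DD-MON-YYYY'))
     TABLESPACE tbs_2,
   PARTITION q3_1999 VALUES LESS THAN (TO_DATE('01-OCT-1999','DD-MON-YYYY'))
     TABLESPACE tbs_3,
   PARTITION q4_1999 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY'))
     TABLESPACE tbs_4);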

Using Multicolumn Partitioning Keys
For range- and hash-partitioned tables, you can specify up to 16 partitioning key
columns. Multicolumn partitioning should be used when the partitioning key is
composed of several columns and subsequent columns define a higher granularity
than the preceding ones. The most common scenario is a decomposed DATE or
TIMESTAMP key, consisting of separate columns for year, month, and day.
In evaluating multicolumn partitioning keys, the database uses the second value only
if the first value cannot uniquely identify a single target partition, and uses the third
value only if the first and second do not determine the correct partition, and so forth.
A value cannot determine the correct partition only when a partition bound exactly
matches that value and the same bound is defined for the next partition. The nth
column will therefore be investigated only when all previous (n-1) values of the
multicolumn key exactly match the (n-1) bounds of a partition. A second column, for
example, will be evaluated only if the first column exactly matches the partition
boundary value. If all column values exactly match all of the bound values for a
partition, the database will determine that the row does not fit in this partition and
will consider the next partition for a match.
In the case of nondeterministic boundary definitions (successive partitions with
identical values for at least one column), the partition boundary value becomes an
inclusive value, representing a "less than or equal to" boundary. This is in contrast to
deterministic boundaries, where the values are always regarded as "less than"
boundaries.
The following example illustrates the column evaluation for a multicolumn
range-partitioned table, storing the actual DATE information in three separate columns:
year, month, and day. The partitioning granularity is a calendar quarter. The
partitioned table being evaluated is created as follows:
CREATE TABLE sales_demo (
    year         NUMBER,
    month        NUMBER,
    day          NUMBER,
    amount_sold  NUMBER)
  PARTITION BY RANGE (year,month)
   (PARTITION before2001 VALUES LESS THAN (2001,1),
    PARTITION q1_2001    VALUES LESS THAN (2001,4),
    PARTITION q2_2001    VALUES LESS THAN (2001,7),
    PARTITION q3_2001    VALUES LESS THAN (2001,10),
    PARTITION q4_2001    VALUES LESS THAN (2002,1),
    PARTITION future     VALUES LESS THAN (MAXVALUE,0));

REM 12-DEC-2000
INSERT INTO sales_demo VALUES(2000,12,12, 1000);
REM 17-MAR-2001
INSERT INTO sales_demo VALUES(2001,3,17, 2000);
REM 1-NOV-2001
INSERT INTO sales_demo VALUES(2001,11,1, 5000);
REM 1-JAN-2002
INSERT INTO sales_demo VALUES(2002,1,1, 4000);

The year value for 12-DEC-2000 satisfied the first partition, before2001, so no
further evaluation is needed:
SELECT * FROM sales_demo PARTITION(before2001);

      YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
      2000         12         12        1000

The information for 17-MAR-2001 is stored in partition q1_2001. The first partitioning
key column, year, does not by itself determine the correct partition, so the second
partitioning key column, month, must be evaluated.
SELECT * FROM sales_demo PARTITION(q1_2001);

      YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
      2001          3         17        2000

Following the same determination rule as for the previous record, the second column,
month, determines q4_2001 as the correct partition for 1-NOV-2001:
SELECT * FROM sales_demo PARTITION(q4_2001);

      YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
      2001         11          1        5000

The partition for 01-JAN-2002 is determined by evaluating only the year column,
which indicates the future partition:
SELECT * FROM sales_demo PARTITION(future);

      YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
      2002          1          1        4000

If the database encounters MAXVALUE in one of the partition key columns, all
values of subsequent columns become irrelevant. That is, the definition of partition
future in the preceding example, having a bound of (MAXVALUE,0), is equivalent to a
bound of (MAXVALUE,100) or a bound of (MAXVALUE,MAXVALUE).
The following example illustrates the use of a multicolumn partitioned approach for
table supplier_parts, storing the information about which suppliers deliver which
parts. To distribute the data in equal-sized partitions, it is not sufficient to partition the
table based on the supplier_id, because some suppliers might provide hundreds of
thousands of parts, while others provide only a few specialty parts. Instead, you
partition the table on (supplier_id, partnum) to manually enforce equal-sized
partitions.
CREATE TABLE supplier_parts (
    supplier_id  NUMBER,
    partnum      NUMBER,
    price        NUMBER)
  PARTITION BY RANGE (supplier_id, partnum)
   (PARTITION p1 VALUES LESS THAN (10,100),
    PARTITION p2 VALUES LESS THAN (10,200),
    PARTITION p3 VALUES LESS THAN (MAXVALUE,MAXVALUE));

The following three records are inserted into the table:
INSERT INTO supplier_parts VALUES (5,5, 1000);
INSERT INTO supplier_parts VALUES (5,150, 1000);
INSERT INTO supplier_parts VALUES (10,100, 1000);

The first two records are inserted into partition p1, because the value of supplier_id
alone identifies the target partition. However, the third record is inserted into
partition p2; it matches all range boundary values of partition p1 exactly, so the
database considers the following partition for a match. The value of partnum satisfies
the criteria < 200, so the row is inserted into partition p2.
SELECT * FROM supplier_parts PARTITION (p1);

SUPPLIER_ID    PARTNUM      PRICE
----------- ---------- ----------
          5          5       1000
          5        150       1000

SELECT * FROM supplier_parts PARTITION (p2);

SUPPLIER_ID    PARTNUM      PRICE
----------- ---------- ----------
         10        100       1000

Every row with supplier_id < 10 is stored in partition p1, regardless of the
partnum value. The column partnum is evaluated only if supplier_id = 10, and the
corresponding rows are inserted into partition p1, p2, or even into p3 when
partnum >= 200. To achieve equal-sized partitions for ranges of supplier_parts,
you could choose a composite range-hash partitioned table, range partitioned by
supplier_id, hash subpartitioned by partnum, as in the sketch that follows.
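A minimal sketch of that alternative. The table name, partition bounds, and
subpartition count are assumptions chosen only to illustrate the shape of the DDL.
CREATE TABLE supplier_parts2 (
    supplier_id  NUMBER,
    partnum      NUMBER,
    price        NUMBER)
  PARTITION BY RANGE (supplier_id)
  SUBPARTITION BY HASH (partnum) SUBPARTITIONS 8
   (PARTITION p1 VALUES LESS THAN (10),
    PARTITION p2 VALUES LESS THAN (20),
    PARTITION p3 VALUES LESS THAN (MAXVALUE));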
Defining the partition boundaries for multicolumn partitioned tables must obey some
rules. For example, consider a table that is range partitioned on three columns a, b,
and c. The individual partitions have range values represented as follows:
P0(a0, b0, c0)
P1(a1, b1, c1)
P2(a2, b2, c2)
…
Pn(an, bn, cn)

The range values you provide for each partition must follow these rules:
■  a0 must be less than or equal to a1, and a1 must be less than or equal to a2, and
   so on.


■  If a0=a1, then b0 must be less than or equal to b1. If a0 < a1, then b0 and b1 can
   have any values.
■  If b0=b1, then c0 must be less than or equal to c1. If b0 < b1, then c0 and c1 can
   have any values.

Setting SIZE

Overestimating the value of SIZE increases the amount of unused space in the
cluster. If space efficiency is more important than the performance of data
retrieval, disregard the adjustments shown in the preceding table and use the
original value for SIZE.

Setting HASHKEYS
For maximum distribution of rows in a hash cluster, the database rounds the
HASHKEYS value up to the nearest prime number.

Controlling Space in Hash Clusters
The following examples show how to correctly choose the cluster key and set the HASH
IS, SIZE, and HASHKEYS parameters. For all examples, assume that the data block
size is 2K and that on average, 1950 bytes of each block is available data space (block
size minus overhead).
Controlling Space in Hash Clusters: Example 1 You decide to load the emp table into a hash
cluster. Most queries retrieve employee records by their employee number. You
estimate that the maximum number of rows in the emp table at any given time is 10000
and that the average row size is 55 bytes.
In this case, empno should be the cluster key. Because this column contains integers
that are unique, the internal hash function can be bypassed. SIZE can be set to the
average row size, 55 bytes. Note that 34 hash keys are assigned for each data block.
HASHKEYS can be set to the number of rows in the table, 10000. The database rounds
this value up to the next highest prime number: 10007.
CREATE CLUSTER emp_cluster (empno NUMBER)
. . .
SIZE 55
HASH IS empno HASHKEYS 10000;
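To complete the picture, a hedged sketch of placing a table in this cluster. The column
list is an assumption for illustration, and the elided clauses above are omitted here too.
CREATE TABLE emp (
    empno  NUMBER PRIMARY KEY,
    ename  VARCHAR2(10),
    deptno NUMBER)
  CLUSTER emp_cluster (empno);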

Controlling Space in Hash Clusters: Example 2 Conditions similar to the previous example
exist. In this case, however, rows are usually retrieved by department number. At
most, there are 1000 departments with an average of 10 employees for each
department. Department numbers increment by 10 (0, 10, 20, 30, . . . ).
In this case, deptno should be the cluster key. Since this column contains integers that
are uniformly distributed, the internal hash function can be bypassed. A preliminary
value of SIZE (the average amount of space required to hold all rows for each
department) is 55 bytes * 10, or 550 bytes. Using this value for SIZE, only three hash
keys can be assigned for each data block. If you expect some collisions and want
maximum performance of data retrieval, slightly alter your estimated SIZE to prevent
collisions from requiring overflow blocks. By adjusting SIZE by 12%, to 620 bytes
(refer to "Setting SIZE" on page 19-5), there is more space for rows from expected
collisions.
HASHKEYS can be set to the number of unique department numbers, 1000. The
database rounds this value up to the next highest prime number: 1009.
CREATE CLUSTER emp_cluster (deptno NUMBER)
. . .
SIZE 620
HASH IS deptno HASHKEYS 1000;


Estimating Size Required by Hash Clusters
As with index clusters, it is important to estimate the storage required for the data in a
hash cluster.
Oracle Database guarantees that the initial allocation of space is sufficient to store the
hash table according to the settings SIZE and HASHKEYS. If settings for the storage
parameters INITIAL, NEXT, and MINEXTENTS do not account for the hash table size,
incremental (additional) extents are allocated until at least SIZE*HASHKEYS is
reached. For example, assume that the data block size is 2K, the available data space
for each block is approximately 1900 bytes (data block size minus overhead), and that
the STORAGE and HASH parameters are specified in the CREATE CLUSTER statement
as follows:
STORAGE (INITIAL 100K
NEXT 150K
MINEXTENTS 1
PCTINCREASE 0)
SIZE 1500
HASHKEYS 100

In this example, only one hash key can be assigned for each data block. Therefore, the
initial space required for the hash cluster is at least 100*2K or 200K. The settings for the
storage parameters do not account for this requirement. Therefore, an initial extent of
100K and a second extent of 150K are allocated to the hash cluster.
Alternatively, assume the HASH parameters are specified as follows:
SIZE 500 HASHKEYS 100

In this case, three hash keys are assigned to each data block. Therefore, the initial space
required for the hash cluster is at least 34*2K or 68K. The initial settings for the storage
parameters are sufficient for this requirement (an initial extent of 100K is allocated to
the hash cluster).

Altering Hash Clusters
You can alter a hash cluster with the ALTER CLUSTER statement:
ALTER CLUSTER emp_dept . . . ;

The implications for altering a hash cluster are identical to those for altering an index
cluster, described in "Altering Clusters" on page 18-6. However, the SIZE, HASHKEYS,
and HASH IS parameters cannot be specified in an ALTER CLUSTER statement. To
change these parameters, you must re-create the cluster, then copy the data from the
original cluster.
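For example, a hedged sketch of one way to re-create a hash cluster with new settings
and copy the data; all object names and parameter values here are assumptions.
REM Re-create with the new SIZE and HASHKEYS, then copy the data
CREATE CLUSTER emp_dept_new (deptno NUMBER)
  SIZE 700 HASH IS deptno HASHKEYS 2000;
CREATE TABLE emp_new CLUSTER emp_dept_new (deptno)
  AS SELECT * FROM emp;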

Dropping Hash Clusters
You can drop a hash cluster using the DROP CLUSTER statement:
DROP CLUSTER emp_dept;

A table in a hash cluster is dropped using the DROP TABLE statement. The
implications of dropping hash clusters and tables in hash clusters are the same as those
for dropping index clusters.
See Also:

"Dropping Clusters" on page 18-7


Viewing Information About Hash Clusters
The following views display information about hash clusters:
View                              Description
--------------------------------  -----------------------------------------------------
DBA_CLUSTERS                      The DBA view describes all clusters (including hash
ALL_CLUSTERS                      clusters) in the database. The ALL view describes all
USER_CLUSTERS                     clusters accessible to the user. The USER view is
                                  restricted to clusters owned by the user. Some columns
                                  in these views contain statistics that are generated by
                                  the DBMS_STATS package or ANALYZE statement.

DBA_CLU_COLUMNS                   These views map table columns to cluster columns.
USER_CLU_COLUMNS

DBA_CLUSTER_HASH_EXPRESSIONS      These views list hash functions for hash clusters.
ALL_CLUSTER_HASH_EXPRESSIONS
USER_CLUSTER_HASH_EXPRESSIONS
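For example, a hedged sketch of checking the hash settings of clusters you own, using
columns documented for these dictionary views:
SELECT cluster_name, tablespace_name, key_size, hashkeys, function
  FROM user_clusters;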

See Also: Oracle Database Reference for complete descriptions of these views


20
Managing Views, Sequences, and Synonyms
This chapter describes the management of views, sequences, and synonyms and
contains the following topics:
■  Managing Views
■  Managing Sequences
■  Managing Synonyms
■  Viewing Information About Views, Synonyms, and Sequences

Managing Views
This section describes aspects of managing views, and contains the following topics:
■  About Views
■  Creating Views
■  Replacing Views
■  Using Views in Queries
■  Updating a Join View
■  Altering Views
■  Dropping Views

About Views
A view is a logical representation of another table or combination of tables. A view
derives its data from the tables on which it is based. These tables are called base
tables. Base tables might in turn be actual tables or might be views themselves. All
operations performed on a view actually affect the base table of the view. You can use
views in almost the same way as tables. You can query, update, insert into, and delete
from views, just as you can standard tables.
Views can provide a different representation (such as subsets or supersets) of the data
that resides within other tables and views. Views are very powerful because they
allow you to tailor the presentation of data to different types of users.
See Also: Oracle Database Concepts for a more complete
description of views

Creating Views
To create a view, you must meet the following requirements:

■  To create a view in your schema, you must have the CREATE VIEW privilege. To
   create a view in another user's schema, you must have the CREATE ANY VIEW
   system privilege. You can acquire these privileges explicitly or through a role.
■  The owner of the view (whether it is you or another user) must have been
   explicitly granted privileges to access all objects referenced in the view definition.
   The owner cannot have obtained these privileges through roles. Also, the
   functionality of the view depends on the privileges of the view owner. For
   example, if the owner of the view has only the INSERT privilege for Scott's emp
   table, then the view can be used only to insert new rows into the emp table, not to
   SELECT, UPDATE, or DELETE rows.
■  If the owner of the view intends to grant access to the view to other users, the
   owner must have received the object privileges to the base objects with the GRANT
   OPTION or the system privileges with the ADMIN OPTION.

You can create views using the CREATE VIEW statement. Each view is defined by a
query that references tables, materialized views, or other views. As with all
subqueries, the query that defines a view cannot contain the FOR UPDATE clause.
The following statement creates a view on a subset of data in the emp table:
CREATE VIEW sales_staff AS
SELECT empno, ename, deptno
FROM emp
WHERE deptno = 10
WITH CHECK OPTION CONSTRAINT sales_staff_cnst;

The query that defines the sales_staff view references only rows in department 10.
Furthermore, the CHECK OPTION creates the view with the constraint (named
sales_staff_cnst) that INSERT and UPDATE statements issued against the view
cannot result in rows that the view cannot select. For example, the following INSERT
statement successfully inserts a row into the emp table by means of the sales_staff
view, which contains all rows with department number 10:
INSERT INTO sales_staff VALUES (7584, 'OSTER', 10);

However, the following INSERT statement returns an error because it attempts to
insert a row for department number 30, which cannot be selected using the
sales_staff view:
INSERT INTO sales_staff VALUES (7591, 'WILLIAMS', 30);

The view could instead have been constructed specifying the WITH READ ONLY
clause, which prevents any updates, inserts, or deletes from being done to the base
table through the view, as in the sketch below. If no WITH clause is specified, the
view, with some restrictions, is inherently updatable.
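A minimal sketch of such a read-only variant; the view name is an assumption.
CREATE VIEW sales_staff_ro AS
  SELECT empno, ename, deptno
  FROM emp
  WHERE deptno = 10
  WITH READ ONLY;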
See Also: Oracle Database SQL Reference for syntax and semantics
of the CREATE VIEW statement

Join Views
You can also create views that specify more than one base table or view in the FROM
clause. These are called join views. The following statement creates the
division1_staff view that joins data from the emp and dept tables:
CREATE VIEW division1_staff AS
SELECT ename, empno, job, dname
FROM emp, dept
WHERE emp.deptno IN (10, 30)


AND emp.deptno = dept.deptno;

An updatable join view is a join view where UPDATE, INSERT, and DELETE
operations are allowed. See "Updating a Join View" on page 20-5 for further
discussion.

Expansion of Defining Queries at View Creation Time
When a view is created, Oracle Database expands any wildcard (*) in a top-level view
query into a column list. The resulting query is stored in the data dictionary; any
subqueries are left intact. The column names in an expanded column list are enclosed
in quote marks to account for the possibility that the columns of the base object were
originally entered with quotes and require them for the query to be syntactically
correct.
As an example, assume that the dept view is created as follows:
CREATE VIEW dept AS SELECT * FROM scott.dept;

The database stores the defining query of the dept view as:
SELECT "DEPTNO", "DNAME", "LOC" FROM scott.dept;

Views created with errors do not have wildcards expanded. However, if the view is
eventually compiled without errors, wildcards in the defining query are expanded.

Creating Views with Errors
If there are no syntax errors in a CREATE VIEW statement, the database can create the
view even if the defining query of the view cannot be executed. In this case, the view is
considered "created with errors." For example, when a view is created that refers to a
nonexistent table or an invalid column of an existing table, or when the view owner
does not have the required privileges, the view can be created anyway and entered
into the data dictionary. However, the view is not yet usable.
To create a view with errors, you must include the FORCE clause of the CREATE VIEW
statement:
CREATE FORCE VIEW view_name AS ...;
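For instance, the following hedged sketch creates a view whose referenced table need
not exist yet; both the view and table names are assumptions.
CREATE FORCE VIEW pending_orders_v AS
  SELECT order_id, status FROM future_orders;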

By default, views with errors are created as INVALID. When you try to create such a
view, the database returns a message indicating the view was created with errors. If
conditions later change so that the query of an invalid view can be executed, the view
can be recompiled and become valid (usable). For information about changing
conditions and their impact on views, see "Managing Object Dependencies" on page 13-16.

Replacing Views
To replace a view, you must have all of the privileges required to drop and create a
view. If the definition of a view must change, the view must be replaced; you cannot
use an ALTER VIEW statement to change the definition of a view. You can replace
views in the following ways:
■  You can drop and re-create the view.
Caution: When a view is dropped, all grants of corresponding
object privileges are revoked from roles and users. After the view is
re-created, privileges must be regranted.


■  You can redefine the view with a CREATE VIEW statement that contains the OR
REPLACE clause. The OR REPLACE clause replaces the current definition of a view
and preserves the current security authorizations. For example, assume that you
created the sales_staff view as shown earlier, and, in addition, you granted
several object privileges to roles and other users. However, now you need to
redefine the sales_staff view to change the department number specified in
the WHERE clause. You can replace the current version of the sales_staff view
with the following statement:
CREATE OR REPLACE VIEW sales_staff AS
SELECT empno, ename, deptno
FROM emp
WHERE deptno = 30
WITH CHECK OPTION CONSTRAINT sales_staff_cnst;

Before replacing a view, consider the following effects:
■  Replacing a view replaces the view definition in the data dictionary. All
   underlying objects referenced by the view are not affected.
■  If a constraint in the CHECK OPTION was previously defined but not included in
   the new view definition, the constraint is dropped.
■  All views dependent on a replaced view become invalid (not usable). In addition,
   dependent PL/SQL program units may become invalid, depending on what was
   changed in the new version of the view. For example, if only the WHERE clause of
   the view changes, dependent PL/SQL program units remain valid. However, if
   any changes are made to the number of view columns or to the view column
   names or data types, dependent PL/SQL program units are invalidated. See
   "Managing Object Dependencies" on page 13-16 for more information on how the
   database manages such dependencies.

Using Views in Queries
To issue a query or an INSERT, UPDATE, or DELETE statement against a view, you
must have the SELECT, INSERT, UPDATE, or DELETE object privilege for the view,
respectively, either explicitly or through a role.
Views can be queried in the same manner as tables. For example, to query the
Division1_staff view, enter a valid SELECT statement that references the view:
SELECT * FROM Division1_staff;

ENAME           EMPNO JOB        DNAME
---------- ---------- ---------- --------------
CLARK            7782 MANAGER    ACCOUNTING
KING             7839 PRESIDENT  ACCOUNTING
MILLER           7934 CLERK      ACCOUNTING
ALLEN            7499 SALESMAN   SALES
WARD             7521 SALESMAN   SALES
JAMES            7900 CLERK      SALES
TURNER           7844 SALESMAN   SALES
MARTIN           7654 SALESMAN   SALES
BLAKE            7698 MANAGER    SALES
With some restrictions, rows can be inserted into, updated in, or deleted from a base
table using a view. The following statement inserts a new row into the emp table using
the sales_staff view:
INSERT INTO sales_staff
VALUES (7954, 'OSTER', 30);

Restrictions on DML operations for views use the following criteria in the order listed:
1.  If a view is defined by a query that contains SET or DISTINCT operators, a GROUP
    BY clause, or a group function, then rows cannot be inserted into, updated in, or
    deleted from the base tables using the view.
2.  If a view is defined with WITH CHECK OPTION, a row cannot be inserted into, or
    updated in, the base table (using the view), if the view cannot select the row from
    the base table.
3.  If a NOT NULL column that does not have a DEFAULT clause is omitted from the
    view, then a row cannot be inserted into the base table using the view.
4.  If the view was created by using an expression, such as DECODE(deptno, 10,
    "SALES", ...), then rows cannot be inserted into or updated in the base table
    using the view.

The constraint created by WITH CHECK OPTION of the sales_staff view only allows
rows that have a department number of 30 to be inserted into, or updated in, the emp
table. Alternatively, assume that the sales_staff view is defined by the following
statement (that is, excluding the deptno column):
CREATE VIEW sales_staff AS
SELECT empno, ename
FROM emp
WHERE deptno = 10
WITH CHECK OPTION CONSTRAINT sales_staff_cnst;

Considering this view definition, you can update the empno or ename fields of
existing records, but you cannot insert rows into the emp table through the
sales_staff view, because the view does not let you specify the deptno field. If
you had defined a DEFAULT value of 10 on the deptno field, then you could perform
inserts, as the following sketch shows.
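A hedged sketch of that variation; the new employee values are assumptions. With the
default in place, the inserted row receives deptno 10 and therefore satisfies the view's
CHECK OPTION constraint.
ALTER TABLE emp MODIFY (deptno DEFAULT 10);
INSERT INTO sales_staff VALUES (7956, 'HARRIS');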
When a user attempts to reference an invalid view, the database returns an error
message to the user:
ORA-04063: view 'view_name' has errors

This error message is returned when a view exists but is unusable due to errors in its
query (whether it had errors when originally created or it was created successfully but
became unusable later because underlying objects were altered or dropped).

Updating a Join View
An updatable join view (also referred to as a modifiable join view) is a view that
contains more than one table in the top-level FROM clause of the SELECT statement,
and is not restricted by the WITH READ ONLY clause.
The rules for updatable join views are shown in the following table. Views that meet
these criteria are said to be inherently updatable.

Rule          Description
------------  ---------------------------------------------------------------
General Rule  Any INSERT, UPDATE, or DELETE operation on a join view can
              modify only one underlying base table at a time.

UPDATE Rule   All updatable columns of a join view must map to columns of a
              key-preserved table. See "Key-Preserved Tables" on page 20-7
              for a discussion of key-preserved tables. If the view is
              defined with the WITH CHECK OPTION clause, then all join
              columns and all columns of repeated tables are not updatable.

DELETE Rule   Rows from a join view can be deleted as long as there is
              exactly one key-preserved table in the join. The key-preserved
              table can be repeated in the FROM clause. If the view is
              defined with the WITH CHECK OPTION clause and the
              key-preserved table is repeated, then the rows cannot be
              deleted from the view.

INSERT Rule   An INSERT statement must not explicitly or implicitly refer to
              the columns of a non-key-preserved table. If the join view is
              defined with the WITH CHECK OPTION clause, INSERT statements
              are not permitted.

There are data dictionary views that indicate whether the columns in a join view are
inherently updatable. See "Using the UPDATABLE_COLUMNS Views" on page 20-11
for descriptions of these views.
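For example, a hedged sketch of querying one of these views for the emp_dept join
view defined later in this section:
SELECT column_name, updatable, insertable, deletable
  FROM user_updatable_columns
 WHERE table_name = 'EMP_DEPT';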
Note: There are some additional restrictions and conditions that
can affect whether a join view is inherently updatable. Specifics are
listed in the description of the CREATE VIEW statement in the
Oracle Database SQL Reference.

If a view is not inherently updatable, it can be made updatable by
creating an INSTEAD OF trigger on it. This is described in Oracle
Database Application Developer's Guide - Fundamentals.

Additionally, if a view is a join on other nested views, then the
other nested views must be mergeable into the top level view. For a
discussion of mergeable and unmergeable views, and more
generally, how the optimizer optimizes statements that reference
views, see the Oracle Database Performance Tuning Guide.
Examples illustrating the rules for inherently updatable join views, and a discussion of
key-preserved tables, are presented in following sections. The examples in these
sections work only if you explicitly define the primary and foreign keys in the tables,
or define unique indexes. The following statements create the appropriately
constrained table definitions for emp and dept.
CREATE TABLE dept (
   deptno  NUMBER(4) PRIMARY KEY,
   dname   VARCHAR2(14),
   loc     VARCHAR2(13));

CREATE TABLE emp (
   empno   NUMBER(4) PRIMARY KEY,
   ename   VARCHAR2(10),
   job     VARCHAR2(9),
   mgr     NUMBER(4),
   sal     NUMBER(7,2),
   comm    NUMBER(7,2),
   deptno  NUMBER(2),
   FOREIGN KEY (DEPTNO) REFERENCES DEPT(DEPTNO));


You could also omit the primary and foreign key constraints listed in the preceding
example, and create a UNIQUE INDEX on dept (deptno) to make the following
examples work.
The following statement creates the emp_dept join view, which is referenced in the
examples:
CREATE VIEW emp_dept AS
SELECT emp.empno, emp.ename, emp.deptno, emp.sal, dept.dname, dept.loc
FROM emp, dept
WHERE emp.deptno = dept.deptno
AND dept.loc IN ('DALLAS', 'NEW YORK', 'BOSTON');

Key-Preserved Tables
The concept of a key-preserved table is fundamental to understanding the restrictions
on modifying join views. A table is key-preserved if every key of the table can also be
a key of the result of the join. So, a key-preserved table has its keys preserved through
a join.
Note: It is not necessary that the key or keys of a table be selected
for it to be key preserved. It is sufficient that if the key or keys were
selected, then they would also be keys of the result of the join.

The key-preserving property of a table does not depend on the actual data in the table.
It is, rather, a property of its schema. For example, if in the emp table there was at most
one employee in each department, then deptno would be unique in the result of a join
of emp and dept, but dept would still not be a key-preserved table.
If you select all rows from emp_dept, the results are:

     EMPNO ENAME          DEPTNO DNAME          LOC
---------- ---------- ---------- -------------- -----------
      7782 CLARK              10 ACCOUNTING     NEW YORK
      7839 KING               10 ACCOUNTING     NEW YORK
      7934 MILLER             10 ACCOUNTING     NEW YORK
      7369 SMITH              20 RESEARCH       DALLAS
      7876 ADAMS              20 RESEARCH       DALLAS
      7902 FORD               20 RESEARCH       DALLAS
      7788 SCOTT              20 RESEARCH       DALLAS
      7566 JONES              20 RESEARCH       DALLAS

8 rows selected.

In this view, emp is a key-preserved table, because empno is a key of the emp table, and
also a key of the result of the join. dept is not a key-preserved table, because although
deptno is a key of the dept table, it is not a key of the join.

DML Statements and Join Views
The general rule is that any UPDATE, DELETE, or INSERT statement on a join view can
modify only one underlying base table. The following examples illustrate rules specific
to UPDATE, DELETE, and INSERT statements.
UPDATE Statements The following example shows an UPDATE statement that
successfully modifies the emp_dept view:
UPDATE emp_dept
SET sal = sal * 1.10
WHERE deptno = 10;


The following UPDATE statement would be disallowed on the emp_dept view:
UPDATE emp_dept
SET loc = 'BOSTON'
WHERE ename = 'SMITH';

This statement fails with an error (ORA-01779 cannot modify a column which
maps to a non key-preserved table), because it attempts to modify the base
dept table, and the dept table is not key-preserved in the emp_dept view.
In general, all updatable columns of a join view must map to columns of a
key-preserved table. If the view is defined using the WITH CHECK OPTION clause,
then all join columns and all columns taken from tables that are referenced more than
once in the view are not modifiable.
So, for example, if the emp_dept view were defined using WITH CHECK OPTION, the
following UPDATE statement would fail:
UPDATE emp_dept
SET deptno = 10
WHERE ename = 'SMITH';

The statement fails because it is trying to update a join column.
See Also: Oracle Database SQL Reference for syntax and additional
information about the UPDATE statement

DELETE Statements You can delete from a join view provided there is one and only one
key-preserved table in the join. The key-preserved table can be repeated in the FROM
clause.
The following DELETE statement works on the emp_dept view:
DELETE FROM emp_dept
WHERE ename = 'SMITH';

This DELETE statement on the emp_dept view is legal because it can be translated to a
DELETE operation on the base emp table, and because the emp table is the only
key-preserved table in the join.
In the following view, a DELETE operation is permitted, because although there are
two key-preserved tables, they are the same table. That is, the key-preserved table is
repeated. In this case, the delete statement operates on the first table in the FROM list
(e1, in this example):
CREATE VIEW emp_emp AS
SELECT e1.ename, e2.empno, e2.deptno
FROM emp e1, emp e2
WHERE e1.empno = e2.empno;

If a view is defined using the WITH CHECK OPTION clause and the key-preserved
table is repeated, rows cannot be deleted from such a view, as in this example:
CREATE VIEW emp_mgr AS
SELECT e1.ename, e2.ename mname
FROM emp e1, emp e2
WHERE e1.mgr = e2.empno
WITH CHECK OPTION;

See Also: Oracle Database SQL Reference for syntax and additional
information about the DELETE statement


INSERT Statements The following INSERT statement on the emp_dept view succeeds:
INSERT INTO emp_dept (ename, empno, deptno)
VALUES ('KURODA', 9010, 40);

This statement works because only one key-preserved base table is being modified
(emp), and 40 is a valid deptno in the dept table (thus satisfying the FOREIGN KEY
integrity constraint on the emp table).
An INSERT statement, such as the following, would fail for the same reason that such
an UPDATE on the base emp table would fail: the FOREIGN KEY integrity constraint on
the emp table is violated (because there is no deptno 77).
INSERT INTO emp_dept (ename, empno, deptno)
VALUES ('KURODA', 9010, 77);

The following INSERT statement would fail with an error (ORA-01776 cannot
modify more than one base table through a join view):
INSERT INTO emp_dept (empno, ename, loc)
VALUES (9010, 'KURODA', 'BOSTON');

An INSERT cannot implicitly or explicitly refer to columns of a non-key-preserved
table. If the join view is defined using the WITH CHECK OPTION clause, then you
cannot perform an INSERT to it.
See Also: Oracle Database SQL Reference for syntax and additional
information about the INSERT statement
Updating Views That Involve Outer Joins
Views that involve outer joins are modifiable in some cases. For example:
CREATE VIEW emp_dept_oj1 AS
SELECT empno, ename, e.deptno, dname, loc
FROM emp e, dept d
WHERE e.deptno = d.deptno (+);

The statement:
SELECT * FROM emp_dept_oj1;

Results in:
EMPNO   ENAME          DEPTNO DNAME          LOC
------- ---------- ---------- -------------- -------------
   7369 SMITH              40 OPERATIONS     BOSTON
   7499 ALLEN              30 SALES          CHICAGO
   7566 JONES              20 RESEARCH       DALLAS
   7654 MARTIN             30 SALES          CHICAGO
   7698 BLAKE              30 SALES          CHICAGO
   7782 CLARK              10 ACCOUNTING     NEW YORK
   7788 SCOTT              20 RESEARCH       DALLAS
   7839 KING               10 ACCOUNTING     NEW YORK
   7844 TURNER             30 SALES          CHICAGO
   7876 ADAMS              20 RESEARCH       DALLAS
   7900 JAMES              30 SALES          CHICAGO
   7902 FORD               20 RESEARCH       DALLAS
   7934 MILLER             10 ACCOUNTING     NEW YORK
   7521 WARD               30 SALES          CHICAGO

14 rows selected.


Columns in the base emp table of emp_dept_oj1 are modifiable through the view,
because emp is a key-preserved table in the join.
The following view also contains an outer join:
CREATE VIEW emp_dept_oj2 AS
SELECT e.empno, e.ename, e.deptno, d.dname, d.loc
FROM emp e, dept d
WHERE e.deptno (+) = d.deptno;

The following statement:
SELECT * FROM emp_dept_oj2;

Results in:
EMPNO      ENAME          DEPTNO DNAME          LOC
---------- ---------- ---------- -------------- ---------
      7782 CLARK              10 ACCOUNTING     NEW YORK
      7839 KING               10 ACCOUNTING     NEW YORK
      7934 MILLER             10 ACCOUNTING     NEW YORK
      7369 SMITH              20 RESEARCH       DALLAS
      7876 ADAMS              20 RESEARCH       DALLAS
      7902 FORD               20 RESEARCH       DALLAS
      7788 SCOTT              20 RESEARCH       DALLAS
      7566 JONES              20 RESEARCH       DALLAS
      7499 ALLEN              30 SALES          CHICAGO
      7698 BLAKE              30 SALES          CHICAGO
      7654 MARTIN             30 SALES          CHICAGO
      7900 JAMES              30 SALES          CHICAGO
      7844 TURNER             30 SALES          CHICAGO
      7521 WARD               30 SALES          CHICAGO
                                  OPERATIONS     BOSTON

15 rows selected.

In this view, emp is no longer a key-preserved table, because the empno column in the
result of the join can have nulls (the last row in the preceding SELECT statement). So,
UPDATE, DELETE, and INSERT operations cannot be performed on this view.
In the case of views containing an outer join on other nested views, a table is key
preserved if the view or views containing the table are merged into their outer views,
all the way to the top. A view that is being outer-joined is currently merged only if it
is "simple." For example:
SELECT col1, col2, ... FROM T;

The select list of the view has no expressions, and there is no WHERE clause.
Consider the following set of views:
CREATE VIEW emp_v AS
SELECT empno, ename, deptno
FROM emp;
CREATE VIEW emp_dept_oj1 AS
SELECT e.*, Loc, d.dname
FROM emp_v e, dept d
WHERE e.deptno = d.deptno (+);

In these examples, emp_v is merged into emp_dept_oj1 because emp_v is a simple
view, and so emp is a key-preserved table. But if emp_v is changed as follows:
CREATE VIEW emp_v_2 AS
SELECT empno, ename, deptno


FROM emp
WHERE sal > 1000;

Then, because of the presence of the WHERE clause, emp_v_2 cannot be merged into
emp_dept_oj1, and hence emp is no longer a key-preserved table.
If you are in doubt whether a view is modifiable, then you can select from the
USER_UPDATABLE_COLUMNS view to see if it is. For example:
SELECT owner, table_name, column_name, updatable FROM USER_UPDATABLE_COLUMNS
WHERE TABLE_NAME = 'EMP_DEPT_VIEW';

This returns output similar to the following:
OWNER      TABLE_NAME COLUMN_NAM UPD
---------- ---------- ---------- ---
SCOTT      EMP_DEPT_V EMPNO      NO
SCOTT      EMP_DEPT_V ENAME      NO
SCOTT      EMP_DEPT_V DEPTNO     NO
SCOTT      EMP_DEPT_V DNAME      NO
SCOTT      EMP_DEPT_V LOC        NO

5 rows selected.

Using the UPDATABLE_COLUMNS Views
The views described in the following table can help you identify inherently
updatable join views.

View                       Description
DBA_UPDATABLE_COLUMNS      Shows all columns in all tables and views that are modifiable.
ALL_UPDATABLE_COLUMNS      Shows all columns in all tables and views accessible to the
                           user that are modifiable.
USER_UPDATABLE_COLUMNS     Shows all columns in all tables and views in the user's schema
                           that are modifiable.

The updatable columns in view emp_dept are shown below.
SELECT COLUMN_NAME, UPDATABLE
FROM USER_UPDATABLE_COLUMNS
WHERE TABLE_NAME = 'EMP_DEPT';
COLUMN_NAME                    UPD
------------------------------ ---
EMPNO                          YES
ENAME                          YES
DEPTNO                         YES
SAL                            YES
DNAME                          NO
LOC                            NO

6 rows selected.

See Also: Oracle Database Reference for complete descriptions of
the updatable column views


Altering Views
You use the ALTER VIEW statement only to explicitly recompile a view that is invalid.
If you want to change the definition of a view, see "Replacing Views" on page 20-3.
The ALTER VIEW statement lets you locate recompilation errors before run time. To
ensure that the alteration does not affect the view or other objects that depend on it,
you can explicitly recompile a view after altering one of its base tables.
To use the ALTER VIEW statement, the view must be in your schema, or you must
have the ALTER ANY TABLE system privilege.
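For example, the following statement (a minimal sketch) recompiles the emp_dept view used in the earlier examples:
ALTER VIEW emp_dept COMPILE;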
See Also: Oracle Database SQL Reference for syntax and additional
information about the ALTER VIEW statement
Dropping Views
You can drop any view contained in your schema. To drop a view in another user's
schema, you must have the DROP ANY VIEW system privilege. Drop a view using the
DROP VIEW statement. For example, the following statement drops the emp_dept
view:
DROP VIEW emp_dept;

See Also: Oracle Database SQL Reference for syntax and additional
information about the DROP VIEW statement
Managing Sequences
This section describes aspects of managing sequences, and contains the following
topics:
■ About Sequences
■ Creating Sequences
■ Altering Sequences
■ Using Sequences
■ Dropping Sequences
About Sequences
Sequences are database objects from which multiple users can generate unique
integers. The sequence generator generates sequential numbers, which can help to
generate unique primary keys automatically, and to coordinate keys across multiple
rows or tables.
Without sequences, sequential values can only be produced programmatically. A new
primary key value can be obtained by selecting the most recently produced value and
incrementing it. This method requires a lock during the transaction and causes
multiple users to wait for the next value of the primary key; this waiting is known as
serialization. If developers have such constructs in applications, then you should
encourage the developers to replace them with access to sequences. Sequences
eliminate serialization and improve the concurrency of an application.
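For illustration, the following hypothetical statements contrast the two approaches. The key_counter table is an assumption for this sketch; orders_tab and order_seq resemble the objects used in the examples later in this chapter.
-- Serializing pattern to avoid: each session must wait for this row lock
SELECT last_key + 1 AS next_key FROM key_counter FOR UPDATE;
UPDATE key_counter SET last_key = last_key + 1;

-- Sequence-based alternative: no lock and no waiting
INSERT INTO orders_tab (orderno, custno)
VALUES (order_seq.NEXTVAL, 1032);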
See Also: Oracle Database Concepts for a more complete
description of sequences

Creating Sequences
To create a sequence in your schema, you must have the CREATE SEQUENCE system
privilege. To create a sequence in another user's schema, you must have the CREATE
ANY SEQUENCE privilege.
Create a sequence using the CREATE SEQUENCE statement. For example, the
following statement creates a sequence used to generate employee numbers for the
empno column of the emp table:
CREATE SEQUENCE emp_sequence
INCREMENT BY 1
START WITH 1
NOMAXVALUE
NOCYCLE
CACHE 10;

Notice that several parameters can be specified to control the function of sequences.
You can use these parameters to indicate whether the sequence is ascending or
descending, the starting point of the sequence, the minimum and maximum values,
and the interval between sequence values. The NOCYCLE option indicates that the
sequence cannot generate more values after reaching its maximum or minimum value.
The CACHE clause preallocates a set of sequence numbers and keeps them in memory
so that sequence numbers can be accessed faster. When the last of the sequence
numbers in the cache has been used, the database reads another set of numbers into
the cache.
The database might skip sequence numbers if you choose to cache a set of sequence
numbers. For example, when an instance abnormally shuts down (for example, when
an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence
numbers that have been cached but not used are lost. Also, sequence numbers that
have been used but not saved are lost as well. The database might also skip cached
sequence numbers after an export and import. See Oracle Database Utilities for details.
See Also:
■ Oracle Database SQL Reference for the CREATE SEQUENCE statement syntax
■ Oracle Real Application Clusters Deployment and Performance Guide for information about how caching sequence numbers improves performance in an Oracle Real Application Clusters environment

Altering Sequences
To alter a sequence, your schema must contain the sequence, or you must have the
ALTER ANY SEQUENCE system privilege. You can alter a sequence to change any of
the parameters that define how it generates sequence numbers except the sequence
starting number. To change the starting point of a sequence, drop the sequence and
then re-create it.
Alter a sequence using the ALTER SEQUENCE statement. For example, the following
statement alters the emp_sequence:
ALTER SEQUENCE emp_sequence
INCREMENT BY 10
MAXVALUE 10000
CYCLE
CACHE 20;


See Also: Oracle Database SQL Reference for syntax and additional
information about the ALTER SEQUENCE statement
Using Sequences
To use a sequence, your schema must contain the sequence or you must have been
granted the SELECT object privilege for another user's sequence. Once a sequence is
defined, it can be accessed and incremented by multiple users (who have the SELECT
object privilege for the sequence) with no waiting. The database does not wait for a
transaction that has incremented a sequence to complete before that sequence can be
incremented again.
The examples in the following sections show how sequences can be used in
master/detail table relationships. Assume an order entry system includes two
tables, orders_tab (master table) and line_items_tab (detail table), that
hold information about customer orders. A sequence named order_seq is defined by
the following statement:
CREATE SEQUENCE Order_seq
START WITH 1
INCREMENT BY 1
NOMAXVALUE
NOCYCLE
CACHE 20;

Referencing a Sequence
A sequence is referenced in SQL statements with the NEXTVAL and CURRVAL
pseudocolumns; each new sequence number is generated by a reference to the
sequence pseudocolumn NEXTVAL, while the current sequence number can be
repeatedly referenced using the pseudocolumn CURRVAL.
NEXTVAL and CURRVAL are not reserved words or keywords and can be used as
pseudocolumn names in SQL statements such as SELECT, INSERT, or UPDATE.
Generating Sequence Numbers with NEXTVAL To generate and use a sequence number,
reference seq_name.NEXTVAL. For example, assume a customer places an order. The
sequence number can be referenced in a values list. For example:
INSERT INTO Orders_tab (Orderno, Custno)
VALUES (Order_seq.NEXTVAL, 1032);

Or, the sequence number can be referenced in the SET clause of an UPDATE statement.
For example:
UPDATE Orders_tab
SET Orderno = Order_seq.NEXTVAL
WHERE Orderno = 10112;

The sequence number can also be referenced in the outermost SELECT of a query or
subquery. For example:
SELECT Order_seq.NEXTVAL FROM dual;

As defined, the first reference to order_seq.NEXTVAL returns the value 1. Each
subsequent statement that references order_seq.NEXTVAL generates the next
sequence number (2, 3, 4, ...). The pseudocolumn NEXTVAL can be used to generate as
many new sequence numbers as necessary. However, only a single sequence number
can be generated for each row. In other words, if NEXTVAL is referenced more than


once in a single statement, then the first reference generates the next number, and all
subsequent references in the statement return the same number.
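For example, both columns in the following hypothetical query return the same value, because the sequence is incremented only once for the row:
SELECT order_seq.NEXTVAL first_ref, order_seq.NEXTVAL second_ref
FROM dual;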
Once a sequence number is generated, the sequence number is available only to the
session that generated the number. Independent of transactions committing or rolling
back, other users referencing order_seq.NEXTVAL obtain unique values. If two users
are accessing the same sequence concurrently, then the sequence numbers each user
receives might have gaps because sequence numbers are also being generated by the
other user.
Using Sequence Numbers with CURRVAL To use or refer to the current sequence value of
your session, reference seq_name.CURRVAL. CURRVAL can only be used if
seq_name.NEXTVAL has been referenced in the current user session (in the current or a
previous transaction). CURRVAL can be referenced as many times as necessary,
including multiple times within the same statement. The next sequence number is not
generated until NEXTVAL is referenced. Continuing with the previous example, you
would finish placing the customer's order by inserting the line items for the order:
INSERT INTO Line_items_tab (Orderno, Partno, Quantity)
VALUES (Order_seq.CURRVAL, 20321, 3);
INSERT INTO Line_items_tab (Orderno, Partno, Quantity)
VALUES (Order_seq.CURRVAL, 29374, 1);

Assuming the INSERT statement given in the previous section generated a new
sequence number of 347, both statements in this section insert rows with an order
number of 347.
Uses and Restrictions of NEXTVAL and CURRVAL CURRVAL and NEXTVAL can be used in
the following places:
■ VALUES clause of INSERT statements
■ The SELECT list of a SELECT statement
■ The SET clause of an UPDATE statement
CURRVAL and NEXTVAL cannot be used in these places:
■ A subquery
■ A view query or materialized view query
■ A SELECT statement with the DISTINCT operator
■ A SELECT statement with a GROUP BY or ORDER BY clause
■ A SELECT statement that is combined with another SELECT statement with the UNION, INTERSECT, or MINUS set operator
■ The WHERE clause of a SELECT statement
■ DEFAULT value of a column in a CREATE TABLE or ALTER TABLE statement
■ The condition of a CHECK constraint
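For example, the following hypothetical statement fails, because it references NEXTVAL in the WHERE clause of a SELECT statement:
SELECT * FROM orders_tab
WHERE orderno = order_seq.NEXTVAL;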

Caching Sequence Numbers
Sequence numbers can be kept in the sequence cache in the System Global Area (SGA).
Sequence numbers can be accessed more quickly in the sequence cache than they can
be read from disk.
The sequence cache consists of entries. Each entry can hold many sequence numbers
for a single sequence.

Follow these guidelines for fast access to all sequence numbers:
■ Be sure the sequence cache can hold all the sequences used concurrently by your applications.
■ Increase the number of values for each sequence held in the sequence cache.
The Number of Entries in the Sequence Cache When an application accesses a sequence in
the sequence cache, the sequence numbers are read quickly. However, if an application
accesses a sequence that is not in the cache, then the sequence must be read from disk
to the cache before the sequence numbers are used.
If your applications use many sequences concurrently, then your sequence cache might
not be large enough to hold all the sequences. In this case, access to sequence numbers
might often require disk reads. For fast access to all sequences, be sure your cache has
enough entries to hold all the sequences used concurrently by your applications.
The Number of Values in Each Sequence Cache Entry When a sequence is read into the
sequence cache, sequence values are generated and stored in a cache entry. These
values can then be accessed quickly. The number of sequence values stored in the
cache is determined by the CACHE parameter in the CREATE SEQUENCE statement. The
default value for this parameter is 20.
This CREATE SEQUENCE statement creates the seq2 sequence so that 50 values of the
sequence are stored in the sequence cache:
CREATE SEQUENCE seq2
CACHE 50;

The first 50 values of seq2 can then be read from the cache. When the 51st value is
accessed, the next 50 values will be read from disk.
Choosing a high value for CACHE lets you access more successive sequence numbers
with fewer reads from disk to the sequence cache. However, if there is an instance
failure, then all sequence values in the cache are lost. Cached sequence numbers also
could be skipped after an export and import if transactions continue to access the
sequence numbers while the export is running.
If you use the NOCACHE option in the CREATE SEQUENCE statement, then the values of
the sequence are not stored in the sequence cache. In this case, every access to the
sequence requires a disk read. Such disk reads slow access to the sequence. This
CREATE SEQUENCE statement creates the seq3 sequence so that its values are never
stored in the cache:
CREATE SEQUENCE seq3
NOCACHE;

Dropping Sequences
You can drop any sequence in your schema. To drop a sequence in another schema,
you must have the DROP ANY SEQUENCE system privilege. If a sequence is no longer
required, you can drop the sequence using the DROP SEQUENCE statement. For
example, the following statement drops the order_seq sequence:
DROP SEQUENCE order_seq;

When a sequence is dropped, its definition is removed from the data dictionary. Any
synonyms for the sequence remain, but return an error when referenced.


See Also: Oracle Database SQL Reference for syntax and additional
information about the DROP SEQUENCE statement
Managing Synonyms
This section describes aspects of managing synonyms, and contains the following
topics:
■ About Synonyms
■ Creating Synonyms
■ Using Synonyms in DML Statements
■ Dropping Synonyms
About Synonyms
A synonym is an alias for a schema object. Synonyms can provide a level of security by
masking the name and owner of an object and by providing location transparency for
remote objects of a distributed database. Also, they are convenient to use and reduce
the complexity of SQL statements for database users.
Synonyms allow underlying objects to be renamed or moved; in that case, only the
synonym needs to be redefined, and applications based on the synonym continue to
function without modification.
You can create both public and private synonyms. A public synonym is owned by the
special user group named PUBLIC and is accessible to every user in a database. A
private synonym is contained in the schema of a specific user and available only to the
user and the user's grantees.
See Also: Oracle Database Concepts for a more complete
description of synonyms
Creating Synonyms
To create a private synonym in your own schema, you must have the CREATE
SYNONYM privilege. To create a private synonym in another user's schema, you must
have the CREATE ANY SYNONYM privilege. To create a public synonym, you must
have the CREATE PUBLIC SYNONYM system privilege.
Create a synonym using the CREATE SYNONYM statement. The underlying schema
object need not exist, nor do you need privileges to access the object. The following
statement creates a public synonym named public_emp on the emp table contained
in the schema of jward:
CREATE PUBLIC SYNONYM public_emp FOR jward.emp;

When you create a synonym for a remote procedure or function, you must qualify the
remote object with its schema name. Alternatively, you can create a local public
synonym on the database where the remote object resides, in which case the database
link must be included in all subsequent calls to the procedure or function.
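For example, a synonym for a remote procedure might be created as follows (a sketch; the schema, procedure, and database link names are hypothetical):
CREATE SYNONYM fire_emp FOR jward.fire_emp@sales_db;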
See Also: Oracle Database SQL Reference for syntax and additional
information about the CREATE SYNONYM statement

Using Synonyms in DML Statements
You can successfully use any private synonym contained in your schema or any public
synonym, assuming that you have the necessary privileges to access the underlying
object, either explicitly, from an enabled role, or from PUBLIC. You can also reference
any private synonym contained in another schema if you have been granted the
necessary object privileges for the private synonym.
You can only reference another user's synonym using the object privileges that you
have been granted. For example, if you have the SELECT privilege for the jward.emp
synonym, then you can query the jward.emp synonym, but you cannot insert rows
using the jward.emp synonym.
A synonym can be referenced in a DML statement the same way that the underlying
object of the synonym can be referenced. For example, if a synonym named emp refers
to a table or view, then the following statement is valid:
INSERT INTO emp (empno, ename, job)
VALUES (emp_sequence.NEXTVAL, 'SMITH', 'CLERK');

If the synonym named fire_emp refers to a standalone procedure or package
procedure, then you could execute it with the command
EXECUTE Fire_emp(7344);

Dropping Synonyms
You can drop any private synonym in your own schema. To drop a private synonym in
another user's schema, you must have the DROP ANY SYNONYM system privilege. To
drop a public synonym, you must have the DROP PUBLIC SYNONYM system
privilege.
Drop a synonym that is no longer required using the DROP SYNONYM statement. To drop a
private synonym, omit the PUBLIC keyword. To drop a public synonym, include the
PUBLIC keyword.
For example, the following statement drops the private synonym named emp:
DROP SYNONYM emp;

The following statement drops the public synonym named public_emp:
DROP PUBLIC SYNONYM public_emp;

When you drop a synonym, its definition is removed from the data dictionary. All
objects that reference a dropped synonym remain. However, they become invalid (not
usable). For more information about how dropping synonyms can affect other schema
objects, see "Managing Object Dependencies".
See Also: Oracle Database SQL Reference for syntax and additional
information about the DROP SYNONYM statement
Viewing Information About Views, Synonyms, and Sequences
The following views display information about views, synonyms, and sequences:


View                      Description
DBA_VIEWS                 DBA view describes all views in the database. ALL view is
ALL_VIEWS                 restricted to views accessible to the current user. USER view
USER_VIEWS                is restricted to views owned by the current user.

DBA_SYNONYMS              These views describe synonyms.
ALL_SYNONYMS
USER_SYNONYMS

DBA_SEQUENCES             These views describe sequences.
ALL_SEQUENCES
USER_SEQUENCES

DBA_UPDATABLE_COLUMNS     These views describe all columns in join views that are
ALL_UPDATABLE_COLUMNS     updatable.
USER_UPDATABLE_COLUMNS
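For example, the following query (a minimal sketch) lists the views owned by the current user along with the length of each view's defining text:
SELECT VIEW_NAME, TEXT_LENGTH
FROM USER_VIEWS;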

See Also: Oracle Database Reference for complete descriptions of
these views


21
Using DBMS_REPAIR to Repair Data Block
Corruption
This chapter explains how to use the DBMS_REPAIR PL/SQL package to repair data
block corruption in database schema objects. It contains the following topics:
■ Options for Repairing Data Block Corruption
■ About the DBMS_REPAIR Package
■ Using the DBMS_REPAIR Package
■ DBMS_REPAIR Examples

Note: If you are not familiar with the DBMS_REPAIR package,
then it is recommended that you work with an Oracle Support
Services analyst when performing any of the repair procedures
included in this package.
Options for Repairing Data Block Corruption
Oracle Database provides different methods for detecting and correcting data block
corruption. One method of correction is to drop and re-create an object after the
corruption is detected. However, this is not always possible or desirable. If data block
corruption is limited to a subset of rows, then another option is to rebuild the table by
selecting all data except for the corrupt rows.
Another way to manage data block corruption is to use the DBMS_REPAIR package.
You can use DBMS_REPAIR to detect and repair corrupt blocks in tables and indexes.
You can continue to use objects while you attempt to rebuild or repair them.
Note: Any corruption that involves the loss of data requires
analysis and understanding of how that data fits into the overall
database system. Depending on the nature of the repair, you might
lose data, and logical inconsistencies can be introduced. You must
determine whether the repair approach provided by this package is
the appropriate tool for each specific corruption problem.
About the DBMS_REPAIR Package
This section describes the procedures contained in the DBMS_REPAIR package and
notes some limitations and restrictions on their use.


See Also: Oracle Database PL/SQL Packages and Types Reference for
more information on the syntax, restrictions, and exceptions for the
DBMS_REPAIR procedures

DBMS_REPAIR Procedures
The following table lists the procedures included in the DBMS_REPAIR package.
Procedure Name        Description
ADMIN_TABLES          Provides administrative functions (create, drop, purge) for
                      repair or orphan key tables.
                      Note: These tables are always created in the SYS schema.
CHECK_OBJECT          Detects and reports corruptions in a table or index.
DUMP_ORPHAN_KEYS      Reports on index entries that point to rows in corrupt data
                      blocks.
FIX_CORRUPT_BLOCKS    Marks blocks as software corrupt that have been previously
                      identified as corrupt by the CHECK_OBJECT procedure.
REBUILD_FREELISTS     Rebuilds the free lists of the object.
SEGMENT_FIX_STATUS    Provides the capability to fix the corrupted state of a bitmap
                      entry when segment space management is AUTO.
SKIP_CORRUPT_BLOCKS   When used, ignores blocks marked corrupt during table and
                      index scans. If not used, you get error ORA-1578 when
                      encountering blocks marked corrupt.

These procedures are further described, with examples of their use, in "DBMS_REPAIR
Examples" on page 21-5.

Limitations and Restrictions
DBMS_REPAIR procedures have the following limitations:
■ Tables with LOB datatypes, nested tables, and varrays are supported, but the out of line columns are ignored.
■ Clusters are supported in the SKIP_CORRUPT_BLOCKS and REBUILD_FREELISTS procedures, but not in the CHECK_OBJECT procedure.
■ Index-organized tables and LOB indexes are not supported.
■ The DUMP_ORPHAN_KEYS procedure does not operate on bitmap indexes or function-based indexes.
■ The DUMP_ORPHAN_KEYS procedure processes keys that are no more than 3,950 bytes long.

Using the DBMS_REPAIR Package
The following approach is recommended when considering DBMS_REPAIR for
addressing data block corruption:
■ Task 1: Detect and Report Corruptions
■ Task 2: Evaluate the Costs and Benefits of Using DBMS_REPAIR
■ Task 3: Make Objects Usable
■ Task 4: Repair Corruptions and Rebuild Lost Data


Task 1: Detect and Report Corruptions
The first task is the detection and reporting of corruptions. Reporting not only
indicates what is wrong with a block, but also identifies the associated repair directive.
There are several ways to detect corruptions. Table 21–1 describes the different
detection methodologies.
Table 21–1  Comparison of Corruption Detection Methods

Detection Method             Description
DBMS_REPAIR PL/SQL package   Performs block checking for a specified table, partition, or
                             index. It populates a repair table with results.
DB_VERIFY utility            Performs block checking on an offline database.
ANALYZE TABLE SQL statement  Used with the VALIDATE STRUCTURE option, the ANALYZE TABLE
                             statement verifies the integrity of the structure of an index,
                             table, or cluster; checks or verifies that tables and indexes
                             are synchronized.
DB_BLOCK_CHECKING            When DB_BLOCK_CHECKING=TRUE, corrupt blocks are identified
initialization parameter     before they are marked corrupt. Checks are performed when
                             changes are made to a block.

DBMS_REPAIR: Using the CHECK_OBJECT and ADMIN_TABLES Procedures
The CHECK_OBJECT procedure checks and reports block corruptions for a specified
object. Similar to the ANALYZE...VALIDATE STRUCTURE statement for indexes
and tables, block checking is performed for index and data blocks.
Not only does CHECK_OBJECT report corruptions, but it also identifies any fixes that
would occur if FIX_CORRUPT_BLOCKS is subsequently run on the object. This
information is made available by populating a repair table, which must first be created
by the ADMIN_TABLES procedure.
After you run the CHECK_OBJECT procedure, a simple query on the repair table shows
the corruptions and repair directives for the object. With this information, you can
assess how best to address the reported problems.

DB_VERIFY: Performing an Offline Database Check
Use DB_VERIFY as an offline diagnostic utility when you encounter data corruption.
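For example, an invocation from the operating system prompt might look like the following (a sketch; the datafile path is hypothetical, and the executable name can vary by platform):
dbv FILE=/u01/oradata/db1/users01.dbf BLOCKSIZE=8192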
See Also: Oracle Database Utilities for more information about
DB_VERIFY
ANALYZE: Reporting Corruption
The ANALYZE TABLE...VALIDATE STRUCTURE statement validates the structure of
the analyzed object. If the database successfully validates the structure, then a message
confirming its validation is returned. If the database encounters corruption in the
structure of the object, then an error message is returned. In this case, drop and
re-create the object.
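For example, the following statement (a sketch using the scott.dept table from the examples later in this chapter) validates the structure of the table and, with the CASCADE option, of its indexes as well:
ANALYZE TABLE scott.dept VALIDATE STRUCTURE CASCADE;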
See Also: Oracle Database SQL Reference for more information
about the ANALYZE statement
DB_BLOCK_CHECKING Initialization Parameter
You can enable database block checking by setting the DB_BLOCK_CHECKING
initialization parameter to TRUE. This checks data and index blocks for internal
consistency whenever they are modified. DB_BLOCK_CHECKING is a dynamic

parameter, modifiable by the ALTER SYSTEM SET statement. Block checking is
always enabled for the system tablespace.
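For example, the following statement (a minimal sketch) enables block checking for the running instance:
ALTER SYSTEM SET DB_BLOCK_CHECKING = TRUE;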
See Also: Oracle Database Reference for more information about the
DB_BLOCK_CHECKING initialization parameter

Task 2: Evaluate the Costs and Benefits of Using DBMS_REPAIR
Before using DBMS_REPAIR you must weigh the benefits of its use in relation to the
liabilities. You should also examine other options available for addressing corrupt
objects. Begin by answering the following questions:
■ What is the extent of the corruption?
  To determine if there are corruptions and repair actions, execute the
  CHECK_OBJECT procedure and query the repair table.
■ What other options are available for addressing block corruptions? Consider the
  following:
  – If the data is available from another source, then drop, re-create, and
    repopulate the object.
  – Issue the CREATE TABLE...AS SELECT statement from the corrupt table to
    create a new one.
  – Ignore the corruption by excluding corrupt rows from SELECT statements.
  – Perform media recovery.
■ What logical corruptions or side effects are introduced when you use
  DBMS_REPAIR to make an object usable? Can these be addressed? What is the
  effort required to do so?
  It is possible that you do not have access to rows in blocks marked corrupt.
  However, a block can be marked corrupt even if there are rows that you can
  validly access.
  It is also possible that referential integrity constraints are broken when blocks are
  marked corrupt. If this occurs, then disable and reenable the constraint; any
  inconsistencies are reported. After fixing all problems, you should be able to
  reenable the constraint.
  Logical corruption can occur when there are triggers defined on the table. For
  example, if rows are reinserted, should insert triggers be fired or not? You can
  address these issues only if you understand triggers and their use in your
  installation.
  If indexes and tables are not synchronized, then execute the DUMP_ORPHAN_KEYS
  procedure to obtain information from the keys that might be useful in rebuilding
  corrupted data. Then issue the ALTER INDEX...REBUILD ONLINE statement to
  synchronize the table with its indexes.
■ If repair involves loss of data, can this data be retrieved?
  You can retrieve data from the index when a data block is marked corrupt. The
  DUMP_ORPHAN_KEYS procedure can help you retrieve this information.

Task 3: Make Objects Usable
DBMS_REPAIR makes the object usable by ignoring corruptions during table and
index scans.


Corruption Repair: Using the FIX_CORRUPT_BLOCKS and
SKIP_CORRUPT_BLOCKS Procedures
You can make a corrupt object usable by establishing an environment that skips
corruptions that remain outside the scope of DBMS_REPAIR capabilities.
If corruptions involve a loss of data, such as a bad row in a data block, all such blocks
are marked corrupt by the FIX_CORRUPT_BLOCKS procedure. Then you can run the
SKIP_CORRUPT_BLOCKS procedure, which skips blocks that are marked as corrupt.
When the SKIP_FLAG parameter in the procedure is set, table and index scans skip all
blocks marked corrupt. This applies to both media and software corrupt blocks.

Implications when Skipping Corrupt Blocks
If an index and table are not synchronized, then a SET TRANSACTION READ ONLY
transaction can be inconsistent in situations where one query probes only the index,
and a subsequent query probes both the index and the table. If the table block is
marked corrupt, then the two queries return different results, thereby breaking the
rules of a read-only transaction. One way to approach this is not to skip corruptions in
a SET TRANSACTION READ ONLY transaction.
A similar issue occurs when selecting rows that are chained. A query of the same row
may or may not access the corruption, producing different results.

Task 4: Repair Corruptions and Rebuild Lost Data
After making an object usable, perform the following repair activities.

Recover Data Using the DUMP_ORPHAN_KEYS Procedure
The DUMP_ORPHAN_KEYS procedure reports on index entries that point to rows in
corrupt data blocks. All such index entries are inserted into an orphan key table that
stores the key and rowid of the corruption.
After the index entry information has been retrieved, you can rebuild the index using
the ALTER INDEX...REBUILD ONLINE statement.

Fix Segment Bitmaps Using the SEGMENT_FIX_STATUS Procedure
Use this procedure if free space in segments is being managed by using bitmaps
(SEGMENT SPACE MANAGEMENT AUTO).
This procedure recalculates the state of a bitmap entry based on the current contents of
the corresponding block. Alternatively, you can specify that a bitmap entry be set to a
specific value. Usually the state is recalculated correctly and there is no need to force a
setting.

DBMS_REPAIR Examples
This section includes the following topics:
■ Examples: Building a Repair Table or Orphan Key Table
■ Example: Detecting Corruption
■ Example: Fixing Corrupt Blocks
■ Example: Finding Index Entries Pointing to Corrupt Data Blocks
■ Example: Skipping Corrupt Blocks


Examples: Building a Repair Table or Orphan Key Table
The ADMIN_TABLES procedure is used to create, purge, or drop a repair table or an
orphan key table.
A repair table provides information about the corruptions that were found by the
CHECK_OBJECT procedure and how these will be addressed if the
FIX_CORRUPT_BLOCKS procedure is run. Further, it is used to drive the execution of
the FIX_CORRUPT_BLOCKS procedure.
An orphan key table is used when the DUMP_ORPHAN_KEYS procedure is executed
and it discovers index entries that point to corrupt rows. The DUMP_ORPHAN_KEYS
procedure populates the orphan key table by logging its activity and providing the
index information in a usable manner.

Example: Creating a Repair Table
The following example creates a repair table for the users tablespace.
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
    TABLE_NAME => 'REPAIR_TABLE',
    TABLE_TYPE => dbms_repair.repair_table,
    ACTION     => dbms_repair.create_action,
    TABLESPACE => 'USERS');
END;
/

For each repair or orphan key table, a view is also created that eliminates any rows
that pertain to objects that no longer exist. The name of the view corresponds to the
name of the repair or orphan key table and is prefixed by DBA_ (for example,
DBA_REPAIR_TABLE or DBA_ORPHAN_KEY_TABLE).
The following query describes the repair table that was created for the users
tablespace.
DESC REPAIR_TABLE
Name                          Null?    Type
----------------------------- -------- --------------
OBJECT_ID                     NOT NULL NUMBER
TABLESPACE_ID                 NOT NULL NUMBER
RELATIVE_FILE_ID              NOT NULL NUMBER
BLOCK_ID                      NOT NULL NUMBER
CORRUPT_TYPE                  NOT NULL NUMBER
SCHEMA_NAME                   NOT NULL VARCHAR2(30)
OBJECT_NAME                   NOT NULL VARCHAR2(30)
BASEOBJECT_NAME                        VARCHAR2(30)
PARTITION_NAME                         VARCHAR2(30)
CORRUPT_DESCRIPTION                    VARCHAR2(2000)
REPAIR_DESCRIPTION                     VARCHAR2(200)
MARKED_CORRUPT                NOT NULL VARCHAR2(10)
CHECK_TIMESTAMP               NOT NULL DATE
FIX_TIMESTAMP                          DATE
REFORMAT_TIMESTAMP                     DATE
Example: Creating an Orphan Key Table
This example illustrates the creation of an orphan key table for the users tablespace.
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
    TABLE_NAME => 'ORPHAN_KEY_TABLE',
    TABLE_TYPE => dbms_repair.orphan_table,
    ACTION     => dbms_repair.create_action,
    TABLESPACE => 'USERS');
END;
/

The orphan key table is described as follows:
DESC ORPHAN_KEY_TABLE
Name                          Null?    Type
----------------------------- -------- -----------------
SCHEMA_NAME                   NOT NULL VARCHAR2(30)
INDEX_NAME                    NOT NULL VARCHAR2(30)
IPART_NAME                             VARCHAR2(30)
INDEX_ID                      NOT NULL NUMBER
TABLE_NAME                    NOT NULL VARCHAR2(30)
PART_NAME                              VARCHAR2(30)
TABLE_ID                      NOT NULL NUMBER
KEYROWID                      NOT NULL ROWID
KEY                           NOT NULL ROWID
DUMP_TIMESTAMP                NOT NULL DATE

Example: Detecting Corruption
The CHECK_OBJECT procedure checks the specified object, and populates the repair
table with information about corruptions and repair directives. You can optionally
specify a range, partition name, or subpartition name when you want to check a
portion of an object.
Validation consists of checking all blocks in the object that have not previously been
marked corrupt. For each block, the transaction and data layer portions are checked
for self consistency. During CHECK_OBJECT, if a block is encountered that has a
corrupt buffer cache header, then that block is skipped.
The following is an example of executing the CHECK_OBJECT procedure for the
scott.dept table.
SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/

SQL*Plus outputs the following line, indicating one corruption:
number corrupt: 1

Querying the repair table produces information describing the corruption and
suggesting a repair action.
SELECT OBJECT_NAME, BLOCK_ID, CORRUPT_TYPE, MARKED_CORRUPT,
CORRUPT_DESCRIPTION, REPAIR_DESCRIPTION


FROM REPAIR_TABLE;
OBJECT_NAME                      BLOCK_ID CORRUPT_TYPE MARKED_COR
------------------------------ ---------- ------------ ----------
CORRUPT_DESCRIPTION
------------------------------------------------------------------------------
REPAIR_DESCRIPTION
------------------------------------------------------------------------------
DEPT                                    3            1 FALSE
kdbchk: row locked by non-existent transaction
        table=0   slot=0
        lockid=32 ktbbhitc=1
mark block software corrupt

The corrupted block has not yet been marked corrupt, so this is the time to extract any
meaningful data. After the block is marked corrupt, the entire block must be skipped.

Example: Fixing Corrupt Blocks
Use the FIX_CORRUPT_BLOCKS procedure to fix the corrupt blocks in specified
objects based on information in the repair table that was generated by the
CHECK_OBJECT procedure. Before changing a block, the block is checked to ensure
that the block is still corrupt. Corrupt blocks are repaired by marking the block
software corrupt. When a repair is performed, the associated row in the repair table is
updated with a timestamp.
This example fixes the corrupt block in table scott.dept that was reported by the
CHECK_OBJECT procedure.
SET SERVEROUTPUT ON
DECLARE num_fix INT;
BEGIN
num_fix := 0;
DBMS_REPAIR.FIX_CORRUPT_BLOCKS (
    SCHEMA_NAME => 'SCOTT',
    OBJECT_NAME => 'DEPT',
    OBJECT_TYPE => dbms_repair.table_object,
    REPAIR_TABLE_NAME => 'REPAIR_TABLE',
    FIX_COUNT => num_fix);
DBMS_OUTPUT.PUT_LINE('num fix: ' || TO_CHAR(num_fix));
END;
/

SQL*Plus outputs the following line:
num fix: 1

The following query confirms that the repair was done.
SELECT OBJECT_NAME, BLOCK_ID, MARKED_CORRUPT
FROM REPAIR_TABLE;
OBJECT_NAME                      BLOCK_ID MARKED_COR
------------------------------ ---------- ----------
DEPT                                    3 TRUE


Example: Finding Index Entries Pointing to Corrupt Data Blocks
The DUMP_ORPHAN_KEYS procedure reports on index entries that point to rows in
corrupt data blocks. For each index entry, a row is inserted into the specified orphan
key table. The orphan key table must have been previously created.
This information can be useful for rebuilding lost rows in the table and for diagnostic
purposes.
Note: This should be run for every index associated with a table
identified in the repair table.
In this example, pk_dept is an index on the scott.dept table. It is scanned to
determine if there are any index entries pointing to rows in the corrupt data block.
SET SERVEROUTPUT ON
DECLARE num_orphans INT;
BEGIN
num_orphans := 0;
DBMS_REPAIR.DUMP_ORPHAN_KEYS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'PK_DEPT',
OBJECT_TYPE => dbms_repair.index_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
ORPHAN_TABLE_NAME => 'ORPHAN_KEY_TABLE',
KEY_COUNT => num_orphans);
DBMS_OUTPUT.PUT_LINE('orphan key count: ' || TO_CHAR(num_orphans));
END;
/

The following output indicates that there are three orphan keys:
orphan key count: 3

The presence of index entries in the orphan key table implies that the index should be
rebuilt. This guarantees that a table probe and an index probe return the same result set.
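For example, the pk_dept index from this example could be rebuilt as follows (a sketch):
ALTER INDEX scott.pk_dept REBUILD ONLINE;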

Example: Skipping Corrupt Blocks
The SKIP_CORRUPT_BLOCKS procedure enables or disables the skipping of corrupt
blocks during index and table scans of the specified object. When the object is a table,
skipping applies to the table and its indexes. When the object is a cluster, it applies to
all of the tables in the cluster, and their respective indexes.
The following example enables the skipping of software corrupt blocks for the
scott.dept table:
BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.skip_flag);
END;
/

Querying scott's tables using the DBA_TABLES view shows that SKIP_CORRUPT is
enabled for table scott.dept.
SELECT OWNER, TABLE_NAME, SKIP_CORRUPT FROM DBA_TABLES


WHERE OWNER = 'SCOTT';
OWNER                          TABLE_NAME                     SKIP_COR
------------------------------ ------------------------------ --------
SCOTT                          ACCOUNT                        DISABLED
SCOTT                          BONUS                          DISABLED
SCOTT                          DEPT                           ENABLED
SCOTT                          DOCINDEX                       DISABLED
SCOTT                          EMP                            DISABLED
SCOTT                          RECEIPT                        DISABLED
SCOTT                          SALGRADE                       DISABLED
SCOTT                          SCOTT_EMP                      DISABLED
SCOTT                          SYS_IOT_OVER_12255             DISABLED
SCOTT                          WORK_AREA                      DISABLED

10 rows selected.

Part V
Database Security
Part V addresses issues of user and privilege management affecting the security of the
database. It includes the following chapters:
■

Chapter 22, "Managing Users and Securing the Database"

22
Managing Users and Securing the Database
This chapter briefly discusses the creation and management of database users, with
special attention to the importance of establishing security policies to protect your
database, and provides cross-references to the appropriate security documentation.
The following topics are contained in this chapter:
■ The Importance of Establishing a Security Policy for Your Database
■ Managing Users and Resources
■ Managing User Privileges and Roles
■ Auditing Database Use

The Importance of Establishing a Security Policy for Your Database
It is important to develop a security policy for every database. The security policy
establishes methods for protecting your database from accidental or malicious
destruction of data or damage to the database infrastructure.
Each database can have an administrator, referred to as the security administrator,
who is responsible for implementing and maintaining the database security policy. If
the database system is small, the database administrator can have the responsibilities
of the security administrator. However, if the database system is large, a designated
person or group of people may have sole responsibility as security administrator.
For information about establishing security policies for your database, see Oracle
Database Security Guide.

Managing Users and Resources
To connect to the database, each user must specify a valid user name that has been
previously defined to the database. An account must have been established for the
user, with information about the user being stored in the data dictionary.
When you create a database user (account), you specify the following attributes of the
user:
■ User name
■ Authentication method
■ Default tablespace
■ Temporary tablespace
■ Other tablespaces and quotas


■ User profile

To learn how to create and manage users, see Oracle Database Security Guide.

Managing User Privileges and Roles
Privileges and roles are used to control user access to data and the types of SQL
statements that can be executed. The table that follows describes the three types of
privileges and roles:
Type              Description
System privilege  A system-defined privilege usually granted only by
                  administrators. These privileges allow users to perform specific
                  database operations.
Object privilege  A system-defined privilege that controls access to a specific
                  object.
Role              A collection of privileges and other roles. Some system-defined
                  roles exist, but most are created by administrators. Roles group
                  together privileges and other roles, which facilitates the granting
                  of multiple privileges and roles to users.

Privileges and roles can be granted to other users by users who have been granted the
privilege to do so. The granting of roles and privileges starts at the administrator level.
At database creation, the administrative user SYS is created and granted all system
privileges and predefined Oracle Database roles. User SYS can then grant privileges
and roles to other users, and also grant those users the right to grant specific privileges
to others.
To learn how to administer privileges and roles for users, see Oracle Database Security
Guide.

Auditing Database Use
You can monitor and record selected user database actions, including those done by
administrators. There are several reasons why you might want to implement database
auditing. These, and descriptions of the types of auditing available to you, are
described in Oracle Database Security Guide.
The Oracle Database Security Guide includes directions for enabling and configuring
database auditing.


Part VI
Database Resource Management and Task
Scheduling
Part VI discusses database resource management. It contains information about using
the Oracle Database Resource Manager. Other chapters discuss the Scheduler, which is
designed not only to simplify your management of tasks, but also to provide a rich set
of scheduling functionality to suit many needs.
This part contains the following chapters:
■ Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window"
■ Chapter 24, "Using the Database Resource Manager"
■ Chapter 25, "Moving from DBMS_JOB to DBMS_SCHEDULER"
■ Chapter 26, "Scheduler Concepts"
■ Chapter 27, "Using the Scheduler"
■ Chapter 28, "Administering the Scheduler"

23
Managing Automatic System Tasks Using the
Maintenance Window
Oracle Database is preconfigured to perform some routine database maintenance tasks
at times when the system load is expected to be light. For such a time period, you can
specify a resource plan that controls the resource consumption of those maintenance
tasks.
This chapter consists of the following sections:
■ Maintenance Windows
■ Predefined Automatic System Tasks
■ Resource Management for Automatic System Tasks

Maintenance Windows
Oracle Scheduler enables you to create time windows during which jobs are
automatically run. A typical Scheduler window defines a start time, a duration, and
optionally a resource plan to activate. A Scheduler job can then name a window as its
schedule. (When the window "opens," the job begins to run.) In addition, windows can
be combined into window groups, and if a job names a window group as its schedule
instead of naming a window, the job runs whenever any of the windows in the
window group opens.
Two Scheduler windows are predefined upon installation of Oracle Database:
■ WEEKNIGHT_WINDOW starts at 10 p.m. and ends at 6 a.m. every Monday through Friday.
■ WEEKEND_WINDOW covers whole days Saturday and Sunday.

Together these windows constitute the MAINTENANCE_WINDOW_GROUP in which all
system maintenance tasks are scheduled. Oracle Database uses the maintenance
windows for automatic statistics collection, for space management, and for some other
internal system maintenance jobs.
You can adjust the predefined maintenance windows to a time suitable to your
database environment using the DBMS_SCHEDULER.SET_ATTRIBUTE procedure. For
example, the following script moves the WEEKNIGHT_WINDOW so that it runs from
midnight to 8 a.m. every weekday morning:
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE(
'WEEKNIGHT_WINDOW',
'repeat_interval',
'freq=daily;byday=MON, TUE, WED, THU, FRI;byhour=0;byminute=0;bysecond=0');


Note that the duration of the window was already eight hours, so the script did not
need to change it.
You can also use the SET_ATTRIBUTE procedure to adjust any other property of a
window. For example, the following script sets resource plan
DEFAULT_MAINTENANCE_PLAN for the WEEKNIGHT_WINDOW:
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE (
'WEEKNIGHT_WINDOW',
'resource_plan',
'DEFAULT_MAINTENANCE_PLAN');

In this case, if you have already enabled a different resource plan, Oracle Database will
make the DEFAULT_MAINTENANCE_PLAN active when the WEEKNIGHT_WINDOW
opens, and will reactivate the original resource plan when the WEEKNIGHT_WINDOW
closes.
See Also:
■ Chapter 27, "Using the Scheduler" for more information on the Scheduler, and Oracle Database PL/SQL Packages and Types Reference for information on the DBMS_SCHEDULER package
■ Chapter 24, "Using the Database Resource Manager" for more information on creating and modifying resource plans

Predefined Automatic System Tasks
The following jobs are among the automatic system tasks that are predefined to run in
the maintenance windows:
■ Automatic Statistics Collection Job
■ Automatic Segment Advisor Job

Automatic Statistics Collection Job
A Scheduler job GATHER_STATS_JOB is predefined upon installation of Oracle
Database. GATHER_STATS_JOB collects optimizer statistics for all objects in the
database for which there are no statistics or only stale statistics.
If you prefer to manage statistics collection manually, you can disable the job as
follows:
EXECUTE DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
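If you later decide to resume automatic statistics collection, you can re-enable the job (a minimal sketch):
EXECUTE DBMS_SCHEDULER.ENABLE('GATHER_STATS_JOB');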

See Also: Oracle Database Performance Tuning Guide for more
information on automatic statistics collection

Automatic Segment Advisor Job
A Scheduler job AUTO_SPACE_ADVISOR_JOB is also predefined upon installation.
AUTO_SPACE_ADVISOR_JOB runs the Automatic Segment Advisor, which identifies
segments that have space available for reclamation, and then makes recommendations
that you can view with Enterprise Manager or a set of PL/SQL package procedures.
You can run the Segment Advisor manually to obtain more up-to-the-minute
recommendations or to obtain recommendations on segments that the Automatic
Segment Advisor did not examine for possible space reclamation.


See Also: "Using the Segment Advisor" on page 14-16 for more
information.

Resource Management for Automatic System Tasks
If you want to control how CPU resources are allocated to the automatic system tasks
so that these tasks do not significantly impact the performance of other database
applications, you can create a resource plan and assign the plan to the maintenance
windows. (These windows initially have no resource plan assigned.) This resource
plan must include directives for the predefined consumer group
AUTO_TASK_CONSUMER_GROUP, which is the Resource Manager consumer group
under which the automatic system tasks run.
The connection between the automatic system tasks and the
AUTO_TASK_CONSUMER_GROUP is made as follows: A Scheduler job class
AUTO_TASKS_JOB_CLASS is predefined and names AUTO_TASK_CONSUMER_GROUP
as its resource_consumer_group attribute. The automatic system tasks
GATHER_STATS_JOB and AUTO_SPACE_ADVISOR_JOB are defined to run in the
AUTO_TASKS_JOB_CLASS job class.
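For example, such a plan might be created as follows (a sketch only: the plan name, comments, and CPU percentages here are assumptions, and every plan must also include a directive for the predefined OTHER_GROUPS consumer group):
BEGIN
  -- All changes are staged in a pending area, then validated and submitted
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    PLAN    => 'MAINTENANCE_PLAN',                  -- hypothetical plan name
    COMMENT => 'Plan for the maintenance windows');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN             => 'MAINTENANCE_PLAN',
    GROUP_OR_SUBPLAN => 'AUTO_TASK_CONSUMER_GROUP', -- automatic system tasks
    COMMENT          => 'CPU for automatic system tasks',
    CPU_P1           => 25);                        -- assumed percentage
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN             => 'MAINTENANCE_PLAN',
    GROUP_OR_SUBPLAN => 'OTHER_GROUPS',             -- required in every plan
    COMMENT          => 'CPU for all other sessions',
    CPU_P1           => 75);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/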
See Also:
■ Chapter 27, "Using the Scheduler" for more information on job classes
■ Chapter 24, "Using the Database Resource Manager" for more information on resource plans, resource plan directives, and resource consumer groups


24
Using the Database Resource Manager
Oracle Database provides database resource management capability through its
Database Resource Manager. This chapter introduces you to its use and contains the
following topics:
■ What Is the Database Resource Manager?
■ Administering the Database Resource Manager
■ Creating a Simple Resource Plan
■ Creating Complex Resource Plans
■ Managing Resource Consumer Groups
■ Enabling the Database Resource Manager
■ Putting It All Together: Database Resource Manager Examples
■ Monitoring and Tuning the Database Resource Manager
■ Interaction with Operating-System Resource Control
■ Viewing Database Resource Manager Information

Note: This chapter discusses the use of the Oracle-supplied
DBMS_RESOURCE_MANAGER and DBMS_RESOURCE_MANAGER_PRIVS
packages to administer the Database Resource Manager. An easier
way to administer the Database Resource Manager is with the
graphical user interface of Enterprise Manager.
What Is the Database Resource Manager?
The main goal of the Database Resource Manager is to give the Oracle Database server
more control over resource management decisions, thus circumventing problems
resulting from inefficient operating system management.

What Problems Does the Database Resource Manager Address?
When database resource allocation decisions are left to the operating system, you may
encounter the following problems:
■ Excessive overhead
  Excessive overhead results from operating system context switching between
  Oracle Database server processes when the number of server processes is high.
■ Inefficient scheduling
  The operating system deschedules database servers while they hold latches, which
  is inefficient.
■ Inappropriate allocation of resources
  The operating system distributes resources equally among all active processes and
  is unable to prioritize one task over another.
■ Inability to manage database-specific resources, such as parallel execution servers
  and active sessions

How Does the Database Resource Manager Address These Problems?
The Oracle Database Resource Manager helps to overcome these problems by allowing
the database more control over how machine resources are allocated.
Specifically, using the Database Resource Manager, you can:
■ Guarantee certain users a minimum amount of processing resources regardless of
  the load on the system and the number of users
■ Distribute available processing resources by allocating percentages of CPU time to
  different users and applications. In a data warehouse, a higher percentage may be
  given to ROLAP (relational on-line analytical processing) applications than to
  batch jobs.
■ Limit the degree of parallelism of any operation performed by members of a group
  of users
■ Create an active session pool. This pool consists of a specified maximum number
  of user sessions allowed to be concurrently active within a group of users.
  Additional sessions beyond the maximum are queued for execution, but you can
  specify a timeout period, after which queued jobs will terminate.
■ Allow automatic switching of users from one group to another group based on
  administrator-defined criteria. If a member of a particular group of users creates a
  session that executes for longer than a specified amount of time, that session can
  be automatically switched to another group of users with different resource
  requirements.
■ Prevent the execution of operations that the optimizer estimates will run for a
  longer time than a specified limit
■ Create an undo pool. This pool consists of the amount of undo space that can be
  consumed by a group of users.
■ Limit the amount of time that a session can be idle. This can be further defined to
  mean only sessions that are blocking other sessions.
■ Configure an instance to use a particular method of allocating resources. You can
  dynamically change the method, for example, from a daytime setup to a nighttime
  setup, without having to shut down and restart the instance.
■ Allow the cancellation of long-running SQL statements and the termination of
  long-running sessions.

What Are the Elements of the Database Resource Manager?
The elements of database resource management, which you define through the
Database Resource Manager packages, are described below.

Element                      Description
Resource consumer group      User sessions grouped together based on resource
                             processing requirements.
Resource plan                Contains directives that specify how resources are
                             allocated to resource consumer groups.
Resource allocation method   The method/policy used by the Database Resource
                             Manager when allocating a particular resource; used
                             by resource consumer groups and resource plans. The
                             database provides the resource allocation methods
                             that are available, but you determine which method
                             to use.
Resource plan directive      Used by administrators to associate resource
                             consumer groups with particular plans and allocate
                             resources among resource consumer groups.

You will learn how to create and use these elements in later sections of this chapter.

Understanding Resource Plans
This section briefly introduces the concept of resource plans. Included are some
illustrations of simple resource plans. More complex plans are included in the
examples presented later ("Putting It All Together: Database Resource Manager
Examples" on page 24-24), after it has been explained how to build and maintain the
elements of the Database Resource Manager.
Resource plans specify the resource consumer groups belonging to the plan and
contain directives for how resources are to be allocated among these groups. Plans can
also contain subplans, and can designate that resources be allocated among both
resource consumer groups and subplans. Subplans then allocate their share of the
allocation among their own consumer groups and subplans. You can define any
number of resource plans, but only one can be active at any time on a particular Oracle
Database instance.
You use the DBMS_RESOURCE_MANAGER package to create and maintain the elements
of the Database Resource Manager: resource plans, resource consumer groups, and
resource plan directives. Plan information is stored in tables in the data dictionary.
Several views are available for viewing plan data.
See Also: "Viewing Database Resource Manager Information" on
page 24-31

A Simple Resource Plan
The first illustration, shown in Figure 24–1, is of a simple plan, where the plan
allocates resources among resource consumer groups only. The Great Bread Company
has a plan called GREAT_BREAD that allocates CPU resources among three resource
consumer groups. Specifically, SALES is allotted 60% of the CPU time, MARKET is
allotted 20%, and DEVELOP receives the remaining 20%.


Figure 24–1 A Simple Resource Management Plan
[Figure: The GREAT_BREAD plan allocates CPU among three consumer groups:
SALES group 60%, MARKET group 20%, and DEVELOP group 20%.]

Oracle Database provides a procedure (CREATE_SIMPLE_PLAN) that enables you to
quickly create a simple resource plan. This procedure is discussed in "Creating a
Simple Resource Plan" on page 24-8.
Note: The currently active resource plan does not enforce allocation limits until
CPU usage is at 100%. If the CPU usage is below 100%, the database is not
CPU-bound and hence there is no need to enforce limits to ensure that all sessions
get their designated resource allocation.

In addition, when limits are enforced, unused allocation by any consumer group
can be used by other consumer groups. In the previous example, if the SALES
group does not use all of its allocation, the Resource Manager permits the MARKET
group or DEVELOP group to use the unused allocation.
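As a preview, the following sketch shows how the GREAT_BREAD plan of Figure 24–1
might be created with CREATE_SIMPLE_PLAN. The group names follow the figure; the
procedure and its parameters are described in "Creating a Simple Resource Plan".
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(SIMPLE_PLAN => 'great_bread',
    CONSUMER_GROUP1 => 'sales',   GROUP1_CPU => 60,
    CONSUMER_GROUP2 => 'market',  GROUP2_CPU => 20,
    CONSUMER_GROUP3 => 'develop', GROUP3_CPU => 20);
END;
/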

A Resource Plan with Subplans
In addition to containing resource consumer groups, a plan can contain subplans. A
subplan can occupy the same position as a resource consumer group in the plan
hierarchy. For example, perhaps the Great Bread Company chooses to divide their
CPU resource as shown in Figure 24–2. The figure illustrates a plan schema, which
contains a top plan (GREAT_BREAD) and all of its descendants.
Figure 24–2 A Resource Plan With Subplans
[Figure: The GREAT_BREAD plan allocates 60% CPU to the SALES_TEAM subplan,
20% to the MARKET group, and 20% to the DEVELOP_TEAM subplan. SALES_TEAM
divides its share equally (50%/50%) between the WHOLESALE and RETAIL groups,
and DEVELOP_TEAM divides its share equally between the BREAD and MUFFIN
groups.]

In this case, the GREAT_BREAD plan still allocates 20% of CPU resources to the
consumer group MARKET. However, now it allocates CPU resources to subplans
SALES_TEAM (60%), which in turn divides its share equally between the WHOLESALE
and RETAIL groups, and DEVELOP_TEAM (20%), which in turn divides its resources
equally between the BREAD and MUFFIN groups.
It is possible for a subplan or consumer group to have more than one parent (owning
plan), but there cannot be any loops in a plan schema. An example of a subplan having
more than one parent would be if the Great Bread Company had a night plan and a
day plan. Both the night plan and the day plan contain the SALES_TEAM subplan as a
member, but perhaps with a different CPU resource allocation in each instance.
Note: As explained in subsequent sections, the plans described
should also contain a plan directive for OTHER_GROUPS. To present
a simplified view, however, this plan directive is not shown.

Resource Consumer Groups
Resource consumer groups are groups of users, or sessions, that are grouped together
based on their processing needs. Resource plan directives, discussed next, specify how
resources are allocated among consumer groups and subplans in a plan schema.

Resource Plan Directives
How resources are allocated to resource consumer groups is specified in resource
allocation directives. The Database Resource Manager provides several means of
allocating resources.
CPU Method This method enables you to specify how CPU resources are to be allocated
among consumer groups or subplans. Multiple levels of CPU resource allocation (up
to eight levels) provide a means of prioritizing CPU usage within a plan schema.
Consumer groups and subplans at level 2 get resources that were not allocated at level
1 or were not consumed by a consumer group or subplan at level 1. The same rules
apply to levels 3, 4, and so on. Multiple levels not only provide a way of prioritizing,
but they provide a way of explicitly specifying how all primary and leftover resources
are to be used.
See Figure 24–3 on page 24-26 for an example of a multilevel plan schema.
Note: When there is only one level, unused allocation by any consumer group or
subplan can be used by other consumer groups or subplans in the level.

Active Session Pool with Queuing You can control the maximum number of concurrently
active sessions allowed within a consumer group. This maximum designates the active
session pool. When a session cannot be initiated because the pool is full, the session is
placed into a queue. When an active session completes, the first session in the queue
can then be scheduled for execution. You can also specify a timeout period after which
a job in the execution queue (waiting for execution) will timeout, causing it to
terminate with an error.
An entire parallel execution session is counted as one active session.
Degree of Parallelism Limit Specifying a parallel degree limit enables you to control the
maximum degree of parallelism for any operation within a consumer group.


Automatic Consumer Group Switching This method enables you to control resources by
specifying criteria that, if met, cause the automatic switching of sessions to another
consumer group. The criteria used to determine switching are:
■ Switch group: specifies the consumer group to which this session is switched if the
  other (following) criteria are met
■ Switch time: specifies the length of time that a session can execute before it is
  switched to another consumer group
■ Switch time in call: specifies the length of time that a session can execute before it
  is switched to another consumer group. Once the top call finishes, the session is
  restored to its original consumer group.
■ Use estimate: specifies whether the database is to use its own estimate of how long
  an operation will execute

The Database Resource Manager switches a running session to switch group if the
session is active for more than switch time seconds. Active means that the session is
running and consuming resources, not waiting idly for user input or waiting for CPU
cycles. The session is allowed to continue running, even if the active session pool for
the new group is full. Under these conditions a consumer group can have more
sessions running than specified by its active session pool. Once the session finishes its
operation and becomes idle, it is switched back to its original group.
If use estimate is set to TRUE, the Database Resource Manager uses a predicted estimate
of how long the operation will take to complete. If the database estimate is longer than
the value specified as the switch time, then the database switches the session before
execution starts. If this parameter is not set, the operation starts normally and only
switches groups when other switch criteria are met.
Switch time in call is useful for three-tier applications where the middle tier server is
using session pooling. At the end of every top call, a session is switched back to its
original consumer group--that is, the group it would be in had it just logged in. A top
call in PL/SQL is an entire PL/SQL block being treated as one call. A top call in SQL is
an individual SQL statement issued separately by the client being treated as one call.
You cannot specify both switch time in call and switch time.
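For illustration, the following is a sketch of a directive that uses these criteria. The
plan and group names (daytime_plan, oltp_group, low_group) are hypothetical;
the parameters are described in "Specifying Resource Plan Directives".
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'daytime_plan',
    GROUP_OR_SUBPLAN => 'oltp_group',
    COMMENT => 'Demote sessions active for more than 10 minutes',
    CPU_P1 => 75,
    SWITCH_GROUP => 'low_group',   -- switch group
    SWITCH_TIME => 600,            -- switch time, in seconds
    SWITCH_ESTIMATE => TRUE);      -- use estimate
END;
/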
Canceling SQL and Terminating Sessions You can also specify directives to cancel
long-running SQL queries or to terminate long-running sessions. You specify this by
setting CANCEL_SQL or KILL_SESSION as the switch group.
Execution Time Limit You can specify a maximum execution time allowed for an
operation. If the database estimates that an operation will run longer than the specified
maximum execution time, the operation is terminated with an error. This error can be
trapped and the operation rescheduled.
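For example, the following sketch (again with hypothetical plan and group names)
cancels the current call after 15 minutes of execution and rejects operations that the
optimizer estimates will take more than one hour:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'daytime_plan',
    GROUP_OR_SUBPLAN => 'adhoc_group',
    COMMENT => 'Limit ad hoc queries',
    SWITCH_GROUP => 'CANCEL_SQL',    -- cancel the current SQL call
    SWITCH_TIME => 900,              -- after 900 seconds of execution
    MAX_EST_EXEC_TIME => 3600);      -- reject estimates over 1 hour
END;
/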
Undo Pool You can specify an undo pool for each consumer group. An undo pool
controls the amount of total undo that can be generated by a consumer group. When
the total undo generated by a consumer group exceeds its undo limit, the current DML
statement generating the undo is terminated. No other members of the consumer group
can perform further data manipulation until undo space is freed from the pool.
Idle Time Limit You can specify an amount of time that a session can be idle, after which
it will be terminated. You can further restrict such termination to only sessions that are
blocking other sessions.
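A sketch combining these two directives (again with hypothetical names; UNDO_POOL
is expressed in kilobytes and the idle limits in seconds):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'daytime_plan',
    GROUP_OR_SUBPLAN => 'batch_group',
    COMMENT => 'Cap undo and terminate idle blockers quickly',
    UNDO_POOL => 10240,              -- 10 MB of undo for the whole group
    MAX_IDLE_TIME => 1800,           -- terminate sessions idle for 30 minutes
    MAX_IDLE_BLOCKER_TIME => 300);   -- 5 minutes if the session blocks others
END;
/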

Administering the Database Resource Manager
You must have the system privilege ADMINISTER_RESOURCE_MANAGER to administer
the Database Resource Manager. Typically, database administrators have this privilege
with the ADMIN option as part of the DBA (or equivalent) role.
Being an administrator for the Database Resource Manager lets you execute all of the
procedures in the DBMS_RESOURCE_MANAGER package. These are listed in the
following table, and their use is explained in succeeding sections of this chapter.
Procedure                        Description
CREATE_SIMPLE_PLAN               Creates a simple resource plan, containing up to
                                 eight consumer groups, in one step. This is the
                                 quickest way to get started when you use this
                                 package.
CREATE_PLAN                      Creates a resource plan and specifies its
                                 allocation methods.
UPDATE_PLAN                      Updates a resource plan.
DELETE_PLAN                      Deletes a resource plan and its directives.
DELETE_PLAN_CASCADE              Deletes a resource plan and all of its
                                 descendants.
CREATE_CONSUMER_GROUP            Creates a resource consumer group.
UPDATE_CONSUMER_GROUP            Updates a consumer group.
DELETE_CONSUMER_GROUP            Deletes a consumer group.
CREATE_PLAN_DIRECTIVE            Specifies the resource plan directives that
                                 allocate resources to resource consumer groups
                                 or subplans in a plan.
UPDATE_PLAN_DIRECTIVE            Updates plan directives.
DELETE_PLAN_DIRECTIVE            Deletes plan directives.
CREATE_PENDING_AREA              Creates a pending area (scratch area) within
                                 which changes can be made to a plan schema.
VALIDATE_PENDING_AREA            Validates the pending changes to a plan schema.
CLEAR_PENDING_AREA               Clears all pending changes from the pending
                                 area.
SUBMIT_PENDING_AREA              Submits all changes for a plan schema.
SET_INITIAL_CONSUMER_GROUP       Sets the initial consumer group for a user. This
                                 procedure has been deprecated. Oracle recommends
                                 that you use the SET_CONSUMER_GROUP_MAPPING
                                 procedure to specify the initial consumer group.
SWITCH_CONSUMER_GROUP_FOR_SESS   Switches the consumer group of a specific
                                 session.
SWITCH_CONSUMER_GROUP_FOR_USER   Switches the consumer group of all sessions
                                 belonging to a specific user.
SET_CONSUMER_GROUP_MAPPING       Maps sessions to consumer groups.
SET_CONSUMER_GROUP_MAPPING_PRI   Establishes session attribute mapping
                                 priorities.

You may, as an administrator with the ADMIN option, choose to grant the
administrative privilege to other users or roles. This is possible using the
DBMS_RESOURCE_MANAGER_PRIVS package. This package contains the procedures
listed in the table below.

Procedure                      Description
GRANT_SYSTEM_PRIVILEGE         Grants the ADMINISTER_RESOURCE_MANAGER system
                               privilege to a user or role.
REVOKE_SYSTEM_PRIVILEGE        Revokes the ADMINISTER_RESOURCE_MANAGER system
                               privilege from a user or role.
GRANT_SWITCH_CONSUMER_GROUP    Grants permission to a user, role, or PUBLIC to
                               switch to a specified resource consumer group.
REVOKE_SWITCH_CONSUMER_GROUP   Revokes permission for a user, role, or PUBLIC to
                               switch to a specified resource consumer group.

The following example grants the administrative privilege to user scott, but does not
grant scott the ADMIN option. Therefore, scott can execute all of the procedures in
the DBMS_RESOURCE_MANAGER package, but scott cannot use the
GRANT_SYSTEM_PRIVILEGE procedure to grant the administrative privilege to
others.
EXEC DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE -
  (GRANTEE_NAME => 'scott', PRIVILEGE_NAME => 'ADMINISTER_RESOURCE_MANAGER', -
  ADMIN_OPTION => FALSE);

You can revoke this privilege using the REVOKE_SYSTEM_PRIVILEGE procedure.
Note: The ADMINISTER_RESOURCE_MANAGER system privilege
can only be granted or revoked by using the
DBMS_RESOURCE_MANAGER_PRIVS package. It cannot be granted
or revoked through the SQL GRANT or REVOKE statements.

The other procedures in the DBMS_RESOURCE_MANAGER_PRIVS package are
discussed in "Managing the Switch Privilege" on page 24-20.
See Also: Oracle Database PL/SQL Packages and Types Reference contains
detailed information about the Database Resource Manager packages:
■ DBMS_RESOURCE_MANAGER
■ DBMS_RESOURCE_MANAGER_PRIVS

Creating a Simple Resource Plan
You can quickly create a simple resource plan that will be adequate for many
situations using the CREATE_SIMPLE_PLAN procedure. This procedure enables you to
create consumer groups and allocate resources to them by executing a single
statement. Using this procedure, you are not required to invoke the procedures that
are described in succeeding sections for creating a pending area, creating each
consumer group individually, and specifying resource plan directives.
You can specify the following parameters for the CREATE_SIMPLE_PLAN procedure:
Parameter         Description
SIMPLE_PLAN       Name of the plan
CONSUMER_GROUP1   Consumer group name for first group
GROUP1_CPU        CPU resource allocated to this group
CONSUMER_GROUP2   Consumer group name for second group
GROUP2_CPU        CPU resource allocated to this group
CONSUMER_GROUP3   Consumer group name for third group
GROUP3_CPU        CPU resource allocated to this group
CONSUMER_GROUP4   Consumer group name for fourth group
GROUP4_CPU        CPU resource allocated to this group
CONSUMER_GROUP5   Consumer group name for fifth group
GROUP5_CPU        CPU resource allocated to this group
CONSUMER_GROUP6   Consumer group name for sixth group
GROUP6_CPU        CPU resource allocated to this group
CONSUMER_GROUP7   Consumer group name for seventh group
GROUP7_CPU        CPU resource allocated to this group
CONSUMER_GROUP8   Consumer group name for eighth group
GROUP8_CPU        CPU resource allocated to this group

Up to eight consumer groups can be specified using this procedure and the only plan
directive that can be specified is for CPU. The plan uses the EMPHASIS CPU allocation
policy and each consumer group uses the ROUND_ROBIN scheduling policy. Each
consumer group specified in the plan is allocated its CPU percentage at level 2. Also
implicitly included in the plan are SYS_GROUP (a system-defined group that is the
initial consumer group for the users SYS and SYSTEM) and OTHER_GROUPS.
Example: Using the CREATE_SIMPLE_PLAN Procedure
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(SIMPLE_PLAN => 'simple_plan1',
    CONSUMER_GROUP1 => 'mygroup1', GROUP1_CPU => 80,
    CONSUMER_GROUP2 => 'mygroup2', GROUP2_CPU => 20);
END;
/

Executing the preceding statements creates the following plan:
Consumer Group   Level 1   Level 2   Level 3
SYS_GROUP        100%      -         -
mygroup1         -         80%       -
mygroup2         -         20%       -
OTHER_GROUPS     -         -         100%

Creating Complex Resource Plans
This section describes the actions and DBMS_RESOURCE_MANAGER procedures that
you can use when your situation requires that you create more complex resource
plans. It contains the following sections:
■ Using the Pending Area for Creating Plan Schemas
■ Creating Resource Plans
■ Creating Resource Consumer Groups
■ Specifying Resource Plan Directives

Using the Pending Area for Creating Plan Schemas
The first thing you must do to create or modify plan schemas is to create a pending
area. This is a scratch area allowing you to stage your changes and to validate them
before they are made active.

Creating a Pending Area
To create a pending area, you use the following statement:
EXEC DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

In effect, you are making the pending area active and "loading" all existing, or active,
plan schemas into the pending area so that they can be updated or new plans added.
Active plan schemas are those schemas already stored in the data dictionary for use by
the Database Resource Manager. If you attempt to update a plan or add a new plan
without first activating (creating) the pending area, you will receive an error message
notifying you that the pending area is not active.
Views are available for inspecting all active resource plan schemas as well as the
pending ones. These views are listed in "Viewing Database Resource Manager
Information" on page 24-31.

Validating Changes
At any time when you are making changes in the pending area you can call the
validate procedure as shown here.
EXEC DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;

This procedure checks whether changes that have been made are valid. The following
rules must be adhered to, and are checked by the validate procedure:
1. No plan schema can contain any loops.
2. All plans and resource consumer groups referred to by plan directives must exist.
3. All plans must have plan directives that point to either plans or resource consumer
   groups.
4. All percentages in any given level must not add up to greater than 100.
5. A plan that is currently being used as a top plan by an active instance cannot be
   deleted.
6. The following plan directive parameters can appear only in plan directives that
   refer to resource consumer groups (not other resource plans):
   ■ PARALLEL_DEGREE_LIMIT_P1
   ■ ACTIVE_SESS_POOL_P1
   ■ QUEUEING_P1
   ■ SWITCH_GROUP
   ■ SWITCH_TIME
   ■ SWITCH_ESTIMATE
   ■ MAX_EST_EXEC_TIME
   ■ UNDO_POOL
   ■ MAX_IDLE_TIME
   ■ MAX_IDLE_BLOCKER_TIME
   ■ SWITCH_TIME_IN_CALL
7. There can be no more than 32 resource consumer groups in any active plan
   schema. Also, at most, a plan can have 32 children.
8. Plans and resource consumer groups cannot have the same name.
9. There must be a plan directive for OTHER_GROUPS somewhere in any active plan
   schema. This ensures that a session which is not part of any of the consumer
   groups included in the currently active plan is allocated resources (as specified by
   the OTHER_GROUPS directive).

You will receive an error message if any of the preceding rules are violated. You can
then make changes to fix any problems and call the validate procedure again.
It is possible to create "orphan" consumer groups that have no plan directives referring
to them. This allows the creation of consumer groups that will not currently be used,
but may be part of some plan to be implemented in the future.

Submitting Changes
After you have validated your changes, call the submit procedure to make your
changes active.
EXEC DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;

The submit procedure also performs validation, so you do not necessarily need to
make separate calls to the validate procedure. However, if you are making major
changes to plan schemas, debugging problems is often easier if you incrementally
validate your changes. No changes are submitted (made active) until validation is
successful on all of the changes in the pending area.
The SUBMIT_PENDING_AREA procedure clears (deactivates) the pending area after
successfully validating and committing the changes.
Note: A call to SUBMIT_PENDING_AREA may fail even if
VALIDATE_PENDING_AREA succeeds. This can happen if, for example, a plan
being deleted is loaded by an instance after a call to VALIDATE_PENDING_AREA,
but before a call to SUBMIT_PENDING_AREA.

Clearing the Pending Area
There is also a procedure for clearing the pending area at any time. This statement
causes all of your changes to be cleared from the pending area:
EXEC DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;

You must call the CREATE_PENDING_AREA procedure before you can again attempt to
make changes.


Creating Resource Plans
When you create a resource plan, you can specify the parameters shown in the
following table. The first parameter is required; the remainder are optional.
Parameter                   Description
PLAN                        Name of the plan.
COMMENT                     Any descriptive comment.
CPU_MTH                     Resource allocation method for specifying how much
                            CPU each consumer group or subplan gets. EMPHASIS,
                            the default method, is for multilevel plans that
                            use percentages to specify how CPU is distributed
                            among consumer groups. RATIO is for single-level
                            plans that use ratios to specify how CPU is
                            distributed.
ACTIVE_SESS_POOL_MTH        Active session pool resource allocation method.
                            Limits the number of active sessions. All other
                            sessions are inactive and wait in a queue to be
                            activated. ACTIVE_SESS_POOL_ABSOLUTE is the
                            default and only method available.
PARALLEL_DEGREE_LIMIT_MTH   Resource allocation method for specifying a limit
                            on the degree of parallelism of any operation.
                            PARALLEL_DEGREE_LIMIT_ABSOLUTE is the default and
                            only method available.
QUEUEING_MTH                Queuing resource allocation method. Controls the
                            order in which queued inactive sessions will
                            execute. FIFO_TIMEOUT is the default and only
                            method available.

Oracle Database provides one resource plan, SYSTEM_PLAN, that contains a simple
structure that may be adequate for some environments. It is illustrated later in "An
Oracle-Supplied Plan" on page 24-27.

Creating a Plan
You create a plan using the CREATE_PLAN procedure. The following creates a plan
called great_bread. You choose to use the default resource allocation methods.
EXEC DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'great_bread', COMMENT => 'great plan');

Updating a Plan
Use the UPDATE_PLAN procedure to update plan information. If you do not specify
the arguments for the UPDATE_PLAN procedure, they remain unchanged in the data
dictionary. The following statement updates the COMMENT parameter.
EXEC DBMS_RESOURCE_MANAGER.UPDATE_PLAN -
  (PLAN => 'great_bread', NEW_COMMENT => 'great plan for great bread');

Deleting a Plan
The DELETE_PLAN procedure deletes the specified plan as well as all the plan
directives associated with it. The following statement deletes the great_bread plan
and its directives.
EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN(PLAN => 'great_bread');

The resource consumer groups themselves are not deleted, but they are no longer
associated with the great_bread plan.
The DELETE_PLAN_CASCADE procedure deletes the specified plan as well as all its
descendants (plan directives, subplans, resource consumer groups). If
DELETE_PLAN_CASCADE encounters an error, it will roll back, leaving the plan
schema unchanged.
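For example, the following statement would delete the great_bread plan together
with its directives, subplans, and consumer groups:
EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN_CASCADE(PLAN => 'great_bread');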

Using the Ratio Policy
The RATIO policy is a single-level CPU allocation method. Instead of percentages, you
specify numbers corresponding to the ratio of CPU you want to give to the consumer
group. For example, given three consumer groups GOLD_CG, SILVER_CG, and
BRONZE_CG, assume that we specify the following plan directives:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN
    (PLAN => 'service_level_plan',
     CPU_MTH => 'RATIO',
     COMMENT => 'service level plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE
    (PLAN => 'service_level_plan',
     GROUP_OR_SUBPLAN => 'GOLD_CG',
     COMMENT => 'Gold service level customers',
     CPU_P1 => 10);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE
    (PLAN => 'service_level_plan',
     GROUP_OR_SUBPLAN => 'SILVER_CG',
     COMMENT => 'Silver service level customers',
     CPU_P1 => 5);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE
    (PLAN => 'service_level_plan',
     GROUP_OR_SUBPLAN => 'BRONZE_CG',
     COMMENT => 'Bronze service level customers',
     CPU_P1 => 2);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE
    (PLAN => 'service_level_plan',
     GROUP_OR_SUBPLAN => 'OTHER_GROUPS',
     COMMENT => 'Lowest priority sessions',
     CPU_P1 => 1);
END;
/

The ratio of CPU allocation would be 10:5:2:1 for the GOLD_CG, SILVER_CG,
BRONZE_CG, and OTHER_GROUPS consumer groups, respectively.
If sessions exist only in the GOLD_CG and SILVER_CG consumer groups, then the
ratio of CPU allocation would be 10:5 between the two groups.

Creating Resource Consumer Groups
When you create a resource consumer group, you can specify the following
parameters:
Parameter        Description
CONSUMER_GROUP   Name of the consumer group.
COMMENT          Any comment.
CPU_MTH          The resource allocation method for distributing CPU among
                 sessions in the consumer group. The default is ROUND_ROBIN,
                 which uses a round-robin scheduler to ensure that sessions
                 are fairly executed. RUN_TO_COMPLETION specifies that
                 sessions with the largest active time are scheduled ahead
                 of other sessions.

There are two special consumer groups that are always present in the data dictionary,
and they cannot be modified or deleted. These are:
■ DEFAULT_CONSUMER_GROUP
  This is the initial consumer group for all users/sessions that have not been
  explicitly assigned an initial consumer group. DEFAULT_CONSUMER_GROUP has
  switch privileges granted to PUBLIC; therefore, all users are automatically granted
  switch privilege for this consumer group (see "Managing the Switch Privilege" on
  page 24-20).
■ OTHER_GROUPS
  This consumer group cannot be explicitly assigned to a user. OTHER_GROUPS must
  have a resource directive specified in the schema of any active plan. This group
  applies collectively to all sessions that belong to a consumer group that is not part
  of the currently active plan schema, including DEFAULT_CONSUMER_GROUP.

Additionally, two other groups, SYS_GROUP and LOW_GROUP, are provided as part of
the Oracle-supplied SYSTEM_PLAN that is described in "An Oracle-Supplied Plan" on
page 24-27.

Creating a Consumer Group
You create a consumer group using the CREATE_CONSUMER_GROUP procedure. The
following creates a consumer group called sales. Remember, the pending area must
be active to execute this statement successfully.
EXEC DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP -
  (CONSUMER_GROUP => 'sales', COMMENT => 'retail and wholesale sales');

Updating a Consumer Group
Use the UPDATE_CONSUMER_GROUP procedure to update consumer group
information. If you do not specify the arguments for the UPDATE_CONSUMER_GROUP
procedure, they remain unchanged in the data dictionary.
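For example, the following sketch updates the comment for the sales group
(NEW_COMMENT is one of the optional arguments of this procedure):
EXEC DBMS_RESOURCE_MANAGER.UPDATE_CONSUMER_GROUP -
  (CONSUMER_GROUP => 'sales', NEW_COMMENT => 'all sales sessions');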

Deleting a Consumer Group
The DELETE_CONSUMER_GROUP procedure deletes the specified consumer group.
Upon deletion of a consumer group, all users having the deleted group as their initial
consumer group will have the DEFAULT_CONSUMER_GROUP set as their initial
consumer group. All currently running sessions belonging to a deleted consumer
group will be switched to DEFAULT_CONSUMER_GROUP.
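For example:
EXEC DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP (CONSUMER_GROUP => 'sales');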

Specifying Resource Plan Directives
Resource plan directives assign consumer groups to resource plans and provide the
parameters for each resource allocation method. When you create a resource plan
directive, you can specify the following parameters:

Parameter                  Description
PLAN                       Name of the resource plan.
GROUP_OR_SUBPLAN           Name of the consumer group or subplan.
COMMENT                    Any comment.
CPU_P1                     For EMPHASIS, specifies the CPU percentage at the
                           first level. For RATIO, specifies the weight of CPU
                           usage. Default is NULL for all CPU parameters.
CPU_P2                     For EMPHASIS, specifies CPU percentage at the
                           second level. Not applicable for RATIO.
CPU_P3                     For EMPHASIS, specifies CPU percentage at the third
                           level. Not applicable for RATIO.
CPU_P4                     For EMPHASIS, specifies CPU percentage at the
                           fourth level. Not applicable for RATIO.
CPU_P5                     For EMPHASIS, specifies CPU percentage at the fifth
                           level. Not applicable for RATIO.
CPU_P6                     For EMPHASIS, specifies CPU percentage at the sixth
                           level. Not applicable for RATIO.
CPU_P7                     For EMPHASIS, specifies CPU percentage at the
                           seventh level. Not applicable for RATIO.
CPU_P8                     For EMPHASIS, specifies CPU percentage at the
                           eighth level. Not applicable for RATIO.
ACTIVE_SESS_POOL_P1        Specifies the maximum number of concurrently active
                           sessions for a consumer group. Default is
                           UNLIMITED.
QUEUEING_P1                Specifies the time (in seconds) after which a job
                           in an inactive session queue (waiting for
                           execution) will time out. Default is UNLIMITED.
PARALLEL_DEGREE_LIMIT_P1   Specifies a limit on the degree of parallelism for
                           any operation. Default is UNLIMITED.
SWITCH_GROUP               Specifies the consumer group to which this session
                           is switched if other switch criteria are met. If
                           the group name is 'CANCEL_SQL', then the current
                           call will be canceled when other switch criteria
                           are met. If the group name is 'KILL_SESSION', then
                           the session will be killed when other switch
                           criteria are met. Default is NULL.
SWITCH_TIME                Specifies the time (in seconds) that a session can
                           execute before an action is taken. Default is
                           UNLIMITED. You cannot specify both SWITCH_TIME and
                           SWITCH_TIME_IN_CALL.
SWITCH_ESTIMATE            If TRUE, tells the database to use its execution
                           time estimate to automatically switch the consumer
                           group of an operation prior to beginning its
                           execution. Default is FALSE.
MAX_EST_EXEC_TIME          Specifies the maximum execution time (in seconds)
                           allowed for a session. If the optimizer estimates
                           that an operation will take longer than
                           MAX_EST_EXEC_TIME, the operation is not started
                           and ORA-07455 is issued. If the optimizer does not
                           provide an estimate, this directive has no effect.
                           Default is UNLIMITED.
UNDO_POOL                  Sets a maximum in kilobytes (K) on the total
                           amount of undo generated by a consumer group.
                           Default is UNLIMITED.
MAX_IDLE_TIME              Indicates the maximum session idle time. Default
                           is NULL, which implies unlimited.
MAX_IDLE_BLOCKER_TIME      Indicates the maximum session idle time of a
                           blocking session. Default is NULL, which implies
                           unlimited.
SWITCH_TIME_IN_CALL        Specifies the time (in seconds) that a session can
                           execute before an action is taken. At the end of
                           the call, the consumer group of the session is
                           restored to its original consumer group. Default
                           is UNLIMITED. You cannot specify both
                           SWITCH_TIME_IN_CALL and SWITCH_TIME.

Creating a Resource Plan Directive
You use the CREATE_PLAN_DIRECTIVE procedure to create a resource plan directive.
The following statement creates a resource plan directive for plan great_bread.
EXEC DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE -
  (PLAN => 'great_bread', GROUP_OR_SUBPLAN => 'sales', -
  COMMENT => 'sales group', CPU_P1 => 60, PARALLEL_DEGREE_LIMIT_P1 => 4);

To complete the plan, similar to that shown in Figure 24–1, execute the following
statements:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'great_bread',
    GROUP_OR_SUBPLAN => 'market', COMMENT => 'marketing group',
    CPU_P1 => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'great_bread',
    GROUP_OR_SUBPLAN => 'develop', COMMENT => 'development group',
    CPU_P1 => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'great_bread',
    GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'this one is required',
    CPU_P1 => 0, CPU_P2 => 100);
END;
/

In this plan, consumer group sales has a maximum degree of parallelism of 4 for any
operation, while none of the other consumer groups are limited in their degree of
parallelism. Also, whenever there are leftover level 1 CPU resources, they are allocated
(100%) to OTHER_GROUPS.

Updating Resource Plan Directives
Use the UPDATE_PLAN_DIRECTIVE procedure to update plan directives. This
example changes CPU allocation for resource consumer group develop.
EXEC DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE -
  (PLAN => 'great_bread', GROUP_OR_SUBPLAN => 'develop', NEW_CPU_P1 => 15);

If you do not specify the arguments for the UPDATE_PLAN_DIRECTIVE procedure,
they remain unchanged in the data dictionary.

Deleting Resource Plan Directives
To delete a resource plan directive, use the DELETE_PLAN_DIRECTIVE procedure.
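For example, the following statement (continuing the great_bread example) would
delete the directive for the develop group:
EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN_DIRECTIVE -
  (PLAN => 'great_bread', GROUP_OR_SUBPLAN => 'develop');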

How Resource Plan Directives Interact
If there are multiple resource plan directives that refer to the same consumer group,
then the following rules apply for specific cases:
1. The parallel degree limit for the consumer group will be the minimum of all the
   incoming values.
2. The active session pool for the consumer group will be the sum of all the incoming
   values and the queue timeout will be the minimum of all incoming timeout values.
3. If there is more than one switch group and more than one switch time, the
   Database Resource Manager will choose the most restrictive of all incoming
   values. Specifically:
   ■ SWITCH_TIME = min (all incoming switch_time values)
   ■ SWITCH_ESTIMATE = TRUE overrides SWITCH_ESTIMATE = FALSE
   Notes:
   ■ If both plan directives specify the same switch time, but different switch
     groups, then the choice as to which group to switch to is statically but
     arbitrarily decided by the Database Resource Manager.
   ■ You cannot specify both SWITCH_TIME and SWITCH_TIME_IN_CALL plan
     directives for a single consumer group.
4. If a session is switched to another consumer group because it exceeds its switch
   time, that session will execute even if the active session pool for the new consumer
   group is full.
5. The maximum estimated execution time will be the most restrictive of all
   incoming values. Specifically:
   MAX_EST_EXEC_TIME = min (all incoming MAX_EST_EXEC_TIME values)

Managing Resource Consumer Groups
Before you enable the Database Resource Manager, you must assign resource
consumer groups to users. This can be done manually, or you can provide mappings
that enable the database to automatically assign user sessions to consumer groups
depending upon session attributes.
In addition to providing procedures to create, update, or delete the elements used by
the Database Resource Manager, the DBMS_RESOURCE_MANAGER package contains
procedures for assigning resource consumer groups to users. It also provides
procedures that allow you to temporarily switch a user session to another consumer
group.


The DBMS_RESOURCE_MANAGER_PRIVS package, described earlier for granting the
Database Resource Manager system privilege, can also be used to grant the switch
privilege to another user, who can then alter their own consumer group.
Of the procedures discussed in this section, you use a pending area only for the
SET_CONSUMER_GROUP_MAPPING and SET_CONSUMER_GROUP_MAPPING_PRI
procedures. These procedures are used for the automatic assigning of sessions to
consumer groups.
This section contains the following topics:
■ Assigning an Initial Resource Consumer Group
■ Changing Resource Consumer Groups
■ Managing the Switch Privilege
■ Automatically Assigning Resource Consumer Groups to Sessions

Assigning an Initial Resource Consumer Group
The initial consumer group of a session is determined by mapping the attributes of the
session to a consumer group. For more information on how to configure the mapping,
see the section "Automatically Assigning Resource Consumer Groups to Sessions" on
page 24-21.

Changing Resource Consumer Groups
There are two procedures, which are part of the DBMS_RESOURCE_MANAGER package,
that allow administrators to change the resource consumer group of running sessions.
Both of these procedures can also change the consumer group of any parallel execution
server sessions associated with the coordinator session. The changes made by these
procedures pertain to current sessions only; they are not persistent. They also do not
change the initial consumer groups for users.
Instead of killing a session of a user who is using excessive CPU, an administrator can
instead change that user's consumer group to one that is allowed less CPU. Or, this
switching can be enforced automatically, using automatic consumer group switching
resource plan directives.

Switching a Session
The SWITCH_CONSUMER_GROUP_FOR_SESS procedure causes the specified session to
immediately be moved into the specified resource consumer group. In effect, this
statement can raise or lower priority. The following statement changes the resource
consumer group of a specific session to a new consumer group. The session identifier
(SID) is 17, the session serial number (SERIAL#) is 12345, and the session is to be
changed to the high_priority consumer group.
EXEC DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS -
  ('17', '12345', 'high_priority');

The SID, session serial number, and current resource consumer group for a session
can be viewed by querying the V$SESSION dynamic performance view.

Switching Sessions for a User
The SWITCH_CONSUMER_GROUP_FOR_USER procedure changes the resource
consumer group for all sessions with a given user name.
EXEC DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_USER -
  ('scott', 'low_group');

Using the DBMS_SESSION Package to Switch Consumer Group
If granted the switch privilege, users can switch their current consumer group using
the SWITCH_CURRENT_CONSUMER_GROUP procedure in the DBMS_SESSION package.
This procedure enables users to switch to a consumer group for which they have the
switch privilege. If the caller is another procedure, then this procedure enables users to
switch to a consumer group for which the owner of that procedure has switch
privileges.
The parameters for this procedure are:
Parameter                Description
NEW_CONSUMER_GROUP       The consumer group to which the user is switching.
OLD_CONSUMER_GROUP       An output parameter. Stores the name of the consumer
                         group from which the user switched. Can be used to
                         switch back later.
INITIAL_GROUP_ON_ERROR   Controls behavior if a switching error occurs. If
                         TRUE, in the event of an error, the user is switched
                         to the initial consumer group. If FALSE, an error is
                         raised.

The following example illustrates switching to a new consumer group. By printing the
value of the output parameter old_group, we illustrate how the old consumer group
name has been saved.
SET serveroutput on
DECLARE
  old_group varchar2(30);
BEGIN
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('sales', old_group, FALSE);
  DBMS_OUTPUT.PUT_LINE('OLD GROUP = ' || old_group);
END;
/

The following line is output:
OLD GROUP = DEFAULT_CONSUMER_GROUP

The DBMS_SESSION package can be used from within a PL/SQL application, thus
allowing the application to change consumer groups, or effectively priority,
dynamically.
Note that the Database Resource Manager considers a switch to have taken place even
if the SWITCH_CURRENT_CONSUMER_GROUP procedure is called to switch the session
to the consumer group that it is already in.
Note: The Database Resource Manager also works in environments where a
generic database user name is used to log on to an application. The
DBMS_SESSION package can be called to switch the consumer group assignment
of a session at session startup, or as particular modules are called.


See Also: Oracle Database PL/SQL Packages and Types Reference for
additional examples and more information about the
DBMS_SESSION package

Managing the Switch Privilege
Using the DBMS_RESOURCE_MANAGER_PRIVS package, you can grant or revoke the
switch privilege for a user, role, or PUBLIC. The switch privilege enables users to
switch their current resource consumer group to a specified resource consumer
group.
The actual switching is done by executing a procedure in the DBMS_SESSION package.
A user who has been granted the switch privilege (or a procedure owned by that user)
can use the SWITCH_CURRENT_CONSUMER_GROUP procedure to switch to another
resource consumer group. The new group must be one to which the user has been
specifically authorized to switch.

Granting the Switch Privilege
The following example grants the privilege to switch to a consumer group. User
scott is granted the privilege to switch to consumer group bug_batch_group.
EXEC DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP -
  ('scott', 'bug_batch_group', TRUE);

User scott is also granted permission to grant switch privileges for
bug_batch_group to others.
If you grant a user permission to switch to a particular consumer group, then that user
can switch their current consumer group to the new consumer group.
If you grant a role permission to switch to a particular resource consumer group, then
any users who have been granted that role and have enabled that role can immediately
switch their current consumer group to the new consumer group.
If you grant PUBLIC the permission to switch to a particular consumer group, then
any user can switch to that group.
If the GRANT_OPTION argument is TRUE, then users granted switch privilege for the
consumer group can also grant switch privileges for that consumer group to others.

Revoking Switch Privileges
The following example revokes user scott's privilege to switch to consumer group
bug_batch_group.
EXEC DBMS_RESOURCE_MANAGER_PRIVS.REVOKE_SWITCH_CONSUMER_GROUP -
  ('scott', 'bug_batch_group');

If you revoke a user's switch privileges to a particular consumer group, then any
subsequent attempts by that user to switch to that consumer group will fail. If you
revoke the initial consumer group from a user, then that user will automatically be
part of the DEFAULT_CONSUMER_GROUP when logging in.
If you revoke from a role the switch privileges to a consumer group, then any users
who only had switch privilege for the consumer group through that role will not be
able to subsequently switch to that consumer group.
If you revoke switch privileges to a consumer group from PUBLIC, then any users
other than those who are explicitly assigned switch privileges, either directly or
through a role, will not be able to subsequently switch to that consumer group.

Automatically Assigning Resource Consumer Groups to Sessions
You can configure the Database Resource Manager to automatically assign consumer
groups to sessions by providing mappings between session attributes and consumer
groups. Further, you can prioritize the mappings so as to indicate which mapping has
precedence in case of conflicts.
There are two types of session attributes: login attributes and runtime attributes. The
login attributes are meaningful only at session login time, when the Database Resource
Manager determines the initial consumer group of the session. In contrast, a session
that has already logged in can later be reassigned to another consumer group based on
its run-time attributes.
You use the SET_CONSUMER_GROUP_MAPPING and
SET_CONSUMER_GROUP_MAPPING_PRI procedures to configure the automatic
assigning of sessions to consumer groups. You must use a pending area for these
procedures.

Creating Consumer Group Mappings
The mapping for a session consists of a set of attribute/consumer group pairs that
determine how a session is matched to a consumer group. You use the
SET_CONSUMER_GROUP_MAPPING procedure to map a single session attribute to a
consumer group. The parameters for this procedure are:
Parameter        Description
ATTRIBUTE        The login or runtime session attribute type.
VALUE            The value of the attribute being mapped.
CONSUMER_GROUP   The consumer group to map to.

The attribute can be one of the following:
Attribute               Type      Description
ORACLE_USER             Login     The Oracle Database user name
SERVICE_NAME            Login     The service name used by the client to
                                  establish a connection
CLIENT_OS_USER          Login     The operating system user name of the client
                                  that is logging in
CLIENT_PROGRAM          Login     The name of the client program used to log
                                  into the server
CLIENT_MACHINE          Login     The name of the machine from which the client
                                  is making the connection
MODULE_NAME             Runtime   The module name in the application currently
                                  executing as set by the
                                  DBMS_APPLICATION_INFO.SET_MODULE procedure or
                                  the equivalent OCI attribute setting
MODULE_NAME_ACTION      Runtime   A combination of the current module and the
                                  action being performed as set by either of the
                                  following procedures or their equivalent OCI
                                  attribute setting:
                                  ■ DBMS_APPLICATION_INFO.SET_MODULE
                                  ■ DBMS_APPLICATION_INFO.SET_ACTION
                                  The attribute is specified as the module name
                                  followed by a period (.), followed by the
                                  action name (module_name.action_name).
SERVICE_MODULE          Runtime   A combination of service and module names in
                                  this form: service_name.module_name
SERVICE_MODULE_ACTION   Runtime   A combination of service name, module name,
                                  and action name, in this form:
                                  service_name.module_name.action_name

For example, the following statement causes user scott to map to the dev_group
consumer group every time he logs in:
BEGIN
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (DBMS_RESOURCE_MANAGER.ORACLE_USER, 'scott', 'dev_group');
END;
/
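Because a pending area is required for this procedure, a complete sketch would wrap
the call in the pending-area procedures described earlier in this chapter:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (DBMS_RESOURCE_MANAGER.ORACLE_USER, 'scott', 'dev_group');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/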

Creating Attribute Mapping Priorities
To resolve conflicting mappings, you can establish a priority ordering of the attributes
from most important to least important. You use the
SET_CONSUMER_GROUP_MAPPING_PRI procedure to set the priority of each attribute
to a unique integer from 1 (most important) to 10 (least important). The following
example illustrates this setting of priorities:
BEGIN
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING_PRI(
    EXPLICIT => 1,
    SERVICE_MODULE_ACTION => 2,
    SERVICE_MODULE => 3,
    MODULE_NAME_ACTION => 4,
    MODULE_NAME => 5,
    SERVICE_NAME => 6,
    ORACLE_USER => 7,
    CLIENT_PROGRAM => 8,
    CLIENT_OS_USER => 9,
    CLIENT_MACHINE => 10);
END;
/

In this example, the priority of the database user name is set to 7 (less important),
while the priority of the module name is set to 5 (more important).

Note: SET_CONSUMER_GROUP_MAPPING_PRI requires that you include the
pseudo-attribute EXPLICIT as an argument. It must be set to 1. It indicates that
explicit consumer group switches have the highest priority. You explicitly switch
consumer groups with these package procedures, which are described in detail in
Oracle Database PL/SQL Packages and Types Reference:
■ DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP
■ DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS
■ DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_USER

To illustrate how mapping priorities work, assume that in addition to the mapping of
user scott to the dev_group consumer group, there is also a module name mapping
as follows:
EXEC DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING -
  (DBMS_RESOURCE_MANAGER.MODULE_NAME, 'backup', 'low_priority');

Now if the session sets its module name to backup, the session would be reassigned
to the low_priority consumer group, because module name mapping has a higher
priority than database user mapping.
To prevent unauthorized clients from setting their session attributes so that they map
to higher priority consumer groups, user switch privileges for consumer groups are
enforced. This means that even though the attribute of a given session matches a
mapping pair, the mapping is only considered valid if the session has the switching
privilege for that particular consumer group. Sessions are automatically switched only
to consumer groups for which they have been granted switch privileges.

Automatic Group Switching
Each session can be switched automatically to another consumer group via the
mappings at distinct points in time:
■ When the session first logs in, the mapping is evaluated to determine the initial
  group of the session.
■ If the current mapping attribute of a session is A, then if the attribute A is set to a
  new value (only possible for runtime attributes) then the mapping is reevaluated
  and the session is switched to the appropriate consumer group.
■ If a runtime session attribute is modified such that the current mapping becomes a
  different attribute B, then the session is switched.
■ Whenever the client ID is set to a different value, the mapping is reevaluated and
  the session is switched.

Two things to note about the preceding rules are:
■ If a runtime attribute for which a mapping is provided is set to the same value it
  already has, or if the client ID is set to the same value it already has, then no
  switching takes place.
■ A session can be switched to the same consumer group it is already in. The effect
  of switching in this case is to zero out session statistics that are typically zeroed
  out during a switch to a different group (for example, the
  ACTIVE_TIME_IN_GROUP value of the session).


Enabling the Database Resource Manager
You enable the Database Resource Manager by setting the RESOURCE_MANAGER_PLAN
initialization parameter. This parameter specifies the top plan, identifying the plan
schema to be used for this instance. If no plan is specified with this parameter, the
Database Resource Manager is not activated.
Note: The Resource Manager automatically activates if a Scheduler window that
specifies a resource plan opens.

The following example activates the Database Resource Manager and assigns the top
plan as mydb_plan.
RESOURCE_MANAGER_PLAN = mydb_plan

You can also activate or deactivate the Database Resource Manager, or change the
current top plan, using the DBMS_RESOURCE_MANAGER.SWITCH_PLAN package
procedure or the ALTER SYSTEM statement. In this example, the top plan is specified
as mydb_plan:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'mydb_plan';

An error message is returned if the specified plan does not exist in the data dictionary.
To deactivate the Database Resource Manager, issue the following statement:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

The Scheduler can automatically change the Resource Manager plan at Scheduler
window boundaries. In some cases, this may be unacceptable. For example, if you
have an important task to finish, and if you set the Resource Manager plan to give
your task priority, then you expect that the plan will remain the same until you change
it. However, because a Scheduler window could become activated after you have set
your plan, the Resource Manager plan may change while your task is running.
To prevent this situation, you can set the RESOURCE_MANAGER_PLAN parameter to the
name of the plan you want for the system and prepend the name with "FORCE:".
Using the prefix FORCE indicates that the current Resource Manager plan can be
changed only when the database administrator changes the value of the
RESOURCE_MANAGER_PLAN parameter. This restriction can be lifted by reexecuting
the command without prepending the plan name with "FORCE:".
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:mydb_plan';

The DBMS_RESOURCE_MANAGER.SWITCH_PLAN package procedure has a similar
capability.
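For example, assuming the ALLOW_SCHEDULER_PLAN_SWITCHES parameter of
SWITCH_PLAN (see the package documentation for the exact signature), a sketch of
the equivalent call would be:
EXEC DBMS_RESOURCE_MANAGER.SWITCH_PLAN -
  (PLAN_NAME => 'mydb_plan', ALLOW_SCHEDULER_PLAN_SWITCHES => FALSE);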
See Also: Oracle Database PL/SQL Packages and Types Reference for
more information on DBMS_RESOURCE_MANAGER.SWITCH_PLAN.

Putting It All Together: Database Resource Manager Examples
This section provides some examples of resource plan schemas. The following
examples are presented:
■ Multilevel Schema Example
■ Example of Using Several Resource Allocation Methods
■ An Oracle-Supplied Plan

Multilevel Schema Example
The following statements create a multilevel schema as illustrated in Figure 24–3. They
use default plan and resource consumer group methods.
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'bugdb_plan',
COMMENT => 'Resource plan/method for bug users sessions');
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'maildb_plan',
COMMENT => 'Resource plan/method for mail users sessions');
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'mydb_plan',
COMMENT => 'Resource plan/method for bug and mail users sessions');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'Online_group',
COMMENT => 'Resource consumer group/method for online bug users sessions');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'Batch_group',
COMMENT => 'Resource consumer group/method for batch job bug users sessions');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'Bug_Maint_group',
COMMENT => 'Resource consumer group/method for users sessions for bug db maint');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'Users_group',
COMMENT => 'Resource consumer group/method for mail users sessions');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'Postman_group',
COMMENT => 'Resource consumer group/method for mail postman');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'Mail_Maint_group',
COMMENT => 'Resource consumer group/method for users sessions for mail db maint');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'bugdb_plan',
GROUP_OR_SUBPLAN => 'Online_group',
COMMENT => 'online bug users sessions at level 1', CPU_P1 => 80, CPU_P2=> 0,
PARALLEL_DEGREE_LIMIT_P1 => 8);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'bugdb_plan',
GROUP_OR_SUBPLAN => 'Batch_group',
COMMENT => 'batch bug users sessions at level 1', CPU_P1 => 20, CPU_P2 => 0,
PARALLEL_DEGREE_LIMIT_P1 => 2);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'bugdb_plan',
GROUP_OR_SUBPLAN => 'Bug_Maint_group',
COMMENT => 'bug maintenance users sessions at level 2', CPU_P1 => 0, CPU_P2 => 100,
PARALLEL_DEGREE_LIMIT_P1 => 3);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'bugdb_plan',
GROUP_OR_SUBPLAN => 'OTHER_GROUPS',
COMMENT => 'all other users sessions at level 3', CPU_P1 => 0, CPU_P2 => 0,
CPU_P3 => 100);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'maildb_plan',
GROUP_OR_SUBPLAN => 'Postman_group',
COMMENT => 'mail postman at level 1', CPU_P1 => 40, CPU_P2 => 0,
PARALLEL_DEGREE_LIMIT_P1 => 4);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'maildb_plan',
GROUP_OR_SUBPLAN => 'Users_group',
COMMENT => 'mail users sessions at level 2', CPU_P1 => 0, CPU_P2 => 80,
PARALLEL_DEGREE_LIMIT_P1 => 4);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'maildb_plan',
GROUP_OR_SUBPLAN => 'Mail_Maint_group',
COMMENT => 'mail maintenance users sessions at level 2', CPU_P1 => 0, CPU_P2 => 20,
PARALLEL_DEGREE_LIMIT_P1 => 2);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'maildb_plan',
GROUP_OR_SUBPLAN => 'OTHER_GROUPS',
COMMENT => 'all other users sessions at level 3', CPU_P1 => 0, CPU_P2 => 0,
CPU_P3 => 100);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'mydb_plan',
GROUP_OR_SUBPLAN => 'maildb_plan',
COMMENT=> 'all mail users sessions at level 1', CPU_P1 => 30);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'mydb_plan',
GROUP_OR_SUBPLAN => 'bugdb_plan',
COMMENT => 'all bug users sessions at level 1', CPU_P1 => 70);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

The preceding call to VALIDATE_PENDING_AREA is optional because the validation is
implicitly performed in SUBMIT_PENDING_AREA.
Figure 24–3 Multilevel Schema

[The figure shows the plan tree created by the preceding statements: MYDB PLAN
allocates 30% at level 1 to MAILDB PLAN and 70% at level 1 to BUGDB PLAN. Under
MAILDB PLAN, POSTMAN GROUP receives 40% at level 1, USERS GROUP 80% at level 2,
MAIL MAINT GROUP 20% at level 2, and OTHER GROUPS 100% at level 3. Under BUGDB
PLAN, ONLINE GROUP receives 80% at level 1, BATCH GROUP 20% at level 1, BUG
MAINT GROUP 100% at level 2, and OTHER GROUPS 100% at level 3.]
Note that under maildb_plan, because only 40% CPU is allocated to
Postman_group at level 1, there is an implied 60% remaining at level 1. This 60% is
then shared by Users_group and Mail_Maint_group at level 2.

Example of Using Several Resource Allocation Methods
The example presented here could represent a plan for a database supporting a
packaged ERP (Enterprise Resource Planning) or CRM (Customer Relationship
Management) application. The work in such an environment can be highly varied.
There may be a mix of short transactions and quick queries, in combination with
longer running batch jobs that include large parallel queries. The goal is to give
good response time to OLTP (Online Transaction Processing) operations, while
allowing batch jobs to run in parallel.
The plan is summarized in the following table.

Group         CPU Resource    Active Session   Automatic Switching      Maximum Estimated  Undo Pool
              Allocation %    Pool Parameters  Parameters               Execution Time
------------  --------------  ---------------  -----------------------  -----------------  ----------
oltp          Level 1: 80%    --               Switch to group: batch   --                 Size: 200K
                                               Switch time: 3
                                               Use estimate: TRUE
batch         Level 2: 100%   Pool size: 5     --                       Time: 3600         --
                              Timeout: 600
OTHER_GROUPS  Level 3: 100%   --               --                       --                 --

The following statements create the preceding plan, which is named erp_plan:
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'erp_plan',
COMMENT => 'Resource plan/method for ERP Database');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'oltp',
COMMENT => 'Resource consumer group/method for OLTP jobs');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'batch',
COMMENT => 'Resource consumer group/method for BATCH jobs');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'erp_plan',
GROUP_OR_SUBPLAN => 'oltp', COMMENT => 'OLTP sessions', CPU_P1 => 80,
SWITCH_GROUP => 'batch', SWITCH_TIME => 3, SWITCH_ESTIMATE => TRUE,
UNDO_POOL => 200);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'erp_plan',
GROUP_OR_SUBPLAN => 'batch', COMMENT => 'BATCH sessions', CPU_P2 => 100,
ACTIVE_SESS_POOL_P1 => 5, QUEUEING_P1 => 600,
MAX_EST_EXEC_TIME => 3600);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'erp_plan',
GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'mandatory', CPU_P3 => 100);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

An Oracle-Supplied Plan
Oracle Database provides one default resource manager plan, SYSTEM_PLAN, which
gives priority to system sessions. SYSTEM_PLAN is defined as follows:
                          CPU Resource Allocation
Resource Consumer Group   Level 1     Level 2     Level 3
------------------------  ----------  ----------  ----------
SYS_GROUP                 100%        0%          0%
OTHER_GROUPS              0%          100%        0%
LOW_GROUP                 0%          0%          100%

The database-provided groups in this plan are:
■ SYS_GROUP is the initial consumer group for the users SYS and SYSTEM.
■ OTHER_GROUPS applies collectively to all sessions that belong to a consumer
  group that is not part of the currently active plan schema.
■ LOW_GROUP provides a group having lower priority than SYS_GROUP and
  OTHER_GROUPS in this plan. It is up to you to decide which user sessions will be
  part of LOW_GROUP. Switch privilege is granted to PUBLIC for this group.

These groups can be used, or not used, and can be modified or deleted.
You can use this simple database-supplied plan if it is appropriate for your
environment.
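For example, because switch privilege on LOW_GROUP is already granted to PUBLIC,
assigning a user's sessions to it can be as simple as setting the user's initial
consumer group. The following sketch uses a hypothetical user SCOTT:

BEGIN
  -- Make LOW_GROUP the initial consumer group for a (hypothetical) user.
  -- Switch privilege on LOW_GROUP is granted to PUBLIC, so no additional
  -- grant is needed.
  DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP(
    user           => 'SCOTT',
    consumer_group => 'LOW_GROUP');
END;
/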

Monitoring and Tuning the Database Resource Manager
To effectively monitor and tune the Database Resource Manager, you must design a
representative environment. The Database Resource Manager works best in large
production environments in which system utilization is high. If a test places
insufficient load on the system, measured CPU allocations can be very different from
the allocations specified in the active resource plan. This is because the Database
Resource Manager does not attempt to enforce CPU allocation percentage limits as
long as consumer groups are getting the resources they need.


Creating the Environment
To create a representative environment, there must be sufficient load (demand for CPU
resources) to make CPU resources scarce. If the following rules are followed, the test
environment should generate actual (measured) resource allocations that match those
specified in the active resource plan.
1.  Create the minimum number of concurrently running processes required to
    generate sufficient load. This is the larger of:
    ■ Four processes for each consumer group
    ■ 1.5 * (number of processors) for each consumer group. If the result is not
      an integer, round up.
2.  Each and every process must be capable of consuming all of the CPU resources
    allocated to the consumer group in which it runs. Write resource-intensive
    programs that continue to spin no matter what happens. This can be as simple
    as:
BEGIN
  DECLARE
    m NUMBER;
  BEGIN
    FOR i IN 1..100000 LOOP
      FOR j IN 1..100000 LOOP
        /*
         * The following query does a cartesian product without
         * a predicate and takes up significant CPU time.
         */
        SELECT COUNT(*) INTO m FROM v$sysstat, v$system_event;
      END LOOP;
    END LOOP;
  END;
END;
/

Why Is This Necessary to Produce Expected Results?
When every group can secure as much CPU as it demands, the Database
Resource Manager first seeks to maximize system throughput, not to enforce
allocation percentages. For example, consider the following conditions:
■ There is only one process available to run for each consumer group.
■ Each process runs continuously.
■ There are four CPUs.

In this case, the measured CPU allocation to each consumer group will be 25%, no
matter what the allocations specified in the active resource plan.
Another factor underlies the calculation in step 1 of the preceding section:
processor affinity scheduling at the operating system level can distort CPU
allocation on underutilized systems. This is explained in the following paragraphs.
Until the number of concurrently running processes reaches a certain level, typical
operating system scheduling algorithms will prevent full utilization. The Database
Resource Manager controls CPU usage by restricting the number of running processes.
By deciding which processes are allowed to run and for what duration, the Database
Resource Manager controls CPU resource allocation. When a CPU has resources
available, and other processors are fully utilized, the operating system migrates
processes to the underutilized processor, but not immediately.


With processor affinity, the operating system waits (for a time) to migrate processes,
"hoping" that another process will be dispatched to run instead of forcing process
migration from one CPU to another. On a fully loaded system with enough processes
waiting, this strategy will work. In large production environments, processor affinity
increases performance significantly, because invalidating the current CPU cache and
then loading the new one is quite expensive. Since processes have processor affinity on
most platforms, more processes than CPUs for each consumer group must be run.
Otherwise, full system utilization is not possible.

Monitoring Results
Use these views to help you monitor the results of your Resource Manager settings.
■ V$RSRC_CONSUMER_GROUP
■ V$RSRC_SESSION_INFO
■ V$RSRC_PLAN_HISTORY

V$RSRC_CONSUMER_GROUP Use the V$RSRC_CONSUMER_GROUP view to monitor
CPU usage. It provides the cumulative amount of CPU time consumed by all sessions
in each consumer group. It also provides a number of other measures helpful for
tuning.
SQL> SELECT NAME, CONSUMED_CPU_TIME FROM V$RSRC_CONSUMER_GROUP;

NAME                             CONSUMED_CPU_TIME
-------------------------------- -----------------
OTHER_GROUPS                                 14301
TEST_GROUP                                    8802
TEST_GROUP2                                      0

3 rows selected.

V$RSRC_SESSION_INFO Use this view to monitor the status of a particular session.
The view shows how the session has been affected by the Resource Manager. It
provides information such as:
■ The consumer group that the session currently belongs to
■ The consumer group that the session originally belonged to
■ The session attribute that was used to map the session to the consumer group
■ Session state (RUNNING, WAIT_FOR_CPU, QUEUED, and so on)
■ Current and cumulative statistics for metrics, such as CPU consumed, wait times,
  and queued time

SQL> SELECT se.sid sess_id, co.name consumer_group,
     se.state, se.consumed_cpu_time cpu_time, se.cpu_wait_time, se.queued_time
     FROM v$rsrc_session_info se, v$rsrc_consumer_group co
     WHERE se.current_consumer_group_id = co.id;

SESS_ID CONSUMER_GROUP  STATE      CPU_TIME CPU_WAIT_TIME QUEUED_TIME
------- --------------- -------- ---------- ------------- -----------
    145 SYS_GROUP       RUNNING      572217             0           0
    158 OTHER_GROUPS    WAITING         220             0           0
    142 OTHER_GROUPS    WAITING      146453             0           0
    141 OTHER_GROUPS    WAITING      148228             0           0


V$RSRC_PLAN_HISTORY This view shows when resource manager plans were
enabled or disabled on the instance. It helps you understand how resources were
shared among the consumer groups over time. For each entry in the view, the
V$RSRC_CONS_GROUP_HISTORY view has a corresponding entry for each consumer
group in the plan that shows the cumulative statistics for the consumer group.
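For example, a query along the following lines (column names as documented for
this view in Oracle Database Reference) shows which plans were active and when:

SQL> SELECT NAME, START_TIME, END_TIME
     FROM V$RSRC_PLAN_HISTORY
     ORDER BY START_TIME;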

See Also: Oracle Database Reference for information on these and
other Resource Manager views.

Interaction with Operating-System Resource Control
The Oracle Database server expects a static configuration and allocates internal
resources, such as latches, from available resources detected at database startup. The
database might not perform optimally and can become unstable if resource
configuration changes very frequently.

Guidelines for Using Operating-System Resource Control
If you do choose to use operating-system resource control with Oracle Database, then
it should be used judiciously, according to the following guidelines:
1.  Operating-system resource control should not be used concurrently with the
    Database Resource Manager, because neither of them is aware of the other's
    existence. As a result, both the operating system and the Database Resource
    Manager try to control resource allocation in a manner that causes
    unpredictable behavior and instability of Oracle Database.
    ■ If you want to control resource distribution within an instance, use the
      Database Resource Manager and turn off operating-system resource control.
    ■ If you have multiple instances on a node and you want to distribute
      resources among them, use operating-system resource control, not the
      Database Resource Manager.
    Note: Oracle Database currently does not support the use of both
    tools simultaneously. Future releases might allow for their
    interaction on a limited scale.

2.  In an Oracle Database environment, the use of an operating-system resource
    manager, such as Hewlett Packard's Process Resource Manager (PRM) or Sun's
    Solaris Resource Manager (SRM), is supported only if all of the following
    conditions are met:
    ■ Each instance must be assigned to a dedicated operating-system resource
      manager group or managed entity.
    ■ The dedicated entity running all the instance's processes must run at one
      priority (or resource consumption) level.
    ■ Process priority management must not be enabled.


Caution: Management of individual database processes at
different priority levels (for example, using the nice command on
UNIX platforms) is not supported. Severe consequences, including
instance crashes, can result. You can expect similar undesirable
results if operating-system resource control is permitted to manage
memory on which an Oracle Database instance is pinned.
3.  If you choose to use operating-system resource control, make sure you turn off
    the Database Resource Manager. By default, the Database Resource Manager is
    turned off. If it is not, you can turn it off by issuing the following
    statement:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='';

Also remember to reset this parameter in your initialization parameter file, so that
the Database Resource Manager is not activated the next time the database is
started up.

Dynamic Reconfiguration
Tools such as Sun's processor sets and dynamic system domains work well with an
Oracle Database. There is no need to restart an instance if the number of CPUs
changes.
The database dynamically detects any change in the number of available CPUs and
reallocates internal resources. On most platforms, the database automatically adjusts
the value of the CPU_COUNT initialization parameter to the number of available CPUs.

Viewing Database Resource Manager Information
The following table lists views that are associated with Database Resource Manager:
View                             Description
-------------------------------  ---------------------------------------------
DBA_RSRC_CONSUMER_GROUP_PRIVS    DBA view lists all resource consumer groups
USER_RSRC_CONSUMER_GROUP_PRIVS   and the users and roles to which they have
                                 been granted. USER view lists all resource
                                 consumer groups granted to the user.

DBA_RSRC_CONSUMER_GROUPS         Lists all resource consumer groups that exist
                                 in the database.

DBA_RSRC_MANAGER_SYSTEM_PRIVS    DBA view lists all users and roles that have
USER_RSRC_MANAGER_SYSTEM_PRIVS   been granted Database Resource Manager system
                                 privileges. USER view lists all the users that
                                 are granted system privileges for the
                                 DBMS_RESOURCE_MANAGER package.

DBA_RSRC_PLAN_DIRECTIVES         Lists all resource plan directives that exist
                                 in the database.

DBA_RSRC_PLANS                   Lists all resource plans that exist in the
                                 database.

DBA_RSRC_GROUP_MAPPINGS          Lists all of the various mapping pairs for all
                                 of the session attributes.

DBA_RSRC_MAPPING_PRIORITY        Lists the current mapping priority of each
                                 attribute.

DBA_USERS                        DBA view contains information about all users
USER_USERS                       of the database. Specifically, for the
                                 Database Resource Manager, it contains the
                                 initial resource consumer group for the user.
                                 USER view contains information about the
                                 current user, and specifically, for the
                                 Database Resource Manager, it contains the
                                 current user's initial resource consumer
                                 group.

V$ACTIVE_SESS_POOL_MTH           Displays all available active session pool
                                 resource allocation methods.

V$BLOCKING_QUIESCE               Lists all sessions that could potentially
                                 block a quiesce operation. Includes sessions
                                 that are active and not in the SYS_GROUP
                                 consumer group.

V$PARALLEL_DEGREE_LIMIT_MTH      Displays all available parallel degree limit
                                 resource allocation methods.

V$QUEUEING_MTH                   Displays all available queuing resource
                                 allocation methods.

V$RSRC_CONS_GROUP_HISTORY        For each entry in the view
                                 V$RSRC_PLAN_HISTORY, contains an entry for
                                 each consumer group in the plan showing the
                                 cumulative statistics for the consumer group.

V$RSRC_CONSUMER_GROUP            Displays information about active resource
                                 consumer groups. This view can be used for
                                 tuning.

V$RSRC_CONSUMER_GROUP_CPU_MTH    Displays all available CPU resource
                                 allocation methods for resource consumer
                                 groups.

V$RSRC_PLAN                      Displays the names of all currently active
                                 resource plans.

V$RSRC_PLAN_CPU_MTH              Displays all available CPU resource
                                 allocation methods for resource plans.

V$RSRC_PLAN_HISTORY              Shows when Resource Manager plans were
                                 enabled or disabled on the instance. It helps
                                 you understand how resources were shared
                                 among the consumer groups over time.

V$RSRC_SESSION_INFO              Displays Resource Manager statistics for each
                                 session. Shows how the session has been
                                 affected by the Resource Manager. Can be used
                                 for tuning.

V$SESSION                        Lists session information for each current
                                 session. Specifically, lists the name of the
                                 resource consumer group of each current
                                 session.

You can use these views to view privileges and plan schemas, or you can query
them to gather information for tuning the Database Resource Manager. Some
examples of their use follow.

See Also: Oracle Database Reference for detailed information about
the contents of each of these views

Viewing Consumer Groups Granted to Users or Roles
The DBA_RSRC_CONSUMER_GROUP_PRIVS view displays the consumer groups
granted to users or roles. Specifically, it displays the groups to which a user or role is
allowed to belong or be switched. For example, in the view shown below, user scott
can belong to the consumer groups market or sales. He can assign (grant) other
users to the sales group, but not to the market group. Neither group is his
initial consumer group.
SQL> SELECT * FROM DBA_RSRC_CONSUMER_GROUP_PRIVS;

GRANTEE                        GRANTED_GROUP                  GRA INI
------------------------------ ------------------------------ --- ---
PUBLIC                         DEFAULT_CONSUMER_GROUP         YES YES
PUBLIC                         LOW_GROUP                      NO  NO
SCOTT                          MARKET                         NO  NO
SCOTT                          SALES                          YES NO
SYSTEM                         SYS_GROUP                      NO  YES

Scott was granted the ability to switch to these groups using the
DBMS_RESOURCE_MANAGER_PRIVS package.

Viewing Plan Schema Information
This example shows how to use the DBA_RSRC_PLANS view to display all of the
resource plans defined in the database. All of the plans displayed are active,
meaning that they are not staged in the pending area.
SQL> SELECT PLAN, COMMENTS, STATUS FROM DBA_RSRC_PLANS;

PLAN         COMMENTS                                              STATUS
------------ ----------------------------------------------------- ------
SYSTEM_PLAN  Plan to give system sessions priority                 ACTIVE
BUGDB_PLAN   Resource plan/method for bug users sessions           ACTIVE
MAILDB_PLAN  Resource plan/method for mail users sessions          ACTIVE
MYDB_PLAN    Resource plan/method for bug and mail users sessions  ACTIVE
GREAT_BREAD  Great plan for great bread                            ACTIVE
ERP_PLAN     Resource plan/method for ERP Database                 ACTIVE

6 rows selected.

Viewing Current Consumer Groups for Sessions
You can use the V$SESSION view to display the consumer groups that are currently
assigned to sessions.
SQL> SELECT SID, SERIAL#, USERNAME, RESOURCE_CONSUMER_GROUP FROM V$SESSION;

  SID  SERIAL# USERNAME                 RESOURCE_CONSUMER_GROUP
----- -------- ------------------------ --------------------------------
.
.
.
   11      136 SYS                      SYS_GROUP
   13    16570 SCOTT                    SALES

10 rows selected.

Viewing the Currently Active Plans
This example sets mydb_plan, as created by the statements shown earlier in
"Multilevel Schema Example" on page 24-25, as the top level plan. The V$RSRC_PLAN
view is queried to display the currently active plans.
SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = mydb_plan;
System altered.
SQL> SELECT NAME, IS_TOP_PLAN FROM V$RSRC_PLAN;

NAME                             IS_TOP_PLAN
-------------------------------- -----------
MYDB_PLAN                        TRUE
MAILDB_PLAN                      FALSE
BUGDB_PLAN                       FALSE


25
Moving from DBMS_JOB to DBMS_SCHEDULER
Oracle Database provides advanced scheduling capabilities through Oracle Scheduler
(the Scheduler). The Scheduler offers far more functionality than the DBMS_JOB
package, which was the previous Oracle Database job scheduler. This chapter
discusses briefly how you can take statements created with DBMS_JOB and rewrite
them using DBMS_SCHEDULER, which is the package that you use to configure and
operate the Scheduler.
See Also: The documentation on supplied PL/SQL packages that
came with your Oracle9i software for information on the
DBMS_JOB package

This chapter contains the following topic:
■ Moving from DBMS_JOB to DBMS_SCHEDULER

Moving from DBMS_JOB to DBMS_SCHEDULER
This section illustrates some examples of how you can take jobs created with the
DBMS_JOB package and rewrite them using the DBMS_SCHEDULER package.

Creating a Job
An example of creating a job using DBMS_JOB is the following:
VARIABLE jobno NUMBER;
BEGIN
DBMS_JOB.SUBMIT(:jobno, 'INSERT INTO employees VALUES (7935, ''SALLY'',
''DOGAN'', ''sally.dogan@xyzcorp.com'', NULL, SYSDATE, ''AD_PRES'', NULL,
NULL, NULL, NULL);', SYSDATE, 'SYSDATE+1');
COMMIT;
END;
/

An equivalent statement using DBMS_SCHEDULER is the following:
BEGIN
DBMS_SCHEDULER.CREATE_JOB(
   job_name        => 'job1',
   job_type        => 'PLSQL_BLOCK',
   job_action      => 'INSERT INTO employees VALUES (7935, ''SALLY'',
      ''DOGAN'', ''sally.dogan@xyzcorp.com'', NULL, SYSDATE, ''AD_PRES'', NULL,
      NULL, NULL, NULL);',
   start_date      => SYSDATE,
   repeat_interval => 'FREQ = DAILY; INTERVAL = 1');
END;
/

Altering a Job
An example of altering a job using DBMS_JOB is the following:
BEGIN
DBMS_JOB.WHAT(31, 'INSERT INTO employees VALUES (7935, ''TOM'', ''DOGAN'',
''tom.dogan@xyzcorp.com'', NULL, SYSDATE,''AD_PRES'', NULL,
NULL, NULL, NULL);');
COMMIT;
END;
/

This changes the action for JOB1 to insert a different value. An equivalent statement
using DBMS_SCHEDULER is the following:
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(
   name      => 'JOB1',
   attribute => 'job_action',
   value     => 'INSERT INTO employees VALUES (7935, ''TOM'', ''DOGAN'',
      ''tom.dogan@xyzcorp.com'', NULL, SYSDATE, ''AD_PRES'', NULL,
      NULL, NULL, NULL);');
END;
/

Removing a Job from the Job Queue
The following example removes a job using DBMS_JOB, where 14144 is the number of
the job being run:
BEGIN
DBMS_JOB.REMOVE(14144);
COMMIT;
END;
/

Using DBMS_SCHEDULER, you would issue the following statement instead:
BEGIN
DBMS_SCHEDULER.DROP_JOB('myjob1');
END;
/

See Also:
■ Oracle Database PL/SQL Packages and Types Reference for more
  information about the DBMS_SCHEDULER package
■ Chapter 26, "Scheduler Concepts"
■ Chapter 27, "Using the Scheduler"
■ Chapter 28, "Administering the Scheduler"


26
Scheduler Concepts
Oracle Database provides advanced job scheduling capabilities through Oracle
Scheduler (the Scheduler). This chapter introduces you to Scheduler concepts and
includes the following topics:
■ Overview of the Scheduler
■ Basic Scheduler Concepts
■ Advanced Scheduler Concepts
■ Scheduler Architecture
Note: This chapter discusses the use of the Oracle-supplied
DBMS_SCHEDULER package to administer scheduling capabilities.
You can also use Oracle Enterprise Manager (EM) as an easy-to-use
graphical interface for many of the same capabilities.

See the Oracle Database PL/SQL Packages and Types Reference for
DBMS_SCHEDULER syntax and the Oracle Enterprise Manager
documentation set for more information regarding EM.

Overview of the Scheduler
Organizations must manage many routine tasks, and dealing with each one manually
can be daunting. To help you simplify these management tasks, and to offer a rich
set of functionality for complex scheduling needs, Oracle provides a collection of
functions and procedures in the DBMS_SCHEDULER package. Collectively, these
functions are called the Scheduler, and they are callable from any PL/SQL program.
The Scheduler enables database administrators and application developers to control
when and where various tasks take place in the database environment. These tasks can
be time consuming and complicated, so using the Scheduler can help you to improve
the management and planning of these tasks. In addition, by ensuring that many
routine database tasks occur without manual intervention, you can lower operating
costs, implement more reliable routines, minimize human error, and shorten the time
windows needed.
Some typical examples of using the Scheduler are:
■ Database administrators can schedule and monitor recurring database
  maintenance jobs such as backups or nightly data warehousing loads and extracts.
■ Application developers can create programs and program libraries that end users
  can use to create or monitor their own jobs. In addition to typical database jobs,
  you can schedule and monitor jobs that run as part of an application suite.


What Can the Scheduler Do?
The Scheduler provides complex enterprise scheduling functionality, which you can
use to:
■ Schedule job execution based on time or events
  The most basic capability of a job scheduler is the ability to schedule a job to
  run at a particular date and time or when a particular event occurs. The
  Scheduler enables you to reduce your operating costs by enabling you to schedule
  execution of jobs. For example, consider the situation where a patch needs to be
  applied to a database that is in production. To minimize disruptions, this task
  will need to be performed during non-peak hours. This can be easily accomplished
  using the Scheduler. Instead of having IT personnel manually carry out this task
  during non-peak hours, you can instead create a job and schedule it to run at a
  specified time using the Scheduler. See "Creating Jobs" on page 27-2 for more
  information.
■ Schedule job processing in a way that models your business requirements
  The Scheduler enables limited computing resources to be allocated appropriately
  among competing jobs, thus aligning job processing with your business needs.
  This is accomplished in the following ways:
  – Jobs that share common characteristics and behavior can be grouped into
    larger entities called job classes. You can prioritize among the classes by
    controlling the resources allocated to each class. This enables you to ensure
    that your critical jobs have priority and have enough resources to complete.
    For example, if you have a critical project to load a data warehouse, then
    you can combine all the data warehousing jobs into one class and give
    priority to it over other jobs by allocating it a high percentage of the
    available resources.
  – The Scheduler takes prioritization of jobs one step further, by providing you
    the ability to change the prioritization based on a schedule. Because your
    definition of a critical job can change over time, the Scheduler enables you
    to also change the priority among your jobs over that time frame. For
    example, you may consider the jobs to load a data warehouse to be critical
    jobs during non-peak hours but not during peak hours. In such a case, you can
    change the priority among the classes by changing the resource allocated to
    each class. See "Creating Job Classes" on page 27-19 and "Creating Windows"
    on page 27-21 for more information.
  – In addition to running jobs based on a time schedule, the Scheduler enables
    you to start jobs in response to system or business events. Your applications
    can detect events and then signal the Scheduler. Depending on the type of
    signal sent, the Scheduler starts a specific job. An example of using events
    to align your job processing with business needs is to prepare event-based
    jobs for when a transaction fails, such as someone trying to withdraw more
    money from a bank account than is available. In this case, you could run jobs
    that check for suspicious activity in this account.
■ Manage and monitor jobs
  There are multiple states that a job undergoes from its creation to its
  completion. Scheduler activity is logged and information such as the status of
  the job and the last run time of the job can be easily tracked. This information
  is stored in views and can be easily queried using Enterprise Manager or a SQL
  query. These views provide valuable information about jobs and their execution
  that can help you schedule and manage your jobs better. For example, a DBA can
  easily track all jobs that failed for user scott. See "Monitoring and Managing
  the Scheduler" on page 28-6.


■ Execute and manage jobs in a clustered environment
  A cluster is a set of database instances that cooperates to perform the same
  task. Oracle Real Application Clusters (RAC) provides scalability and
  reliability without any change to your applications. The Scheduler fully
  supports execution of jobs in such a clustered environment. To balance the load
  on your system and for better performance, you can also specify the database
  service where you want a job to run. See "Using the Scheduler in Real
  Application Clusters Environments" on page 26-12 for more information.

Basic Scheduler Concepts
The Scheduler offers a modular approach for managing tasks within the Oracle
environment. Advantages of modularity include easier management of your database
environment and reusability of scheduler objects when creating new tasks that are
similar to existing tasks.
In the Scheduler, most components are database objects, just as a table is, which
enables you to manage them with normal Oracle privileges.
The basic elements of the Scheduler are:
■ Programs
■ Schedules
■ Jobs
■ Events
■ Chains

See Also: "Advanced Scheduler Concepts" on page 26-7

Programs
A Scheduler program object is a collection of metadata about what will be run by the
Scheduler. It includes information such as the name of the program object, program
action (for example, a procedure or executable name), program type (for example,
PL/SQL and Java stored procedures or PL/SQL anonymous blocks) and the number
of arguments required for the program.
A program is a separate entity from a job. Jobs run at a certain time or because a
certain event occurred, and invoke a certain program. Jobs can be created that point to
existing program objects, which means that different jobs can use the same program
and run the program at different times and with different settings. Given the right
privileges, different users can thus use the same program without having to redefine
it. This enables the creation of program libraries, where users can select from a list of
existing programs.
Because a Scheduler program can invoke a stored procedure or other executable that
requires arguments, a means is provided to store default values for those arguments as
program attributes.
See "Creating Programs" on page 27-9 for more information about programs, and
"Jobs" on page 26-4 for an overview of jobs.

Schedules
A schedule specifies when and how many times a job is executed. Jobs can be
scheduled for processing at a later time or immediately. For jobs to be executed at a
later time, the user can specify a date and time when the job should start. For jobs that
repeat over a period of time, an end date and time can be specified, which indicates
when the schedule expires.
A schedule can also specify that a job be executed when a certain event occurs, such as
a badge swipe or inventory dropping below a threshold. For more information on
events, see "Events" on page 26-5.
Similar to programs, schedules are objects that can be named and saved in the
database. Users can then share named schedules. For example, the end of a business
quarter may be a common time frame for many jobs. Instead of having to define an
end-of-quarter schedule each time a new job is defined, job creators can point to a
named schedule.
Some examples of schedules you might use to control time-based jobs are:
■ Run on Wednesday, December 26th, 2001 at 2pm
■ Run every Monday, at 8am, starting on December 26th, 2001, and ending on
  January 31st, 2002
■ Run on every working day

See "Creating Schedules" on page 27-13 for more information.

Jobs
A job is a user-defined task that is scheduled to run one or more times. It is a
combination of what needs to be executed (the action) and when (the schedule). Users
with the right privileges can create jobs either by:
■ Specifying as job attributes both the action to perform (for example, an inline
  PL/SQL anonymous block) and the schedule by which to perform the action (for
  example, every day at noon, or when a certain event occurs)
■ Specifying as job attributes the names of an existing program object and an
  existing schedule object

Like programs and schedules, jobs are objects that can be named and saved in the
database.
Job Arguments
You can specify job arguments to customize a named program object. Job arguments
override the default argument values in the program object, and provide values for
those program arguments that have no default value. In addition, job arguments can
provide argument values to an inline action (for example, a stored procedure) that the
job specifies.
A job cannot be enabled until all required program argument values are defined, either
as defaults in a referenced program object, or as job arguments.
A common example of a job is one that runs a set of nightly reports. If different
departments require different reports, you can create a program for this task that can
be shared among different users from different departments. The program action
would be to run a reports script, and the program would have one argument: the
department number. Each user can then create a job that points to this program, and
can specify the department number as a job argument.
See "Creating Jobs" on page 27-2 for more information.


Job Instances
A job instance represents a specific run of a job. Jobs that are scheduled to run only
once will have only one instance. Jobs that have a repeating schedule will have
multiple instances, with each run of the job representing an instance. For example, a
job that is scheduled to run on Tuesday, Oct. 8th 2002 will have one instance. A job that
runs daily at noon for a week has seven instances, one for each time the job runs.
When a job is created, only one entry is added to the Scheduler's job table to represent
the job. Each time the job runs, an entry is added to the job log. Therefore, if you create
a job that has a repeating schedule, you will find one entry in the job views and
multiple entries in the job log. Each job instance log entry provides information about
a particular run, such as the job completion status and the start and end time. Each run
of the job is assigned a unique log id which is used in both the job log and job run
details views.
See "How to View Scheduler Information" on page 28-7 for more information.

Events
An event is a message sent by one application or system process to another to indicate
that some action or occurrence has been detected. An event is raised (sent) by one
application or process, and consumed (received) by one or more applications or
processes.
There are two kinds of events in the Scheduler:
■

Events raised by the Scheduler
The Scheduler can raise an event to indicate state changes that occur within the
Scheduler itself. For example, the Scheduler can raise an event when a job starts,
when a job completes, when a job exceeds its allotted run time, and so on. The
consumer of the event is an application that takes some action in response to the
event.
For example, if due to a high system load, a job is still not started 30 minutes after
the scheduled start time, the Scheduler can raise an event that causes a handler
application to send a notification e-mail to the database administrator.

■

Events raised by an application
An application can raise an event to be consumed by the Scheduler. The Scheduler
reacts to the event by starting a job. You can create a schedule that references an
event instead of containing date, time, and recurrence information. If a job is
assigned to such a schedule (an event schedule), the job runs when the event is
raised. You can also create a job that has no schedule assigned and that directly
references an event as the means to start the job.
For example, when an inventory tracking system notices that the inventory has
gone below a certain threshold, it can raise an event that starts an inventory
replenishment job.

The Scheduler uses Oracle Streams Advanced Queuing to raise and consume events.
When raising a job state change event, the Scheduler enqueues a message onto a
default event queue. Applications subscribe to this queue, dequeue event messages,
and take appropriate action. When raising an event to notify the Scheduler to start a
job, an application enqueues a message onto a queue that was specified when setting
up the job.


See Also:
■ Oracle Streams Advanced Queuing User's Guide and Reference for
  more information on Advanced Queuing
■ "Using Events" on page 27-30 for more information on Events

Chains
A chain is a grouping of programs that are linked together for a single, combined
objective. An example of a chain might be "run program A and then program B, but
only run program C if programs A and B complete successfully, otherwise run
program D." A Scheduler job can point to a chain instead of pointing to a single
program object.
Each position within a chain of interdependent programs is referred to as a step.
Typically, after an initial set of chain steps has started, the execution of successive steps
depends on the completion of one or more previous steps. Each step can point to one
of the following:
■ A program
■ Another chain (a nested chain)
■ An event
  A step that points to an event waits until the specified event is raised. If the
  event occurs, the step completes successfully.

Multiple steps in the chain can invoke the same program or nested chain.
In a sense, a chain resembles a decision tree, with many possible paths for selecting
which steps run and when. A list of rules is used to decide which actions to perform at
any particular stage. An example of a rule is "If step 2 fails or step 3 fails, wait an hour
and then start step 4."
While a job pointing to a chain is running, the current state of all steps of the running
chain can be monitored.
A typical situation where you might want to create a chain is to combine the different
programs necessary for a successful financial transaction.
See "Using Chains" on page 27-37 for more information.

How Programs, Jobs, and Schedules are Related
To define what is executed and when, you assign relationships among programs, jobs,
and schedules. Figure 26–1 illustrates examples of such relationships.


Figure 26–1 Relationships Among Programs, Jobs, and Schedules

[The figure shows programs P1 through P10, jobs J1 through J24, and schedules S1
through S4, with lines indicating which jobs point to which programs and
schedules.]

To understand Figure 26–1, consider a situation where tables are being analyzed. In
this example, P1 would be a program to analyze a table using the DBMS_STATS
package. The program has an input parameter for the table name. Two jobs, J1 and
J2, both point to the same program, but each supplies a different table name.
Additionally, schedule S1 could specify a run time of 2:00 a.m. every day. The end
result would be that the two tables named in J1 and J2 are analyzed daily at 2:00 a.m.
Note that J4 points to no other entity, so it is self-contained with all relevant
information defined in the job itself. P2, P9 and S2 illustrate that you can leave a
program or schedule unassigned if you want. You could, for example, create a
program that calculates a year-end inventory and temporarily leave it unassigned to
any job.

Advanced Scheduler Concepts
Many Scheduler capabilities enable database administrators to control more advanced
aspects of scheduling. Typically, these topics are not as important for application
developers.
This section discusses the following advanced topics:
■ Job Classes
■ Windows
■ Window Groups

Job Classes
Job classes provide a way to:
■ Assign the same set of attribute values to member jobs
Each job class specifies a set of attributes, such as logging level. When you assign a
job to a job class, the job inherits those attributes. For example, you can specify the
same policy for purging log entries for all payroll jobs.

■ Set service affinity for member jobs
You can set the service attribute of a job class to a desired database service
name. This determines the instances in a Real Application Clusters environment


that run the member jobs, and optionally the system resources that are assigned to
member jobs. See "Service Affinity when Using the Scheduler" on page 26-12 for
more information.
■ Set resource allocation for member jobs
Job classes provide the link between the Database Resource Manager and the
Scheduler, because each job class can specify a resource consumer group as an
attribute. Member jobs then belong to the specified consumer group, and are
assigned resources according to settings in the current resource plan.
Alternatively, you can leave the resource_consumer_group attribute NULL
and set the service attribute of a job class to a desired database service name.
That service can in turn be mapped to a resource consumer group.
See Chapter 24, "Using the Database Resource Manager" for more information on
mapping services to consumer groups.

■ Group jobs for prioritization
Within the same job class, you can assign priority values of 1-5 to individual jobs
so that if two jobs in the class are scheduled to start at the same time, the one with
the higher priority takes precedence. This ensures that you do not have a less
important job preventing the timely completion of a more important one.
If two jobs have the same assigned priority value, the job with the earlier start date
takes precedence. If no priority is assigned to a job, its priority defaults to 3.
Note: Job priorities are used only to prioritize among jobs in the
same class. There is no guarantee that a high priority job in class A
will be started before a low priority job in class B, even if they share
the same schedule. Prioritizing among jobs of different classes
depends on the current resource plan and on the designated resource
consumer group or service name of each job class.
When defining job classes, you should try to classify jobs by functionality. Consider
dividing jobs into groups that access similar data, such as marketing, production,
sales, finance, and human resources.
Some of the restrictions to keep in mind are:
■ A job must be part of exactly one class. When you create a job, you can specify
  which class the job is part of. If you do not specify a class, the job
  automatically becomes part of the class DEFAULT_JOB_CLASS.
■ Dropping a class while there are still jobs in that class results in an error.
  You can force a class to be dropped even if there are still jobs that are members
  of that class, but all jobs referring to that class are then automatically
  disabled and assigned to the class DEFAULT_JOB_CLASS. Jobs belonging to the
  dropped class that are already running continue to run under class settings
  determined at the start of the job.
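As a sketch (the class name and consumer group are hypothetical), a job class
tied to a resource consumer group might be created as follows:

BEGIN
  -- Member jobs inherit the class attributes and run under the
  -- specified consumer group
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name          => 'finance_jobs',
    resource_consumer_group => 'FINANCE_GROUP',
    comments                => 'Jobs for the finance department');
END;
/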

Windows
You create windows to change resource allocation among jobs during various time
periods of the day, week, and so on. A window is represented by an interval of time
with a well-defined beginning and end, such as "from 12am-6am".


Windows work with job classes to control resource allocation. Each window specifies
the resource plan to activate when the window opens (becomes active), and each job
class specifies a resource consumer group or specifies a database service, which can
map to a consumer group. A job that runs within a window therefore has resources
allocated to it according to the consumer group of its job class and the resource plan of
the window.
Figure 26–2 shows a workday that includes two windows. In this configuration, jobs
that belong to the job class that links to Consumer Group 1 get more resources in the
morning than in the afternoon. The opposite is true for jobs in the job class that links to
Consumer Group 2.
Figure 26–2 Windows help define the resources that are allocated to jobs

[The figure shows a 24-hour timeline. Window 1 spans 6 a.m. to 11 a.m. and
activates Resource Plan A (Consumer Group 1: 90%, Consumer Group 2: 10%).
Window 2 spans 2 p.m. to 8 p.m. and activates Resource Plan B (Consumer
Group 1: 10%, Consumer Group 2: 90%).]

See Chapter 24, "Using the Database Resource Manager" for more information on
resource plans and consumer groups.
You can assign a priority to each window. If windows overlap, the window with the
highest priority is chosen over other windows with lower priorities. The Scheduler
automatically opens and closes windows as window start times and end times come
and go.
A job can name a window in its schedule_name attribute. The Scheduler then starts
the job when the window opens. If a window is already open, and a new job is created
that points to that window, the job is not started until the next time the window opens.
See "Creating Windows" on page 27-21 for examples of creating and using windows.
Note: If necessary, you can temporarily block windows from
switching the current resource plan. For more information, see
"Enabling the Database Resource Manager" on page 24-24, or the
discussion of the DBMS_RESOURCE_MANAGER.SWITCH_PLAN
package procedure in Oracle Database PL/SQL Packages and Types
Reference.
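As a sketch (the window name and timings are illustrative), a nightly window
that activates the mydb_plan resource plan might be created as follows:

BEGIN
  -- Opens daily at midnight for six hours and activates mydb_plan
  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'nightly_window',
    resource_plan   => 'mydb_plan',
    repeat_interval => 'FREQ=DAILY; BYHOUR=0',
    duration        => INTERVAL '6' HOUR,
    window_priority => 'LOW',
    comments        => 'Midnight to 6 a.m. window');
END;
/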

Window Groups
You can group windows for ease of use in scheduling jobs. If a job must run during
multiple time periods throughout the day, week, and so on, you can create a window
for each time period, and then add the windows to a window group. You can then set
the schedule_name attribute of the job to the name of this window group, and the
job executes during all the time periods specified in the window group.
For example, if you had a window called "Weekends" and a window called
"Weeknights," you could add these two windows to a window group called

Scheduler Concepts 26-9

Scheduler Architecture

"Downtime." The data warehousing staff could then create a job to run queries
according to this Downtime window group—on weeknights and weekends—when
the queries could be assigned a high percentage of available resources.
If a window in a window group is already open, and a new job is created that points to
that window group, the job is not started until the next window in the window group
opens.
See "Creating Window Groups" on page 27-28 for examples of creating window
groups.
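As a sketch matching the preceding example (assuming the weeknights and weekends
windows already exist), the window group might be created as follows:

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW_GROUP(
    group_name  => 'downtime',
    window_list => 'weeknights, weekends',
    comments    => 'Weeknight and weekend windows');
END;
/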

Scheduler Architecture
This section discusses the Scheduler's architecture, and describes:
■ The Job Table
■ The Job Coordinator
■ How Jobs Execute
■ Job Slaves
■ Using the Scheduler in Real Application Clusters Environments

Figure 26–3 illustrates how jobs are handled by the database.
Figure 26–3 Scheduler Components

[The figure shows a client connected to the database. Inside the database, the
job coordinator reads the job table (Job1 through Job6) and passes jobs to a pool
of job slaves (JS).]

The Job Table
The job table is a container for all the jobs, with one table per database. The job table
stores information for all jobs such as the owner name or the level of logging. You can
find this information in the *_SCHEDULER_JOBS views.
Jobs are database objects, and can therefore accumulate and take up too much space.
To avoid this, job objects are automatically dropped by default after completion. This
behavior is controlled by the auto_drop job attribute.
See "How to View Scheduler Information" on page 28-7 for the available job views and
administration.


The Job Coordinator
The job coordinator is a background process (cjqNNN) that is automatically started
when jobs must be run, or windows must be opened. It is automatically brought down
after a sustained period of Scheduler inactivity. The job coordinator:
■ Controls and spawns the job slaves
■ Queries the job table
■ Picks up jobs from the job table on a regular basis and places them in a memory
  cache. This improves performance by avoiding going to the disk
■ Takes jobs from the memory cache and passes them to job slaves for execution
■ Cleans up the job slave pool when slaves are no longer needed
■ Goes to sleep when no jobs are scheduled
■ Wakes up when a new job is about to be executed or a job was created using the
  CREATE_JOB procedure
■ Upon database startup after an abnormal database shutdown, recovers any jobs
  that were running

You do not need to set when the job coordinator checks the job table; the system
chooses the time frame automatically.
One job coordinator is used per instance. This is also the case in RAC environments.
See Also: "How to View Scheduler Information" on page 28-7 for
job coordinator administration and "Using the Scheduler in Real
Application Clusters Environments" on page 26-12 for RAC
information

How Jobs Execute
When a job is picked for processing, the job slave:
1.  Gathers all the metadata needed to run the job, for example, program
    arguments and privilege information.
2.  Starts a database session as the owner of the job, starts a transaction, and
    then starts executing the job.
3.  Once the job is complete, the slave commits and ends the transaction.
4.  Closes the session.

Job Slaves
Job slaves actually execute the jobs you submit. They are awakened by the job
coordinator when it is time for a job to be executed. They gather metadata to run the
job from the job table.
When a job is done, the slaves:
■ Reschedule the job if required
■ Update the state in the job table to reflect whether the job has completed or is
  scheduled to run again
■ Insert an entry into the job log table
■ Update the run count, and if necessary, failure count and retry count
■ Clean up
■ Look for new work (if none, they go to sleep)

The Scheduler dynamically sizes the slave pool as required.

Using the Scheduler in Real Application Clusters Environments
In a Real Application Clusters (RAC) environment, the Scheduler uses one job table for
each database and one job coordinator for each instance. The job coordinators
communicate with each other to keep information current. The Scheduler attempts to
balance the load of the jobs of a job class across all available instances when the job
class has no service affinity, or across the instances assigned to a particular service
when the job class does have service affinity.
Figure 26–4 illustrates a typical RAC architecture, with each instance's job coordinator
exchanging information with the others.
Figure 26–4 RAC Architecture and the Scheduler

[The figure shows three instances, each with its own job coordinator and pool of
job slaves. The job coordinators exchange information with one another, and all
three read the single job table (Job1 through Job6) in the database.]

Service Affinity when Using the Scheduler
The Scheduler enables you to specify the database service under which a job should be
run (service affinity). This ensures better availability than instance affinity because it
guarantees that other nodes can be dynamically assigned to the service if an instance
goes down. Instance affinity does not have this capability, so, when an instance goes
down, none of the jobs with an affinity to that instance will be able to run until the
instance comes back up. Figure 26–5 illustrates a typical example of how services and
instances could be used.


Figure 26–5 Service Affinity and the Scheduler

[The figure shows one database with eight instances grouped under five services:
Service A (instance 1), Service B (instances 2 and 3), Service C (instances 4
and 5), Service D (instances 6 and 7), and Service E (instance 8).]

In Figure 26–5, you could change the properties of the services and the Scheduler will
automatically recognize the change.
Each job class can specify a database service. If a service is not specified, the job class
belongs to an internal service that is guaranteed to be mapped to every running
instance.


27
Using the Scheduler
Oracle Database provides database job capabilities through Oracle Scheduler (the
Scheduler). This chapter explains how to use the various Scheduler components, and
discusses the following topics:
■ Scheduler Objects and Their Naming
■ Using Jobs
■ Using Programs
■ Using Schedules
■ Using Job Classes
■ Using Windows
■ Using Window Groups
■ Using Events
■ Using Chains
■ Allocating Resources Among Jobs
Note: This chapter describes how to use the DBMS_SCHEDULER
package to work with Scheduler components. You can accomplish the
same tasks using Oracle Enterprise Manager.

Scheduler Objects and Their Naming
Each Scheduler object is a complete database schema object of the form
[schema.]name. Scheduler objects exactly follow the naming rules for database
objects and share the SQL namespace with other database objects.
When names for Scheduler objects are used in the DBMS_SCHEDULER package, SQL
naming rules continue to be followed. By default, Scheduler object names are
uppercase unless they are surrounded by double quotes. For example, when creating a
job, job_name => 'my_job' is the same as job_name => 'My_Job' and
job_name => 'MY_JOB', but not the same as job_name => '"my_job"'. These
naming rules are also followed in those cases where comma-delimited lists of
Scheduler object names are used within the DBMS_SCHEDULER package.
See Oracle Database SQL Reference for details regarding naming objects.


Using Jobs
A job is the combination of a schedule and a program, along with any additional
arguments required by the program. This section introduces you to basic job tasks, and
discusses the following topics:
■ Job Tasks and Their Procedures
■ Creating Jobs
■ Copying Jobs
■ Altering Jobs
■ Running Jobs
■ Stopping Jobs
■ Dropping Jobs
■ Disabling Jobs
■ Enabling Jobs

See Also: "Jobs" on page 26-4 for an overview of jobs.

Job Tasks and Their Procedures
Table 27–1 illustrates common job tasks and their appropriate procedures and
privileges:
Table 27–1    Job Tasks and Their Procedures

Task           Procedure       Privilege Needed
-------------  --------------  ---------------------------------------
Create a job   CREATE_JOB      CREATE JOB or CREATE ANY JOB
Alter a job    SET_ATTRIBUTE   ALTER or CREATE ANY JOB or be the owner
Run a job      RUN_JOB         ALTER or CREATE ANY JOB or be the owner
Copy a job     COPY_JOB        ALTER or CREATE ANY JOB or be the owner
Drop a job     DROP_JOB        ALTER or CREATE ANY JOB or be the owner
Stop a job     STOP_JOB        ALTER or CREATE ANY JOB or be the owner
Disable a job  DISABLE         ALTER or CREATE ANY JOB or be the owner
Enable a job   ENABLE          ALTER or CREATE ANY JOB or be the owner

See "How to Manage Scheduler Privileges" on page 28-14 for further information
regarding privileges.

Creating Jobs
You create jobs using the CREATE_JOB procedure or Enterprise Manager. When
creating a job, you specify an action, a schedule, and other attributes. For example, the
following statement creates a job called update_sales, which calls a stored
procedure in the OPS schema that updates a sales summary table:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name        => 'update_sales',
   job_type        => 'STORED_PROCEDURE',
   job_action      => 'OPS.SALES_PKG.UPDATE_SALES_SUMMARY',
   start_date      => '28-APR-03 07.00.00 PM Australia/Sydney',
   repeat_interval => 'FREQ=DAILY;INTERVAL=2', /* every other day */
   end_date        => '20-NOV-04 07.00.00 PM Australia/Sydney',
   job_class       => 'batch_update_jobs',
   comments        => 'My new job');
END;
/

You can create a job in another schema by specifying schema.job_name. The creator
of a job is, therefore, not necessarily the job owner. The job owner is the user in whose
schema the job is created, while the job creator is the user who is creating the job. Jobs
are executed with the privileges of the schema in which the job is created. The NLS
environment of the job when it runs is that which was present at the time the job was
created.
After a job is created, it can be queried using the *_SCHEDULER_JOBS views. Jobs are
created disabled by default and need to be enabled to run.
Jobs are set to be automatically dropped by default after they complete. Setting the
auto_drop attribute to FALSE causes the job to persist. Note that repeating jobs are
not auto-dropped unless the job end date passes, the maximum number of runs
(max_runs) is reached, or the maximum number of failures is reached
(max_failures).

Setting Job Attributes
You can set job attributes when creating the job, or you can set them after the job is
created by using the SET_ATTRIBUTE procedure or Enterprise Manager. (Some job
attributes can be set only with the SET_ATTRIBUTE procedure or Enterprise
Manager.)
See Oracle Database PL/SQL Packages and Types Reference for information about the
SET_ATTRIBUTE procedure and about the various job attributes.

Setting Job Arguments
After creating a job, you may need to set job arguments if:
■ The inline job action is a stored procedure or other executable that requires
  arguments
■ The job references a named program object and you want to override one or more
  default program arguments
■ The job references a named program object and one or more of the program
  arguments were not assigned a default value

To set job arguments, use the SET_JOB_ARGUMENT_VALUE or
SET_JOB_ANYDATA_VALUE procedures or Enterprise Manager.
SET_JOB_ANYDATA_VALUE is used for complex data types that must be encapsulated
in an ANYDATA object.
An example of a job that might need arguments is one that starts a reporting program
that requires a start date and end date. The following code example sets the end date
job argument, which is the second argument expected by the reporting program:
BEGIN
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name          => 'ops_reports',
   argument_position => 2,
   argument_value    => '12-DEC-03');
END;
/

If you use this procedure on an argument whose value has already been set, it will be
overwritten. You can set argument values using either the argument name or the
argument position. To use argument name, the job must reference a named program
object, and the argument must have been assigned a name in the program object. If a
program is inlined, only setting by position is supported. Arguments are not
supported for jobs of type plsql_block.
To remove a value that has been set, use the RESET_JOB_ARGUMENT procedure. This
procedure can be used for both regular and ANYDATA arguments.
See Oracle Database PL/SQL Packages and Types Reference for information about the
SET_JOB_ARGUMENT_VALUE and SET_JOB_ANYDATA_VALUE procedures.

Ways of Creating Jobs
You create a job using the CREATE_JOB procedure or Enterprise Manager. Because this
procedure is overloaded, there are several different ways of using it. In addition to
inlining a job during the job creation, you can also create a job that points to a named
program and schedule. This is discussed in the following sections:
■ Creating Jobs Using a Named Program
■ Creating Jobs Using a Named Schedule
■ Creating Jobs Using a Named Program and Schedule

Creating Jobs Using a Named Program You can also create a job by pointing to a named
program instead of inlining its action. To create a job using a named program, you
specify the value for program_name in the CREATE_JOB procedure when creating the
job and do not specify the values for job_type, job_action, and
number_of_arguments.
To use an existing program when creating a job, the owner of the job must be the
owner of the program or have EXECUTE privileges on it. An example of using the
CREATE_JOB procedure with a named program is the following statement, which
creates a job called my_new_job1:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name        => 'my_new_job1',
   program_name    => 'my_saved_program',
   repeat_interval => 'FREQ=DAILY;BYHOUR=12',
   comments        => 'Daily at noon');
END;
/

Creating Jobs Using a Named Schedule You can also create a job by pointing to a named
schedule instead of inlining its schedule. To create a job using a named schedule, you
specify the value for schedule_name in the CREATE_JOB procedure when creating
the job and do not specify the values for start_date, repeat_interval, and
end_date.
You can use any named schedule to create a job because all schedules are created with
access to PUBLIC. An example of using the CREATE_JOB procedure with a named
schedule is the following statement, which creates a job called my_new_job2:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name      => 'my_new_job2',
   job_type      => 'PLSQL_BLOCK',
   job_action    => 'BEGIN SALES_PKG.UPDATE_SALES_SUMMARY; END;',
   schedule_name => 'my_saved_schedule');
END;
/

Creating Jobs Using a Named Program and Schedule A job can also be created by pointing
to both a named program and schedule. An example of using the CREATE_JOB
procedure with a named program and schedule is the following statement, which
creates a new job called my_new_job3 based on the existing program
my_saved_program1 and the existing schedule my_saved_schedule1:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name      => 'my_new_job3',
   program_name  => 'my_saved_program1',
   schedule_name => 'my_saved_schedule1');
END;
/

Copying Jobs
You copy a job using the COPY_JOB procedure or Enterprise Manager. This call copies
all the attributes of the old job to the new job, except that the new job is created
disabled and is given the new name that you specify.
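
For example, the following sketch (job names are illustrative) copies the
update_sales job to a new, disabled job:

BEGIN
DBMS_SCHEDULER.COPY_JOB (
   old_job => 'update_sales',         -- existing job to copy
   new_job => 'update_sales_copy');   -- new job, created disabled
END;
/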
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the COPY_JOB procedure.

Altering Jobs
You alter a job using the SET_ATTRIBUTE procedure or Enterprise Manager. All jobs
can be altered, and, with the exception of the job name, all job attributes can be
changed. If there is a running instance of the job when the SET_ATTRIBUTE call is
made, it is not affected by the call. The change is only seen in future runs of the job.
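
For example, the following sketch changes the repeat interval of the update_sales
job created earlier (the new value is illustrative):

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
   name      => 'update_sales',
   attribute => 'repeat_interval',
   value     => 'FREQ=DAILY;INTERVAL=1');   -- now runs every day
END;
/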
In general, you should not alter a job that was automatically created for you by the
database. Jobs that were created by the database have the column SYSTEM set to TRUE
in job views. The attributes of a job are available in the *_SCHEDULER_JOBS views.
It is perfectly valid for running jobs to alter their own job attributes using the
SET_ATTRIBUTE procedure; however, these changes are not picked up until the
next scheduled run of the job.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the SET_ATTRIBUTE procedure and "Configuring the Scheduler" on page 28-1.

Running Jobs
Normally, jobs are executed asynchronously. The user creates a job and the API
immediately returns and indicates whether the creation was successful. To find out
whether the job succeeded, the user has to query the job table or the job log. While this
is the expected behavior, for special cases, the database also allows users to run jobs
synchronously.


Running Jobs Asynchronously
You can schedule a job to run asynchronously based on the schedule defined when the
job is created. In this case, the job is submitted to the job coordinator and is picked up
by the job slaves for execution.

Running Jobs Synchronously
After a job has been created, you can run the job synchronously using the RUN_JOB
procedure with the use_current_session argument set to TRUE. In this case, the
job will run within the user session that invoked the RUN_JOB call instead of being
picked up by the coordinator and being executed by a job slave.
You can use the RUN_JOB procedure to test a job or to run it outside of its specified
schedule. Running a job with RUN_JOB with the use_current_session argument
set to TRUE does not change the count for failure_count and run_count for the
job. The job run will, however, be reflected in the job log. Runtime errors generated by
the job are passed back to the invoker of RUN_JOB.
When using RUN_JOB to run a job that points to a chain, use_current_session
must be set to FALSE.
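
For example, the following sketch runs the update_sales job immediately in the
current session for testing:

BEGIN
DBMS_SCHEDULER.RUN_JOB (
   job_name            => 'update_sales',
   use_current_session => TRUE);   -- run in this session, not by a job slave
END;
/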

Job Run Environment
Jobs are run with the privileges that are granted to the job owner directly or indirectly
through default logon roles. External OS roles are not supported. Given sufficient
privileges, users can create jobs in other users' schemas. The creator and the owner of
the job can, therefore, be different. For example, if user jim has the CREATE ANY JOB
privilege and creates a job in the scott schema, then the job will run with the
privileges of scott.
The NLS environment of the session in which the job was created is saved and is used
when the job is being executed. To alter the NLS environment in which a job runs, a
job must be created in a session with different NLS settings.

Running External Jobs
An external job is a job that runs outside the database. All external jobs run as a
low-privileged guest user, as determined by the database administrator while
configuring external job support. Because the executable runs as a low-privileged
guest account, you should verify that it has access to the necessary files
and resources. Most, but not all, platforms support external jobs. For platforms that do
not support external jobs, creating or setting the attribute of a job or a program to type
EXECUTABLE returns an error. See your operating system-specific documentation for
more information.
For an external job, job_type is specified as EXECUTABLE (if using named programs,
the corresponding program_type would be EXECUTABLE). job_action (or the
corresponding program_action if using named programs) is the full, OS-dependent
path of the desired external executable, plus optionally any command line arguments.
For example, /usr/local/bin/perl or C:\perl\bin\perl. The program or job
arguments for type EXECUTABLE must be a string type such as CHAR, VARCHAR2, or
VARCHAR.
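
As a sketch (the job name and executable path are illustrative), an external job might
be created as follows:

BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name        => 'my_external_job',               -- illustrative name
   job_type        => 'EXECUTABLE',
   job_action      => '/usr/local/bin/backup_script',  -- illustrative OS path
   repeat_interval => 'FREQ=DAILY;BYHOUR=23',
   comments        => 'OS backup script, nightly at 11 PM');
END;
/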
Some additional post-installation steps might be required to ensure that external jobs
run as a low-privileged guest user. See your operating system-specific documentation
for any post-installation configuration steps.
Note: The owner of an external job must have the CREATE EXTERNAL JOB system
privilege before the job can be enabled or run.


Capturing Standard Error Output for External Jobs When an external job writes output to
stderr, the first 200 bytes are recorded in the additional_info column of the
*_SCHEDULER_JOB_RUN_DETAILS views. The information is in the following
name/value pair format:
STANDARD_ERROR="text"
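
A minimal query sketch for retrieving this output from the DBA-level view:

SELECT job_name, additional_info
FROM   dba_scheduler_job_run_details
WHERE  additional_info LIKE '%STANDARD_ERROR%';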

Stopping Jobs
You stop one or more running jobs using the STOP_JOB procedure or Enterprise
Manager. STOP_JOB accepts a comma-delimited list of jobs and job classes. If a job
class is supplied, all running jobs in the job class are stopped. For example, the
following statement stops job job1 and all jobs in the job class dw_jobs.
BEGIN
DBMS_SCHEDULER.STOP_JOB('job1, sys.dw_jobs');
END;
/

All instances of the designated jobs are stopped. After stopping a job, the state of a
one-time job is set to STOPPED, and the state of a repeating job is set to SCHEDULED
(because the next run of the job is scheduled). In addition, an entry is made in the job
log with OPERATION set to 'STOPPED', and ADDITIONAL_INFO set to
'REASON="Stop job called by user: username"'.
By default, the Scheduler tries to gracefully stop a job using an interrupt mechanism.
This method gives control back to the slave process, which can collect statistics of the
job run. If the force option is set to TRUE, the job is abruptly terminated and certain
runtime statistics might not be available for the job run.
Stopping a job that is running a chain automatically stops all running steps (by calling
STOP_JOB with the force option set to TRUE on each step).
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the STOP_JOB procedure.
Caution: When a job is stopped, only the current transaction is
rolled back. This can cause data inconsistency.

Stopping External Jobs
The Scheduler offers implementors of external jobs a mechanism to gracefully clean up
after their external jobs when STOP_JOB is called with force set to FALSE. On Unix,
this is done by sending a SIGTERM signal to the process launched by the external job
agent. The implementor of the external job is expected to trap the SIGTERM in an
interrupt handler, clean up whatever work the job has done, and exit. On Windows,
STOP_JOB with force set to FALSE is supported only on Windows XP, Windows
2003, and later operating systems. On those platforms, the process launched by the
external job agent is a console process. To stop it, the Scheduler sends a CTRL-BREAK
to the process. The CTRL-BREAK can be handled by registering a handler with the
SetConsoleCtrlHandler() routine.

Dropping Jobs
You drop one or more jobs using the DROP_JOB procedure or Enterprise Manager.
DROP_JOB accepts a comma-delimited list of jobs and job classes. If a job class is
supplied, all jobs in the job class are dropped, although the job class itself is not
dropped.

Dropping a job results in the job being removed from the job table, its metadata being
removed, and it no longer being visible in the *_SCHEDULER_JOBS views. Therefore,
no more runs of the job will be executed.
If an instance of the job is running at the time of the call, the call results in an error. You
can still drop the job by setting the force option in the call to TRUE. Setting the force
option to TRUE first attempts to stop the running job instance (by calling STOP_JOB
with the force option set to FALSE), and then drops the job. By default, force is set
to FALSE.
For example, the following statement drops jobs job1 and job3, and all jobs in job
classes jobclass1 and jobclass2:
BEGIN
DBMS_SCHEDULER.DROP_JOB ('job1, job3, sys.jobclass1, sys.jobclass2');
END;
/
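
As a sketch of the force option described above (the job name is illustrative), the
following call first stops any running instance of the job and then drops it:

BEGIN
DBMS_SCHEDULER.DROP_JOB (
   job_name => 'job1',
   force    => TRUE);   -- stop any running instance, then drop
END;
/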

The DROP_JOB_CLASS procedure should be used to drop a job class. See "Dropping
Job Classes" on page 27-19 for information about how to drop job classes.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DROP_JOB procedure.

Disabling Jobs
You disable one or more jobs using the DISABLE procedure or Enterprise Manager. A
job can also become disabled for other reasons. For example, a job will be disabled
when the job class it belongs to is dropped. A job is also disabled if either the program
or the schedule that it points to is dropped. Note that if the program or schedule that
the job points to is disabled, the job will not be disabled and will therefore result in an
error when the Scheduler tries to run the job.
Disabling a job means that, although the metadata of the job is there, it should not run
and the job coordinator will not pick up these jobs for processing. When a job is
disabled, its state in the job table is changed to disabled.
When a job is disabled with the force option set to FALSE and the job is currently
running, an error is returned. When force is set to TRUE, the job is disabled, but the
currently running instance is allowed to finish.
You can also disable several jobs in one call by providing a comma-delimited list of job
names or job class names to the DISABLE procedure call. For example, the following
statement combines jobs with job classes:
BEGIN
DBMS_SCHEDULER.DISABLE('job1, job2, job3, sys.jobclass1, sys.jobclass2');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DISABLE procedure.

Enabling Jobs
You enable one or more jobs by using the ENABLE procedure or Enterprise Manager.
The effect of using this procedure is that the job will now be picked up by the job
coordinator for processing. Jobs are created disabled by default, so you need to enable
them before they can run. When a job is enabled, a validity check is performed. If the
check fails, the job is not enabled.


You can enable several jobs in one call by providing a comma-delimited list of job
names or job class names to the ENABLE procedure call. For example, the following
statement combines jobs with job classes:
BEGIN
DBMS_SCHEDULER.ENABLE ('job1, job2, job3,
sys.jobclass1, sys.jobclass2, sys.jobclass3');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the ENABLE procedure.

Using Programs
A program is a collection of metadata about a particular task. This section introduces
you to basic program tasks, and discusses the following topics:
■ Program Tasks and Their Procedures
■ Creating Programs
■ Altering Programs
■ Dropping Programs
■ Disabling Programs
■ Enabling Programs

See Also: "Programs" on page 26-3 for an overview of programs.

Program Tasks and Their Procedures
Table 27–2 illustrates common program tasks and their appropriate procedures and
privileges:
Table 27–2    Program Tasks and Their Procedures

Task                Procedure        Privilege Needed
Create a program    CREATE_PROGRAM   CREATE JOB or CREATE ANY JOB
Alter a program     SET_ATTRIBUTE    ALTER or CREATE ANY JOB or be the owner
Drop a program      DROP_PROGRAM     ALTER or CREATE ANY JOB or be the owner
Disable a program   DISABLE          ALTER or CREATE ANY JOB or be the owner
Enable a program    ENABLE           ALTER or CREATE ANY JOB or be the owner

See "How to Manage Scheduler Privileges" on page 28-14 for further information
regarding privileges.

Creating Programs
You create programs by using the CREATE_PROGRAM procedure or Enterprise
Manager. By default, programs are created in the schema of the creator. To create a
program in another user's schema, you need to qualify the program name with the
schema name. For other users to use your programs, they must have EXECUTE
privileges on the program; therefore, once a program has been created, you must
grant the EXECUTE privilege on it. An example of creating a program is the following,
which creates a program called my_program1:
BEGIN
DBMS_SCHEDULER.CREATE_PROGRAM (
   program_name   => 'my_program1',
   program_action => '/usr/local/bin/date',
   program_type   => 'EXECUTABLE',
   comments       => 'My comments here');
END;
/

Defining Program Arguments
After creating a program, you can define a name or default value for each program
argument. If no default value is defined for a program argument, the job that
references the program must supply an argument value. (The job can also override a
default value.) All argument values must be defined before the job can be enabled.
To set program argument values, use the DEFINE_PROGRAM_ARGUMENT or
DEFINE_ANYDATA_ARGUMENT procedures. DEFINE_ANYDATA_ARGUMENT is used for
complex types that must be encapsulated in an ANYDATA object. An example of a
program that might need arguments is one that starts a reporting program that
requires a start date and end date. The following code example sets the end date
argument, which is the second argument expected by the reporting program. The
example also assigns a name to the argument so that you can refer to the argument by
name (instead of position) from other package procedures, including
SET_JOB_ANYDATA_VALUE and SET_JOB_ARGUMENT_VALUE.
BEGIN
DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT (
   program_name      => 'operations_reporting',
   argument_position => 2,
   argument_name     => 'end_date',
   argument_type     => 'VARCHAR2',
   default_value     => '12-DEC-03');
END;
/

You can drop a program argument either by name or by position, as in the following:
BEGIN
DBMS_SCHEDULER.DROP_PROGRAM_ARGUMENT (
   program_name      => 'operations_reporting',
   argument_position => 2);
DBMS_SCHEDULER.DROP_PROGRAM_ARGUMENT (
   program_name      => 'operations_reporting',
   argument_name     => 'end_date');
END;
/

In some special cases, program logic is dependent on the Scheduler environment. The
Scheduler has some predefined metadata arguments that can be passed as an
argument to the program for this purpose. For example, for some jobs whose schedule
is a window name, it is useful to know how much longer the window will be open
when the job is started. This is possible by defining the window end time as a
metadata argument to the program.


If a program needs access to specific job metadata, you can define a special metadata
argument using the DEFINE_METADATA_ARGUMENT procedure, so values will be
filled in by the Scheduler when the program is executed.
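
A minimal sketch, reusing the operations_reporting program from the earlier
examples and assuming that its third argument is to receive the window end time:

BEGIN
DBMS_SCHEDULER.DEFINE_METADATA_ARGUMENT (
   program_name       => 'operations_reporting',
   metadata_attribute => 'window_end',   -- the Scheduler fills in the window end time
   argument_position  => 3);             -- assumed position; adjust to your program
END;
/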
See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the DEFINE_PROGRAM_ARGUMENT,
DEFINE_ANYDATA_ARGUMENT, and
DEFINE_METADATA_ARGUMENT procedures

Altering Programs
You can use Enterprise Manager or the DBMS_SCHEDULER.SET_ATTRIBUTE and
DBMS_SCHEDULER.SET_ATTRIBUTE_NULL package procedures to alter programs.
See Oracle Database PL/SQL Packages and Types Reference for more information on the
DBMS_SCHEDULER package procedures.
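
For example, the following sketch changes the action of the my_program1 program
created in the preceding section (the new path is illustrative):

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
   name      => 'my_program1',
   attribute => 'program_action',
   value     => '/usr/local/bin/uptime');   -- illustrative new executable path
END;
/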
The following are instructions for altering a program with Enterprise Manager:
1. From the Administration page, click Programs.
   The Scheduler Programs page appears. It displays existing programs.
2. Click in the Select column to select a program, and then click Edit.
   The Edit Program page appears.
3. Next to the Enabled heading, select Yes or No.
4. In the Description field, change any comments.
5. From the Type drop-down list, select one of the following:
   ■ PLSQL_BLOCK
     A Source field appears. Enter or alter the PL/SQL code in this field.
   ■ STORED_PROCEDURE
     A Procedure Name field appears. If the field contains a stored procedure
     name, click View Procedure to view or edit the stored procedure. If the field is
     blank, or if you want to change stored procedures, click Select Procedure. A
     Select Procedure page then appears. Select a stored procedure and then click
     Select to return to the Edit Program page. (Click Help at the top of the page
     for help with using the Select Procedure page.)
     With a procedure name selected, a list of arguments appears under the
     Arguments heading on the Edit Program page. Optionally enter default values
     for one or more arguments.
   ■ EXECUTABLE
     An Executable Name field appears. Enter the full path of the executable.
     Under the Arguments heading, edit or delete arguments, or click Add
     Another Row to add an argument.
6. Click Apply to save your changes.

If any currently running jobs use the program that you altered, they continue to run
with the program as defined before the alter operation.

Dropping Programs
You drop one or more programs using the DROP_PROGRAM procedure or Enterprise
Manager.


Running jobs that point to the program are not affected by the DROP_PROGRAM call,
and are allowed to continue. Any arguments that pertain to the program are also
dropped when the program is dropped. You can drop several programs in one call by
providing a comma-delimited list of program names. For example, the following
statement drops three programs:
BEGIN
DBMS_SCHEDULER.DROP_PROGRAM('program1, program2, program3');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DROP_PROGRAM procedure.

Disabling Programs
You disable one or more programs using the DISABLE procedure or Enterprise
Manager. When a program is disabled, the status is changed to disabled. A disabled
program implies that, although the metadata is still there, jobs that point to this
program cannot run.
Running jobs that point to the program are not affected by the DISABLE call, and are
allowed to continue. Any argument that pertains to the program will not be affected
when the program is disabled.
A program can also become disabled for other reasons; for example, a program is
disabled if a program argument is dropped, or if number_of_arguments is changed
so that not all arguments are defined.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DISABLE procedure.

Enabling Programs
You enable one or more programs using the ENABLE procedure or Enterprise Manager.
When a program is enabled, the enabled flag is set to TRUE. Programs are created
disabled by default, therefore, you have to enable them before you can enable jobs that
point to them. Before programs are enabled, validity checks are performed to ensure
that the action is valid and that all arguments are defined.
You can enable several programs in one call by providing a comma-delimited list of
program names to the ENABLE procedure call. For example, the following statement
enables three programs:
BEGIN
DBMS_SCHEDULER.ENABLE('program1, program2, program3');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the ENABLE procedure.

Using Schedules
A schedule defines when a job should be run or when a window should open.
Schedules can be shared among users by creating and saving them as objects in the
database.
This section introduces you to basic schedule tasks, and discusses the following topics:


■ Schedule Tasks and Their Procedures
■ Creating Schedules
■ Altering Schedules
■ Dropping Schedules
■ Setting the Repeat Interval

See Also: "Schedules" on page 26-3 for an overview of schedules.

Schedule Tasks and Their Procedures
Table 27–3 illustrates common schedule tasks and the procedures you use to handle
them.
Table 27–3    Schedule Tasks and Their Procedures

Task                Procedure         Privilege Needed
Create a schedule   CREATE_SCHEDULE   CREATE JOB or CREATE ANY JOB
Alter a schedule    SET_ATTRIBUTE     ALTER or CREATE ANY JOB or be the owner
Drop a schedule     DROP_SCHEDULE     ALTER or CREATE ANY JOB or be the owner

See "How to Manage Scheduler Privileges" on page 28-14 for further information
regarding privileges.

Creating Schedules
You create schedules by using the CREATE_SCHEDULE procedure or Enterprise
Manager. Schedules are created in the schema of the user creating the schedule, and
are enabled when first created. You can create a schedule in another user's schema.
Once a schedule has been created, it can be used by other users. The schedule is
created with access to PUBLIC. Therefore, there is no need to explicitly grant access to
the schedule. An example of creating a schedule is the following statement:
BEGIN
DBMS_SCHEDULER.CREATE_SCHEDULE (
   schedule_name   => 'my_stats_schedule',
   start_date      => SYSTIMESTAMP,
   end_date        => SYSTIMESTAMP + INTERVAL '30' day,
   repeat_interval => 'FREQ=HOURLY; INTERVAL=4',
   comments        => 'Every 4 hours');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the CREATE_SCHEDULE procedure.

Altering Schedules
You alter a schedule by using the SET_ATTRIBUTE procedure or Enterprise Manager.
Altering a schedule changes the definition of the schedule. With the exception of
schedule name, all attributes can be changed. The attributes of a schedule are available
in the *_SCHEDULER_SCHEDULES views.
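
For example, the following sketch changes the repeat interval of the
my_stats_schedule schedule created earlier:

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
   name      => 'my_stats_schedule',
   attribute => 'repeat_interval',
   value     => 'FREQ=HOURLY; INTERVAL=2');   -- now every 2 hours
END;
/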


If a schedule is altered, the change does not affect running jobs and open windows that
use the schedule. The change takes effect the next time the job runs or the
window opens.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the SET_ATTRIBUTE procedure.

Dropping Schedules
You drop a schedule using the DROP_SCHEDULE procedure or Enterprise Manager.
This procedure call will delete the schedule object from the database.
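
For example, the following sketch drops the my_stats_schedule schedule created
earlier:

BEGIN
DBMS_SCHEDULER.DROP_SCHEDULE ('my_stats_schedule');
END;
/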
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DROP_SCHEDULE procedure.

Setting the Repeat Interval
You control when and how often a job repeats by setting the repeat_interval
attribute of the job itself or of the named schedule that the job references. You can set
repeat_interval with DBMS_SCHEDULER package procedures or with Enterprise
Manager.
The result of evaluating the repeat_interval is a set of timestamps. The Scheduler
runs the job at each timestamp. Note that the start date from the job or schedule also
helps determine the resulting set of timestamps. (See Oracle Database PL/SQL Packages
and Types Reference for more information about repeat_interval evaluation.) If no
value for repeat_interval is specified, the job runs only once at the specified start
date.
Immediately after a job is started, the repeat_interval is evaluated to determine
the next scheduled execution time of the job. It is possible that the next scheduled
execution time arrives while the job is still running. A new instance of the job,
however, will not be started until the current one completes.
There are two ways to specify the repeat interval:
■ Using the Scheduler Calendaring Syntax
■ Using a PL/SQL Expression

Using the Scheduler Calendaring Syntax
The primary method of setting how often a job will repeat is by setting the
repeat_interval attribute with a Scheduler calendaring expression. See Oracle
Database PL/SQL Packages and Types Reference for a detailed description of the
calendaring syntax for repeat_interval as well as the CREATE_SCHEDULE
procedure.
Examples of Calendaring Expressions
The following examples illustrate simple repeat intervals. For simplicity, it is assumed
that there is no contribution to the evaluation results by the start date.
Run every Friday. (All three examples are equivalent.)
FREQ=DAILY; BYDAY=FRI;
FREQ=WEEKLY; BYDAY=FRI;
FREQ=YEARLY; BYDAY=FRI;

Run every other Friday.
FREQ=WEEKLY; INTERVAL=2; BYDAY=FRI;


Run on the last day of every month.
FREQ=MONTHLY; BYMONTHDAY=-1;

Run on the next to last day of every month.
FREQ=MONTHLY; BYMONTHDAY=-2;

Run on March 10th. (Both examples are equivalent.)
FREQ=YEARLY; BYMONTH=MAR; BYMONTHDAY=10;
FREQ=YEARLY; BYDATE=0310;

Run every 10 days.
FREQ=DAILY; INTERVAL=10;

Run daily at 4, 5, and 6PM.
FREQ=DAILY; BYHOUR=16,17,18;

Run on the 15th day of every other month.
FREQ=MONTHLY; INTERVAL=2; BYMONTHDAY=15;

Run on the 29th day of every month.
FREQ=MONTHLY; BYMONTHDAY=29;

Run on the second Wednesday of each month.
FREQ=MONTHLY; BYDAY=2WED;

Run on the last Friday of the year.
FREQ=YEARLY; BYDAY=-1FRI;

Run every 50 hours.
FREQ=HOURLY; INTERVAL=50;

Run on the last day of every other month.
FREQ=MONTHLY; INTERVAL=2; BYMONTHDAY=-1;

Run hourly for the first three days of every month.
FREQ=HOURLY; BYMONTHDAY=1,2,3;

Here are some more complex repeat intervals:
Run on the last workday of every month (assuming that workdays are Monday
through Friday).
FREQ=MONTHLY; BYDAY=MON,TUE,WED,THU,FRI; BYSETPOS=-1

Run on the last workday of every month, excluding company holidays. (This example
references an existing named schedule called Company_Holidays.)
FREQ=MONTHLY; BYDAY=MON,TUE,WED,THU,FRI; EXCLUDE=Company_Holidays; BYSETPOS=-1

Run at noon every Friday and on company holidays.
FREQ=YEARLY;BYDAY=FRI;BYHOUR=12;INCLUDE=Company_Holidays


Run on these three holidays: July 4th, Memorial Day, and Labor Day. (This example
references three existing named schedules—JUL4, MEM, and LAB—where each defines
a single date corresponding to a holiday.)
JUL4,MEM,LAB

Examples of Calendaring Expression Evaluation
A repeat interval of "FREQ=MINUTELY;INTERVAL=2;BYHOUR=17;
BYMINUTE=2,4,5,50,51,7;" with a start date of 28-FEB-2004 23:00:00 will generate
the following schedule:
SUN 29-FEB-2004 17:02:00
SUN 29-FEB-2004 17:04:00
SUN 29-FEB-2004 17:50:00
MON 01-MAR-2004 17:02:00
MON 01-MAR-2004 17:04:00
MON 01-MAR-2004 17:50:00
...

A repeat interval of "FREQ=MONTHLY;BYMONTHDAY=15,-1" with a start date of
29-DEC-2003 9:00:00 will generate the following schedule:
WED 31-DEC-2003 09:00:00
THU 15-JAN-2004 09:00:00
SAT 31-JAN-2004 09:00:00
SUN 15-FEB-2004 09:00:00
SUN 29-FEB-2004 09:00:00
MON 15-MAR-2004 09:00:00
WED 31-MAR-2004 09:00:00
...

A repeat interval of "FREQ=MONTHLY;" with a start date of 29-DEC-2003 9:00:00 will
generate the following schedule. (Note that because there is no BYMONTHDAY clause,
the day of month is retrieved from the start date.)
MON 29-DEC-2003 09:00:00
THU 29-JAN-2004 09:00:00
SUN 29-FEB-2004 09:00:00
MON 29-MAR-2004 09:00:00
...

Example of Using a Calendaring Expression
As an example of using the calendaring syntax, consider the following statement:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name        => 'scott.my_job1',
   start_date      => '15-JUL-04 01.00.00 AM Europe/Warsaw',
   repeat_interval => 'FREQ=MINUTELY; INTERVAL=30;',
   end_date        => '15-SEP-04 01.00.00 AM Europe/Warsaw',
   comments        => 'My comments here');
END;
/

This creates my_job1 in scott. The job runs for the first time on July 15th and then
every 30 minutes until September 15.


Using a PL/SQL Expression
When you need more complicated capabilities than the calendaring syntax provides,
you can use PL/SQL expressions. You cannot, however, use PL/SQL expressions for
windows or in named schedules. The PL/SQL expression must evaluate to a date or a
timestamp. Other than this restriction, there are no limitations, so with sufficient
programming, you can create every possible repeat interval. As an example, consider
the following statement:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name        => 'scott.my_job2',
   start_date      => '15-JUL-04 01.00.00 AM Europe/Warsaw',
   repeat_interval => 'SYSTIMESTAMP + INTERVAL ''30'' MINUTE',
   end_date        => '15-SEP-04 01.00.00 AM Europe/Warsaw',
   comments        => 'My comments here');
END;
/

This creates my_job2 in scott. The job runs for the first time on July 15th and then
every 30 minutes until September 15. It repeats every 30 minutes because
repeat_interval is set to SYSTIMESTAMP + INTERVAL '30' MINUTE, which
evaluates to a date 30 minutes into the future.

Differences Between PL/SQL Expression and Calendaring Syntax Behavior
The following are important differences in behavior between a calendaring expression
and PL/SQL repeat interval:
■ Start date
  Using the calendaring syntax, the start date is a reference date only. This means
  that the schedule is valid as of this date. It does not mean that the job will start on
  the start date.
  Using a PL/SQL expression, the start date represents the actual time that the job
  will start executing for the first time.
■ Next run time
  Using the calendaring syntax, the next time the job will run is fixed.
  Using a PL/SQL expression, the next time the job will run depends on the actual
  start time of the current run of the job. As an example of the difference, if a job
  started at 2:00 PM and its schedule was to repeat every 2 hours, then, if the repeat
  interval was specified with the calendaring syntax, it would repeat at 4:00, 6:00, and
  so on. If PL/SQL was used and the job started at 2:10, then the job would repeat at
  4:10, and if the next run actually started at 4:11, then the subsequent run would be
  at 6:11.

To illustrate these two points, consider a situation where you have a start date of
15-July-2003 1:45:00 and you want it to repeat every two hours. A calendar expression
of "FREQ=HOURLY; INTERVAL=2; BYMINUTE=0;" will generate the following
schedule:
TUE 15-JUL-2003 03:00:00
TUE 15-JUL-2003 05:00:00
TUE 15-JUL-2003 07:00:00
TUE 15-JUL-2003 09:00:00
TUE 15-JUL-2003 11:00:00
...


Note that the calendar expression repeats every two hours on the hour.
A PL/SQL expression of "SYSTIMESTAMP + interval '2' hour", however,
might have a run time of the following:
TUE 15-JUL-2003 01:45:00
TUE 15-JUL-2003 03:45:05
TUE 15-JUL-2003 05:45:09
TUE 15-JUL-2003 07:45:14
TUE 15-JUL-2003 09:45:20
...

Repeat Intervals and Daylight Savings
For repeating jobs, the next time a job is scheduled to run is stored in a timestamp with
time zone column. When using the calendaring syntax, the time zone is retrieved from
start_date. For more information on what happens when start_date is not
specified, see Oracle Database PL/SQL Packages and Types Reference.
In the case of repeat intervals that are based on PL/SQL expressions, the time zone is
part of the timestamp that is returned by the PL/SQL expression. In both cases, it is
important to use region names (for example, "Europe/Istanbul") instead of
absolute time zone offsets such as "+2:00". Only when a time zone is specified as a
region name does the Scheduler follow the daylight savings adjustments that apply to
that region.

Using Job Classes
Job classes provide a way to group jobs for resource allocation and prioritization, and
a way to easily assign a set of attribute values to member jobs.
There is a default job class that is created with the database. If you create a job without
specifying a job class, the job will be assigned to this default job class
(DEFAULT_JOB_CLASS). The default job class has the EXECUTE privilege granted to
PUBLIC so any database user who has the privilege to create a job can create a job in
the default job class.
This section introduces you to basic job class tasks, and discusses the following topics:
■ Job Class Tasks and Their Procedures
■ Creating Job Classes
■ Altering Job Classes
■ Dropping Job Classes

See Also: "Job Classes" on page 26-7 for an overview of job classes.

Job Class Tasks and Their Procedures
Table 27–4 illustrates common job class tasks and their appropriate procedures and
privileges:
Table 27–4    Job Class Tasks and Their Procedures

Task                 Procedure          Privilege Needed
Create a job class   CREATE_JOB_CLASS   MANAGE SCHEDULER
Alter a job class    SET_ATTRIBUTE      MANAGE SCHEDULER
Drop a job class     DROP_JOB_CLASS     MANAGE SCHEDULER


See "How to Manage Scheduler Privileges" on page 28-14 for further information
regarding privileges.

Creating Job Classes
You create a job class using the CREATE_JOB_CLASS procedure or Enterprise
Manager. For example, the following statement creates a job class for all finance jobs:
BEGIN
DBMS_SCHEDULER.CREATE_JOB_CLASS (
   job_class_name          => 'finance_jobs',
   resource_consumer_group => 'finance_group');
END;
/

To query job classes, use the *_SCHEDULER_JOB_CLASSES views.
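
For example, a minimal query sketch using the DBA-level view:

SELECT job_class_name, resource_consumer_group
FROM   dba_scheduler_job_classes;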
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the CREATE_JOB_CLASS procedure, and "Configuring the Scheduler" on page 28-1 for
examples of creating job classes.

Altering Job Classes
You alter a job class by using the SET_ATTRIBUTE procedure or Enterprise Manager.
Other than the job class name, all the attributes of a job class can be altered. The
attributes of a job class are available in the *_SCHEDULER_JOB_CLASSES views.
When a job class is altered, running jobs that belong to the class are not affected. The
change only takes effect for jobs that have not started running yet.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the SET_ATTRIBUTE procedure and "Configuring the Scheduler" on page 28-1.

Dropping Job Classes
You drop one or more job classes using the DROP_JOB_CLASS procedure or Enterprise
Manager. Dropping a job class means that all the metadata about the job class is
removed from the database.
You can drop several job classes in one call by providing a comma-delimited list of job
class names to the DROP_JOB_CLASS procedure call. For example, the following
statement drops three job classes:
BEGIN
DBMS_SCHEDULER.DROP_JOB_CLASS('jobclass1, jobclass2, jobclass3');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DROP_JOB_CLASS procedure.

Using Windows
Windows provide a way to automatically activate different resource plans at different
times. Running jobs can then see a change in the resources that are allocated to them
when there is a change in resource plan.
The key attributes of a window are its:
■ Schedule
  This controls when the window is in effect.
■ Duration
  This controls how long the window is open.
■ Resource plan
  This names the resource plan that activates when the window opens.

Only one window can be in effect at any given time. Windows belong to the SYS
schema.
All window activity is logged in the *_SCHEDULER_WINDOW_LOG views, otherwise
known as the window logs. See "Window Logs" on page 28-12 for examples of
window logging.
This section introduces you to basic window tasks, and discusses the following topics:
■ Window Tasks and Their Procedures
■ Creating Windows
■ Altering Windows
■ Opening Windows
■ Closing Windows
■ Dropping Windows
■ Disabling Windows
■ Enabling Windows
■ Overlapping Windows

See Also: "Windows" on page 26-8 for an overview of windows.

Window Tasks and Their Procedures
Table 27–5 illustrates common window tasks and the procedures you use to handle
them.
Table 27–5    Window Tasks and Their Procedures

Task               Procedure       Privilege Needed
Create a window    CREATE_WINDOW   MANAGE SCHEDULER
Open a window      OPEN_WINDOW     MANAGE SCHEDULER
Close a window     CLOSE_WINDOW    MANAGE SCHEDULER
Alter a window     SET_ATTRIBUTE   MANAGE SCHEDULER
Drop a window      DROP_WINDOW     MANAGE SCHEDULER
Disable a window   DISABLE         MANAGE SCHEDULER
Enable a window    ENABLE          MANAGE SCHEDULER

See "How to Manage Scheduler Privileges" on page 28-14 for further information
regarding privileges.


Creating Windows
You can use Enterprise Manager or the DBMS_SCHEDULER.CREATE_WINDOW package
procedure to create windows. There is one difference between these methods, other
than the fact that one uses PL/SQL, and the other a graphical user interface. When
using the package procedure, you can leave the resource_plan parameter NULL. In
this case, when the window opens, the current plan remains in effect. See Oracle
Database PL/SQL Packages and Types Reference and "Configuring the Scheduler" on
page 28-1 for more information.
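
As a sketch (the window name, resource plan, and schedule values are illustrative), a
window might be created with the package procedure as follows:

BEGIN
DBMS_SCHEDULER.CREATE_WINDOW (
   window_name     => 'nightly_window',        -- illustrative name
   resource_plan   => 'my_resource_plan',      -- illustrative plan; NULL keeps the current plan
   repeat_interval => 'FREQ=DAILY;BYHOUR=22',  -- opens daily at 10 PM
   duration        => INTERVAL '4' HOUR,
   window_priority => 'LOW',
   comments        => 'Nightly batch window');
END;
/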
The following are instructions for creating a window with Enterprise Manager:
1. From the Administration page, click Windows under the Database Scheduler
   heading.
   The Scheduler Windows page appears. It displays existing windows.
2. Click Create.
   The Create Window page appears.
3. Enter a name for the window.
4. Choose a resource plan from the Resource Plan drop-down list, or create a
   resource plan.
   You can use the default, INTERNAL_PLAN. To view the contents of an existing
   resource plan, click View Resource Plan. If you choose to create a new resource
   plan, click Create Resource Plan and follow those steps.
5. Select a priority, Low or High.
6. Enter optional comments in the Description field.
7. Under the Schedule heading, do one of the following:
   ■ To create a schedule, select Use a calendar.
   ■ To use an existing schedule, select Use an existing schedule. A schedule with
     its information is displayed. If you want a different schedule, click Select
     Schedule, in which case the Select Schedule page is displayed. Select one of
     the schedules listed under the Results heading, or enter the schema and a text
     string for the schedule name, and then click Go. The schedule with the search
     string in its name is returned. Select the desired schedule and click Select.

     Note: The Scheduler does not check if there is already a window
     defined for that schedule. Therefore, this may result in windows that
     overlap. Also, using a named schedule that has a PL/SQL expression
     as its repeat interval is not supported for windows.

     Click OK, and skip the remaining steps in this procedure.
8. If you want to change the time zone, click the flashlight icon next to the Time Zone
   field and follow the steps.
9. Under the Repeating drop-down list, choose how often you want the window to
   repeat. If you choose a value other than Do Not Repeat, the page changes so that
   you can enter the interval and enter a starting time.
10. Under the Available to Start heading, select whether you want the schedule to
    start Immediately or Later. If you choose Later, enter a date.
11. Under the Duration heading, enter how long the window is to remain open.
12. Under the Not Available After heading, optionally specify an end date.
13. Click OK to save the window.

Altering Windows
You alter a window using the SET_ATTRIBUTE procedure or Enterprise Manager.
With the exception of WINDOW_NAME, all the attributes of a window can be changed
when it is altered. The attributes of a window are available in the
*_SCHEDULER_WINDOWS views.
When a window is altered, it does not affect an active window. The changes only take
effect the next time the window opens.
All windows can be altered. If you alter a window that is disabled, it will remain
disabled after it is altered. An enabled window will be automatically disabled, altered,
and then reenabled, if the validity checks performed during the enable process are
successful.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the SET_ATTRIBUTE procedure and "Configuring the Scheduler" on page 28-1.

Opening Windows
When a window opens, the Scheduler switches to the resource plan that has been
associated with it during its creation. If there are jobs running when the window
opens, the resources allocated to them might change due to the switch in resource
plan.
There are two ways a window can open:
■ According to the window's schedule
■ Manually, using the OPEN_WINDOW procedure
This procedure opens the window independent of its schedule. This window will
open and the resource plan associated with it will take effect immediately. Only an
enabled window can be manually opened.
In the OPEN_WINDOW procedure, you can specify the time interval that the
window should be open for, using the duration attribute. The duration is of type
interval day to second. If the duration is not specified, then the window will be
opened for the regular duration as stored with the window.
Opening a window manually has no impact on regular scheduled runs of the
window.
When a window that was manually opened closes, the rules about overlapping
windows are applied to determine which other window, if any, should be opened at
that time.
You can force a window to open even if there is one already open by setting the
force option to TRUE in the OPEN_WINDOW call or Enterprise Manager.
When the force option is set to TRUE, the Scheduler automatically closes any
window that is open at that time, even if it has a higher priority. For the duration
of this manually opened window, the Scheduler does not open any other
scheduled windows even if they have a higher priority. You can open a window
that is already open. In this case, the window stays open for the duration specified
in the call, from the time the OPEN_WINDOW command was issued.


Consider an example to illustrate this. window1 was created with a duration of
four hours. It has now been open for two hours. If at this point you reopen
window1 using the OPEN_WINDOW call and do not specify a duration, then
window1 will be open for another four hours, because it was created with that
duration. If you specify a duration of 30 minutes, the window will close in 30
minutes.
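
For example, the following sketch forces window1 open for 30 minutes:

BEGIN
DBMS_SCHEDULER.OPEN_WINDOW (
   window_name => 'sys.window1',
   duration    => INTERVAL '30' MINUTE,
   force       => TRUE);   -- close any currently open window first
END;
/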
When a window opens, an entry is made in the window log.
A window can fail to switch resource plans if the current resource plan has been
manually switched using the ALTER SYSTEM statement with the FORCE option, or
using the DBMS_RESOURCE_MANAGER.SWITCH_PLAN package procedure with the
allow_scheduler_plan_switches argument set to FALSE. In this case, the failure
to switch resource plans is written to the window log.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the OPEN_WINDOW procedure and the DBMS_RESOURCE_MANAGER.SWITCH_PLAN
procedure.

Closing Windows
There are two ways a window can close:
■ Based on a schedule
  A window will close based on the schedule defined at creation time.
■ Manually, using the CLOSE_WINDOW procedure
  The CLOSE_WINDOW procedure will close an open window prematurely.

When a window is closed, it is no longer in effect, and the Scheduler switches the
resource plan to the one that was in effect outside the window or, in the case of
overlapping windows, to another window. If you try to close a window that does not
exist or is not open, an error is generated.
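
For example, the following sketch closes window1 prematurely:

BEGIN
DBMS_SCHEDULER.CLOSE_WINDOW ('sys.window1');
END;
/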
A job that is running will not close when the window it is running in closes unless the
attribute stop_on_window_close was set to TRUE when the job was created.
However, the resources allocated to the job may change because the resource plan may
change.
When a running job has a window group as its schedule, the job will not be stopped
when its window is closed if another window that is also a member of the same
window group then becomes active. This is the case even if the job was created with
the attribute stop_on_window_close set to TRUE.
When a window is closed, an entry will be added to the window log
DBA_SCHEDULER_WINDOW_LOG.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the CLOSE_WINDOW procedure.

Dropping Windows
You drop one or more windows using the DROP_WINDOW procedure or Enterprise
Manager. When a window is dropped, all metadata about the window is removed
from the *_SCHEDULER_WINDOWS views. All references to the window are removed
from window groups.
You can drop several windows in one call by providing a comma-delimited list of
window names or window group names to the DROP_WINDOW procedure. For
example, the following statement drops both windows and window groups:

BEGIN
DBMS_SCHEDULER.DROP_WINDOW ('window1, window2,
window3, windowgroup1, windowgroup2');
END;
/

Note that if a window group name is provided, then the windows in the window
group are dropped, but the window group is not dropped. To drop the window group,
you must use the DROP_WINDOW_GROUP procedure.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DROP_WINDOW procedure.

Disabling Windows
You disable one or more windows using the DISABLE procedure or Enterprise
Manager. This means that the window will not open; however, the metadata of the
window is still there, so it can be reenabled. Because the DISABLE procedure is used
for several Scheduler objects, window names must be preceded by SYS when you
disable windows.
A window can also become disabled for other reasons. For example, a window will
become disabled when it is at the end of its schedule. Also, if a window points to a
schedule that no longer exists, it becomes disabled.
If there are jobs that have the window as their schedule, you will not be able to disable
the window unless you set force to TRUE in the procedure call. By default, force is
set to FALSE. When the window is disabled, those jobs that have the window as their
schedule will not be disabled.
You can disable several windows in one call by providing a comma-delimited list of
window names or window group names to the DISABLE procedure call. For example,
the following statement disables both windows and window groups:
BEGIN
DBMS_SCHEDULER.DISABLE ('sys.window1, sys.window2,
sys.window3, sys.windowgroup1, sys.windowgroup2');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DISABLE procedure.

Enabling Windows
You enable one or more windows using the ENABLE procedure or Enterprise Manager.
An enabled window is one that can be opened. Windows are, by default, created
enabled. When a window is enabled using the ENABLE procedure, a validity check is
performed and only if this is successful will the window be enabled. When a window
is enabled, an entry is made in the window log table. Because the ENABLE procedure is
used for several Scheduler objects, window names must be preceded by SYS when you
enable windows.
You can enable several windows in one call by providing a comma-delimited list of
window names. For example, the following statement enables three windows:
BEGIN
DBMS_SCHEDULER.ENABLE ('sys.window1, sys.window2, sys.window3');
END;
/


See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the ENABLE procedure.

Overlapping Windows
Although Oracle does not recommend it, windows can overlap. Because only one
window can be active at one time, the following rules are used to determine which
window will be active when windows overlap:
■ If windows of the same priority overlap, the window that is active will stay open.
  However, if the overlap is with a window of higher priority, the lower priority
  window will close and the window with the higher priority will open. Jobs
  currently running that had a schedule naming the low priority window may be
  stopped, depending on the behavior you assigned when you created the job.
■ If at the end of a window there are multiple windows defined, the window with
  the highest priority will open. If all windows have the same priority, the window
  that has the highest percentage of time remaining will open.
■ An open window that is dropped will be automatically closed. At that point, the
  previous rule applies.

Whenever two windows overlap, an entry is written in the Scheduler log.

Examples of Overlapping Windows
Figure 27–1 illustrates a typical example of how windows, resource plans, and
priorities might be determined for a 24 hour schedule. In the following two examples,
assume that Window1 has been associated with Resource Plan1, Window2 with
Resource Plan2, and so on.
Figure 27–1 Windows and Resource Plans (Example 1)
[Figure: a 24-hour timeline. Window 1 (low priority, Resource Plan 1) spans 4 a.m. to 11 a.m. and is interrupted by Window 3 (high priority, Resource Plan 3) from 6 a.m. to 9 a.m.; Window 2 (high priority, Resource Plan 2) spans 2 p.m. to 8 p.m.; Window 4 (high priority, Resource Plan 4) spans 3 p.m. to 10 p.m.; the default resource plan is in effect at all other times.]

In Figure 27–1, the following occurs:
■ From 12AM to 4AM
  No windows are open, so a default resource plan is in effect.
■ From 4AM to 6AM
  Window1 has been assigned a low priority, but it opens because there are no high
  priority windows. Therefore, Resource Plan 1 is in effect.
■ From 6AM to 9AM
  Window3 will open because it has a higher priority than Window1, so Resource
  Plan 3 is in effect.
■ From 9AM to 11AM
  Even though Window1 was closed at 6AM because of a higher priority window
  opening, at 9AM, this higher priority window is closed and Window1 still has two
  hours remaining on its original schedule. It will be reopened for these remaining
  two hours, and Resource Plan 1 will be in effect.
■ From 11AM to 2PM
  A default resource plan is in effect because no windows are open.
■ From 2PM to 3PM
  Window2 will open, so Resource Plan 2 is in effect.
■ From 3PM to 8PM
  Window4 is of the same priority as Window2, so it will not interrupt Window2.
  Therefore, Resource Plan 2 is in effect.
■ From 8PM to 10PM
  Window4 will open, so Resource Plan 4 is in effect.
■ From 10PM to 12AM
  A default resource plan is in effect because no windows are open.

Figure 27–2 illustrates another example of how windows, resource plans, and
priorities might be determined for a 24-hour schedule.
Figure 27–2 Windows and Resource Plans (Example 2)

[Figure 27–2 shows a timeline from 12 am to 11 am. Window 1 (low priority) runs from 4 am to 9 am with Resource Plan 1; Window 3 (high priority) runs from 6 am to 9 am with Resource Plan 3; Window 6 (high priority) is scheduled within that interval (7 am to 8 am) but does not open; Window 5 (low priority) runs from 8 am to 11 am with Resource Plan 5. The default resource plan is in effect from 12 am to 4 am.]

In Figure 27–2, the following occurs:
■ From 12AM to 4AM: A default resource plan is in effect.
■ From 4AM to 6AM: Window1 has been assigned a low priority, but it opens because there are no high priority windows, so Resource Plan 1 is in effect.
■ From 6AM to 9AM: Window3 opens because it has a higher priority than Window1. Note that Window6 does not open because another high priority window is already in effect.
■ From 9AM to 11AM: At 9AM, Window5 and Window1 are the two possibilities. Both have low priorities, so the choice is made based on which has a greater percentage of its duration remaining. Window5 has a larger percentage of its total duration remaining than Window1. Even if Window1 were to extend to, say, 11:30AM, Window5 would have 2/3 * 100% of its duration remaining, while Window1 would have only 2.5/7 * 100%, which is smaller. Thus, Resource Plan 5 will be in effect.

Using Window Groups
Window groups provide an easy way to schedule jobs that must run during multiple
time periods throughout the day, week, and so on. If you create a window group, add
windows to it, and then name this window group in a job's schedule_name attribute,
the job runs during all the windows in the window group.
Window groups reside in the SYS schema. This section introduces you to basic
window group tasks, and discusses the following topics:
■ Window Group Tasks and Their Procedures
■ Creating Window Groups
■ Dropping Window Groups
■ Adding a Member to a Window Group
■ Dropping a Member from a Window Group
■ Enabling a Window Group
■ Disabling a Window Group

See Also: "Window Groups" on page 26-9 for an overview of window groups.

Window Group Tasks and Their Procedures
Table 27–6 illustrates common window group tasks and the procedures you use to
handle them.
Table 27–6 Window Group Tasks and Their Procedures

Task                               Procedure                   Privilege Needed
Create a window group              CREATE_WINDOW_GROUP         MANAGE SCHEDULER
Drop a window group                DROP_WINDOW_GROUP           MANAGE SCHEDULER
Add a member to a window group     ADD_WINDOW_GROUP_MEMBER     MANAGE SCHEDULER
Drop a member from a window group  REMOVE_WINDOW_GROUP_MEMBER  MANAGE SCHEDULER
Enable a window group              ENABLE                      MANAGE SCHEDULER
Disable a window group             DISABLE                     MANAGE SCHEDULER

See "How to Manage Scheduler Privileges" on page 28-14 for further information
regarding privileges.

Creating Window Groups
You create a window group by using the CREATE_WINDOW_GROUP procedure or
Enterprise Manager. You can specify the member windows of the group when you
create the group, or you can add them later using the ADD_WINDOW_GROUP_MEMBER
procedure. A window group cannot be a member of another window group. You can,
however, create a window group that has no members.
If you create a window group and you specify a member window that does not exist,
an error is generated and the window group is not created. If a window is already a
member of a window group, it is not added again.
Window groups are created in the SYS schema. Window groups, like windows, are
created with access to PUBLIC, therefore, no privileges are required to access window
groups.
The following statement creates a window group called downtime and adds two
windows (weeknights and weekends) to it:
BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW_GROUP (
    group_name  => 'downtime',
    window_list => 'weeknights, weekends');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the CREATE_WINDOW_GROUP procedure.

Dropping Window Groups
You drop one or more window groups by using the DROP_WINDOW_GROUP procedure
or Enterprise Manager. This call will drop the window group but not the windows that
are members of this window group. If you want to drop all the windows that are
members of this group but not the window group itself, you can use the
DROP_WINDOW procedure and provide the name of the window group to the call.
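For example, the following sketch, reusing the downtime group created earlier, drops the member windows while leaving the window group itself in place:

BEGIN
  DBMS_SCHEDULER.DROP_WINDOW ('downtime');
END;
/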
You can drop several window groups in one call by providing a comma-delimited list
of window group names to the DROP_WINDOW_GROUP procedure call. For example,
the following statement drops three window groups:
BEGIN
DBMS_SCHEDULER.DROP_WINDOW_GROUP('windowgroup1, windowgroup2, windowgroup3');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
adding and dropping window groups.

Adding a Member to a Window Group
You add windows to a window group by using the ADD_WINDOW_GROUP_MEMBER
procedure.
You can add several members to a window group in one call, by specifying a
comma-delimited list of windows. For example, the following statement adds three
windows to the window group window_group1:
BEGIN
  DBMS_SCHEDULER.ADD_WINDOW_GROUP_MEMBER ('window_group1',
    'window1, window2, window3');
END;
/

If an already open window is added to a window group, the Scheduler will not start
jobs that point to this window group until the next window in the window group
opens.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the ADD_WINDOW_GROUP_MEMBER procedure.

Dropping a Member from a Window Group
You can drop one or more windows from a window group by using the
REMOVE_WINDOW_GROUP_MEMBER procedure or Enterprise Manager. Jobs with the
stop_on_window_close flag set will only be stopped when a window closes.
Dropping an open window from a window group has no impact on this.
You can remove several members from a window group in one call by specifying a
comma-delimited list of windows. For example, the following statement drops three
windows:
BEGIN
  DBMS_SCHEDULER.REMOVE_WINDOW_GROUP_MEMBER ('window_group1',
    'window1, window2, window3');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the REMOVE_WINDOW_GROUP_MEMBER procedure.

Enabling a Window Group
You enable one or more window groups using the ENABLE procedure or Enterprise
Manager. By default, window groups are created ENABLED. For example:
BEGIN
  DBMS_SCHEDULER.ENABLE ('sys.windowgroup1, sys.windowgroup2, sys.windowgroup3');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the ENABLE procedure.

Disabling a Window Group
You disable a window group using the DISABLE procedure or Enterprise Manager. This means that jobs with the window group as a schedule will not run, even if the member windows open; however, the metadata of the window group is retained, so the group can be reenabled. Note that the member windows of the window group will still open.
You can also disable several window groups in one call by providing a
comma-delimited list of window group names to the DISABLE procedure call. For
example, the following statement disables three window groups:
BEGIN
DBMS_SCHEDULER.DISABLE('sys.windowgroup1, sys.windowgroup2, sys.windowgroup3');
END;
/

Note that, in this example, the window group will be disabled, but the windows that
are members of the window group will not be disabled.
See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the DISABLE procedure.

Using Events
The Scheduler works with two kinds of events:
■ Events that the Scheduler raises to notify applications of a change in job state (for example, job complete)
■ Events that applications raise to notify the Scheduler to start a job

This section provides details on how to work with both kinds of events, and includes
the following topics:
■ Using Events Raised by the Scheduler
■ Using Events Raised by an Application

See Also:
■ "Events" on page 26-5 for an overview of Scheduler events
■ "Examples of Creating Jobs and Schedules Based on Events" on page 28-27
■ "Using Chains" on page 27-37 for information on how to use events with chains to achieve precise control over process flow

Using Events Raised by the Scheduler
You can set up a job so that the Scheduler raises an event when the job changes state.
You do so by setting the raise_events job attribute. Because you cannot set this
attribute with the CREATE_JOB procedure, you must first create the job and then alter
the job with the SET_ATTRIBUTE procedure.
By default, until you alter a job with SET_ATTRIBUTE, a job does not raise any state
change events.
Table 27–7 summarizes the one administration task involving events raised by the
Scheduler.
Table 27–7 Event Tasks and Their Procedures for Events Raised by the Scheduler

Task                            Procedure      Privilege Needed
Altering a Job to Raise Events  SET_ATTRIBUTE  CREATE ANY JOB, or ownership of the job being altered, or ALTER privileges on the job

After you enable job state change events for a job, the Scheduler raises these events by
enqueuing messages onto the Scheduler event queue
SYS.SCHEDULER$_EVENT_QUEUE. This queue is a secure queue, so depending on
your application, you may have to configure the queue to enable certain users to
perform operations on it. See Oracle Streams Concepts and Administration for
information on secure queues.
To prevent unlimited growth of the Scheduler event queue, events raised by the
Scheduler expire in 24 hours by default. (Expired events are deleted from the queue.)

You can change this expiry time by setting the event_expiry_time Scheduler
attribute with the SET_SCHEDULER_ATTRIBUTE procedure. See Oracle Database
PL/SQL Packages and Types Reference for more information.
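For example, the following sketch shortens the expiry time to one hour (the value is specified in seconds):

BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE ('event_expiry_time', '3600');
END;
/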

Altering a Job to Raise Events
To enable job state change events for a job, you use the SET_ATTRIBUTE procedure to
turn on bit flags in the raise_events job attribute. Each bit flag represents a
different job state to raise an event for. For example, turning on the least significant bit
enables "job started" events to be raised. To enable multiple state change event types in
one call, you add the desired bit flag values together and supply the result as an
argument to SET_ATTRIBUTE. For a list of bit flags, see the discussion of
DBMS_SCHEDULER.SET_ATTRIBUTE in Oracle Database PL/SQL Packages and Types
Reference.
The following example enables multiple state change events for job dw_reports. It
enables the following event types, both of which indicate some kind of error. (Event
types are defined as constants in the package.)
■ job_failed (bit flag value = 4)
■ job_sch_lim_reached (bit flag value = 64)

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE('dw_reports', 'raise_events',
DBMS_SCHEDULER.JOB_FAILED + DBMS_SCHEDULER.JOB_SCH_LIM_REACHED);
END;
/

Consuming Scheduler-Raised Events with your Application
To consume Scheduler events, your application must subscribe to the Scheduler event
queue SYS.SCHEDULER$_EVENT_QUEUE. This queue is a secure queue and is owned
by SYS. To create a subscription to this queue for a user, do the following:
1. Log in to the database as the SYS user or as a user with the MANAGE ANY QUEUE privilege.

2. Subscribe to the queue using a new or existing agent.

3. Run the package procedure DBMS_AQADM.ENABLE_DB_ACCESS as follows:

   DBMS_AQADM.ENABLE_DB_ACCESS(agent_name, db_username);

where agent_name references the agent that you used to subscribe to the events
queue, and db_username is the user for whom you want to create a subscription.
There is no need to grant dequeue privileges to the user. The dequeue privilege is
granted on the Scheduler event queue to PUBLIC.
As an alternative, the user can subscribe to the Scheduler event queue using the
ADD_EVENT_QUEUE_SUBSCRIBER procedure, as shown in the following example:
DBMS_SCHEDULER.ADD_EVENT_QUEUE_SUBSCRIBER(subscriber_name);

where subscriber_name is the name of the Oracle Streams Advanced Queuing
(AQ) agent to be used to subscribe to the Scheduler event queue. (If it is NULL, an
agent is created whose name is the user name of the calling user.) This call both creates
a subscription to the Scheduler event queue and grants the user permission to dequeue
using the designated agent. The subscription is rule-based. The rule permits the user to
see only events raised by jobs that the user owns, and filters out all other messages.

After the subscription is in place, the user can either poll for messages at regular
intervals or register with AQ for notification.
See Oracle Streams Advanced Queuing User's Guide and Reference for more information.
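For the polling approach, the following is a minimal sketch that dequeues one event message with the standard DBMS_AQ.DEQUEUE procedure; the agent name my_agent1 is an assumption and must match the agent used when subscribing:

DECLARE
  l_dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
  l_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_message_handle     RAW(16);
  l_event              SYS.SCHEDULER$_EVENT_INFO;
BEGIN
  -- Dequeue on behalf of the subscribing agent (assumed name)
  l_dequeue_options.consumer_name := 'my_agent1';
  DBMS_AQ.DEQUEUE (
    queue_name         => 'SYS.SCHEDULER$_EVENT_QUEUE',
    dequeue_options    => l_dequeue_options,
    message_properties => l_message_properties,
    payload            => l_event,
    msgid              => l_message_handle);
  DBMS_OUTPUT.PUT_LINE ('Event ' || l_event.event_type ||
                        ' raised by job ' || l_event.object_name);
  COMMIT;
END;
/

By default the call blocks until a message arrives; setting l_dequeue_options.wait to DBMS_AQ.NO_WAIT makes the poll non-blocking.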
Scheduler Event Queue
The Scheduler event queue SYS.SCHEDULER$_EVENT_QUEUE is of type
scheduler$_event_info. The following are details on this type.
create or replace type sys.scheduler$_event_info as object
(
  event_type       VARCHAR2(4000),
  object_owner     VARCHAR2(4000),
  object_name      VARCHAR2(4000),
  event_timestamp  TIMESTAMP WITH TIME ZONE,
  error_code       NUMBER,
  error_msg        VARCHAR2(4000),
  event_status     NUMBER,
  log_id           NUMBER,
  run_count        NUMBER,
  failure_count    NUMBER,
  retry_count      NUMBER,
  spare1           NUMBER,
  spare2           NUMBER,
  spare3           VARCHAR2(4000),
  spare4           VARCHAR2(4000),
  spare5           TIMESTAMP WITH TIME ZONE,
  spare6           TIMESTAMP WITH TIME ZONE,
  spare7           RAW(2000),
  spare8           RAW(2000)
);
Attribute        Description
event_type       One of "JOB_STARTED", "JOB_SUCCEEDED", "JOB_FAILED", "JOB_BROKEN", "JOB_COMPLETED", "JOB_STOPPED", "JOB_SCH_LIM_REACHED", "JOB_DISABLED", "JOB_CHAIN_STALLED", "JOB_OVER_MAX_DUR". For descriptions of these event types, see the SET_ATTRIBUTE procedure in Oracle Database PL/SQL Packages and Types Reference.
object_owner     Owner of the job that raised the event.
object_name      Name of the job that raised the event.
event_timestamp  Time at which the event occurred.
error_code       Applicable only when an error is thrown during job execution. Contains the top-level error code.
error_msg        Applicable only when an error is thrown during job execution. Contains the entire error stack.
event_status     Adds further qualification to the event type. If event_type is "JOB_STARTED," a status of 1 indicates that it is a normal start, and a status of 2 indicates that it is a retry. If event_type is "JOB_FAILED," a status of 4 indicates that it was a failure due to an error that was thrown during job execution, and a status of 8 indicates that it was an abnormal termination of some kind. If event_type is "JOB_STOPPED," a status of 16 indicates that it was a normal stop, and a status of 32 indicates that it was a stop with the FORCE option set to TRUE.
log_id           Points to the ID in the scheduler job log from which additional information can be obtained. Note that there need not always be a log entry corresponding to an event. In such cases, log_id is NULL.
run_count        Run count for the job when the event was raised.
failure_count    Failure count for the job when the event was raised.
retry_count      Retry count for the job when the event was raised.
spare1 – spare8  Currently not implemented

Using Events Raised by an Application
Your application can raise an event to notify the Scheduler to start a job. A job started
in this way is referred to as an event-based job. The job can optionally retrieve the
message content of the event.
To create an event-based job, you must set these two additional attributes:
■ queue_spec

A queue specification that includes the name of the queue where your application enqueues messages to raise job start events, or in the case of a secure queue, the queue name followed by a comma and the agent name.

■ event_condition

A conditional expression based on message properties that must evaluate to TRUE for the message to start the job. The expression must have the syntax of an Oracle Streams Advanced Queuing rule. Accordingly, you can include user data properties in the expression, provided that the message payload is an object type, and that you prefix object attributes in the expression with tab.user_data. For more information on rules, see the DBMS_AQADM.ADD_SUBSCRIBER procedure in Oracle Database PL/SQL Packages and Types Reference.

The following example sets event_condition to select only card-swipe events that occur after midnight and before 9:00 a.m. Assume that the message payload is an object with two attributes called event_type and event_timestamp.

event_condition = 'tab.user_data.event_type = ''CARD_SWIPE'' and
extract(hour from tab.user_data.event_timestamp) < 9'

You can specify queue_spec and event_condition as inline job attributes, or you
can create an event schedule with these attributes and point to this schedule from the
job.
Note: The Scheduler runs the event-based job for each occurrence of an event that matches event_condition. However, events that occur while the job is already running are ignored; the event gets consumed, but does not trigger another run of the job.

Table 27–8 describes common administration tasks involving events raised by an
application (and consumed by the Scheduler) and the procedures associated with
them.

Table 27–8 Event Tasks and Their Procedures for Events Raised by an Application

Task                         Procedure              Privilege Needed
Creating an Event-Based Job  CREATE_JOB             CREATE JOB or CREATE ANY JOB
Altering an Event-Based Job  SET_ATTRIBUTE          CREATE ANY JOB, or ownership of the job being altered, or ALTER privileges on the job
Creating an Event Schedule   CREATE_EVENT_SCHEDULE  CREATE JOB or CREATE ANY JOB
Altering an Event Schedule   SET_ATTRIBUTE          CREATE ANY JOB, or ownership of the schedule being altered, or ALTER privileges on the schedule

See Also: Oracle Streams Advanced Queuing User's Guide and Reference for information on how to create queues and enqueue messages.

Creating an Event-Based Job
You use the CREATE_JOB procedure or Enterprise Manager to create an event-based
job. The job can include event information inline as job attributes or can specify event
information by pointing to an event schedule.
Like jobs based on time schedules, event-based jobs are not auto-dropped unless the
job end date passes, max_runs is reached, or the maximum number of failures
(max_failures) is reached.
Specifying Event Information as Job Attributes

To specify event information as job attributes, you use an alternate syntax of CREATE_JOB that includes the queue_spec and event_condition attributes.
The following example creates a job that starts whenever someone swipes a badge to
enter a data center:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'my_job',
    program_name    => 'my_program',
    event_condition => 'tab.user_data.event_type = ''CARD_SWIPE''',
    queue_spec      => 'entry_events_q, entry_agent1',
    enabled         => TRUE,
    comments        => 'Start job when someone swipes a badge');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the CREATE_JOB procedure.
Specifying Event Information in an Event Schedule

To specify event information with an event schedule, you set the job's schedule_name attribute to the name of an event schedule, as shown in the following example:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name      => 'my_job',
    program_name  => 'my_program',
    schedule_name => 'entry_events_schedule',
    enabled       => TRUE,
    comments      => 'Start job when someone swipes a badge');
END;
/

See "Creating an Event Schedule" on page 27-35 for more information.

Altering an Event-Based Job
You alter an event-based job by using the SET_ATTRIBUTE procedure. For jobs that
specify the event inline, you cannot set the queue_spec and event_condition
attributes individually with SET_ATTRIBUTE. Instead, you must set an attribute
called event_spec, and pass an event condition and queue specification as the third
and fourth arguments, respectively, to SET_ATTRIBUTE.
The following is an example of using the event_spec attribute:
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE ('my_job', 'event_spec',
'tab.user_data.event_type = ''BAD_BADGE''', 'entry_events_q, entry_agent1');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the SET_ATTRIBUTE procedure.

Creating an Event Schedule
You can create a schedule that is based on an event. You can then reuse the schedule
for multiple jobs. To do so, use the CREATE_EVENT_SCHEDULE procedure, or use
Enterprise Manager. The following is an example of creating an event schedule:
BEGIN
  DBMS_SCHEDULER.CREATE_EVENT_SCHEDULE (
    schedule_name   => 'entry_events_schedule',
    start_date      => SYSTIMESTAMP,
    event_condition => 'tab.user_data.event_type = ''CARD_SWIPE''',
    queue_spec      => 'entry_events_q, entry_agent1');
END;
/

You can drop an event schedule using the DROP_SCHEDULE procedure. See Oracle
Database PL/SQL Packages and Types Reference for more information on
CREATE_EVENT_SCHEDULE.

Altering an Event Schedule
You alter the event information in an event schedule in the same way that you alter
event information in a job. For more information, see "Altering an Event-Based Job" on
page 27-35.
The following example demonstrates how to use the SET_ATTRIBUTE procedure and
the event_spec attribute to alter event information in an event schedule.
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE ('entry_events_schedule', 'event_spec',
'tab.user_data.event_type = ''BAD_BADGE''', 'entry_events_q, entry_agent1');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the SET_ATTRIBUTE procedure.

Passing Event Messages into an Event-Based Job
Through a metadata argument, the Scheduler can pass to an event-based job the
message content of the event that started the job. The following rules apply:
■ The job must use a named program of type STORED_PROCEDURE.
■ One of the named program's arguments must be a metadata argument with metadata_attribute set to EVENT_MESSAGE.
■ The stored procedure that implements the program must have an argument at the position corresponding to the named program's metadata argument. The argument type must be the data type of the queue where your application queues the job-start event.

If you use the RUN_JOB procedure to manually run a job that has an EVENT_MESSAGE
metadata argument, the value passed to that argument is NULL.
The following example shows how to construct an event-based job that can receive the
event message content:
create or replace procedure my_stored_proc (event_msg IN event_queue_type)
as
begin
-- retrieve and process message body
-- do other work
end;
/
begin
dbms_scheduler.create_program (
program_name => 'my_prog',
program_action=> 'my_stored_proc',
program_type => 'STORED_PROCEDURE',
number_of_arguments => 1,
enabled => FALSE) ;
dbms_scheduler.define_metadata_argument (
program_name => 'my_prog',
argument_position => 1 ,
metadata_attribute => 'EVENT_MESSAGE') ;
dbms_scheduler.enable ('my_prog');
exception
when others then raise ;
end ;
/
begin
dbms_scheduler.create_job (
job_name => 'my_evt_job' ,
program_name => 'my_prog',
schedule_name => 'my_evt_sch',
enabled => true,
auto_Drop => false) ;
exception
when others then raise ;
end ;
/

Using Chains
A chain is a named series of programs that are linked together for a combined
objective. To create and use a chain, you complete these steps in order:
Step                                       See...
1. Create a chain object                   Creating Chains
2. Define the steps in the chain           Defining Chain Steps
3. Add rules                               Adding Rules to a Chain
4. Enable the chain                        Enabling Chains
5. Create a job that points to the chain   Creating Jobs for Chains

Other topics discussed in this section include:
■ Chain Tasks and Their Procedures
■ Dropping Chains
■ Running Chains
■ Dropping Rules from a Chain
■ Disabling Chains
■ Dropping Chain Steps
■ Altering Chain Steps
■ Handling Stalled Chains

See Also:
■ "Chains" on page 26-6 for an overview of chains
■ "Examples of Creating Chains" on page 28-26

Chain Tasks and Their Procedures
Table 27–9 illustrates common tasks involving chains and the procedures associated
with them.
Table 27–9 Chain Tasks and Their Procedures

Task                     Procedure            Privilege Needed
Create a chain           CREATE_CHAIN         CREATE JOB, CREATE EVALUATION CONTEXT, and CREATE RULE SET if the owner; CREATE ANY JOB, CREATE ANY RULE SET, and CREATE ANY EVALUATION CONTEXT otherwise
Drop a chain             DROP_CHAIN           Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB privileges. If not the owner, also requires DROP ANY EVALUATION CONTEXT and DROP ANY RULE SET
Alter a chain            ALTER_CHAIN          Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB
Alter a chain            SET_ATTRIBUTE        Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB
Alter a running chain    ALTER_RUNNING_CHAIN  Ownership of the job, or ALTER privileges on the job, or CREATE ANY JOB
Run a chain              RUN_CHAIN            CREATE JOB or CREATE ANY JOB. In addition, the owner of the new job must have EXECUTE privileges on the chain or EXECUTE ANY PROGRAM
Add rules to a chain     DEFINE_CHAIN_RULE    Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB privileges. CREATE RULE if the owner of the chain, CREATE ANY RULE otherwise
Alter rules in a chain   DEFINE_CHAIN_RULE    Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB privileges. If not the owner of the chain, requires ALTER privileges on the rule or ALTER ANY RULE
Drop rules from a chain  DROP_CHAIN_RULE      Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB privileges. DROP ANY RULE if not the owner of the chain
Enable a chain           ENABLE               Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB
Disable a chain          DISABLE              Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB
Create steps             DEFINE_CHAIN_STEP    Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB
Drop steps               DROP_CHAIN_STEP      Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB
Alter steps              DEFINE_CHAIN_STEP    Ownership of the chain, or ALTER privileges on the chain, or CREATE ANY JOB

Creating Chains
You create a chain by using the CREATE_CHAIN procedure. After creating the chain
object with CREATE_CHAIN, you define chain steps and chain rules separately.
The rule_set_name and evaluation_interval arguments are normally left
NULL. evaluation_interval can define the times that chain rules get evaluated,
other than when the job starts or a step completes. rule_set_name is for advanced
users only.
See Oracle Database PL/SQL Packages and Types Reference for more information on
CREATE_CHAIN.
The following is an example of creating a chain:
BEGIN
  DBMS_SCHEDULER.CREATE_CHAIN (
    chain_name          => 'my_chain1',
    rule_set_name       => NULL,
    evaluation_interval => NULL,
    comments            => 'My first chain');
END;
/

Defining Chain Steps
After creating a chain object, you define one or more chain steps. Each step can point
to one of the following:
■ A program
■ Another chain (a nested chain)
■ An event

You define a step that points to a program or nested chain by using the
DEFINE_CHAIN_STEP procedure. An example is the following, which adds two steps
to my_chain1:

BEGIN
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP (
    chain_name   => 'my_chain1',
    step_name    => 'my_step1',
    program_name => 'my_program1');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP (
    chain_name   => 'my_chain1',
    step_name    => 'my_step2',
    program_name => 'my_chain2');
END;
/

To define a step that waits for an event to occur, you use the
DEFINE_CHAIN_EVENT_STEP procedure. Procedure arguments can point to an event
schedule or can include an inline queue specification and event condition. This
example creates a third chain step that waits for the event specified in the named event
schedule:
BEGIN
  DBMS_SCHEDULER.DEFINE_CHAIN_EVENT_STEP (
    chain_name          => 'my_chain1',
    step_name           => 'my_step3',
    event_schedule_name => 'my_event_schedule');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the DEFINE_CHAIN_STEP and DEFINE_CHAIN_EVENT_STEP procedures.

Adding Rules to a Chain
Chain rules define when steps run, and define dependencies between steps. Each rule
has a condition and an action. If the condition evaluates to TRUE, the action is
performed. The condition can contain Scheduler chain condition syntax or any syntax
that is valid in a SQL WHERE clause. The syntax can include references to attributes of
any chain step, including step completion status. A typical action is to run a specified
step.
Conditions are usually based on the outcome of one or more previous steps. For
example, you might want one step to run if the two previous steps succeeded, and
another to run if either of the two previous steps failed.
All rules added to a chain work together to define the overall behavior of the chain.
When the job starts and at the end of each step, all rules are evaluated to see what
action or actions occur next. (You can cause rules to be evaluated at regular intervals
also. See "Creating Chains" on page 27-38 for details.)
You add a rule to a chain with the DEFINE_CHAIN_RULE procedure. You call this
procedure once for each rule that you want to add to the chain.
Starting the Chain
At least one rule must have a condition that always evaluates to TRUE so that the chain
can start when the job starts. The easiest way to accomplish this is to set the condition to 'TRUE' if you are using Scheduler chain condition syntax, or '1=1' if you are using SQL syntax.

Using the Scheduler

27-39

Using Chains

Ending the Chain
At least one chain rule must contain an action of 'END'. A chain job does not
complete until one of the rules containing the END action evaluates to TRUE. Several
different rules with different END actions are common, some with error codes, and
some without.
If a chain has no more running steps or it is not waiting for an event to occur, and no
rules containing the END action evaluate to TRUE (or there are no rules with the END
action), the job enters the CHAIN_STALLED state. See "Handling Stalled Chains" on
page 27-44 for more information.
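For example, here is a minimal sketch of an end rule, assuming a chain my_chain1 whose work is finished when a step named step2 completes (the names are illustrative):

BEGIN
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
    chain_name => 'my_chain1',
    condition  => 'step2 completed',
    action     => 'END',
    rule_name  => 'my_end_rule');
END;
/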
Example
The following example defines a rule that starts the chain at step 1 and a rule that
starts step 2 when step 1 completes. rule_name and comments are optional and
default to NULL.
BEGIN
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
    chain_name => 'my_chain1',
    condition  => 'TRUE',
    action     => 'START step1',
    rule_name  => 'my_rule1',
    comments   => 'start the chain');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
    chain_name => 'my_chain1',
    condition  => 'step1 completed',
    action     => 'START step2',
    rule_name  => 'my_rule2');
END;
/

See Also:
■ Oracle Database PL/SQL Packages and Types Reference for information on the DEFINE_CHAIN_RULE procedure and on Scheduler chain condition syntax
■ "Examples of Creating Chains" on page 28-26

Enabling Chains
You enable a chain with the ENABLE procedure. A chain must be enabled before it can
be run by a job. Enabling an already enabled chain does not return an error.
The following example enables chain my_chain1:
BEGIN
DBMS_SCHEDULER.ENABLE ('my_chain1');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the ENABLE procedure.

27-40 Oracle Database Administrator’s Guide

Using Chains

Note: Chains are automatically disabled by the Scheduler when:
■ The program that one of the chain steps points to is dropped
■ The nested chain that one of the chain steps points to is dropped
■ The event schedule that one of the chain event steps points to is dropped

Creating Jobs for Chains
To run a chain, you must either use the RUN_CHAIN procedure or create a job of type
'CHAIN'. The job action must refer to the chain name, as shown in the following
example:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'chain_job_1',
    job_type        => 'CHAIN',
    job_action      => 'my_chain1',
    repeat_interval => 'freq=daily;byhour=13;byminute=0;bysecond=0',
    enabled         => TRUE);
END;
/

For every step of a chain job that is running, the Scheduler creates a step job with the
same job name and owner as the chain job. Each step job additionally has a job
subname to uniquely identify it. The job subname is included as a column in the views
*_SCHEDULER_RUNNING_JOBS, *_SCHEDULER_JOB_LOG, and
*_SCHEDULER_JOB_RUN_DETAILS. The job subname is normally the same as the
step name except in the following cases:
■ For nested chains, the current step name may have already been used as a job subname. In this case, the Scheduler appends '_N' to the step name, where N is an integer that results in a unique job subname.
■ If there is a failure when creating a step job, the Scheduler logs a FAILED entry in the job log views (*_SCHEDULER_JOB_LOG and *_SCHEDULER_JOB_RUN_DETAILS) with the job subname set to 'step_name_0'.
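For example, you can observe the step jobs of a running chain with a query such as the following sketch:

SELECT JOB_NAME, JOB_SUBNAME, ELAPSED_TIME
FROM ALL_SCHEDULER_RUNNING_JOBS;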
See Also:
■ Oracle Database PL/SQL Packages and Types Reference for more information on the CREATE_JOB procedure
■ "Running Chains" on page 27-42 for another way to run a chain without creating a job ahead of time

Dropping Chains
You drop a chain, including its steps and rules, by using the DROP_CHAIN procedure.
An example of dropping a chain is the following, which drops my_chain1:
BEGIN
  DBMS_SCHEDULER.DROP_CHAIN (
    chain_name => 'my_chain1',
    force      => TRUE);
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the DROP_CHAIN procedure.

Running Chains
You can use the RUN_CHAIN procedure to run a chain immediately, without having to
create a job ahead of time for the chain. You can also use RUN_CHAIN to run only part
of a chain.
RUN_CHAIN creates a temporary job to run the specified chain. If you supply a job
name, the job is created with that name, otherwise a default job name is assigned.
If you supply a list of start steps, only those steps are started when the chain begins
running. (Steps that would normally have started do not run if they are not in the list.)
If no list of start steps is given, the chain starts normally—that is, an initial evaluation
is done to see which steps to start running. An example is the following, which
immediately runs the chain my_chain1:
BEGIN
  DBMS_SCHEDULER.RUN_CHAIN (
    chain_name  => 'my_chain1',
    job_name    => 'quick_chain_job',
    start_steps => 'my_step1, my_step2');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the RUN_CHAIN procedure.

Dropping Rules from a Chain
You drop a rule from a chain by using the DROP_CHAIN_RULE procedure. An example
is the following, which drops my_rule1:
BEGIN
  DBMS_SCHEDULER.DROP_CHAIN_RULE (
    chain_name => 'my_chain1',
    rule_name  => 'my_rule1',
    force      => TRUE);
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the DROP_CHAIN_RULE procedure.

Disabling Chains
You disable a chain by using the DISABLE procedure. An example is the following,
which disables my_chain1:
BEGIN
DBMS_SCHEDULER.DISABLE ('my_chain1');
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the DISABLE procedure.

Note: Chains are automatically disabled by the Scheduler when:
■ The program that one of the chain steps points to is dropped
■ The nested chain that one of the chain steps points to is dropped
■ The event schedule that one of the chain event steps points to is dropped

Dropping Chain Steps
You drop a step from a chain by using the DROP_CHAIN_STEP procedure. An example
is the following, which drops my_step2 from my_chain2:
BEGIN
  DBMS_SCHEDULER.DROP_CHAIN_STEP (
    chain_name => 'my_chain2',
    step_name  => 'my_step2',
    force      => TRUE);
END;
/

See Oracle Database PL/SQL Packages and Types Reference for more information regarding
the DROP_CHAIN_STEP procedure.

Altering Chain Steps
You alter the SKIP, PAUSE, or RESTART_ON_RECOVERY attributes of a chain step by
using the ALTER_CHAIN procedure. An example is the following, which causes
my_step3 to be skipped:
BEGIN
  DBMS_SCHEDULER.ALTER_CHAIN (
    chain_name => 'my_chain1',
    step_name  => 'my_step3',
    attribute  => 'SKIP',
    value      => TRUE);
END;
/

The ALTER_CHAIN procedure affects only future runs of the specified steps.
You alter the steps in a running chain by using the ALTER_RUNNING_CHAIN
procedure. An example is the following, which causes step my_step1 to pause after it
has completed—that is, its state is changed to PAUSED and its completed attribute
remains FALSE:
BEGIN
  DBMS_SCHEDULER.ALTER_RUNNING_CHAIN (
    job_name  => 'my_job1',
    step_name => 'my_step1',
    attribute => 'PAUSE',
    value     => TRUE);
END;
/

The ALTER_RUNNING_CHAIN procedure affects only the running instance of the chain.
See Oracle Database PL/SQL Packages and Types Reference for more information regarding the ALTER_CHAIN and ALTER_RUNNING_CHAIN procedures.

Handling Stalled Chains
A chain can become stalled when no steps are running, no steps are scheduled to run,
no event steps are waiting for an event, and the evaluation_interval for the chain is
NULL. The chain can make no further progress unless you manually intervene. In this
case, the state of the job that is running the chain is set to CHAIN_STALLED. (However,
the job is still listed in the *_SCHEDULER_RUNNING_JOBS views.)
You can troubleshoot a stalled chain with the views
ALL_SCHEDULER_RUNNING_CHAINS, which shows the state of all steps in the chain
(including any nested chains), and ALL_SCHEDULER_CHAIN_RULES, which contains
all the chain rules.
You can enable the chain to continue by altering the state of one of its steps with the
ALTER_RUNNING_CHAIN procedure. For example, if step 11 is waiting for step 9 to
succeed before it can start, and if it makes sense to do so, you can set the state of step
9 to 'SUCCEEDED'.
Alternatively, if one or more rules are incorrect, you can use the
DEFINE_CHAIN_RULE procedure to replace them (using the same rule names), or to
create new rules. The new and updated rules apply to the running chain and all future
chain runs. After adding or updating rules, you must run
EVALUATE_RUNNING_CHAIN on the stalled chain job to trigger any required actions.
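The following sketch shows both interventions, assuming a stalled chain job my_job1 and a step named step9; the names and the step state value are illustrative:

BEGIN
  -- Mark the step the chain is waiting on as succeeded
  DBMS_SCHEDULER.ALTER_RUNNING_CHAIN (
    job_name  => 'my_job1',
    step_name => 'step9',
    attribute => 'STATE',
    value     => 'SUCCEEDED');
  -- Re-evaluate the chain rules so the chain can proceed
  DBMS_SCHEDULER.EVALUATE_RUNNING_CHAIN ('my_job1');
END;
/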

Allocating Resources Among Jobs
It is not practical to manage resource allocation at an individual job level; therefore, the Scheduler uses the concept of job classes to manage resource allocation among jobs. In addition to job classes, the Scheduler uses the Resource Manager to manage resource allocation among jobs.

Allocating Resources Among Jobs Using Resource Manager
Resource Manager is the database feature that controls how resources are allocated in
the database. It not only controls asynchronous sessions like jobs but also synchronous
sessions like user sessions. It groups all "units of work" in the database into resource
consumer groups and uses a resource plan to specify how the resources will be
allocated among the various groups. See Chapter 24, "Using the Database Resource Manager" for more information about what resources are controlled by Resource Manager.
For jobs, resource allocation is specified by associating a job class with a consumer
group, or by associating a job class with a database service name and mapping that
database service to a consumer group. The consumer group that a job class maps to
can be specified when creating a job class. If no resource consumer group or database
service name is specified when a job class is created, the job class will map to the
default consumer group.
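For example, a job class can be associated with either a consumer group or a database service; the following sketch, with illustrative names, uses a service (which must itself be mapped to a consumer group):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS (
    job_class_name => 'my_batch_class',
    service        => 'my_service1');
END;
/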
The Scheduler tries to limit the number of jobs that are running simultaneously so that
at least some jobs can complete, rather than running a lot of jobs concurrently but
without enough resources for any of them to complete.
The Scheduler and Resource Manager are tightly integrated. The job coordinator
obtains database resource availability from Resource Manager. Based on that
information, the coordinator determines how many jobs to start. It will only start jobs
from those job classes that will have enough resources to run. The coordinator will
keep starting jobs in a particular job class that maps to a consumer group until
Resource Manager determines that the maximum resource allocated for that consumer

group has been reached. This means that it is possible that there will be jobs in the job
table that are ready to run but will not be picked up by the job coordinator because
there are no resources to run them. Therefore, there is no guarantee that a job will run
at the exact time that it was scheduled. The coordinator picks up jobs from the job
table on the basis of which consumer groups still have resources available.
Even when jobs are running, Resource Manager will continue to manage the amount
of CPU cycles that are assigned to each running job based on the specified resource
plan. Keep in mind that Resource Manager can only manage database processes. The
active management of CPU cycles does not apply to jobs of type executable.
In a database, only one resource plan can be in effect at one time. It is possible to
manually switch the resource plan that is active on a system using the
DBMS_RESOURCE_MANAGER.SWITCH_PLAN procedure. In special scenarios, you
might want to run a specific resource plan and disable resource plan switches caused
by windows opening. To do this, you can use the
DBMS_RESOURCE_MANAGER.SWITCH_PLAN procedure with
allow_scheduler_plan_switches set to FALSE.
In a Real Application Clusters environment, the same resource plan will be in effect on
every database instance.
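For example, the following sketch (with an illustrative plan name) activates a plan and prevents the Scheduler from switching plans when windows open:

BEGIN
  DBMS_RESOURCE_MANAGER.SWITCH_PLAN (
    plan_name                     => 'my_fixed_plan',
    allow_scheduler_plan_switches => FALSE);
END;
/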

Example of Resource Allocation for Jobs
The following example can help to understand how resources are allocated for jobs.
Assume that the active resource plan is called "Night Plan" and that there are three job
classes: JC1, which maps to consumer group DW; JC2, which maps to consumer group
OLTP; and JC3, which maps to the default consumer group. Figure 27–3 offers a
simple graphical illustration of this scenario.
Figure 27–3 Sample Resource Plan

[Figure 27–3 shows the Night Plan allocating 60% of resources to the DW consumer group, 30% to the OLTP consumer group, and 10% to the Other consumer group.]

This resource plan clearly gives priority to jobs that are part of job class JC1.
Consumer group DW gets 60% of the resources, thus jobs that belong to job class JC1
will get 60% of the resources. Consumer group OLTP has 30% of the resources, which
implies that jobs in job class JC2 will get 30% of the resources. The consumer group
Other specifies that all other consumer groups will be getting 10% of the resources.
This means that all jobs that belong in job class JC3 will share 10% of the resources
and can get a maximum of 10% of the resources.

28 Administering the Scheduler
This chapter provides step-by-step instructions for configuring, administering, and
monitoring Oracle Scheduler (the Scheduler). It contains the following sections:
■ Configuring the Scheduler
■ Monitoring and Managing the Scheduler
■ Import/Export and the Scheduler
■ Examples of Using the Scheduler

Note: This chapter discusses the use of the Oracle-supplied DBMS_SCHEDULER package to administer scheduling capabilities. You can also use Oracle Enterprise Manager (EM) as an easy-to-use graphical interface for many of the same capabilities. See the Oracle Database PL/SQL Packages and Types Reference for DBMS_SCHEDULER syntax and the Oracle Enterprise Manager documentation set for more information regarding EM.

Configuring the Scheduler
The following tasks are necessary when configuring the Scheduler:
■ Task 1: Setting Scheduler Privileges
■ Task 2: Configuring the Scheduler Environment

Task 1: Setting Scheduler Privileges
You should have the SCHEDULER_ADMIN role to administer the Scheduler. Typically,
database administrators will already have this role with the ADMIN option as part of
the DBA (or equivalent) role. You can grant this role to another administrator by
issuing the following statement:
GRANT SCHEDULER_ADMIN TO username;

Because the SCHEDULER_ADMIN role is a powerful role allowing a grantee to execute
code as any user, you should consider granting individual Scheduler system privileges
instead. Object and system privileges are granted using regular SQL grant syntax. For example, the database administrator can issue the following statement:
GRANT CREATE JOB TO scott;

Administering the Scheduler 28-1

Configuring the Scheduler

After this statement is executed, scott can create jobs, schedules, or programs in his schema. As another example, the database administrator can issue the following statement:
GRANT MANAGE SCHEDULER TO adam;

After this statement is executed, adam can create, alter, or drop windows, job classes,
or window groups. He will also be able to set and retrieve Scheduler attributes and
purge Scheduler logs.
See "How to Manage Scheduler Privileges" on page 28-14 for more information
regarding privileges.
Setting Chain Privileges
Scheduler chains use underlying Oracle Streams Rules Engine objects along with their
associated privileges. To create a chain in his own schema, a user must have the
CREATE JOB privilege in addition to the Rules Engine privileges required to create
rules, rule sets, and evaluation contexts in his own schema. These can be granted by
issuing the following statement:
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE (
    DBMS_RULE_ADM.CREATE_RULE_OBJ, 'username');
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE (
    DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 'username');
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE (
    DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 'username');
END;
/

To create a chain in a different schema, a user must have the CREATE ANY JOB
privilege in addition to the privileges required to create rules, rule sets, and evaluation
contexts in schemas other than his own. These can be granted by issuing the following
statement:
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE (
    DBMS_RULE_ADM.CREATE_ANY_RULE, 'username');
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE (
    DBMS_RULE_ADM.CREATE_ANY_RULE_SET, 'username');
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE (
    DBMS_RULE_ADM.CREATE_ANY_EVALUATION_CONTEXT, 'username');
END;
/

Altering or dropping chains in schemas other than the user's schema will require
corresponding system Rules Engine privileges for rules, rule sets, and evaluation
contexts. See the usage notes for DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE for
more information on Streams Rules Engine privileges.
See "How to Manage Scheduler Privileges" on page 28-14 for more information
regarding privileges.

Task 2: Configuring the Scheduler Environment
This section discusses the following tasks:
■ Task 2A: Creating Job Classes
■ Task 2B: Creating Windows
■ Task 2C: Creating Resource Plans
■ Task 2D: Creating Window Groups
■ Task 2E: Setting Scheduler Attributes

Task 2A: Creating Job Classes
To create job classes, use the CREATE_JOB_CLASS procedure. The following statement
illustrates an example of creating a job class:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS (
    job_class_name          => 'my_jobclass1',
    resource_consumer_group => 'my_res_group1',
    comments                => 'This is my first job class.');
END;
/

This statement creates a job class called my_jobclass1 with attributes such as a
resource consumer group of my_res_group1. To verify the job class contents, issue
the following statement:
SELECT * FROM DBA_SCHEDULER_JOB_CLASSES;

JOB_CLASS_NAME        RESOURCE_CONSU  SERVICE      LOGGING_LEV  LOG_HISTORY  COMMENTS
--------------------  --------------  -----------  -----------  -----------  -------------------
DEFAULT_JOB_CLASS                                  RUNS                      The default
AUTO_TASKS_JOB_CLASS  AUTO_TASK_CON                RUNS                      System maintenance
FINANCE_JOBS          FINANCE_GROUP                RUNS
MY_JOBCLASS1          MY_RES_GROUP1                RUNS                      My first job class
MY_CLASS1                             my_service1  RUNS                      My second job class

5 rows selected.

Note that job classes are created in the SYS schema.
See Also: Oracle Database PL/SQL Packages and Types Reference for
CREATE_JOB_CLASS syntax, "Creating Job Classes" on page 27-19
for further information on job classes, and "Examples of Creating
Job Classes" on page 28-21 for more examples of creating job classes

Task 2B: Creating Windows
To create windows, use the CREATE_WINDOW procedure. The following statement
illustrates an example of creating a window:
BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW (
    window_name     => 'my_window1',
    resource_plan   => 'my_resourceplan1',
    start_date      => '15-APR-03 01.00.00 AM Europe/Lisbon',
    repeat_interval => 'FREQ=DAILY',
    end_date        => '15-SEP-04 01.00.00 AM Europe/Lisbon',
    duration        => interval '50' minute,
    window_priority => 'HIGH',
    comments        => 'This is my first window.');
END;
/

To verify that the window was created properly, query the view
DBA_SCHEDULER_WINDOWS. As an example, issue the following statement:
SELECT WINDOW_NAME, RESOURCE_PLAN, DURATION, REPEAT_INTERVAL
FROM DBA_SCHEDULER_WINDOWS;

WINDOW_NAME  RESOURCE_PLAN     DURATION       REPEAT_INTERVAL
-----------  ----------------  -------------  ---------------
MY_WINDOW1   MY_RESOURCEPLAN1  +000 00:50:00  FREQ=DAILY

See Also: Oracle Database PL/SQL Packages and Types Reference for CREATE_WINDOW syntax, "Creating Windows" on page 27-21 for further information on windows, and "Examples of Creating Windows" on page 28-23 for more examples of creating windows

Task 2C: Creating Resource Plans
To create resource plans, use the CREATE_SIMPLE_PLAN procedure. This procedure
enables you to create consumer groups and allocate resources to them by executing a
single statement.
The following statement illustrates an example of using this procedure to create a
resource plan called my_simple_plan1:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN (
    simple_plan     => 'my_simple_plan1',
    consumer_group1 => 'my_group1',
    group1_cpu      => 80,
    consumer_group2 => 'my_group2',
    group2_cpu      => 20);
END;
/

This statement creates a resource plan called my_simple_plan1. To verify the
resource plan contents, query the view DBA_RSRC_PLANS. An example is the
following statement:
SELECT PLAN, STATUS FROM DBA_RSRC_PLANS;

PLAN              STATUS
----------------  ------
SYSTEM_PLAN       ACTIVE
INTERNAL_QUIESCE  ACTIVE
INTERNAL_PLAN     ACTIVE
MY_SIMPLE_PLAN1   ACTIVE

See Also: "Allocating Resources Among Jobs" on page 27-44 for
further information on resource plans

Task 2D: Creating Window Groups
To create window groups, use the CREATE_WINDOW_GROUP and
ADD_WINDOW_GROUP_MEMBER procedures. The following statements illustrate an
example of using these procedures:
BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW_GROUP (
    group_name => 'my_window_group1',
    comments   => 'This is my first window group.');
  DBMS_SCHEDULER.ADD_WINDOW_GROUP_MEMBER (
    group_name  => 'my_window_group1',
    window_list => 'my_window1, my_window2');
  DBMS_SCHEDULER.ADD_WINDOW_GROUP_MEMBER (
    group_name  => 'my_window_group1',
    window_list => 'my_window3');
END;
/

These statements assume that you have already created my_window2 and
my_window3. You can do this with the CREATE_WINDOW procedure.
These statements create a window group called my_window_group1 and then add
my_window1, my_window2, and my_window3 to it. To verify the window group
contents, issue the following statements:
SELECT * FROM DBA_SCHEDULER_WINDOW_GROUPS;

WINDOW_GROUP_NAME  ENABLED  NUMBER_OF_WINDOWS  COMMENTS
-----------------  -------  -----------------  ------------------------------
MY_WINDOW_GROUP1   TRUE     3                  This is my first window group.

SELECT * FROM DBA_SCHEDULER_WINGROUP_MEMBERS;

WINDOW_GROUP_NAME  WINDOW_NAME
-----------------  -----------
MY_WINDOW_GROUP1   MY_WINDOW1
MY_WINDOW_GROUP1   MY_WINDOW2
MY_WINDOW_GROUP1   MY_WINDOW3

See Also: Oracle Database PL/SQL Packages and Types Reference for
CREATE_WINDOW_GROUP syntax, "Using Window Groups" on
page 27-27 for further information on window groups, and
"Example of Creating Window Groups" on page 28-24 for more
detailed examples of creating window groups

Task 2E: Setting Scheduler Attributes
There are several Scheduler attributes that control the behavior of the Scheduler. They
are default_timezone, log_history, max_job_slave_processes, and
event_expiry_time. The values of these attributes can be set by using the
SET_SCHEDULER_ATTRIBUTE procedure. Setting these attributes requires the
MANAGE SCHEDULER privilege. Attributes that can be set are:
■ default_timezone

Repeating jobs and windows that use the calendaring syntax need to know which time zone to use for their repeat intervals. They normally retrieve the time zone from start_date, but if no start_date is provided (which is not uncommon), they retrieve the time zone from the default_timezone Scheduler attribute.

The Scheduler derives the value of default_timezone from the operating system environment. If the Scheduler can find no compatible value from the operating system, it sets default_timezone to NULL.

It is crucial that you verify that default_timezone is set properly, and if not, that you set it. To verify it, run this query:

SQL> select dbms_scheduler.stime from dual;

STIME
---------------------------------------------------------------------------
14-OCT-04 02.56.03.206273000 PM US/PACIFIC

To ensure that daylight savings adjustments are followed, it is strongly recommended that you set default_timezone to a region name instead of an absolute time zone offset. For example, if your database resides in Miami, Florida, USA, issue the following statement:

DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('default_timezone','US/Eastern');

To see a list of valid region names, run this query:

SELECT DISTINCT TZNAME FROM V$TIMEZONE_NAMES;

If you do not properly set default_timezone, the default time zone for repeating jobs and windows will be the absolute offset retrieved from SYSTIMESTAMP (the time zone of the operating system environment of the database), which means that repeating jobs and windows that do not have their start_date set will not follow daylight savings adjustments.

■ log_history

This attribute enables you to control the amount of logging the Scheduler performs. To prevent the job log and the window log from growing indiscriminately, the Scheduler has an attribute that specifies how much history (in days) to keep. Once a day, the Scheduler automatically purges from both the job log and the window log all entries that are older than the specified history. The default is 30 days.

You can change the default by using the SET_SCHEDULER_ATTRIBUTE procedure. For example, to change it to 90 days, issue the following statement:

DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history','90');

The range of valid values is 1 through 999.

■ max_job_slave_processes

This attribute enables you to set a maximum number of slave processes for a particular system configuration and load. Even though the Scheduler automatically determines the optimum number of slave processes for a given system configuration and load, you might still want to set a fixed limit on the Scheduler. If so, you can set this attribute; an example follows this list. The default value is NULL, and the valid range is 1-999.

Although the number set by max_job_slave_processes is a real maximum, it does not mean the Scheduler will start the specified number of slaves. For example, even though this attribute is set to 10, the Scheduler might still determine that it should not start more than 3 slave processes. However, if it wants to start 15 but the attribute is set to 10, it will not start more than 10.

■ event_expiry_time

This attribute enables you to set the time in seconds before an event generated by the Scheduler expires (in other words, is automatically purged from the queue).
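For example, the following sketch caps the number of slave processes at 10:

BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE ('max_job_slave_processes', '10');
END;
/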

See Oracle Database PL/SQL Packages and Types Reference for detailed information about
the SET_SCHEDULER_ATTRIBUTE procedure.

Monitoring and Managing the Scheduler
The following sections discuss how to monitor and manage the Scheduler:
■ How to View Scheduler Information
■ How to View the Currently Active Window and Resource Plan
■ How to View Scheduler Privileges
■ How to Find Information About Currently Running Jobs
■ How the Job Coordinator Works
■ How to Monitor and Manage Window and Job Logs
■ How to Manage Scheduler Privileges
■ How to Drop a Job
■ How to Drop a Running Job
■ Why Does a Job Fail to Run?
■ Job Recovery After a Failure
■ How to Change Job Priorities
■ How to Monitor Running Chains
■ Why Does a Program Become Disabled?
■ Why Does a Window Fail to Take Effect?
■ How the Scheduler Guarantees Availability
■ How to Handle Scheduler Security
■ How to Manage the Scheduler in a RAC Environment

How to View Scheduler Information
You can check Scheduler information by querying any of a number of views. An example is
the following, which shows information for completed instances of my_job1:
SELECT JOB_NAME, STATUS, ERROR#
FROM DBA_SCHEDULER_JOB_RUN_DETAILS WHERE JOB_NAME = 'MY_JOB1';
JOB_NAME        STATUS          ERROR#
--------------- --------------- ------
MY_JOB1         FAILURE          20000
Table 28–1 contains views associated with the Scheduler. The *_SCHEDULER_JOBS,
*_SCHEDULER_SCHEDULES, *_SCHEDULER_PROGRAMS, *_SCHEDULER_RUNNING_JOBS,
*_SCHEDULER_JOB_LOG, and *_SCHEDULER_JOB_RUN_DETAILS views are particularly
useful for managing jobs.
See Oracle Database Reference for details regarding Scheduler views.
Table 28–1  Scheduler Views

View                              Description
--------------------------------  ------------------------------------------------------
*_SCHEDULER_SCHEDULES             These views show all schedules.
*_SCHEDULER_PROGRAMS              These views show all programs.
*_SCHEDULER_PROGRAM_ARGS          These views show all arguments defined for all
                                  programs as well as the default values if they exist.
*_SCHEDULER_JOBS                  These views show all jobs, enabled as well as disabled.
*_SCHEDULER_RUNNING_CHAINS        These views show all chains that are running.
*_SCHEDULER_CHAIN_STEPS           These views show all steps for all chains.
*_SCHEDULER_CHAINS                These views show all chains.
*_SCHEDULER_CHAIN_RULES           These views show all rules for all chains.
*_SCHEDULER_GLOBAL_ATTRIBUTE      These views show the current values of Scheduler
                                  attributes.
*_SCHEDULER_JOB_ARGS              These views show all set argument values for all jobs.
*_SCHEDULER_JOB_CLASSES           These views show all job classes.
*_SCHEDULER_WINDOWS               These views show all windows.
*_SCHEDULER_JOB_RUN_DETAILS       These views show all completed (failed or successful)
                                  job runs.
*_SCHEDULER_WINDOW_GROUPS         These views show all window groups.
*_SCHEDULER_WINGROUP_MEMBERS      These views show the members of all window groups,
                                  one row for each group member.
*_SCHEDULER_RUNNING_JOBS          These views show state information on all jobs that
                                  are currently being run.
*_SCHEDULER_JOB_LOG               These views show all state changes made to jobs.
*_SCHEDULER_WINDOW_LOG            These views show all state changes made to windows.
*_SCHEDULER_WINDOW_DETAILS        These views show all completed window runs.

How to View the Currently Active Window and Resource Plan
You can view the currently active window and the plan associated with it by issuing
the following statement:
SELECT WINDOW_NAME, RESOURCE_PLAN FROM DBA_SCHEDULER_WINDOWS
WHERE ACTIVE='TRUE';
WINDOW_NAME                    RESOURCE_PLAN
------------------------------ --------------------------
MY_WINDOW10                    MY_RESOURCEPLAN1

If no window is active, you can view the active resource plan by issuing the
following statement:
SELECT * FROM V$RSRC_PLAN;

How to View Scheduler Privileges
You must have the MANAGE SCHEDULER privilege to administer the Scheduler.
Typically, database administrators have this privilege with the ADMIN option as part of
the DBA (or equivalent) role. You can check your current system privileges by issuing
the following statement:
SELECT * FROM SESSION_PRIVS;

If you do not have sufficient privileges, see "Task 1: Setting Scheduler Privileges" on
page 28-1, "How to Manage Scheduler Privileges" on page 28-14, and Oracle Database
Security Guide.


How to Find Information About Currently Running Jobs
You can check a job's state by issuing the following statement:
SELECT JOB_NAME, STATE FROM DBA_SCHEDULER_JOBS
WHERE JOB_NAME = 'MY_EMP_JOB1';
JOB_NAME                       STATE
------------------------------ ---------
MY_EMP_JOB1                    DISABLED

In this case, you could enable the job using the ENABLE procedure. Table 28–2 shows
the valid values for job state.
Table 28–2  Job States

Job State        Description
---------------  --------------------------------------------------------------
disabled         The job is disabled.
scheduled        The job is scheduled to be executed.
running          The job is currently running.
completed        The job has completed, and is not scheduled to run again.
stopped          The job was scheduled to run once and was stopped while it was
                 running.
broken           The job is broken.
failed           The job was scheduled to run once and failed.
retry scheduled  The job has failed at least once and a retry has been scheduled
                 to be executed.
succeeded        The job was scheduled to run once and completed successfully.
chain_stalled    The job is of type chain and has no steps running, no steps
                 scheduled to run, and no event steps waiting on an event, and
                 the chain evaluation_interval is set to NULL. No progress will
                 be made in the chain unless there is manual intervention.

You can check the progress of currently running jobs by issuing the following
statement:
SELECT * FROM ALL_SCHEDULER_RUNNING_JOBS;

Note that, for the column CPU_USED to show valid data, the initialization parameter
RESOURCE_LIMIT must be set to true.
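For example, you can enable this statistics collection dynamically with the following
statement (RESOURCE_LIMIT is a dynamic initialization parameter):

ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;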
You can find out information about a job that is part of a running chain by issuing the
following statement:
SELECT * FROM ALL_SCHEDULER_RUNNING_CHAINS WHERE JOB_NAME='MY_JOB1';

You can check whether the job coordinator is running by searching for a process of the
form cjqNNN.
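You can also perform this check from within the database. The following query is a
sketch that assumes the coordinator name (CJQ0) appears in the PROGRAM column of
V$PROCESS, as is typically the case for background processes:

SELECT PROGRAM FROM V$PROCESS WHERE PROGRAM LIKE '%CJQ%';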
See Also: Oracle Database Reference for details regarding the
*_SCHEDULER_RUNNING_JOBS and DBA_SCHEDULER_JOBS views


How the Job Coordinator Works
The job coordinator background process is automatically started and stopped on an
as-needed basis. By default, the coordinator is not up and running, but the
database monitors whether there are any jobs to be executed or windows to be
opened in the near future. If so, it starts the coordinator.
As long as there are jobs or windows running, the coordinator continues to be up.
Once there has been a certain period of Scheduler inactivity and there are no jobs or
windows scheduled in the near future, the coordinator will automatically be stopped.

Job Coordinator and Real Application Clusters
Each RAC instance has its own job coordinator. The database monitoring checks that
determine whether or not to start the job coordinator do take the service affinity of jobs
into account. For example, if there is only one job scheduled in the near future and the
job class to which this job belongs has service affinity for only two out of the four RAC
instances, only the job coordinators for those two instances will be started.

Using DBMS_SCHEDULER and DBMS_JOB at the Same Time
Even though Oracle recommends you switch from DBMS_JOB to DBMS_SCHEDULER,
DBMS_JOB is still supported for backward compatibility. Both Scheduler packages
share the same job coordinator, but DBMS_JOB does not have the auto start and stop
functionality. Instead, the job coordinator is controlled by the
JOB_QUEUE_PROCESSES initialization parameter. When JOB_QUEUE_PROCESSES is
set to 0, the coordinator is turned off and when it has a non-zero value it is turned on.
The JOB_QUEUE_PROCESSES initialization parameter is only used for DBMS_JOB.
When this parameter is set to a non-zero value, auto start and stop no longer apply
because the coordinator will always be up and running. In this case, the coordinator
will take care of execution of both DBMS_SCHEDULER and DBMS_JOB jobs.
If the initialization parameter is set to 0, or is not set at all, no DBMS_JOB jobs are
run. The auto start and stop feature, however, still applies to all DBMS_SCHEDULER jobs
and windows: if there is a DBMS_SCHEDULER job to be executed, the coordinator is
started and the job is executed, but DBMS_JOB jobs are still not run.
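
For example, the following statement turns the coordinator permanently on and enables
DBMS_JOB jobs (the value 10 is only illustrative; JOB_QUEUE_PROCESSES is a dynamic
initialization parameter):

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10;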

Scheduler Attribute max_job_slave_processes
The initialization parameter JOB_QUEUE_PROCESSES only applies to DBMS_JOB.
When DBMS_SCHEDULER is used, the coordinator automatically determines how
many job slaves to start based on CPU load and the number of outstanding jobs. In
special scenarios, a DBA can still limit the maximum number of slaves to be started by
the coordinator by setting the max_job_slave_processes attribute with the
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE procedure.
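
For example, the following sketch caps the coordinator at 10 job slaves. The value is
illustrative; the attribute value is passed as a string, and the valid range is 1
through 999:

BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('max_job_slave_processes', '10');
END;
/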

How to Monitor and Manage Window and Job Logs
Logs have a new entry for each event that occurs so that you can track historical
information. This is different from a queue, where you only want to track the latest
status for an item. There are logs for jobs, job runs, and windows.
Job activity is logged in the *_SCHEDULER_JOB_LOG views. Altering a job is logged
with an operation of UPDATE. Dropping a job is logged in these views with an
operation of DROP.


See Also: Oracle Database Reference for details on the *_SCHEDULER_JOB_LOG views
and other Scheduler log views

Job Logs
To see the contents of the job log, query the DBA_SCHEDULER_JOB_LOG view. An
example is the following statement, which shows what happened for past job runs:
SELECT JOB_NAME, OPERATION, OWNER FROM DBA_SCHEDULER_JOB_LOG;
JOB_NAME        OPERATION   OWNER
--------------- ----------- -----
MY_JOB13        CREATE      SYS
MY_JOB14        CREATE      OE
MY_NEW_JOB3     ENABLE      SYS
MY_EMP_JOB1     UPDATE      SYS
MY_JOB1         CREATE      SCOTT
MY_EMP_JOB1     UPDATE      SYS
MY_EMP_JOB      CREATE      SYS
MY_JOB14        RUN         OE
MY_JOB14        RETRY_RUN   OE
MY_JOB14        RETRY_RUN   OE
MY_JOB14        RETRY_RUN   OE
MY_JOB14        RETRY_RUN   OE
MY_JOB14        BROKEN      OE
MY_JOB14        DROP        OE

When logging_level for a job is set to LOGGING_FULL, the additional_info
column of the job log contains the before and after values of the modified attribute on
update operations, and contains the values of all attributes on drop operations. This
enables you to trace backwards from the current job state to the state of the job on
previous job runs.

Job Run Details
To further analyze each job run—why it failed, what the actual start time was, how
long the job ran, and so on—query the DBA_SCHEDULER_JOB_RUN_DETAILS view.
As an example, the following statement illustrates the status for my_job14:
select log_id, job_name, status,
to_char(log_date, 'DD-MON-YYYY HH24:MI') log_date
from dba_scheduler_job_run_details
where job_name = 'MY_JOB14';
    LOG_ID JOB_NAME                STATUS        LOG_DATE
---------- ----------------------- ------------- -----------------
        69 MY_JOB14                SUCCEEDED     02-JUN-2005 03:14
       124 MY_JOB14                SUCCEEDED     03-JUN-2005 03:15
       133 MY_JOB14                FAILURE       04-JUN-2005 03:00
       146 MY_JOB14                FAILURE       05-JUN-2005 03:01

For every row in the *_SCHEDULER_JOB_LOG views that is of operation RUN, RETRY_RUN, or
RECOVERY_RUN, there is a corresponding row in the *_SCHEDULER_JOB_RUN_DETAILS views
with the same LOG_ID. LOG_DATE contains the timestamp of the entry, so sorting by
LOG_DATE should give you a chronological picture of the life of a job.

Controlling Job Logging
You can control the amount of logging that the Scheduler performs on jobs at either a
class or job level. Normally, you will want to control jobs at a class level as this
offers a full audit trail. To do this, use the logging_level attribute in the
CREATE_JOB_CLASS procedure.
For each new class, the creator of the class must specify what the logging level is for all
jobs in that class. The three possible options are:
■ DBMS_SCHEDULER.LOGGING_OFF
  No logging will be performed for any jobs in this class.
■ DBMS_SCHEDULER.LOGGING_RUNS
  The Scheduler will write detailed information to the job log for all runs of each job
  in this class.
■ DBMS_SCHEDULER.LOGGING_FULL
  In addition to recording every run of a job, the Scheduler will record all operations
  performed on all jobs in this class. In other words, every time a job is created,
  enabled, disabled, altered, and so on, the operation will be recorded in the log.

By default, only job runs are recorded. For job classes that have very short and highly
frequent jobs, the overhead of recording every single run might be too much and you
might choose to turn the logging off. You might, however, prefer to have a complete
audit trail of everything that happened to the jobs in a specific class, in which case you
need to turn on full logging for that class.
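
For example, the following sketch creates a job class with full logging. The class name
is hypothetical, and all other CREATE_JOB_CLASS parameters are left at their defaults:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS (
    job_class_name => 'my_audited_class',
    logging_level  => DBMS_SCHEDULER.LOGGING_FULL,
    comments       => 'Job class that keeps a full audit trail');
END;
/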
The second way of controlling the logging level is on an individual job basis. You
should keep in mind, however, that the log in many cases is used as an audit trail, thus
if you want a certain level of logging, the individual job creator must not be able to
turn logging off. The class-specific level is, therefore, the minimum level at which job
information will be logged. A job creator can only turn on more logging for an
individual job, not less.
This functionality is provided for debugging purposes. For example, if the
class-specific level is set to record job runs and the job-specific logging is turned off,
the Scheduler will still log the runs. If, on the other hand, the job creator turns on full
logging and the class-specific level is set to record runs only, all operations on this
individual job will be logged. This way, an end user can test his job by turning on full
logging.
To set the logging level of an individual job, you must use the SET_ATTRIBUTE
procedure on that job. For example, to turn on full logging for a job called mytestjob,
issue the following statement:
DBMS_SCHEDULER.SET_ATTRIBUTE (
'mytestjob', 'logging_level', DBMS_SCHEDULER.LOGGING_FULL);

See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_JOB_CLASS and
SET_ATTRIBUTE procedures and "Task 2E: Setting Scheduler
Attributes" on page 28-5

Window Logs
A window log has an entry for each time you do the following:
■ Create a window
■ Drop a window
■ Open a window
■ Close a window
■ Overlap windows
■ Disable a window
■ Enable a window

There are no logging levels for window logging, but every window operation will
automatically be logged by the Scheduler.
To see the contents of the window log, query the DBA_SCHEDULER_WINDOW_LOG
view. The following statement shows sample output from this view:
SELECT LOG_ID, TO_CHAR(LOG_DATE, 'MM/DD/YYYY'), WINDOW_NAME, OPERATION
FROM DBA_SCHEDULER_WINDOW_LOG;
    LOG_ID TO_CHAR(LO WINDOW_NAME        OPERATION
---------- ---------- ------------------ ---------
         1 10/01/2004 WEEKNIGHT_WINDOW   CREATE
         2 10/01/2004 WEEKNIGHT_WINDOW   UPDATE
         3 10/01/2004 WEEKNIGHT_WINDOW   UPDATE
         4 10/01/2004 WEEKEND_WINDOW     CREATE
         5 10/01/2004 WEEKEND_WINDOW     UPDATE
         6 10/01/2004 WEEKEND_WINDOW     UPDATE
        22 10/06/2004 WEEKNIGHT_WINDOW   OPEN
        25 10/06/2004 WEEKNIGHT_WINDOW   CLOSE
        26 10/06/2004 WEEKNIGHT_WINDOW   OPEN
        29 10/06/2004 WEEKNIGHT_WINDOW   CLOSE

The DBA_SCHEDULER_WINDOW_DETAILS view provides information about every
window that was active and is now closed (completed). The following statement
shows sample output from that view:
SELECT LOG_ID, WINDOW_NAME, ACTUAL_START_DATE, ACTUAL_DURATION
FROM DBA_SCHEDULER_WINDOW_DETAILS;
    LOG_ID WINDOW_NAME       ACTUAL_START_DATE                     ACTUAL_DURATI
---------- ----------------- ------------------------------------- -------------
        25 WEEKNIGHT_WINDOW  06-OCT-04 03.12.48.832438 PM PST8PDT  +000 01:02:32
        29 WEEKNIGHT_WINDOW  06-OCT-04 06.19.37.025704 PM PST8PDT  +000 03:02:00

Notice that log IDs correspond in both of these views, and that in this case the rows in
the DBA_SCHEDULER_WINDOW_DETAILS view correspond to the CLOSE operations
in the DBA_SCHEDULER_WINDOW_LOG view.

Purging Logs
To prevent job and window logs from growing indiscriminately, use the
SET_SCHEDULER_ATTRIBUTE procedure to specify how much history (in days) to
keep. Once per day, the Scheduler automatically purges all log entries that are older
than the specified history period from both the job log and the window log. The
default history period is 30 days. For example, to change the history period to 90 days,
issue the following statement:
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history','90');

Some job classes are more important than others. Because of this, you can override this
global history setting by using a class-specific setting. For example, suppose that there
are three job classes (class1, class2, and class3), and that you want to keep 10
days of history for the window log, class1, and class3, but 30 days for class2. To
achieve this, issue the following statements:

DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history','10');
DBMS_SCHEDULER.SET_ATTRIBUTE('class2','log_history','30');

You can also set the class-specific history when creating the job class.
Note that log entries pertaining to steps of a chain run are not purged until the entries
for the main chain job are purged.
Purging Logs Manually
The PURGE_LOG procedure enables you to manually purge logs. As an example, the
following statement purges all entries from both the job and window logs:
DBMS_SCHEDULER.PURGE_LOG();

Another example is the following, which purges all entries from the job log that are
older than three days. The window log is not affected by this statement.
DBMS_SCHEDULER.PURGE_LOG(log_history => 3, which_log => 'JOB_LOG');

The following statement purges all window log entries older than 10 days and all job
log entries older than 10 days that relate to job1 and to the jobs in class2:
DBMS_SCHEDULER.PURGE_LOG(log_history => 10, job_name => 'job1, sys.class2');

How to Manage Scheduler Privileges
You should have the SCHEDULER_ADMIN role to administer the Scheduler. Typically,
database administrators will already have this role with the ADMIN option as part of
the DBA (or equivalent) role. You can grant this role to another administrator by
issuing the following statement:
GRANT SCHEDULER_ADMIN TO username;

Because the SCHEDULER_ADMIN role is a powerful role allowing a grantee to execute
code as any user, you should consider granting individual Scheduler system privileges
instead. Both system and object privileges are granted using regular SQL grant syntax.
An example is for the database administrator to issue the following statement:
GRANT CREATE JOB TO scott;

After this statement is executed, scott can create jobs, schedules, or programs in his
schema. Another example is to grant an object privilege, as in the following statement:
GRANT ALTER ON myjob1 TO scott;

After this statement is executed, scott can execute, alter, or copy myjob1. See Oracle
Database SQL Reference for system and object privilege details and Oracle Database
Security Guide for general information.
An alternative to the SCHEDULER_ADMIN role for administering the Scheduler is to use
the MANAGE SCHEDULER privilege, which is recommended for managing resources. As
an example of granting this privilege to adam, issue the following statement:
GRANT MANAGE SCHEDULER TO adam;

After this statement is executed, adam can create, alter, or drop windows, job classes,
or window groups. He will also be able to set and retrieve Scheduler attributes and
purge Scheduler logs.
If users will create chains in their own schemas, they need to have the CREATE JOB
privilege in addition to the Rules Engine privileges required to create rules, rule sets,


and evaluation contexts in their own schemas. If users will create chains in different
schemas, they must have the CREATE ANY JOB privilege in addition to the privileges
required to create rules, rule sets, and evaluation contexts in schemas other than their
own. See "Setting Chain Privileges" on page 28-2 for the statements necessary to enable
users to create chains and "Chain Tasks and Their Procedures" on page 27-37 for more
information regarding chain privileges.

Types of Privileges and Their Descriptions
The following privileges are important when using the Scheduler.
Table 28–3  Scheduler Privileges

Privilege Name        Operations Authorized
--------------------  ---------------------------------------------------------

System Privileges:

CREATE JOB            This privilege enables you to create jobs, chains,
                      schedules, and programs in your own schema. You will
                      always be able to alter and drop jobs, schedules, and
                      programs in your own schema, even if you do not have the
                      CREATE JOB privilege. In this case, the job must have been
                      created in your schema by another user with the CREATE ANY
                      JOB privilege.

CREATE ANY JOB        This privilege enables you to create, alter, and drop
                      jobs, chains, schedules, and programs in any schema except
                      SYS. This privilege is very powerful and should be used
                      with care because it allows the grantee to execute code as
                      any other user.

CREATE EXTERNAL JOB   This privilege is required to create jobs that run outside
                      of the database. Owners of jobs of type 'EXECUTABLE' or
                      jobs that point to programs of type 'EXECUTABLE' require
                      this privilege. To run a job of type 'EXECUTABLE', you
                      must have this privilege and the CREATE JOB privilege.

EXECUTE ANY PROGRAM   This privilege enables your jobs to use programs or chains
                      from any schema.

EXECUTE ANY CLASS     This privilege enables your jobs to run under any job
                      class.

MANAGE SCHEDULER      This is the most important privilege for administering the
                      Scheduler. It enables you to create, alter, and drop job
                      classes, windows, and window groups. It also enables you
                      to set and retrieve Scheduler attributes and purge
                      Scheduler logs.

Object Privileges:

EXECUTE               This privilege can only be granted for programs, chains,
                      and job classes. It enables you to create a job that runs
                      with the program, chain, or job class. It also enables you
                      to view object attributes.

ALTER                 This privilege enables you to alter or drop the object it
                      is granted on. Altering includes such operations as
                      enabling, disabling, defining or dropping program
                      arguments, setting or resetting job argument values, and
                      running a job. For programs, jobs, and chains, this
                      privilege enables you to view object attributes. This
                      privilege can only be granted on jobs, chains, programs,
                      and schedules. For other types of Scheduler objects, you
                      can grant the MANAGE SCHEDULER system privilege. This
                      privilege can be granted for:
                      jobs (DROP_JOB, RUN_JOB, ALTER_RUNNING_CHAIN,
                      SET_JOB_ARGUMENT_VALUE, RESET_JOB_ARGUMENT_VALUE,
                      SET_JOB_ANYDATA_VALUE, and STOP_JOB without the force
                      option)
                      chains (DROP_CHAIN, ALTER_CHAIN, DEFINE_CHAIN_RULE,
                      DEFINE_CHAIN_STEP, DEFINE_CHAIN_EVENT_STEP,
                      DROP_CHAIN_RULE, and DROP_CHAIN_STEP)
                      programs (DROP_PROGRAM, DEFINE_PROGRAM_ARGUMENT,
                      DEFINE_ANYDATA_ARGUMENT, DEFINE_METADATA_ARGUMENT,
                      DROP_PROGRAM_ARGUMENT, SET_ATTRIBUTE_NULL)
                      schedules (DROP_SCHEDULE)

ALL                   This privilege authorizes operations allowed by all other
                      object privileges possible for a given object. It can be
                      granted on jobs, programs, chains, schedules, and job
                      classes.

The SCHEDULER_ADMIN role is created with all of the system privileges shown in
Table 28–3 (with the ADMIN option). The SCHEDULER_ADMIN role is granted to DBA
(with the ADMIN option).
The following object privileges are granted to PUBLIC: SELECT ALL_SCHEDULER_*
views, SELECT USER_SCHEDULER_* views, SELECT
SYS.SCHEDULER$_JOBSUFFIX_S (for generating a job name), and EXECUTE
SYS.DEFAULT_JOB_CLASS.

How to Drop a Job
You can remove a job from the database by calling the DROP_JOB procedure, as in the
following example:
BEGIN
  DBMS_SCHEDULER.DROP_JOB (
    job_name => 'my_job1');
END;
/

See Also: Oracle Database PL/SQL Packages and Types Reference for
DROP_JOB procedure syntax

How to Drop a Running Job
You can delete a running job by issuing the DROP_JOB procedure with the force
option. For example, the following statement forces the deletion of my_job1:
BEGIN
  DBMS_SCHEDULER.DROP_JOB (
    job_name => 'my_job1',
    force    => TRUE);
END;
/

Note that this statement fails if my_job1 is running and you do not use the force
option. If the force option is specified, the Scheduler tries to stop the job by using
an interrupt mechanism, which is equivalent to calling STOP_JOB without the force
option first. Alternatively, you can call STOP_JOB to first stop the job and then call
DROP_JOB to drop it. If you have the MANAGE SCHEDULER privilege, you can call STOP_JOB
with the force option if the regular STOP_JOB call failed to stop the job, and then
call DROP_JOB.
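
For example, a minimal sketch of this two-step approach, reusing my_job1 from the
preceding example (STOP_JOB with the force option requires the MANAGE SCHEDULER
privilege, as noted above):

BEGIN
  DBMS_SCHEDULER.STOP_JOB (job_name => 'my_job1', force => TRUE);
  DBMS_SCHEDULER.DROP_JOB (job_name => 'my_job1');
END;
/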

Why Does a Job Fail to Run?
A job may fail to run for several reasons. Before troubleshooting a job that you suspect
did not run, check that the job is not running by issuing the following statement:
SELECT JOB_NAME, STATE FROM DBA_SCHEDULER_JOBS;

Typical output will resemble the following:
JOB_NAME                       STATE
------------------------------ ---------
MY_EMP_JOB                     DISABLED
MY_EMP_JOB1                    FAILED
MY_NEW_JOB1                    DISABLED
MY_NEW_JOB2                    BROKEN
MY_NEW_JOB3                    COMPLETED

There are four types of jobs that are not running:
■ Failed Jobs
■ Broken Jobs
■ Disabled Jobs
■ Completed Jobs

See Also: "Job Recovery After a Failure" on page 28-18

Failed Jobs
If a job has the status of FAILED in the job table, it was scheduled to run once but the
execution has failed. If the job was specified as restartable, all retries have failed.
If a job fails in the middle of execution, only the last transaction of that job is rolled
back. If your job executes multiple transactions, you need to be careful about setting
restartable to TRUE. You can query failed jobs by querying the
*_SCHEDULER_JOB_RUN_DETAILS views.

Broken Jobs
A broken job is one that has exceeded a certain number of failures. This number is set
in max_failures, and can be altered. In the case of a broken job, the entire job is
broken, and it will not be run until it has been fixed. For debugging and testing, you
can use the RUN_JOB procedure.
You can query broken jobs by querying the *_SCHEDULER_JOBS and
*_SCHEDULER_JOB_LOG views.
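
For example, the following sketch manually runs the hypothetical broken job my_job14 in
the current session, which makes debugging easier because errors are reported directly
to the session:

BEGIN
  DBMS_SCHEDULER.RUN_JOB (job_name => 'my_job14', use_current_session => TRUE);
END;
/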


Disabled Jobs
A job can become disabled for the following reasons:
■ The job was manually disabled
■ The job class it belongs to was dropped
■ The program, chain, or schedule that it points to was dropped
■ A window or window group is its schedule, and the window or window group is dropped

Completed Jobs
A job will be completed if end_date or max_runs is reached. (If a job recently
completed successfully but is scheduled to run again, the job state is SCHEDULED.)

Job Recovery After a Failure
The Scheduler attempts to recover jobs that are interrupted when:
■ The database abnormally shuts down
■ A job slave process is killed or otherwise fails
■ For an external job, the external job process that starts the executable or script
  is killed or otherwise fails. (The external job process is extjob on UNIX. On
  Windows, it is the external job service.)
■ For an external job, the process that runs the end-user executable or script is
  killed or otherwise fails.

Job recovery proceeds as follows:
■ The Scheduler adds an entry to the job log for the instance of the job that was
  running when the failure occurred. In the log entry, the OPERATION is 'RUN', the
  STATUS is 'STOPPED', and ADDITIONAL_INFO contains one of the following:
  – REASON="Job slave process was terminated"
  – REASON="ORA-01014: ORACLE shutdown in progress"
■ If restartable is set to TRUE for the job, the job is restarted.
■ If restartable is set to FALSE for the job:
  – If the job is a run-once job and auto_drop is set to TRUE, the job run is done
    and the job is dropped.
  – If the job is a run-once job and auto_drop is set to FALSE, the job is disabled
    and the job state is set to 'STOPPED'.
  – If the job is a repeating job, the Scheduler schedules the next job run and the
    job state is set to 'SCHEDULED'.

When a job is restarted as a result of this recovery process, the new run is entered
into the job log with the operation 'RECOVERY_RUN'.
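
To see whether any such recovery runs have occurred, you can query the job log for that
operation. A minimal sketch, assuming the STATUS and LOG_DATE columns of
DBA_SCHEDULER_JOB_LOG shown in the log examples earlier in this chapter:

SELECT JOB_NAME, OPERATION, STATUS, LOG_DATE
  FROM DBA_SCHEDULER_JOB_LOG
 WHERE OPERATION = 'RECOVERY_RUN';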

How to Change Job Priorities
You can change job priorities by using the SET_ATTRIBUTE procedure. Job priorities
must be in the range 1-5, with 1 being the highest priority. For example, the following
statement changes the job priority for my_emp_job1 to a setting of 1:
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name      => 'my_emp_job1',
    attribute => 'job_priority',
    value     => 1);
END;
/

You can verify that the attribute was changed by issuing the following statement:
SELECT JOB_NAME, JOB_PRIORITY FROM DBA_SCHEDULER_JOBS;
JOB_NAME                       JOB_PRIORITY
------------------------------ ------------
MY_EMP_JOB                                3
MY_EMP_JOB1                               1
MY_NEW_JOB1                               3
MY_NEW_JOB2                               3
MY_NEW_JOB3                               3

See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the SET_ATTRIBUTE procedure

How to Monitor Running Chains
You can check the state of a running chain by querying the
*_SCHEDULER_RUNNING_CHAINS views. The results contain a row describing the
current state of every step in every running instance of a chain. For example, the
following statement displays the state of all steps in the running job MY_CHAIN_JOB.
It also shows the state of all steps of any nested chain jobs that are running or have
completed.
SELECT * FROM USER_SCHEDULER_RUNNING_CHAINS WHERE JOB_NAME = 'MY_CHAIN_JOB';

See "Using Chains" on page 27-37 for more information regarding chains.

Why Does a Program Become Disabled?
A program can become disabled if a program argument is dropped or
number_of_arguments is changed so that all arguments are no longer defined.
See "Using Programs" on page 27-9 for more information regarding programs.

Why Does a Window Fail to Take Effect?
A window can fail to take effect for the following reasons:
■ A window becomes disabled when it is at the end of its schedule
■ A window that points to a schedule that no longer exists is disabled

See "Using Windows" on page 27-19 for more information regarding windows.

How the Scheduler Guarantees Availability
You do not need to perform any special operations for the Scheduler in the event of a
system or slave process failure.


How to Handle Scheduler Security
You should grant the CREATE JOB system privilege to regular users who need to be
able to use the Scheduler to schedule and run jobs. You should grant MANAGE
SCHEDULER to any database administrator who needs to be able to manage system
resources. Granting any other Scheduler system privilege or role should not be done
without great caution. In particular, the CREATE ANY JOB system privilege and the
SCHEDULER_ADMIN role, which includes it, are very powerful because they allow
execution of code as any user. They should only be granted to very powerful roles or
users.
A particularly important issue from a security point of view is handling external jobs.
Only users that need to run jobs outside of the database should be allowed to do so.
You must grant the CREATE EXTERNAL JOB system privilege to those users. See
"Running External Jobs" on page 27-6 for further information. Security for the
Scheduler has no other special requirements. See Oracle Database Security Guide for
details regarding security.
Note: When upgrading from Oracle Database 10g Release 1 to 10g Release 2, CREATE
EXTERNAL JOB is automatically granted to all users and roles that have the CREATE JOB
privilege. Oracle recommends that you revoke this privilege from users that don't
need it.

How to Manage the Scheduler in a RAC Environment
There are no special requirements for using the Scheduler in a RAC environment.

Import/Export and the Scheduler
You must use the Data Pump utilities (impdp and expdp) to export Scheduler objects.
You cannot use the earlier import/export utilities with the Scheduler. Also, Scheduler
objects cannot be exported while the database is in read-only mode.
An export generates the DDL that was used to create the Scheduler objects. All
attributes are exported. When an import is done, all the database objects are recreated
in the new database. All schedules are stored with their time zones, which are
maintained in the new database. For example, schedule "Monday at 1 PM PST in a
database in San Francisco" would be the same if it was exported and imported to a
database in Germany.
See Also: Oracle Database Utilities for details regarding import and
export

Examples of Using the Scheduler
This section discusses the following topics:
■ Examples of Creating Jobs
■ Examples of Creating Job Classes
■ Examples of Creating Programs
■ Examples of Creating Windows
■ Example of Creating Window Groups
■ Examples of Setting Attributes
■ Examples of Creating Chains
■ Examples of Creating Jobs and Schedules Based on Events

Examples of Creating Jobs
This section contains several examples of creating jobs. To create a job, you use the
CREATE_JOB procedure.
Example 28–1 Creating a Job

The following statement creates a job called my_job1 in the oe schema:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'oe.my_job1',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_TABLE_STATS(''oe'', ''sales''); END;',
    start_date      => '15-JUL-03 1.00.00AM US/Pacific',
    repeat_interval => 'FREQ=DAILY',
    end_date        => '15-SEP-03 1.00.00AM US/Pacific',
    enabled         => TRUE,
    comments        => 'Gather table statistics');
END;
/

This job gathers table statistics on the sales table. It will run for the first time on July
15th and then once a day until September 15. To verify that the job was created, issue
the following statement:
SELECT JOB_NAME FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'MY_JOB1';
JOB_NAME
------------------------------
MY_JOB1

See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_JOB procedure and
"Creating Jobs" on page 27-2 for further information

Examples of Creating Job Classes
This section contains several examples of creating job classes. To create a job class, you
use the CREATE_JOB_CLASS procedure.
Example 28–2 Creating a Job Class

The following statement creates a job class:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS (
    job_class_name => 'my_class1',
    service        => 'my_service1',
    comments       => 'This is my first job class');
END;
/

This creates my_class1 in SYS. It uses a service called my_service1. To verify that
the job class was created, issue the following statement:
SELECT JOB_CLASS_NAME FROM DBA_SCHEDULER_JOB_CLASSES
WHERE JOB_CLASS_NAME = 'MY_CLASS1';


JOB_CLASS_NAME
------------------------------
MY_CLASS1
Example 28–3 Creating a Job Class

The following statement creates a job class:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS (
    job_class_name          => 'finance_jobs',
    resource_consumer_group => 'finance_group',
    comments                => 'My financial class');
END;
/

This creates finance_jobs in SYS. It uses a resource consumer group called
finance_group.
See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_JOB_CLASS procedure
and "Creating Job Classes" on page 27-19 for further information

Examples of Creating Programs
This section contains several examples of creating programs. To create a program, you
use the CREATE_PROGRAM procedure.
Example 28–4 Creating a Program

The following statement creates a program in the oe schema:

BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM (
    program_name        => 'oe.my_program1',
    program_type        => 'PLSQL_BLOCK',
    program_action      => 'BEGIN DBMS_STATS.GATHER_TABLE_STATS(''oe'', ''sales''); END;',
    number_of_arguments => 0,
    enabled             => TRUE,
    comments            => 'My comments here');
END;
/

This creates my_program1, which uses PL/SQL to gather table statistics on the sales
table. To verify that the program was created, issue the following statement:
SELECT PROGRAM_NAME FROM DBA_SCHEDULER_PROGRAMS
WHERE PROGRAM_NAME = 'MY_PROGRAM1';
PROGRAM_NAME
-------------------------
MY_PROGRAM1
Example 28–5 Creating a Program

The following statement creates a program in the oe schema:

BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM (
    program_name   => 'oe.my_saved_program1',
    program_action => '/usr/local/bin/date',
    program_type   => 'EXECUTABLE',
    comments       => 'My comments here');
END;
/

This creates my_saved_program1, which uses an executable.
See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_PROGRAM procedure and
"Creating Programs" on page 27-9 for further information

Examples of Creating Windows
This section contains several examples of creating windows. To create a window, you
use the CREATE_WINDOW procedure.
Example 28–6 Creating a Window

The following statement creates a window called my_window1 in SYS:

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW (
    window_name     => 'my_window1',
    resource_plan   => 'my_res_plan1',
    start_date      => '15-JUL-03 1.00.00AM US/Pacific',
    repeat_interval => 'FREQ=DAILY',
    end_date        => '15-SEP-03 1.00.00AM US/Pacific',
    duration        => interval '80' MINUTE,
    comments        => 'This is my first window');
END;
/

This window will open once a day at 1AM for 80 minutes every day from July 15th to
September 15th. To verify that the window was created, issue the following statement:
SELECT WINDOW_NAME FROM DBA_SCHEDULER_WINDOWS WHERE WINDOW_NAME = 'MY_WINDOW1';
WINDOW_NAME
------------------------------
MY_WINDOW1
Example 28–7 Creating a Window

The following statement creates a window called my_window2 in SYS:

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW (
    window_name   => 'my_window2',
    schedule_name => 'my_stats_schedule',
    resource_plan => 'my_resourceplan1',
    duration      => interval '60' minute,
    comments      => 'My window');
END;
/

See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_WINDOW procedure and
"Creating Windows" on page 27-21 for further information


Example of Creating Window Groups
This section contains an example of creating a window group. To create a window
group, you use the CREATE_WINDOW_GROUP procedure.
Example 28–8 Creating a Window Group

The following statement creates a window group called my_window_group1:

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW_GROUP ('my_window_group1');
END;
/

Then, you could add three windows (my_window1, my_window2, and my_window3)
to my_window_group1 by issuing the following statements:
BEGIN
  DBMS_SCHEDULER.ADD_WINDOW_GROUP_MEMBER (
    group_name  => 'my_window_group1',
    window_list => 'my_window1, my_window2');
  DBMS_SCHEDULER.ADD_WINDOW_GROUP_MEMBER (
    group_name  => 'my_window_group1',
    window_list => 'my_window3');
END;
/

To verify that the window group was created and the windows added to it, issue the
following statement:
SELECT * FROM DBA_SCHEDULER_WINDOW_GROUPS;
WINDOW_GROUP_NAME  ENABLED  NUMBER_OF_WINDOWS  COMMENTS
-----------------  -------  -----------------  -----------------------------
MY_WINDOW_GROUP1   TRUE                     3  This is my first window group

See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_WINDOW_GROUP and
ADD_WINDOW_GROUP_MEMBER procedures and "Creating Window
Groups" on page 27-28 for further information

Examples of Setting Attributes
This section contains several examples of setting attributes. To set attributes, you
use the SET_ATTRIBUTE and SET_SCHEDULER_ATTRIBUTE procedures.
Example 28–9 Setting the Repeat Interval Attribute

The following example resets the frequency at which my_emp_job1 runs to daily:

BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name      => 'my_emp_job1',
    attribute => 'repeat_interval',
    value     => 'FREQ=DAILY');
END;
/

To verify the change, issue the following statement:
SELECT JOB_NAME, REPEAT_INTERVAL FROM DBA_SCHEDULER_JOBS
WHERE JOB_NAME = 'MY_EMP_JOB1';

JOB_NAME         REPEAT_INTERVAL
---------------- ---------------
MY_EMP_JOB1      FREQ=DAILY

Example 28–10 Setting the Comments Attribute

The following example resets the comments for my_saved_program1:
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name      => 'my_saved_program1',
    attribute => 'comments',
    value     => 'For nightly table stats');
END;
/

To verify the change, issue the following statement:
SELECT PROGRAM_NAME, COMMENTS FROM DBA_SCHEDULER_PROGRAMS;
PROGRAM_NAME       COMMENTS
-----------------  -----------------------
MY_PROGRAM1        My comments here
MY_SAVED_PROGRAM1  For nightly table stats

Example 28–11 Setting the Duration Attribute

The following example resets the duration of my_window3 to 90 minutes:
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name      => 'my_window3',
    attribute => 'duration',
    value     => interval '90' minute);
END;
/

To verify the change, issue the following statement:
SELECT WINDOW_NAME, DURATION FROM DBA_SCHEDULER_WINDOWS
WHERE WINDOW_NAME = 'MY_WINDOW3';
WINDOW_NAME  DURATION
-----------  -------------
MY_WINDOW3   +000 00:90:00

Example 28–12 Setting the Event Expiration Attribute

The following example sets the event expiry time to 3600 seconds:

BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE (
    attribute => 'event_expiry_time',
    value     => '3600');
END;
/


See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the SET_SCHEDULER_ATTRIBUTE
procedure and "Task 2E: Setting Scheduler Attributes" on page 28-5

Examples of Creating Chains
This section contains examples of creating chains. To create chains, you use the
CREATE_CHAIN procedure. After creating a chain, you add steps to the chain with the
DEFINE_CHAIN_STEP procedure and define the rules with the DEFINE_CHAIN_RULE
procedure.
Example 28–13 Creating a Chain

The following example creates a chain where my_program1 runs before
my_program2 and my_program3. my_program2 and my_program3 run in parallel
after my_program1 has completed.
BEGIN
  DBMS_SCHEDULER.CREATE_CHAIN (
    chain_name          => 'my_chain1',
    rule_set_name       => NULL,
    evaluation_interval => NULL,
    comments            => NULL);
END;
/
--- define three steps for this chain.
BEGIN
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain1', 'step1', 'my_program1');
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain1', 'step2', 'my_program2');
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain1', 'step3', 'my_program3');
END;
/
--- define corresponding rules for the chain.
BEGIN
DBMS_SCHEDULER.DEFINE_CHAIN_RULE('my_chain1', 'TRUE', 'START step1');
DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
'my_chain1', 'step1 COMPLETED', 'Start step2, step3');
DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
'my_chain1', 'step2 COMPLETED AND step3 COMPLETED', 'END');
END;
/
Example 28–14 Creating a Chain

The following example creates a chain where first my_program1 runs. If it succeeds,
my_program2 runs; otherwise, my_program3 runs.
BEGIN
  DBMS_SCHEDULER.CREATE_CHAIN (
    chain_name          => 'my_chain2',
    rule_set_name       => NULL,
    evaluation_interval => NULL,
    comments            => NULL);
END;
/

--- define three steps for this chain.
BEGIN

DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain2', 'step1', 'my_program1');
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain2', 'step2', 'my_program2');
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain2', 'step3', 'my_program3');
END;
/
--- define corresponding rules for the chain.
BEGIN
DBMS_SCHEDULER.DEFINE_CHAIN_RULE ('my_chain2', 'TRUE', 'START step1');
DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
'my_chain2', 'step1 SUCCEEDED', 'Start step2');
DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
'my_chain2', 'step1 COMPLETED AND step1 NOT SUCCEEDED', 'Start step3');
DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
'my_chain2', 'step2 COMPLETED OR step3 COMPLETED', 'END');
END;
/

See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_CHAIN,
DEFINE_CHAIN_STEP, and DEFINE_CHAIN_RULE procedures and
"Task 2E: Setting Scheduler Attributes" on page 28-5

Examples of Creating Jobs and Schedules Based on Events
This section contains examples of creating event-based jobs and event schedules. To
create event-based jobs, you use the CREATE_JOB procedure. To create event-based
schedules, you use the CREATE_EVENT_SCHEDULE procedure.
These examples assume the existence of an application that, when it detects the arrival
of a file on a system, enqueues an event onto the queue my_events_q.
Example 28–15 Creating an Event-Based Schedule

The following example illustrates creating a schedule that can be used to start a job
whenever the Scheduler receives an event indicating that a file arrived on the system
before 9AM:
BEGIN
  DBMS_SCHEDULER.CREATE_EVENT_SCHEDULE (
    schedule_name   => 'scott.file_arrival',
    start_date      => systimestamp,
    event_condition => 'tab.user_data.object_owner = ''SCOTT'' and
      tab.user_data.event_name = ''FILE_ARRIVAL'' and
      extract(hour from tab.user_data.event_timestamp) < 9',
    queue_spec      => 'my_events_q');
END;
/
Example 28–16 Creating an Event-Based Job

The following example creates a job that starts when the Scheduler receives an event
indicating that a file arrived on the system:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'my_job',
    program_name    => 'my_program',
    start_date      => '15-JUL-04 1.00.00AM US/Pacific',
    event_condition => 'tab.user_data.event_name = ''FILE_ARRIVAL''',
    queue_spec      => 'my_events_q',
    enabled         => TRUE,
    comments        => 'my event-based job');
END;
/

See Also: Oracle Database PL/SQL Packages and Types Reference for
detailed information about the CREATE_JOB and
CREATE_EVENT_SCHEDULE procedures


Part VII
Distributed Database Management
Part VII discusses the management of a distributed database environment. It contains
the following chapters:
■ Chapter 29, "Distributed Database Concepts"
■ Chapter 30, "Managing a Distributed Database"
■ Chapter 31, "Developing Applications for a Distributed Database System"
■ Chapter 32, "Distributed Transactions Concepts"
■ Chapter 33, "Managing Distributed Transactions"

29
Distributed Database Concepts
This chapter describes the basic concepts and terminology of Oracle Database
distributed database architecture. It contains the following topics:
■ Distributed Database Architecture
■ Database Links
■ Distributed Database Administration
■ Transaction Processing in a Distributed System
■ Distributed Database Application Development
■ Character Set Support for Distributed Environments

Distributed Database Architecture
A distributed database system allows applications to access data from local and
remote databases. In a homogenous distributed database system, each database is an
Oracle Database. In a heterogeneous distributed database system, at least one of the
databases is not an Oracle Database. Distributed databases use a client/server
architecture to process information requests.
This section contains the following topics:
■ Homogenous Distributed Database Systems
■ Heterogeneous Distributed Database Systems
■ Client/Server Database Architecture

Homogenous Distributed Database Systems
A homogenous distributed database system is a network of two or more Oracle
Databases that reside on one or more machines. Figure 29–1 illustrates a distributed
system that connects three databases: hq, mfg, and sales. An application can
simultaneously access or modify the data in several databases in a single distributed
environment. For example, a single query from a Manufacturing client on local
database mfg can retrieve joined data from the products table on the local database
and the dept table on the remote hq database.
For a client application, the location and platform of the databases are transparent. You
can also create synonyms for remote objects in the distributed system so that users can
access them with the same syntax as local objects. For example, if you are connected to
database mfg but want to access data on database hq, creating a synonym on mfg for
the remote dept table enables you to issue this query:


SELECT * FROM dept;

In this way, a distributed system gives the appearance of native data access. Users on
mfg do not have to know that the data they access resides on remote databases.
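
For example, a minimal sketch of creating such a synonym on mfg, assuming that a
database link named hq.acme.com to the hq database already exists (creating links is
discussed under "Creation of Database Links: Examples"):

CREATE PUBLIC SYNONYM dept FOR dept@hq.acme.com;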
Figure 29–1 Homogeneous Distributed Database

[Figure: a single distributed database connecting three Oracle databases: MFG.ACME.COM
at the Manufacturing site, HQ.ACME.COM at the Headquarters site, and SALES.ACME.COM at
the Sales site.]

An Oracle Database distributed database system can incorporate Oracle Databases of
different versions. All supported releases of Oracle Database can participate in a
distributed database system. Nevertheless, the applications that work with the
distributed database must understand the functionality that is available at each node
in the system. A distributed database application cannot expect an Oracle7 database to
understand the SQL extensions that are only available with Oracle Database.

Distributed Databases Versus Distributed Processing
The terms distributed database and distributed processing are closely related, yet
have distinct meanings. Their definitions are as follows:
■ Distributed database
  A set of databases in a distributed system that can appear to applications as a
  single data source.
■ Distributed processing
  The operations that occur when an application distributes its tasks among different
  computers in a network. For example, a database application typically distributes
  front-end presentation tasks to client computers and allows a back-end database
  server to manage shared access to a database. Consequently, a distributed database
  application processing system is more commonly referred to as a client/server
  database application system.
Distributed database systems employ a distributed processing architecture. For
example, an Oracle Database server acts as a client when it requests data that another
Oracle Database server manages.

Distributed Databases Versus Replicated Databases
The terms distributed database system and database replication are related, yet
distinct. In a pure (that is, not replicated) distributed database, the system manages a
single copy of all data and supporting database objects. Typically, distributed database
applications use distributed transactions to access both local and remote data and
modify the global database in real-time.
Note: This book discusses only pure distributed databases.

The term replication refers to the operation of copying and maintaining database
objects in multiple databases belonging to a distributed system. While replication
relies on distributed database technology, database replication offers applications
benefits that are not possible within a pure distributed database environment.
Most commonly, replication is used to improve local database performance and
protect the availability of applications because alternate data access options exist. For
example, an application may normally access a local database rather than a remote
server to minimize network traffic and achieve maximum performance. Furthermore,
the application can continue to function if the local server experiences a failure,
as long as other servers with replicated data remain accessible.
See Also:
■ Oracle Database Advanced Replication for more information about Oracle Database
  replication features
■ Oracle Streams Concepts and Administration for information about Oracle Streams,
  another method of sharing information between databases

Heterogeneous Distributed Database Systems
In a heterogeneous distributed database system, at least one of the databases is a
non-Oracle Database system. To the application, the heterogeneous distributed
database system appears as a single, local, Oracle Database. The local Oracle Database
server hides the distribution and heterogeneity of the data.
The Oracle Database server accesses the non-Oracle Database system using Oracle
Heterogeneous Services in conjunction with an agent. If you access the non-Oracle
Database data store using an Oracle Transparent Gateway, then the agent is a
system-specific application. For example, if you include a Sybase database in an Oracle
Database distributed system, then you need to obtain a Sybase-specific transparent
gateway so that the Oracle Database in the system can communicate with it.
Alternatively, you can use generic connectivity to access non-Oracle Database data
stores so long as the non-Oracle Database system supports the ODBC or OLE DB
protocols.


Note: Other than the introductory material presented in this chapter, this book does
not discuss Oracle Heterogeneous Services. See Oracle Database Heterogeneous
Connectivity Administrator's Guide for more detailed information about Heterogeneous
Services.

Heterogeneous Services
Heterogeneous Services (HS) is an integrated component within the Oracle Database
server and the enabling technology for the current suite of Oracle Transparent
Gateway products. HS provides the common architecture and administration
mechanisms for Oracle Database gateway products and other heterogeneous access
facilities. Also, it provides upwardly compatible functionality for users of most of the
earlier Oracle Transparent Gateway releases.

Transparent Gateway Agents
For each non-Oracle Database system that you access, Heterogeneous Services can use
a transparent gateway agent to interface with the specified non-Oracle Database
system. The agent is specific to the non-Oracle Database system, so each type of
system requires a different agent.
The transparent gateway agent facilitates communication between Oracle Database
and non-Oracle Database systems and uses the Heterogeneous Services component in
the Oracle Database server. The agent executes SQL and transactional requests at the
non-Oracle Database system on behalf of the Oracle Database server.
See Also: Your Oracle-supplied gateway-specific documentation
for information about transparent gateways

Generic Connectivity
Generic connectivity enables you to connect to non-Oracle Database data stores by
using either a Heterogeneous Services ODBC agent or a Heterogeneous Services OLE
DB agent. Both are included with your Oracle product as a standard feature. Any data
source compatible with the ODBC or OLE DB standards can be accessed using a
generic connectivity agent.
The advantage of generic connectivity is that you may not need to purchase and
configure a separate system-specific agent; you use an ODBC or OLE DB driver that can
interface with the agent. However, some data access features are only available with
transparent gateway agents.

Client/Server Database Architecture
A database server is the Oracle software managing a database, and a client is an
application that requests information from a server. Each computer in a network is a
node that can host one or more databases. Each node in a distributed database system
can act as a client, a server, or both, depending on the situation.
In Figure 29–2, the host for the hq database is acting as a database server when a
statement is issued against its local data (for example, the second statement in each
transaction issues a statement against the local dept table), but is acting as a client
when it issues a statement against remote data (for example, the first statement in each
transaction is issued against the remote table emp in the sales database).


Figure 29–2 An Oracle Database Distributed Database System

[Figure: two servers connected over a network through Oracle Net and joined by a
database link (CONNECT TO ... IDENTIFIED BY ...). The HQ database holds the DEPT table
and the Sales database holds the EMP table. The application shown runs transactions of
the form:

INSERT INTO EMP@SALES ...;
DELETE FROM DEPT ...;
SELECT ... FROM EMP@SALES ...;
COMMIT;]
A client can connect directly or indirectly to a database server. A direct connection
occurs when a client connects to a server and accesses information from a database
contained on that server. For example, if you connect to the hq database and access the
dept table on this database as in Figure 29–2, you can issue the following:
SELECT * FROM dept;

This query is direct because you are not accessing an object on a remote database.
In contrast, an indirect connection occurs when a client connects to a server and then
accesses information contained in a database on a different server. For example, if you
connect to the hq database but access the emp table on the remote sales database as
in Figure 29–2, you can issue the following:
SELECT * FROM emp@sales;

This query is indirect because the object you are accessing is not on the database to
which you are directly connected.

Database Links
The central concept in distributed database systems is a database link. A database link
is a connection between two physical database servers that allows a client to access
them as one logical database.
This section contains the following topics:
■ What Are Database Links?
■ Why Use Database Links?
■ Global Database Names in Database Links
■ Names for Database Links
■ Types of Database Links
■ Users of Database Links
■ Creation of Database Links: Examples
■ Schema Objects and Database Links
■ Database Link Restrictions

What Are Database Links?
A database link is a pointer that defines a one-way communication path from an
Oracle Database server to another database server. The link pointer is actually defined
as an entry in a data dictionary table. To access the link, you must be connected to the
local database that contains the data dictionary entry.
A database link connection is one-way in the sense that a client connected to local
database A can use a link stored in database A to access information in remote
database B, but users connected to database B cannot use the same link to access data
in database A. If local users on database B want to access data on database A, then
they must define a link that is stored in the data dictionary of database B.
A database link connection allows local users to access data on a remote database. For
this connection to occur, each database in the distributed system must have a unique
global database name in the network domain. The global database name uniquely
identifies a database server in a distributed system.
Figure 29–3 shows an example of user scott accessing the emp table on the remote
database with the global name hq.acme.com:
Figure 29–3 Database Link
[The figure shows user scott on the local database issuing SELECT * FROM emp. A public synonym (emp -> emp@HQ.ACME.COM) resolves through a unidirectional database link to the EMP table on the remote database.]


Database links are either private or public. If they are private, then only the user who
created the link has access; if they are public, then all database users have access.
One principal difference among database links is the way that connections to a remote
database occur. Users access a remote database through the following types of links:
Connected user link: Users connect as themselves, which means that they must have an account on the remote database with the same username as their account on the local database.

Fixed user link: Users connect using the username and password referenced in the link. For example, if Jane uses a fixed user link that connects to the hq database with the username and password scott/tiger, then she connects as scott. Jane has all the privileges in hq granted to scott directly, and all the default roles that scott has been granted in the hq database.

Current user link: A user connects as a global user. A local user can connect as a global user in the context of a stored procedure, without storing the global user's password in a link definition. For example, Jane can access a procedure that Scott wrote, accessing Scott's account and Scott's schema on the hq database. Current user links are an aspect of Oracle Advanced Security.

Create database links using the CREATE DATABASE LINK statement. After a link is
created, you can use it to specify schema objects in SQL statements.
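For example, a minimal sketch (the link name and the net service name 'hq' are illustrative):

CREATE DATABASE LINK hq.acme.com
CONNECT TO scott IDENTIFIED BY tiger
USING 'hq';

SELECT * FROM emp@hq.acme.com;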
See Also:
■ Oracle Database SQL Reference for syntax of the CREATE DATABASE LINK statement
■ Oracle Database Advanced Security Administrator's Guide for information about Oracle Advanced Security

What Are Shared Database Links?
A shared database link is a link between a local server process and the remote
database. The link is shared because multiple client processes can use the same link
simultaneously.
When a local database is connected to a remote database through a database link,
either database can run in dedicated or shared server mode. The following table
illustrates the possibilities:
Local Database Mode    Remote Database Mode
Dedicated              Dedicated
Dedicated              Shared server
Shared server          Dedicated
Shared server          Shared server

A shared database link can exist in any of these four configurations. Shared links differ
from standard database links in the following ways:
■ Different users accessing the same schema object through a database link can share a network connection.
■ When a user needs to establish a connection to a remote server from a particular server process, the process can reuse connections already established to the remote server. The reuse of the connection can occur if the connection was established on the same server process with the same database link, possibly in a different session. In a non-shared database link, a connection is not shared across multiple sessions.
■ When you use a shared database link in a shared server configuration, a network connection is established directly out of the shared server process in the local server. For a non-shared database link on a local shared server, this connection would have been established through the local dispatcher, requiring context switches for the local dispatcher, and requiring data to go through the dispatcher.
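As a hedged sketch of creating such a link, a shared link adds the SHARED keyword and an AUTHENTICATED BY clause naming an account used only to verify the connection (all names here are illustrative):

CREATE SHARED DATABASE LINK sales_shared
CONNECT TO scott IDENTIFIED BY tiger
AUTHENTICATED BY link_auth IDENTIFIED BY auth_pwd
USING 'sales';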
See Also: Oracle Database Net Services Administrator's Guide for
information about shared server

Why Use Database Links?
The great advantage of database links is that they allow users to access another user's
objects in a remote database so that they are bounded by the privilege set of the object
owner. In other words, a local user can access a link to a remote database without
having to be a user on the remote database.
For example, assume that employees submit expense reports to Accounts Payable
(A/P), and further suppose that a user using an A/P application needs to retrieve
information about employees from the hq database. The A/P users should be able to
connect to the hq database and execute a stored procedure in the remote hq database
that retrieves the desired information. The A/P users should not need to be hq
database users to do their jobs; they should only be able to access hq information in a
controlled way as limited by the procedure.
See Also:
■ "Users of Database Links" on page 29-11 for an explanation of database link users
■ "Viewing Information About Database Links" for an explanation of how to hide passwords from non-administrative users

Global Database Names in Database Links
To understand how a database link works, you must first understand what a global
database name is. Each database in a distributed database is uniquely identified by its
global database name. The database forms a global database name by prefixing the
database network domain, specified by the DB_DOMAIN initialization parameter at
database creation, with the individual database name, specified by the DB_NAME
initialization parameter.
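For example, you can display the global name of the database to which you are connected by querying the GLOBAL_NAME data dictionary view:

SELECT * FROM global_name;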
For example, Figure 29–4 illustrates a representative hierarchical arrangement of
databases throughout a network.


Figure 29–4 Hierarchical Arrangement of Networked Databases
[The figure shows a network domain tree. At the root are the EDU, COM, and ORG domains (educational institutions, companies, and non-commercial organizations). Under COM are the company domains ACME_TOOLS (with DIVISION1, DIVISION2, and DIVISION3, the last containing the mfg database) and ACME_AUTO (with ASIA, AMERICAS, and EUROPE divisions containing country domains such as JAPAN, US, MEXICO, UK, and GERMANY, each holding databases such as HQ, Finance, Sales, and Employees (HR)).]

The name of a database is formed by starting at the leaf of the tree and following a
path to the root. For example, the mfg database is in division3 of the acme_tools
branch of the com domain. The global database name for mfg is created by
concatenating the nodes in the tree as follows:
■ mfg.division3.acme_tools.com

While several databases can share an individual name, each database must have a
unique global database name. For example, the network domains
us.americas.acme_auto.com and uk.europe.acme_auto.com each contain a
sales database. The global database naming system distinguishes the sales
database in the americas division from the sales database in the europe division
as follows:
■ sales.us.americas.acme_auto.com
■ sales.uk.europe.acme_auto.com
See Also: "Managing Global Names in a Distributed System" on
page 30-1 to learn how to specify and change global database
names

Names for Database Links
Typically, a database link has the same name as the global database name of the remote
database that it references. For example, if the global database name of a database is
sales.us.oracle.com, then the database link is also called
sales.us.oracle.com.
When you set the initialization parameter GLOBAL_NAMES to TRUE, the database
ensures that the name of the database link is the same as the global database name of
the remote database. For example, if the global database name for hq is
hq.acme.com, and GLOBAL_NAMES is TRUE, then the link name must be called
hq.acme.com. Note that the database checks the domain part of the global database
name as stored in the data dictionary, not the DB_DOMAIN setting in the initialization
parameter file (see "Changing the Domain in a Global Database Name" on page 30-3).
If you set the initialization parameter GLOBAL_NAMES to FALSE, then you are not
required to use global naming. You can then name the database link whatever you
want. For example, you can name a database link to hq.acme.com as foo.
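For illustration, GLOBAL_NAMES is a dynamic parameter and can be changed without restarting the database; a minimal sketch:

ALTER SYSTEM SET GLOBAL_NAMES = TRUE;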
Note: Oracle recommends that you use global naming because many useful features, including Replication, require global naming.

After you have enabled global naming, database links are essentially transparent to
users of a distributed database because the name of a database link is the same as the
global name of the database to which the link points. For example, the following
statement creates a database link in the local database to remote database sales:
CREATE PUBLIC DATABASE LINK sales.division3.acme.com USING 'sales1';

See Also: Oracle Database Reference for more information about
specifying the initialization parameter GLOBAL_NAMES

Types of Database Links
Oracle Database lets you create private, public, and global database links. These basic
link types differ according to which users are allowed access to the remote database:
Private
  Owner: User who created the link. View ownership data through DBA_DB_LINKS, ALL_DB_LINKS, and USER_DB_LINKS.
  Description: Creates link in a specific schema of the local database. Only the owner of a private database link or PL/SQL subprograms in the schema can use this link to access database objects in the corresponding remote database.

Public
  Owner: User called PUBLIC. View ownership data through the views shown for private database links.
  Description: Creates a database-wide link. All users and PL/SQL subprograms in the database can use the link to access database objects in the corresponding remote database.

Global
  Owner: User called PUBLIC. View ownership data through the views shown for private database links.
  Description: Creates a network-wide link. When an Oracle network uses a directory server, the directory server automatically creates and manages global database links (as net service names) for every Oracle Database in the network. Users and PL/SQL subprograms in any database can use a global link to access objects in the corresponding remote database.
  Note: In earlier releases of Oracle Database, a global database link referred to a database link that was registered with an Oracle Names server. The use of an Oracle Names server has been deprecated. In this document, global database links refer to the use of net service names from the directory server.


Determining the type of database links to employ in a distributed database depends
on the specific requirements of the applications using the system. Consider these
features when making your choice:
Private database link: This link is more secure than a public or global link, because only the owner of the private link, or subprograms within the same schema, can use the link to access the remote database.

Public database link: When many users require an access path to a remote Oracle Database, you can create a single public database link for all users in a database.

Global database link: When an Oracle network uses a directory server, an administrator can conveniently manage global database links for all databases in the system. Database link management is centralized and simple.

See Also:
■ "Specifying Link Types" on page 30-7 to learn how to create different types of database links
■ "Viewing Information About Database Links" on page 30-16 to learn how to access information about links

Users of Database Links
When creating the link, you determine which user should connect to the remote
database to access the data. The following table explains the differences among the
categories of users involved in database links:
Connected user
  Description: A local user accessing a database link in which no fixed username and password have been specified. If SYSTEM accesses a public link in a query, then the connected user is SYSTEM, and the database connects to the SYSTEM schema in the remote database. Note: A connected user does not have to be the user who created the link, but is any user who is accessing the link.
  Sample link creation syntax: CREATE PUBLIC DATABASE LINK hq USING 'hq';

Current user
  Description: A global user in a CURRENT_USER database link. The global user must be authenticated by an X.509 certificate (an SSL-authenticated enterprise user) or a password (a password-authenticated enterprise user), and be a user on both databases involved in the link. Current user links are an aspect of the Oracle Advanced Security option. See Oracle Database Advanced Security Administrator's Guide for information about global security.
  Sample link creation syntax: CREATE PUBLIC DATABASE LINK hq CONNECT TO CURRENT_USER USING 'hq';

Fixed user
  Description: A user whose username/password is part of the link definition. If a link includes a fixed user, the fixed user's username and password are used to connect to the remote database.
  Sample link creation syntax: CREATE PUBLIC DATABASE LINK hq CONNECT TO jane IDENTIFIED BY doe USING 'hq';

See Also: "Specifying Link Users" on page 30-8 to learn how to
specify users when creating links


Connected User Database Links
Connected user links have no connect string associated with them. The advantage of a
connected user link is that a user referencing the link connects to the remote database
as the same user, and credentials don't have to be stored in the link definition in the
data dictionary.
Connected user links have some disadvantages. Because these links require users to
have accounts and privileges on the remote databases to which they are attempting to
connect, they require more privilege administration for administrators. Also, giving
users more privileges than they need violates the fundamental security concept of least
privilege: users should only be given the privileges they need to perform their jobs.
The ability to use a connected user database link depends on several factors, chief
among them whether the user is authenticated by the database using a password, or
externally authenticated by the operating system or a network authentication service.
If the user is externally authenticated, then the ability to use a connected user link also
depends on whether the remote database accepts remote authentication of users,
which is set by the REMOTE_OS_AUTHENT initialization parameter.
The REMOTE_OS_AUTHENT parameter operates as follows:
TRUE for the remote database
  An externally-authenticated user can connect to the remote database using a connected user database link.

FALSE for the remote database
  An externally-authenticated user cannot connect to the remote database using a connected user database link unless a secure protocol or a network authentication service supported by the Oracle Advanced Security option is used.
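For example, you can check the current setting on the remote database from SQL*Plus:

SHOW PARAMETER remote_os_authent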

Fixed User Database Links
A benefit of a fixed user link is that it connects a user in a primary database to a
remote database with the security context of the user specified in the connect string.
For example, local user joe can create a public database link in joe's schema that
specifies the fixed user scott with password tiger. If jane uses the fixed user link
in a query, then jane is the user on the local database, but she connects to the remote
database as scott/tiger.
Fixed user links have a username and password associated with the connect string.
The username and password are stored with other link information in data dictionary
tables.
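A minimal sketch of this scenario (the link name hq_fixed and the net service name 'hq' are illustrative):

CREATE PUBLIC DATABASE LINK hq_fixed
CONNECT TO scott IDENTIFIED BY tiger
USING 'hq';

-- Any local user, such as jane, now connects to the remote database as scott:
SELECT * FROM emp@hq_fixed;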

Current User Database Links
Current user database links make use of a global user. A global user must be
authenticated by an X.509 certificate or a password, and be a user on both databases
involved in the link.
The user invoking the CURRENT_USER link does not have to be a global user. For
example, if jane is authenticated (not as a global user) by password to the Accounts
Payable database, she can access a stored procedure to retrieve data from the hq
database. The procedure uses a current user database link, which connects her to hq as
global user scott. User scott is a global user and authenticated through a
certificate over SSL, but jane is not.
Note that current user database links have these consequences:


■ If the current user database link is not accessed from within a stored object, then the current user is the same as the connected user accessing the link. For example, if scott issues a SELECT statement through a current user link, then the current user is scott.
■ When executing a stored object such as a procedure, view, or trigger that accesses a database link, the current user is the user that owns the stored object, and not the user that calls the object. For example, if jane calls procedure scott.p (created by scott), and a current user link appears within the called procedure, then scott is the current user of the link.
■ If the stored object is an invoker-rights function, procedure, or package, then the invoker's authorization ID is used to connect as a remote user. For example, if user jane calls procedure scott.p (an invoker-rights procedure created by scott), and the link appears inside procedure scott.p, then jane is the current user.
■ You cannot connect to a database as an enterprise user and then use a current user link in a stored procedure that exists in a shared, global schema. For example, if user jane accesses a stored procedure in the shared schema guest on database hq, she cannot use a current user link in this schema to log on to a remote database.
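As a hedged sketch of the second case in the preceding list (a definer-rights procedure; the procedure and link names are illustrative, and hq_cu is assumed to be a CURRENT_USER link):

CREATE OR REPLACE PROCEDURE scott.p AS
  v_count NUMBER;
BEGIN
  -- Because p is definer-rights, scott is the current user of the link
  -- even when jane calls the procedure.
  SELECT COUNT(*) INTO v_count FROM emp@hq_cu;
END;
/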
See Also:
■ "Distributed Database Security" on page 29-17 for more information about security issues relating to database links
■ Oracle Database Advanced Security Administrator's Guide
■ Oracle Database PL/SQL User's Guide and Reference for more information about invoker-rights functions, procedures, or packages

Creation of Database Links: Examples
Create database links using the CREATE DATABASE LINK statement. The table gives
examples of SQL statements that create database links in a local database to the remote
sales.us.americas.acme_auto.com database:
SQL Statement:
  CREATE DATABASE LINK sales.us.americas.acme_auto.com USING 'sales_us';
Connects To Database: sales using net service name sales_us
Connects As: Connected user
Link Type: Private connected user

SQL Statement:
  CREATE DATABASE LINK foo CONNECT TO CURRENT_USER USING 'am_sls';
Connects To Database: sales using service name am_sls
Connects As: Current global user
Link Type: Private current user

SQL Statement:
  CREATE DATABASE LINK sales.us.americas.acme_auto.com CONNECT TO scott IDENTIFIED BY tiger USING 'sales_us';
Connects To Database: sales using net service name sales_us
Connects As: scott using password tiger
Link Type: Private fixed user

SQL Statement:
  CREATE PUBLIC DATABASE LINK sales CONNECT TO scott IDENTIFIED BY tiger USING 'rev';
Connects To Database: sales using net service name rev
Connects As: scott using password tiger
Link Type: Public fixed user

SQL Statement:
  CREATE SHARED PUBLIC DATABASE LINK sales.us.americas.acme_auto.com CONNECT TO scott IDENTIFIED BY tiger AUTHENTICATED BY anupam IDENTIFIED BY bhide USING 'sales';
Connects To Database: sales using net service name sales
Connects As: scott using password tiger, authenticated as anupam using password bhide
Link Type: Shared public fixed user

See Also:
■ "Creating Database Links" on page 30-6 to learn how to create links
■ Oracle Database SQL Reference for information about the CREATE DATABASE LINK statement syntax

Schema Objects and Database Links
After you have created a database link, you can execute SQL statements that access
objects on the remote database. For example, to access remote object emp using
database link foo, you can issue:
SELECT * FROM emp@foo;

You must also be authorized in the remote database to access specific remote objects.
Constructing properly formed object names using database links is an essential aspect
of data manipulation in distributed systems.

Naming of Schema Objects Using Database Links
Oracle Database uses the global database name to name the schema objects globally
using the following scheme:
schema.schema_object@global_database_name
where:
■ schema is a collection of logical structures of data, or schema objects. A schema is owned by a database user and has the same name as that user. Each user owns a single schema.
■ schema_object is a logical data structure like a table, index, view, synonym, procedure, package, or a database link.
■ global_database_name is the name that uniquely identifies a remote database. This name must be the same as the concatenation of the remote database initialization parameters DB_NAME and DB_DOMAIN, unless the parameter GLOBAL_NAMES is set to FALSE, in which case any name is acceptable.

For example, using a database link to database sales.division3.acme.com, a user
or application can reference remote data as follows:
SELECT * FROM scott.emp@sales.division3.acme.com; # emp table in scott's schema
SELECT loc FROM scott.dept@sales.division3.acme.com;

If GLOBAL_NAMES is set to FALSE, then you can use any name for the link to
sales.division3.acme.com. For example, you can call the link foo. Then, you
can access the remote database as follows:
SELECT name FROM scott.emp@foo;  # link name different from global name

Authorization for Accessing Remote Schema Objects
To access a remote schema object, you must be granted access to the remote object in
the remote database. Further, to perform any updates, inserts, or deletes on the remote
object, you must be granted the SELECT privilege on the object, along with the
UPDATE, INSERT, or DELETE privilege. Unlike when accessing a local object, the
SELECT privilege is necessary for accessing a remote object because the database has
no remote describe capability. The database must do a SELECT * on the remote object
in order to determine its structure.
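For example, a minimal sketch of the grants a remote administrator might issue so that a user can update emp through a link (the grantee guest is illustrative):

GRANT SELECT, UPDATE ON emp TO guest;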

Synonyms for Schema Objects
Oracle Database lets you create synonyms so that you can hide the database link name
from the user. A synonym allows access to a table on a remote database using the same
syntax that you would use to access a table on a local database. For example, assume
you issue the following query against a table in a remote database:
SELECT * FROM emp@hq.acme.com;

You can create the synonym emp for emp@hq.acme.com so that you can issue the
following query instead to access the same data:
SELECT * FROM emp;
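A minimal sketch of creating that synonym:

CREATE SYNONYM emp FOR emp@hq.acme.com;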

"Using Synonyms to Create Location Transparency" on
page 30-20 to learn how to create synonyms for objects specified
using database links

See Also:

Schema Object Name Resolution
To resolve application references to schema objects (a process called name resolution),
the database forms object names hierarchically. For example, the database guarantees
that each schema within a database has a unique name, and that within a schema each
object has a unique name. As a result, a schema object name is always unique within
the database. Furthermore, the database resolves application references to the local
name of the object.
In a distributed database, a schema object such as a table is accessible to all
applications in the system. The database extends the hierarchical naming model with
global database names to effectively create global object names and resolve references
to the schema objects in a distributed database system. For example, a query can
reference a remote table by specifying its fully qualified name, including the database
in which it resides.
For example, assume that you connect to the local database as user SYSTEM:
CONNECT SYSTEM/password@sales1

You then issue the following statements using database link hq.acme.com to access
objects in the scott and jane schemas on remote database hq:
SELECT * FROM scott.emp@hq.acme.com;
INSERT INTO jane.accounts@hq.acme.com (acc_no, acc_name, balance)
VALUES (5001, 'BOWER', 2000);
UPDATE jane.accounts@hq.acme.com
SET balance = balance + 500;
DELETE FROM jane.accounts@hq.acme.com
WHERE acc_name = 'BOWER';


Database Link Restrictions
You cannot perform the following operations using database links:
■ Grant privileges on remote objects
■ Execute DESCRIBE operations on some remote objects. The following remote objects, however, do support DESCRIBE operations:
  – Tables
  – Views
  – Procedures
  – Functions
■ Analyze remote objects
■ Define or enforce referential integrity
■ Grant roles to users in a remote database
■ Obtain nondefault roles on a remote database. For example, if jane connects to the local database and executes a stored procedure that uses a fixed user link connecting as scott, jane receives scott's default roles on the remote database. Jane cannot issue SET ROLE to obtain a nondefault role.
■ Execute hash query joins that use shared server connections
■ Use a current user link without authentication through SSL, password, or NT native authentication

Distributed Database Administration
The following sections explain some of the topics relating to database management in
an Oracle Database distributed database system:
■ Site Autonomy
■ Distributed Database Security
■ Auditing Database Links
■ Administration Tools

See Also:
■ Chapter 30, "Managing a Distributed Database" to learn how to administer homogeneous systems
■ Oracle Database Heterogeneous Connectivity Administrator's Guide to learn about heterogeneous services concepts

Site Autonomy
Site autonomy means that each server participating in a distributed database is
administered independently from all other databases. Although several databases can
work together, each database is a separate repository of data that is managed
individually. Some of the benefits of site autonomy in an Oracle Database distributed
database include:
■ Nodes of the system can mirror the logical organization of companies or groups that need to maintain independence.
■ Local administrators control corresponding local data. Therefore, each database administrator's domain of responsibility is smaller and more manageable.
■ Independent failures are less likely to disrupt other nodes of the distributed database. No single database failure need halt all distributed operations or be a performance bottleneck.
■ Administrators can recover from isolated system failures independently from other nodes in the system.
■ A data dictionary exists for each local database. A global catalog is not necessary to access local data.
■ Nodes can upgrade software independently.

Although Oracle Database permits you to manage each database in a distributed
database system independently, you should not ignore the global requirements of the
system. For example, you may need to:
■ Create additional user accounts in each database to support the links that you create to facilitate server-to-server connections.
■ Set additional initialization parameters such as COMMIT_POINT_STRENGTH and OPEN_LINKS.

Distributed Database Security
The database supports all of the security features that are available with a
non-distributed database environment for distributed database systems, including:
■ Password authentication for users and roles
■ Some types of external authentication for users and roles, including:
  – Kerberos version 5 for connected user links
  – DCE for connected user links
■ Login packet encryption for client-to-server and server-to-server connections

The following sections explain some additional topics to consider when configuring an
Oracle Database distributed database system:
■ Authentication Through Database Links
■ Authentication Without Passwords
■ Supporting User Accounts and Roles
■ Centralized User and Privilege Management
■ Data Encryption

See Also: Oracle Database Advanced Security Administrator's Guide for more information about external authentication

Authentication Through Database Links
Database links are either private or public, authenticated or non-authenticated. You
create public links by specifying the PUBLIC keyword in the link creation statement.
For example, you can issue:
CREATE PUBLIC DATABASE LINK foo USING 'sales';


You create authenticated links by specifying the CONNECT TO clause,
AUTHENTICATED BY clause, or both clauses together in the database link creation
statement. For example, you can issue:
CREATE DATABASE LINK sales CONNECT TO scott IDENTIFIED BY tiger USING 'sales';
CREATE SHARED PUBLIC DATABASE LINK sales CONNECT TO mick IDENTIFIED BY jagger
AUTHENTICATED BY david IDENTIFIED BY bowie USING 'sales';

This table describes how users access the remote database through the link:
Private link, not authenticated
  When connecting to the remote database, the database uses security information (userid/password) taken from the local session. Hence, the link is a connected user database link. Passwords must be synchronized between the two databases.

Private link, authenticated
  The userid/password is taken from the link definition rather than from the local session context. Hence, the link is a fixed user database link. This configuration allows passwords to be different on the two databases, but the local database link password must match the remote database password.

Public link, not authenticated
  Works the same as a private nonauthenticated link, except that all users can reference this pointer to the remote database.

Public link, authenticated
  All users on the local database can access the remote database and all use the same userid/password to make the connection.

Authentication Without Passwords
When using a connected user or current user database link, you can use an external
authentication source such as Kerberos to obtain end-to-end security. In end-to-end
authentication, credentials are passed from server to server and can be authenticated
by a database server belonging to the same domain. For example, if jane is
authenticated externally on a local database, and wants to use a connected user link to
connect as herself to a remote database, the local server passes the security ticket to the
remote database.

Supporting User Accounts and Roles
In a distributed database system, you must carefully plan the user accounts and roles
that are necessary to support applications using the system. Note that:
■ The user accounts necessary to establish server-to-server connections must be available in all databases of the distributed database system.
■ The roles necessary to make application privileges available to distributed database application users must be present in all databases of the distributed database system.

As you create the database links for the nodes in a distributed database system,
determine which user accounts and roles each site needs to support server-to-server
connections that use the links.
In a distributed environment, users typically require access to many network services.
When you must configure separate authentications for each user to access each network service, security administration can become unwieldy, especially for large systems.
See Also: "Creating Database Links" on page 30-6 for more
information about the user accounts that must be available to
support different types of database links in the system

Centralized User and Privilege Management
The database provides different ways for you to manage the users and privileges
involved in a distributed system. For example, you have these options:
■ Enterprise user management. You can create global users who are authenticated through SSL or by using passwords, then manage these users and their privileges in a directory through an independent enterprise directory service.
■ Network authentication service. This common technique simplifies security management for distributed environments. You can use the Oracle Advanced Security option to enhance Oracle Net and the security of an Oracle Database distributed database system. Windows NT native authentication is an example of a non-Oracle authentication solution.
See Also: Oracle Database Advanced Security Administrator's Guide
for more information about global user security

Schema-Dependent Global Users One option for centralizing user and privilege
management is to create the following:
■ A global user in a centralized directory
■ A user in every database that the global user must connect to

For example, you can create a global user called fred with the following SQL
statement:
CREATE USER fred IDENTIFIED GLOBALLY AS 'CN=fred adams,O=Oracle,C=England';

This solution allows a single global user to be authenticated by a centralized directory.
The schema-dependent global user solution has the consequence that you must create
a user called fred on every database that this user must access. Because most users
need permission to access an application schema but do not need their own schemas,
the creation of a separate account in each database for every global user creates
significant overhead. Because of this problem, the database also supports
schema-independent users, which are global users that can access a single, generic
schema in every database.
Schema-Independent Global Users The database supports functionality that allows a
global user to be centrally managed by an enterprise directory service. Users who are
managed in the directory are called enterprise users. This directory contains
information about:
■ Which databases in a distributed system an enterprise user can access
■ Which role on each database an enterprise user can use
■ Which schema on each database an enterprise user can connect to

The administrator of each database is not required to create a global user account for
each enterprise user on each database to which the enterprise user needs to connect.
Instead, multiple enterprise users can connect to the same database schema, called a
shared schema.

Note: You cannot access a current user database link in a shared schema.

For example, suppose jane, bill, and scott all use a human resources application.
The hq application objects are all contained in the guest schema on the hq database.
In this case, you can create a local global user account to be used as a shared schema.
This global username, that is, shared schema name, is guest. jane, bill, and scott
are all created as enterprise users in the directory service. They are also mapped to the
guest schema in the directory, and can be assigned different authorizations in the hq
application.
Figure 29–5 illustrates an example of global user security using the enterprise directory
service:
Figure 29–5 Global User Security
[The figure shows user SCOTT connecting over SSL (or with an SSL password) to the HQ and SALES databases, each of which communicates with an LDAP directory over SSL.]
Assume that the enterprise directory service contains the following information on
enterprise users for hq and sales:
Database    Role      Schema    Enterprise Users
hq          clerk1    guest     bill, scott
sales       clerk2    guest     jane, scott

Also, assume that the local administrators for hq and sales have issued statements as
follows:
On database hq:
  CREATE USER guest IDENTIFIED GLOBALLY AS '';
  CREATE ROLE clerk1;
  GRANT SELECT ON emp TO clerk1;
  CREATE PUBLIC DATABASE LINK sales_link CONNECT TO CURRENT_USER USING 'sales';

On database sales:
  CREATE USER guest IDENTIFIED GLOBALLY AS '';
  CREATE ROLE clerk2;
  GRANT SELECT ON dept TO clerk2;


Assume that enterprise user scott requests a connection to local database hq in order
to execute a distributed transaction involving sales. The following steps occur (not
necessarily in this exact order):
1. Enterprise user scott is authenticated using SSL or a password.
2. User scott issues the following statement:
   SELECT e.ename, d.loc
   FROM emp e, dept@sales_link d
   WHERE e.deptno=d.deptno;
3. Databases hq and sales mutually authenticate one another using SSL.
4. Database hq queries the enterprise directory service to determine whether enterprise user scott has access to hq, and discovers scott can access local schema guest using role clerk1.
5. Database sales queries the enterprise directory service to determine whether enterprise user scott has access to sales, and discovers scott can access local schema guest using role clerk2.
6. Enterprise user scott logs into sales to schema guest with role clerk2 and issues a SELECT to obtain the required information and transfer it to hq.
7. Database hq receives the requested data from sales and returns it to the client scott.
See Also: Oracle Database Advanced Security Administrator's Guide
for more information about enterprise user security

Data Encryption
The Oracle Advanced Security option also enables Oracle Net and related products to
use network data encryption and checksumming so that data cannot be read or
altered. It protects data from unauthorized viewing by using the RSA Data Security
RC4 or the Data Encryption Standard (DES) encryption algorithm.
To ensure that data has not been modified, deleted, or replayed during transmission,
the security services of the Oracle Advanced Security option can generate a
cryptographically secure message digest and include it with each packet sent across
the network.
See Also: Oracle Database Advanced Security Administrator's Guide
for more information about these and other features of the Oracle
Advanced Security option

Auditing Database Links
You must always perform auditing operations locally. That is, if a user acts in a local
database and accesses a remote database through a database link, the local actions are
audited in the local database, and the remote actions are audited in the remote
database, provided appropriate audit options are set in the respective databases.
The remote database cannot determine whether a successful connect request and
subsequent SQL statements come from another server or from a locally connected
client. For example, assume the following:
■ Fixed user link hq.acme.com connects local user jane to the remote hq database as remote user scott.
■ User scott is audited on the remote database.


Actions performed during the remote database session are audited as if scott were connected locally to hq and performing the same actions there. If the desired effect is to audit what jane is doing in the remote database, you must set audit options in the remote database to capture the actions of the username embedded in the link (in this case, scott on the hq database).
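For example, a hedged sketch of enabling statement auditing for scott on the remote database (choose audit options to match your own policy):

AUDIT SELECT TABLE, UPDATE TABLE BY scott;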
Note: You can audit the global username for global users.

You cannot set local auditing options on remote objects. Therefore, you cannot audit
use of a database link, although access to remote objects can be audited on the remote
database.

Administration Tools
The database administrator has several choices for tools to use when managing an
Oracle Database distributed database system:
■ Enterprise Manager
■ Third-Party Administration Tools
■ SNMP Support

Enterprise Manager
Enterprise Manager is the Oracle Database administration tool that provides a
graphical user interface (GUI). Enterprise Manager provides administrative
functionality for distributed databases through an easy-to-use interface. You can use
Enterprise Manager to:
■ Administer multiple databases. You can use Enterprise Manager to administer a single database or to simultaneously administer multiple databases.
■ Centralize database administration tasks. You can administer both local and remote databases running on any Oracle Database platform in any location worldwide. In addition, these Oracle Database platforms can be connected by any network protocols supported by Oracle Net.
■ Dynamically execute SQL, PL/SQL, and Enterprise Manager commands. You can use Enterprise Manager to enter, edit, and execute statements. Enterprise Manager also maintains a history of statements executed. Thus, you can reexecute statements without retyping them, a particularly useful feature if you need to execute lengthy statements repeatedly in a distributed database system.
■ Manage security features such as global users, global roles, and the enterprise directory service.

Third-Party Administration Tools
Currently more than 60 companies produce more than 150 products that help manage
Oracle Databases and networks, providing a truly open environment.

SNMP Support
Besides its network administration capabilities, Oracle Simple Network Management Protocol (SNMP) support allows an Oracle Database server to be located and queried by any SNMP-based network management system. SNMP is the accepted standard underlying many popular network management systems such as:
■ HP OpenView
■ Digital POLYCENTER Manager on NetView
■ IBM NetView/6000
■ Novell NetWare Management System
■ SunSoft SunNet Manager

Transaction Processing in a Distributed System
A transaction is a logical unit of work constituted by one or more SQL statements
executed by a single user. A transaction begins with the user's first executable SQL
statement and ends when it is committed or rolled back by that user.
A remote transaction contains only statements that access a single remote node. A
distributed transaction contains statements that access more than one node.
The following sections define important concepts in transaction processing and
explain how transactions access data in a distributed database:
■ Remote SQL Statements
■ Distributed SQL Statements
■ Shared SQL for Remote and Distributed Statements
■ Remote Transactions
■ Distributed Transactions
■ Two-Phase Commit Mechanism
■ Database Link Name Resolution
■ Schema Object Name Resolution

Remote SQL Statements
A remote query statement is a query that selects information from one or more remote
tables, all of which reside at the same remote node. For example, the following query
accesses data from the dept table in the scott schema of the remote sales database:
SELECT * FROM scott.dept@sales.us.americas.acme_auto.com;

A remote update statement is an update that modifies data in one or more tables, all of which are located at the same remote node. For example, the following statement updates the dept table in the scott schema of the remote sales database:
UPDATE scott.dept@sales.us.americas.acme_auto.com
SET loc = 'NEW YORK'
WHERE deptno = 10;

Note: A remote update can include a subquery that retrieves data from one or more remote nodes, but because the update happens at only a single remote node, the statement is classified as a remote update.

Distributed SQL Statements
A distributed query statement retrieves information from two or more nodes. For
example, the following query accesses data from the local database as well as the
remote sales database:
SELECT ename, dname
FROM scott.emp e, scott.dept@sales.us.americas.acme_auto.com d
WHERE e.deptno = d.deptno;

A distributed update statement modifies data on two or more nodes. A distributed
update is possible using a PL/SQL subprogram unit such as a procedure or trigger
that includes two or more remote updates that access data on different nodes. For
example, the following PL/SQL program unit updates tables on the local database and
the remote sales database:
BEGIN
  UPDATE scott.dept@sales.us.americas.acme_auto.com
    SET loc = 'NEW YORK'
    WHERE deptno = 10;
  UPDATE scott.emp
    SET deptno = 11
    WHERE deptno = 10;
END;
/
COMMIT;

The database sends statements in the program to the remote nodes, and their
execution succeeds or fails as a unit.

Shared SQL for Remote and Distributed Statements
The mechanics of a remote or distributed statement using shared SQL are essentially
the same as those of a local statement. The SQL text must match, and the referenced
objects must match. If available, shared SQL areas can be used for the local and remote
handling of any statement or decomposed query.
See Also: Oracle Database Concepts for more information about shared SQL

Remote Transactions
A remote transaction contains one or more remote statements, all of which reference a
single remote node. For example, the following transaction contains two statements,
each of which accesses the remote sales database:
UPDATE scott.dept@sales.us.americas.acme_auto.com
SET loc = 'NEW YORK'
WHERE deptno = 10;
UPDATE scott.emp@sales.us.americas.acme_auto.com
SET deptno = 11
WHERE deptno = 10;
COMMIT;

Distributed Transactions
A distributed transaction is a transaction that includes one or more statements that,
individually or as a group, update data on two or more distinct nodes of a distributed
database. For example, this transaction updates the local database and the remote
sales database:


UPDATE scott.dept@sales.us.americas.acme_auto.com
SET loc = 'NEW YORK'
WHERE deptno = 10;
UPDATE scott.emp
SET deptno = 11
WHERE deptno = 10;
COMMIT;

Note: If all statements of a transaction reference only a single remote node, the transaction is remote, not distributed.

Two-Phase Commit Mechanism
A database must guarantee that all statements in a transaction, distributed or
non-distributed, either commit or roll back as a unit. The effects of an ongoing
transaction should be invisible to all other transactions at all nodes; this transparency
should be true for transactions that include any type of operation, including queries,
updates, or remote procedure calls.
The general mechanisms of transaction control in a non-distributed database are discussed in Oracle Database Concepts. In a distributed database, the database must
coordinate transaction control with the same characteristics over a network and
maintain data consistency, even if a network or system failure occurs.
The database two-phase commit mechanism guarantees that all database servers
participating in a distributed transaction either all commit or all roll back the
statements in the transaction. A two-phase commit mechanism also protects implicit
DML operations performed by integrity constraints, remote procedure calls, and
triggers.
See Also: Chapter 32, "Distributed Transactions Concepts" for
more information about the Oracle Database two-phase commit
mechanism

Database Link Name Resolution
A global object name is an object specified using a database link. The essential
components of a global object name are:
■ Object name
■ Database name
■ Domain

The following table shows the components of an explicitly specified global database
object name:
Statement                                  Object   Database   Domain
SELECT * FROM joan.dept@sales.acme.com     dept     sales      acme.com
SELECT * FROM emp@mktg.us.acme.com         emp      mktg       us.acme.com

Whenever a SQL statement includes a reference to a global object name, the database
searches for a database link with a name that matches the database name specified in
the global object name. For example, if you issue the following statement:


SELECT * FROM scott.emp@orders.us.acme.com;

The database searches for a database link called orders.us.acme.com. The database
performs this operation to determine the path to the specified remote database.
The database always searches for matching database links in the following order:
1. Private database links in the schema of the user who issued the SQL statement.
2. Public database links in the local database.
3. Global database links (only if a directory server is available).

Name Resolution When the Global Database Name Is Complete
Assume that you issue the following SQL statement, which specifies a complete global
database name:
SELECT * FROM emp@prod1.us.oracle.com;

In this case, both the database name (prod1) and domain components
(us.oracle.com) are specified, so the database searches for private, public, and
global database links. The database searches only for links that match the specified
global database name.

Name Resolution When the Global Database Name Is Partial
If any part of the domain is specified, the database assumes that a complete global
database name is specified. If a SQL statement specifies a partial global database name
(that is, only the database component is specified), the database appends the value in
the DB_DOMAIN initialization parameter to the value in the DB_NAME initialization
parameter to construct a complete name. For example, assume you issue the following
statements:
CONNECT scott/tiger@locdb
SELECT * FROM scott.emp@orders;

If the network domain for locdb is us.acme.com, then the database appends this
domain to orders to construct the complete global database name of
orders.us.acme.com. The database searches for database links that match only the
constructed global name. If a matching link is not found, the database returns an error
and the SQL statement cannot execute.

Name Resolution When No Global Database Name Is Specified
If a global object name references an object in the local database and a database link
name is not specified using the @ symbol, then the database automatically detects that
the object is local and does not search for or use database links to resolve the object
reference. For example, assume that you issue the following statements:
CONNECT scott/tiger@locdb
SELECT * from scott.emp;

Because the second statement does not specify a global database name using a
database link connect string, the database does not search for database links.

Terminating the Search for Name Resolution
The database does not necessarily stop searching for matching database links when it
finds the first match. The database must search for matching private, public, and


network database links until it determines a complete path to the remote database
(both a remote account and service name).
The first match determines the remote schema as illustrated in the following table:
User Operation: Do not specify the CONNECT clause
Database Response: Uses a connected user database link
Example: CREATE DATABASE LINK k1 USING 'prod'

User Operation: Specify the CONNECT TO ... IDENTIFIED BY clause
Database Response: Uses a fixed user database link
Example: CREATE DATABASE LINK k2 CONNECT TO scott IDENTIFIED BY tiger USING 'prod'

User Operation: Specify the CONNECT TO CURRENT_USER clause
Database Response: Uses a current user database link
Example: CREATE DATABASE LINK k3 CONNECT TO CURRENT_USER USING 'prod'

User Operation: Do not specify the USING clause
Database Response: Searches until it finds a link specifying a database string. If matching database links are found and a string is never identified, the database returns an error.
Example: CREATE DATABASE LINK k4 CONNECT TO CURRENT_USER

After the database determines a complete path, it creates a remote session, assuming
that an identical connection is not already open on behalf of the same local session. If a
session already exists, the database reuses it.

Schema Object Name Resolution
After the local Oracle Database connects to the specified remote database on behalf of
the local user that issued the SQL statement, object resolution continues as if the
remote user had issued the associated SQL statement. The first match determines the
remote schema according to the following rules:
Type of Link Specified            Location of Object Resolution
A fixed user database link        Schema specified in the link creation statement
A connected user database link    Connected user's remote schema
A current user database link      Current user's schema

If the database cannot find the object, then it checks public objects of the remote
database. If it cannot resolve the object, then the established remote session remains
but the SQL statement cannot execute and returns an error.
The following are examples of global object name resolution in a distributed database system. Each example states its own assumptions.

Example of Global Object Name Resolution: Complete Object Name
This example illustrates how the database resolves a complete global object name and
determines the appropriate path to the remote database using both a private and
public database link. For this example, assume the following:
■ The remote database is named sales.division3.acme.com.
■ The local database is named hq.division3.acme.com.
■ A directory server (and therefore, global database links) is not available.
■ A remote table emp is contained in the schema tsmith.

Consider the following statements issued by scott at the local database:
CONNECT scott/tiger@hq
CREATE PUBLIC DATABASE LINK sales.division3.acme.com
CONNECT TO guest IDENTIFIED BY network
USING 'dbstring';

Later, JWARD connects and issues the following statements:
CONNECT jward/bronco@hq
CREATE DATABASE LINK sales.division3.acme.com
CONNECT TO tsmith IDENTIFIED BY radio;
UPDATE tsmith.emp@sales.division3.acme.com
SET deptno = 40
WHERE deptno = 10;

The database processes the final statement as follows:
1. The database determines that a complete global object name is referenced in jward's UPDATE statement. Therefore, the system begins searching in the local database for a database link with a matching name.
2. The database finds a matching private database link in the schema jward. Nevertheless, the private database link jward.sales.division3.acme.com does not indicate a complete path to the remote sales database, only a remote account. Therefore, the database now searches for a matching public database link.
3. The database finds the public database link in scott's schema. From this public database link, the database takes the service name dbstring.
4. Combined with the remote account taken from the matching private fixed user database link, the database determines a complete path and proceeds to establish a connection to the remote sales database as user tsmith/radio.
5. The remote database can now resolve the object reference to the emp table. The database searches in the tsmith schema and finds the referenced emp table.
6. The remote database completes the execution of the statement and returns the results to the local database.

Example of Global Object Name Resolution: Partial Object Name
This example illustrates how the database resolves a partial global object name and
determines the appropriate path to the remote database using both a private and
public database link.
For this example, assume that:
■ The remote database is named sales.division3.acme.com.
■ The local database is named hq.division3.acme.com.
■ A directory server (and therefore, global database links) is not available.
■ A table emp on the remote database sales is contained in the schema tsmith, but not in schema scott.
■ A public synonym named emp resides at remote database sales and points to tsmith.emp in the remote database sales.
■ The public database link in "Example of Global Object Name Resolution: Complete Object Name" on page 29-27 is already created on local database hq:
  CREATE PUBLIC DATABASE LINK sales.division3.acme.com
  CONNECT TO guest IDENTIFIED BY network
  USING 'dbstring';

Consider the following statements issued at local database hq:
CONNECT scott/tiger@hq
CREATE DATABASE LINK sales.division3.acme.com;
DELETE FROM emp@sales
WHERE empno = 4299;

The database processes the final DELETE statement as follows:
1. The database notices that a partial global object name is referenced in scott's
   DELETE statement. It expands it to a complete global object name using the
   domain of the local database as follows:

   DELETE FROM emp@sales.division3.acme.com
   WHERE empno = 4299;

2. The database searches the local database for a database link with a matching
   name.
3. The database finds a matching private connected user link in the schema scott,
   but the private database link indicates no path at all. The database uses the
   connected username/password as the remote account portion of the path and then
   searches for and finds a matching public database link:

   CREATE PUBLIC DATABASE LINK sales.division3.acme.com
   CONNECT TO guest IDENTIFIED BY network
   USING 'dbstring';

4. The database takes the database net service name dbstring from the public
   database link. At this point, the database has determined a complete path.
5. The database connects to the remote database as scott/tiger and searches for
   and does not find an object named emp in the schema scott.
6. The remote database searches for a public synonym named emp and finds it.
7. The remote database executes the statement and returns the results to the local
   database.

Global Name Resolution in Views, Synonyms, and Procedures
A view, synonym, or PL/SQL program unit (for example, a procedure, function, or
trigger) can reference a remote schema object by its global object name. If the global
object name is complete, then the database stores the definition of the object without
expanding the global object name. If the name is partial, however, the database
expands the name using the domain of the local database name.
The following table explains when the database completes the expansion of a partial
global object name for views, synonyms, and program units:

User Operation           Database Response
-----------------------  --------------------------------------------------------
Create a view            Does not expand partial global names. The data
                         dictionary stores the exact text of the defining query.
                         Instead, the database expands a partial global object
                         name each time a statement that uses the view is parsed.

Create a synonym         Expands partial global names. The definition of the
                         synonym stored in the data dictionary includes the
                         expanded global object name.

Compile a program unit   Expands partial global names.

What Happens When Global Names Change
Global name changes can affect views, synonyms, and procedures that reference
remote data using partial global object names. If the global name of the referenced
database changes, views and procedures may try to reference a nonexistent or
incorrect database. On the other hand, synonyms do not expand database link names
at runtime, so they do not change.

Scenarios for Global Name Changes
For example, consider two databases named sales.uk.acme.com and
hr.uk.acme.com. Also, assume that the sales database contains the following view
and synonym:
CREATE VIEW employee_names AS
SELECT ename FROM scott.emp@hr;
CREATE SYNONYM employee FOR scott.emp@hr;

The database expands the employee synonym definition and stores it as:
scott.emp@hr.uk.acme.com
Scenario 1: Both Databases Change Names

First, consider the situation where both the Sales and Human Resources departments
are relocated to the United States. Consequently, the corresponding global database
names are both changed as follows:
■ sales.uk.acme.com becomes sales.us.acme.com
■ hr.uk.acme.com becomes hr.us.acme.com

The following table describes query expansion before and after the change in global
names:
Query on sales: SELECT * FROM employee_names
  Expansion before change: SELECT * FROM scott.emp@hr.uk.acme.com
  Expansion after change:  SELECT * FROM scott.emp@hr.us.acme.com

Query on sales: SELECT * FROM employee
  Expansion before change: SELECT * FROM scott.emp@hr.uk.acme.com
  Expansion after change:  SELECT * FROM scott.emp@hr.uk.acme.com

In this case, the defining query of the employee_names view expands to the correct
global database name, but the employee synonym still references the old name,
hr.uk.acme.com, which no longer exists.

Scenario 2: One Database Changes Names

Now consider that only the Sales department is moved to the United States; Human
Resources remains in the UK. Consequently, the global database names are changed
as follows:
■ sales.uk.acme.com becomes sales.us.acme.com
■ hr.uk.acme.com is not changed


The following table describes query expansion before and after the change in global
names:
Query on sales: SELECT * FROM employee_names
  Expansion before change: SELECT * FROM scott.emp@hr.uk.acme.com
  Expansion after change:  SELECT * FROM scott.emp@hr.us.acme.com

Query on sales: SELECT * FROM employee
  Expansion before change: SELECT * FROM scott.emp@hr.uk.acme.com
  Expansion after change:  SELECT * FROM scott.emp@hr.uk.acme.com

In this case, the defining query of the employee_names view expands to a
nonexistent global database name. On the other hand, the employee synonym
continues to reference the correct database, hr.uk.acme.com.

Distributed Database Application Development
Application development in a distributed system raises issues that are not applicable
in a non-distributed system. This section contains the following topics relevant for
distributed application development:
■ Transparency in a Distributed Database System
■ Remote Procedure Calls (RPCs)
■ Distributed Query Optimization
See Also: Chapter 31, "Developing Applications for a Distributed
Database System" to learn how to develop applications for
distributed systems

Transparency in a Distributed Database System
With minimal effort, you can develop applications that make an Oracle Database
distributed database system transparent to users that work with the system. The goal
of transparency is to make a distributed database system appear as though it is a
single Oracle Database. Consequently, the system does not burden developers and
users of the system with complexities that would otherwise make distributed database
application development challenging and detract from user productivity.
The following sections explain more about transparency in a distributed database
system.

Location Transparency
An Oracle Database distributed database system has features that allow application
developers and administrators to hide the physical location of database objects from
applications and users. Location transparency exists when a user can universally refer
to a database object such as a table, regardless of the node to which an application
connects. Location transparency has several benefits, including:
■ Access to remote data is simple, because database users do not need to know the
  physical location of database objects.
■ Administrators can move database objects with no impact on end-users or existing
  database applications.

Typically, administrators and developers use synonyms to establish location
transparency for the tables and supporting objects in an application schema. For
example, the following statements create synonyms in a database for tables in another,
remote database.
CREATE PUBLIC SYNONYM emp
FOR scott.emp@sales.us.americas.acme_auto.com;
CREATE PUBLIC SYNONYM dept
FOR scott.dept@sales.us.americas.acme_auto.com;

Now, rather than access the remote tables with a query such as:
SELECT ename, dname
FROM scott.emp@sales.us.americas.acme_auto.com e,
scott.dept@sales.us.americas.acme_auto.com d
WHERE e.deptno = d.deptno;

An application can issue a much simpler query that does not have to account for the
location of the remote tables.
SELECT ename, dname
FROM emp e, dept d
WHERE e.deptno = d.deptno;

In addition to synonyms, developers can also use views and stored procedures to
establish location transparency for applications that work in a distributed database
system.

SQL and COMMIT Transparency
The Oracle Database distributed database architecture also provides query, update,
and transaction transparency. For example, standard SQL statements such as SELECT,
INSERT, UPDATE, and DELETE work just as they do in a non-distributed database
environment. Additionally, applications control transactions using the standard SQL
statements COMMIT, SAVEPOINT, and ROLLBACK. There is no requirement for complex
programming or other special operations to provide distributed transaction control.
■ The statements in a single transaction can reference any number of local or remote
  tables.
■ The database guarantees that all nodes involved in a distributed transaction take
  the same action: they either all commit or all roll back the transaction.
■ If a network or system failure occurs during the commit of a distributed
  transaction, the transaction is automatically and transparently resolved globally.
  Specifically, when the network or system is restored, the nodes either all commit or
  all roll back the transaction.

Internal to the database, each committed transaction has an associated system change
number (SCN) to uniquely identify the changes made by the statements within that
transaction. In a distributed database, the SCNs of communicating nodes are
coordinated when:
■ A connection is established using the path described by one or more database
  links.
■ A distributed SQL statement is executed.
■ A distributed transaction is committed.

Among other benefits, the coordination of SCNs among the nodes of a distributed
database system allows global distributed read-consistency at both the statement and
transaction level. If necessary, global distributed time-based recovery can also be
completed.


Replication Transparency
The database also provides many features to transparently replicate data among the
nodes of the system. For more information about Oracle Database replication features,
see Oracle Database Advanced Replication.

Remote Procedure Calls (RPCs)
Developers can code PL/SQL packages and procedures to support applications that
work with a distributed database. Applications can make local procedure calls to
perform work at the local database and remote procedure calls (RPCs) to perform
work at a remote database.
When a program calls a remote procedure, the local server passes all procedure
parameters to the remote server in the call. For example, the following PL/SQL
program unit calls the packaged procedure del_emp located at the remote sales
database and passes it the parameter 1257:
BEGIN
emp_mgmt.del_emp@sales.us.americas.acme_auto.com(1257);
END;

In order for the RPC to succeed, the called procedure must exist at the remote site, and
the user being connected to must have the proper privileges to execute the procedure.
When developing packages and procedures for distributed database systems,
developers must code with an understanding of what program units should do at
remote locations, and how to return the results to a calling application.
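
For example, a local stub procedure can hide the remote call from callers entirely.
The following is a minimal sketch, assuming the emp_mgmt package and the link
shown above; the local procedure name terminate_employee is illustrative:

CREATE OR REPLACE PROCEDURE terminate_employee (enum NUMBER) AS
BEGIN
  -- All work happens at the remote sales database through the RPC
  emp_mgmt.del_emp@sales.us.americas.acme_auto.com(enum);
END;

A calling application then simply executes terminate_employee(1257) with no
knowledge of where the work is performed.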

Distributed Query Optimization
Distributed query optimization is an Oracle Database feature that reduces the
amount of data transfer required between sites when a transaction retrieves data from
remote tables referenced in a distributed SQL statement.
Distributed query optimization uses cost-based optimization to find or generate SQL
expressions that extract only the necessary data from remote tables, process that data
at a remote site or sometimes at the local site, and send the results to the local site for
final processing. This operation reduces the amount of required data transfer when
compared to the time it takes to transfer all the table data to the local site for
processing.
Using various cost-based optimizer hints such as DRIVING_SITE, NO_MERGE, and
INDEX, you can control where Oracle Database processes the data and how it accesses
the data.
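
For example, when a small local table is joined to a large remote table, shipping the
small table to the remote site is usually cheaper than pulling the large table to the
local site. The following query is a minimal sketch, assuming a local dept table and
the remote emp table used earlier; the DRIVING_SITE hint asks the optimizer to
perform the join at the site of the table referenced by alias e:

SELECT /*+ DRIVING_SITE(e) */ e.ename, d.dname
FROM emp@sales.us.americas.acme_auto.com e, dept d
WHERE e.deptno = d.deptno;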
See Also: "Using Cost-Based Optimization" on page 31-4 for more
information about cost-based optimization

Character Set Support for Distributed Environments
Oracle Database supports environments in which clients, Oracle Database servers, and
non-Oracle Database servers use different character sets. NCHAR support is provided
for heterogeneous environments. You can set a variety of National Language Support
(NLS) and Heterogeneous Services (HS) environment variables and initialization
parameters to control data conversion between different character sets.
Character settings are defined by the following NLS and HS parameters:

Parameters                Environment                    Defined For
------------------------  -----------------------------  --------------------------
NLS_LANG                  Client-Server                  Client
(environment variable)

NLS_LANGUAGE              Client-Server                  Oracle Database server
NLS_CHARACTERSET          Not Heterogeneous Distributed
NLS_TERRITORY             Heterogeneous Distributed

HS_LANGUAGE               Heterogeneous Distributed      Non-Oracle Database server
                                                         Transparent gateway

NLS_NCHAR                 Heterogeneous Distributed      Oracle Database server
(environment variable)                                   Transparent gateway
HS_NLS_NCHAR

See Also:
■

■

Oracle Database Globalization Support Guide for information
about NLS parameters
Oracle Database Heterogeneous Connectivity Administrator's Guide
for information about HS parameters

Client/Server Environment
In a client/server environment, set the client character set to be the same as or a subset
of the Oracle Database server character set, as illustrated in Figure 29–6.
Figure 29–6 NLS Parameter Settings in a Client/Server Environment

[Figure 29–6: a client, with NLS_LANG set to the NLS settings of the Oracle server
or a subset of them, connected to an Oracle server.]

Homogeneous Distributed Environment
In a non-heterogeneous environment, the client and server character sets should be
either the same as or subsets of the main server character set, as illustrated in
Figure 29–7:

Figure 29–7 NLS Parameter Settings in a Homogeneous Environment

[Figure 29–7: a client whose NLS_LANG matches the NLS settings of the Oracle
servers, or a subset of them, connected to two Oracle servers whose NLS settings
are the same as, or subsets of, each other's.]

Heterogeneous Distributed Environment
In a heterogeneous environment, the globalization support parameter settings of the
client, the transparent gateway, and the non-Oracle Database data source should be
either the same or a subset of the database server character set as illustrated in
Figure 29–8. Transparent gateways have full globalization support.
Figure 29–8 NLS Parameter Settings in a Heterogeneous Environment

[Figure 29–8: a client (NLS_LANG set to the Oracle server's NLS settings or a
subset of them) connected to an Oracle server, which connects through a gateway
agent, whose NLS settings are the same as or a subset of the Oracle server's, to a
non-Oracle data source.]

In a heterogeneous environment, only transparent gateways built with HS technology
support complete NCHAR capabilities. Whether a specific transparent gateway
supports NCHAR depends on the non-Oracle Database data source it is targeting. For
information on how a particular transparent gateway handles NCHAR support, consult
the system-specific transparent gateway documentation.


See Also: Oracle Database Heterogeneous Connectivity
Administrator's Guide for more detailed information about
Heterogeneous Services


30
Managing a Distributed Database
This chapter describes how to manage and maintain a distributed database system and
contains the following topics:
■ Managing Global Names in a Distributed System
■ Creating Database Links
■ Using Shared Database Links
■ Managing Database Links
■ Viewing Information About Database Links
■ Creating Location Transparency
■ Managing Statement Transparency
■ Managing a Distributed Database: Examples

Managing Global Names in a Distributed System
In a distributed database system, each database should have a unique global database
name. Global database names uniquely identify a database in the system. A primary
administration task in a distributed system is managing the creation and alteration of
global database names.
This section contains the following topics:
■ Understanding How Global Database Names Are Formed
■ Determining Whether Global Naming Is Enforced
■ Viewing a Global Database Name
■ Changing the Domain in a Global Database Name
■ Changing a Global Database Name: Scenario

Understanding How Global Database Names Are Formed
A global database name is formed from two components: a database name and a
domain. The database name and the domain name are determined by the following
initialization parameters at database creation:
Component        Parameter  Requirements                            Example
---------------  ---------  --------------------------------------  -----------
Database name    DB_NAME    Must be eight characters or less.       sales

Domain           DB_DOMAIN  Must follow standard Internet           us.acme.com
containing the              conventions. Levels in domain names
database                    must be separated by dots and the
                            order of domain names is from leaf to
                            root, left to right.

These are examples of valid global database names:
DB_NAME   DB_DOMAIN      Global Database Name
--------  -------------  ---------------------
sales     au.oracle.com  sales.au.oracle.com
sales     us.oracle.com  sales.us.oracle.com
mktg      us.oracle.com  mktg.us.oracle.com
payroll   nonprofit.org  payroll.nonprofit.org

The DB_DOMAIN initialization parameter is only important at database creation time
when it is used, together with the DB_NAME parameter, to form the database global
name. At this point, the database global name is stored in the data dictionary. You
must change the global name using an ALTER DATABASE statement, not by altering
the DB_DOMAIN parameter in the initialization parameter file. It is good practice,
however, to change the DB_DOMAIN parameter to reflect the change in the domain
name before the next database startup.
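
For example, a sketch of this practice, assuming the database uses a server
parameter file and is moving to the us.acme.com domain:

ALTER DATABASE RENAME GLOBAL_NAME TO sales.us.acme.com;
ALTER SYSTEM SET DB_DOMAIN = 'us.acme.com' SCOPE = SPFILE;

The first statement changes the global database name immediately; the second only
ensures that DB_DOMAIN is consistent at the next startup.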

Determining Whether Global Naming Is Enforced
The name that you give to a link on the local database depends on whether the remote
database that you want to access enforces global naming. If the remote database
enforces global naming, then you must use the remote database global database name
as the name of the link. For example, if you are connected to the local hq server and
want to create a link to the remote mfg database, and mfg enforces global naming,
then you must use the mfg global database name as the link name.
You can also use service names as part of the database link name. For example, if you
use the service names sn1 and sn2 to connect to database hq.acme.com, and hq
enforces global naming, then you can create the following link names to hq:
■ HQ.ACME.COM@SN1
■ HQ.ACME.COM@SN2
See Also: "Using Connection Qualifiers to Specify Service Names
Within Link Names" on page 30-10 for more information about
using services names in link names

To determine whether global naming is enforced on a database, either examine the
database initialization parameter file or query the V$PARAMETER view.
For example, to see whether global naming is enforced on mfg, you could start a
session on mfg and then create and execute the following globalnames.sql script
(sample output included):
COL NAME FORMAT A12
COL VALUE FORMAT A6
SELECT NAME, VALUE FROM V$PARAMETER
WHERE NAME = 'global_names'
/

SQL> @globalnames

NAME         VALUE
------------ ------
global_names FALSE

Viewing a Global Database Name
Use the data dictionary view GLOBAL_NAME to view the database global name. For
example, issue the following:
SELECT * FROM GLOBAL_NAME;
GLOBAL_NAME
-------------------
SALES.AU.ORACLE.COM

Changing the Domain in a Global Database Name
Use the ALTER DATABASE statement to change the domain in a database global name.
Note that after the database is created, changing the initialization parameter
DB_DOMAIN has no effect on the global database name or on the resolution of database
link names.
The following example shows the syntax for the renaming statement, where database is
a database name and domain is the network domain:
ALTER DATABASE RENAME GLOBAL_NAME TO database.domain;

Use the following procedure to change the domain in a global database name:
1. Determine the current global database name. For example, issue:

   SELECT * FROM GLOBAL_NAME;

   GLOBAL_NAME
   -------------------
   SALES.AU.ORACLE.COM

2. Rename the global database name using an ALTER DATABASE statement. For
   example, enter:

   ALTER DATABASE RENAME GLOBAL_NAME TO sales.us.oracle.com;

3. Query the GLOBAL_NAME table to check the new name. For example, enter:

   SELECT * FROM GLOBAL_NAME;

   GLOBAL_NAME
   -------------------
   SALES.US.ORACLE.COM

Changing a Global Database Name: Scenario
In this scenario, you change the domain part of the global database name of the local
database. You also create database links using partially specified global names to test
how Oracle Database resolves the names. You discover that the database resolves the
partial names using the domain part of the current global database name of the local
database, not the value for the initialization parameter DB_DOMAIN.
1. You connect to SALES.US.ACME.COM and query the GLOBAL_NAME data
   dictionary view to determine the current database global name:

   CONNECT SYSTEM/password@sales.us.acme.com
   SELECT * FROM GLOBAL_NAME;

   GLOBAL_NAME
   -----------------
   SALES.US.ACME.COM

2. You query the V$PARAMETER view to determine the current setting for the
   DB_DOMAIN initialization parameter:

   SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'db_domain';

   NAME       VALUE
   ---------  -----------
   db_domain  US.ACME.COM

3. You then create a database link to a database called hq, using only a
   partially-specified global name:

   CREATE DATABASE LINK hq USING 'sales';

   The database expands the global database name for this link by appending the
   domain part of the global database name of the local database to the name of the
   database specified in the link.

4. You query USER_DB_LINKS to determine which domain name the database uses
   to resolve the partially specified global database name:

   SELECT DB_LINK FROM USER_DB_LINKS;

   DB_LINK
   ------------------
   HQ.US.ACME.COM

   This result indicates that the domain part of the global database name of the local
   database is us.acme.com. The database uses this domain in resolving partial
   database link names when the database link is created.

5. Because you have received word that the sales database will move to Japan, you
   rename the sales database to sales.jp.acme.com:

   ALTER DATABASE RENAME GLOBAL_NAME TO sales.jp.acme.com;
   SELECT * FROM GLOBAL_NAME;

   GLOBAL_NAME
   -----------------
   SALES.JP.ACME.COM

6. You query V$PARAMETER again and discover that the value of DB_DOMAIN is not
   changed, although you renamed the domain part of the global database name:

   SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'db_domain';

   NAME       VALUE
   ---------  -----------
   db_domain  US.ACME.COM

   This result indicates that the value of the DB_DOMAIN initialization parameter is
   independent of the ALTER DATABASE RENAME GLOBAL_NAME statement. The
   ALTER DATABASE statement determines the domain of the global database name,
   not the DB_DOMAIN initialization parameter (although it is good practice to alter
   DB_DOMAIN to reflect the new domain name).

7. You create another database link to database supply, and then query
   USER_DB_LINKS to see how the database resolves the domain part of the global
   database name of supply:

   CREATE DATABASE LINK supply USING 'supply';
   SELECT DB_LINK FROM USER_DB_LINKS;

   DB_LINK
   ------------------
   HQ.US.ACME.COM
   SUPPLY.JP.ACME.COM

   This result indicates that the database resolves the partially specified link name by
   using the domain jp.acme.com. This domain is used when the link is created
   because it is the domain part of the global database name of the local database.
   The database does not use the DB_DOMAIN initialization parameter setting when
   resolving the partial link name.

8. You then receive word that your previous information was faulty: sales will be
   in the ASIA.JP.ACME.COM domain, not the JP.ACME.COM domain.
   Consequently, you rename the global database name as follows:

   ALTER DATABASE RENAME GLOBAL_NAME TO sales.asia.jp.acme.com;
   SELECT * FROM GLOBAL_NAME;

   GLOBAL_NAME
   ----------------------
   SALES.ASIA.JP.ACME.COM

9. You query V$PARAMETER to again check the setting for the parameter
   DB_DOMAIN:

   SELECT NAME, VALUE FROM V$PARAMETER
   WHERE NAME = 'db_domain';

   NAME       VALUE
   ---------  -----------
   db_domain  US.ACME.COM

   The result indicates that the domain setting in the parameter file is exactly the
   same as it was before you issued either of the ALTER DATABASE RENAME
   statements.

10. Finally, you create a link to the warehouse database and again query
    USER_DB_LINKS to determine how the database resolves the partially-specified
    global name:

    CREATE DATABASE LINK warehouse USING 'warehouse';
    SELECT DB_LINK FROM USER_DB_LINKS;

    DB_LINK
    ---------------------------
    HQ.US.ACME.COM
    SUPPLY.JP.ACME.COM
    WAREHOUSE.ASIA.JP.ACME.COM

    Again, you see that the database uses the domain part of the global database name
    of the local database to expand the partial link name during link creation.
Note: In order to correct the supply database link, it must be
dropped and re-created.

See Also: Oracle Database Reference for more information about
specifying the DB_NAME and DB_DOMAIN initialization parameters

Creating Database Links
To support application access to the data and schema objects throughout a distributed
database system, you must create all necessary database links. This section contains
the following topics:
■ Obtaining Privileges Necessary for Creating Database Links
■ Specifying Link Types
■ Specifying Link Users
■ Using Connection Qualifiers to Specify Service Names Within Link Names

Obtaining Privileges Necessary for Creating Database Links
A database link is a pointer in the local database that lets you access objects on a
remote database. To create a private database link, you must have been granted the
proper privileges. The following table illustrates which privileges are required on
which database for which type of link:
Privilege                     Database  Required For
----------------------------  --------  --------------------------------------
CREATE DATABASE LINK          Local     Creation of a private database link.
CREATE PUBLIC DATABASE LINK   Local     Creation of a public database link.
CREATE SESSION                Remote    Creation of any type of database link.

To see which privileges you currently have available, query ROLE_SYS_PRIVS. For
example, you could create and execute the following privs.sql script (sample
output included):
SELECT DISTINCT PRIVILEGE AS "Database Link Privileges"
FROM ROLE_SYS_PRIVS
WHERE PRIVILEGE IN ( 'CREATE SESSION','CREATE DATABASE LINK',
'CREATE PUBLIC DATABASE LINK')
/
SQL> @privs
Database Link Privileges
----------------------------------------
CREATE DATABASE LINK
CREATE PUBLIC DATABASE LINK
CREATE SESSION
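
If any of these privileges is missing, a suitably privileged administrator can grant
it. For example, a DBA might issue the following on the local database (scott is an
illustrative grantee):

GRANT CREATE DATABASE LINK TO scott;
GRANT CREATE PUBLIC DATABASE LINK TO scott;

Remember that CREATE SESSION must be granted to the link user on the remote
database, not the local one.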


Specifying Link Types
When you create a database link, you must decide who will have access to it. The
following sections describe how to create the three basic types of links:
■ Creating Private Database Links
■ Creating Public Database Links
■ Creating Global Database Links

Creating Private Database Links
To create a private database link, specify the following (where link_name is the global
database name or an arbitrary link name):
CREATE DATABASE LINK link_name ...;

Following are examples of private database links:
SQL Statement                     Result
--------------------------------  ----------------------------------------------------
CREATE DATABASE LINK              A private link using the global database name to the
supply.us.acme.com;               remote supply database.
                                  The link uses the userid/password of the connected
                                  user. So if scott (identified by tiger) uses the link
                                  in a query, the link establishes a connection to the
                                  remote database as scott/tiger.

CREATE DATABASE LINK link_2       A private fixed user link called link_2 to the
CONNECT TO jane IDENTIFIED        database with service name us_supply. The link
BY doe USING 'us_supply';         connects to the remote database with the
                                  userid/password of jane/doe regardless of the
                                  connected user.

CREATE DATABASE LINK link_1       A private link called link_1 to the database with
CONNECT TO CURRENT_USER           service name us_supply. The link uses the
USING 'us_supply';                userid/password of the current user to log onto the
                                  remote database.
                                  Note: The current user may not be the same as the
                                  connected user, and must be a global user on both
                                  databases involved in the link (see "Users of
                                  Database Links" on page 29-11). Current user links
                                  are part of the Oracle Advanced Security option.

See Also: Oracle Database SQL Reference for CREATE DATABASE
LINK syntax

Creating Public Database Links
To create a public database link, use the keyword PUBLIC (where link_name is the
global database name or an arbitrary link name):
CREATE PUBLIC DATABASE LINK link_name ...;

Following are examples of public database links:


SQL Statement                     Result
--------------------------------  --------------------------------------------------
CREATE PUBLIC DATABASE LINK       A public link to the remote supply database.
supply.us.acme.com;               The link uses the userid/password of the
                                  connected user. So if scott (identified by
                                  tiger) uses the link in a query, the link
                                  establishes a connection to the remote database
                                  as scott/tiger.

CREATE PUBLIC DATABASE LINK       A public link called pu_link to the database
pu_link CONNECT TO                with service name supply. The link uses the
CURRENT_USER USING 'supply';      userid/password of the current user to log onto
                                  the remote database.
                                  Note: The current user may not be the same as
                                  the connected user, and must be a global user
                                  on both databases involved in the link (see
                                  "Users of Database Links" on page 29-11).

CREATE PUBLIC DATABASE LINK       A public fixed user link to the remote sales
sales.us.acme.com CONNECT TO      database. The link connects to the remote
jane IDENTIFIED BY doe;           database with the userid/password of jane/doe.

See Also: Oracle Database SQL Reference for CREATE PUBLIC
DATABASE LINK syntax

Creating Global Database Links
In earlier releases, you defined global database links in the Oracle Names server. The
Oracle Names server has been deprecated. Now, you can use a directory server in
which databases are identified by net service names. In this document, these are
referred to as global database links.
See the Oracle Database Net Services Administrator's Guide to learn how to create
directory entries that act as global database links.

Specifying Link Users
A database link defines a communication path from one database to another. When an
application uses a database link to access a remote database, Oracle Database
establishes a database session in the remote database on behalf of the local application
request.
When you create a private or public database link, you can determine which schema
on the remote database the link will establish connections to by creating fixed user,
current user, and connected user database links.

Creating Fixed User Database Links
To create a fixed user database link, you embed the credentials (in this case, a
username and password) required to access the remote database in the definition of
the link:
CREATE DATABASE LINK ... CONNECT TO username IDENTIFIED BY password ...;

Following are examples of fixed user database links:

SQL Statement                     Result
--------------------------------  -------------------------------------------------
CREATE PUBLIC DATABASE LINK       A public link using the global database name to
supply.us.acme.com CONNECT        the remote supply database. The link connects to
TO scott IDENTIFIED BY tiger;     the remote database with the userid/password
                                  scott/tiger.

CREATE DATABASE LINK foo          A private fixed user link called foo to the
CONNECT TO jane IDENTIFIED        database with service name finance. The link
BY doe USING 'finance';           connects to the remote database with the
                                  userid/password jane/doe.

When an application uses a fixed user database link, the local server always
establishes a connection to a fixed remote schema in the remote database. The local
server also sends the fixed user's credentials across the network when an application
uses the link to access the remote database.

Creating Connected User and Current User Database Links
Connected user and current user database links do not include credentials in the
definition of the link. The credentials used to connect to the remote database can
change depending on the user that references the database link and the operation
performed by the application.
Note: For many distributed applications, you do not want a user
to have privileges in a remote database. One simple way to achieve
this result is to create a procedure that contains a fixed user or
current user database link within it. In this way, the user accessing
the procedure temporarily assumes someone else's privileges.

For an extended conceptual discussion of the distinction between connected users and
current users, see "Users of Database Links" on page 29-11.
Creating a Connected User Database Link

To create a connected user database link, omit the CONNECT TO clause. The
following syntax creates a connected user database link, where dblink is the name of
the link and net_service_name is an optional connect string:
CREATE [SHARED] [PUBLIC] DATABASE LINK dblink ... [USING 'net_service_name'];

For example, to create a connected user database link, use the following syntax:
CREATE DATABASE LINK sales.division3.acme.com USING 'sales';

Creating a Current User Database Link

To create a current user database link, use the CONNECT TO CURRENT_USER clause
in the link creation statement. Current user links are only available through the
Oracle Advanced Security option.
The following syntax creates a current user database link, where dblink is the name of
the link and net_service_name is an optional connect string:
CREATE [SHARED] [PUBLIC] DATABASE LINK dblink CONNECT TO CURRENT_USER
[USING 'net_service_name'];

For example, to create a current user database link to the sales database, you
might use the following syntax:
CREATE DATABASE LINK sales CONNECT TO CURRENT_USER USING 'sales';

Note: To use a current user database link, the current user must be
a global user on both databases involved in the link.

See Also: Oracle Database SQL Reference for more syntax
information about creating database links

Using Connection Qualifiers to Specify Service Names Within Link Names
In some situations, you may want to have several database links of the same type (for
example, public) that point to the same remote database, yet establish connections to
the remote database using different communication pathways. Some cases in which
this strategy is useful are:
■ A remote database is part of an Oracle Real Application Clusters configuration, so
  you define several public database links at your local node so that connections can
  be established to specific instances of the remote database.
■ Some clients connect to the Oracle Database server using TCP/IP while others use
  DECNET.

To facilitate such functionality, the database lets you create a database link with an
optional service name in the database link name. When creating a database link, a
service name is specified as the trailing portion of the database link name, separated
by an @ sign, as in @sales. This string is called a connection qualifier.
For example, assume that remote database hq.acme.com is managed in an Oracle
Real Application Clusters environment. The hq database has two instances named
hq_1 and hq_2. The local database can contain the following public database links to
define pathways to the remote instances of the hq database:
CREATE PUBLIC DATABASE LINK hq.acme.com@hq_1
USING 'string_to_hq_1';
CREATE PUBLIC DATABASE LINK hq.acme.com@hq_2
USING 'string_to_hq_2';
CREATE PUBLIC DATABASE LINK hq.acme.com
USING 'string_to_hq';

Notice in the first two examples that a service name is simply a part of the database
link name. The text of the service name does not necessarily indicate how a connection
is to be established; this information is specified in the service name of the USING
clause. Also notice that in the third example, a service name is not specified as part of
the link name. In this case, just as when a service name is specified as part of the link
name, the instance is determined by the USING string.
To use a service name to specify a particular instance, include the service name at the
end of the global object name:
SELECT * FROM scott.emp@hq.acme.com@hq_1;

Note that in this example, there are two @ symbols.

Using Shared Database Links
Every application that references a remote server using a standard database link
establishes a connection between the local database and the remote database. Many
users running applications simultaneously can cause a high number of connections
between the local and remote databases.


Shared database links enable you to limit the number of network connections required
between the local server and the remote server.
This section contains the following topics:
■ Determining Whether to Use Shared Database Links
■ Creating Shared Database Links
■ Configuring Shared Database Links

See Also: "What Are Shared Database Links?" on page 29-7 for a
conceptual overview of shared database links

Determining Whether to Use Shared Database Links
Look carefully at your application and shared server configuration to determine
whether to use shared links. A simple guideline is to use shared database links when
the number of users accessing a database link is expected to be much larger than the
number of server processes in the local database.
The following table illustrates three possible configurations involving database links:
Link Type   Server Mode        Consequences
----------  -----------------  ----------------------------------------------------
Nonshared   Dedicated/shared   If your application uses a standard public database
            server             link, and 100 users simultaneously require a
                               connection, then 100 direct network connections to
                               the remote database are required.

Shared      Shared server      If 10 shared server processes exist in the local
                               shared server mode database, then 100 users that
                               use the same database link require 10 or fewer
                               network connections to the remote server. Each
                               local shared server process may only need one
                               connection to the remote server.

Shared      Dedicated          If 10 clients connect to a local dedicated server,
                               and each client has 10 sessions on the same
                               connection (thus establishing 100 sessions overall),
                               and each session references the same remote
                               database, then only 10 connections are needed. With
                               a nonshared database link, 100 connections are
                               needed.

Shared database links are not useful in all situations. Assume that only one user
accesses the remote server. If this user defines a shared database link and 10 shared
server processes exist in the local database, then this user can require up to 10 network
connections to the remote server. Because the user can use each shared server process,
each process can establish a connection to the remote server.
Clearly, a nonshared database link is preferable in this situation because it requires
only one network connection. Shared database links lead to more network connections
in single-user scenarios, so use shared links only when many users need to use the
same link. Typically, shared links are used for public database links, but can also be
used for private database links when many clients access the same local schema (and
therefore the same private database link).

Note: In a multitiered environment, there is a restriction that if
you use a shared database link to connect to a remote database,
then that remote database cannot link to another database with a
database link that cannot be migrated. That link must use a shared
server, or it must be another shared database link.

Creating Shared Database Links
To create a shared database link, use the keyword SHARED in the CREATE DATABASE
LINK statement:
CREATE SHARED DATABASE LINK dblink_name
[CONNECT TO username IDENTIFIED BY password]|[CONNECT TO CURRENT_USER]
AUTHENTICATED BY schema_name IDENTIFIED BY password
[USING 'service_name'];

Whenever you use the keyword SHARED, the clause AUTHENTICATED BY is required.
The schema specified in the AUTHENTICATED BY clause must exist in the remote
database and must be granted at least the CREATE SESSION privilege. The credentials
of this schema can be considered the authentication method between the local
database and the remote database. These credentials are required to protect the remote
shared server processes from clients that masquerade as a database link user and
attempt to gain unauthorized access to information.
After a connection is made with a shared database link, operations on the remote
database proceed with the privileges of the CONNECT TO user or CURRENT_USER, not
the AUTHENTICATED BY schema.
The following example creates a fixed user, shared link to database sales, connecting
as scott and authenticated as linkuser:
CREATE SHARED DATABASE LINK link2sales
CONNECT TO scott IDENTIFIED BY tiger
AUTHENTICATED BY linkuser IDENTIFIED BY ostrich
USING 'sales';

See Also: Oracle Database SQL Reference for information about the
CREATE DATABASE LINK statement

Configuring Shared Database Links
You can configure shared database links in the following ways:
■ Creating Shared Links to Dedicated Servers
■ Creating Shared Links to Shared Servers

Creating Shared Links to Dedicated Servers
In the configuration illustrated in Figure 30–1, a shared server process in the local
server owns a dedicated remote server process. The advantage is that a direct network
transport exists between the local shared server and the remote dedicated server. A
disadvantage is that extra back-end server processes are needed.

Note: The remote server can either be a shared server or dedicated
server. There is a dedicated connection between the local and
remote servers. When the remote server is a shared server, you can
force a dedicated server connection by using the
(SERVER=DEDICATED) clause in the definition of the service name.

Figure 30–1 A Shared Database Link to Dedicated Server Processes

[Figure 30–1: a user process on a client workstation connects to dispatcher
processes and shared server processes on the local database server; each local
shared server process holds a dedicated connection to a dedicated server process
on the remote database server.]

Creating Shared Links to Shared Servers
The configuration illustrated in Figure 30–2 uses shared server processes on the remote
server. This configuration eliminates the need for more dedicated servers, but requires
the connection to go through the dispatcher on the remote server. Note that both the
local and the remote server must be configured as shared servers.


Figure 30–2 Shared Database Link to Shared Server

[Figure 30–2: user processes on client workstations connect to dispatcher
processes and shared server processes on the local database server, which in turn
connect through dispatcher processes to shared server processes on the remote
database server.]

See Also: Oracle Database Net Services Administrator's Guide for
information about the shared server option

Managing Database Links
This section contains the following topics:
■ Closing Database Links
■ Dropping Database Links
■ Limiting the Number of Active Database Link Connections

Closing Database Links
If you access a database link in a session, then the link remains open until you close
the session. A link is open in the sense that a process is active on each of the remote
databases accessed through the link. This situation has the following consequences:
■ If 20 users open sessions and access the same public link in a local database, then
  20 database link connections are open.
■ If 20 users open sessions and each user accesses a private link, then 20 database
  link connections are open.
■ If one user starts a session and accesses 20 different links, then 20 database link
  connections are open.

After you close a session, the links that were active in the session are automatically
closed. You may have occasion to close the link manually. For example, close links
when:
■ The network connection established by a link is used infrequently in an
  application.
■ The user session must be terminated.

If you want to close a link, issue the following statement, where linkname refers to the
name of the link:
ALTER SESSION CLOSE DATABASE LINK linkname;

Note that this statement only closes the links that are active in your current session.

Dropping Database Links
You can drop a database link just as you can drop a table or view. If the link is private,
then it must be in your schema. If the link is public, then you must have the DROP
PUBLIC DATABASE LINK system privilege.
The statement syntax is as follows, where dblink is the name of the link:
DROP [PUBLIC] DATABASE LINK dblink;

Procedure for Dropping a Private Database Link
1. Connect to the local database using SQL*Plus. For example, enter:

   CONNECT scott/tiger@local_db

2. Query USER_DB_LINKS to view the links that you own. For example, enter:

   SELECT DB_LINK FROM USER_DB_LINKS;

   DB_LINK
   -----------------------------------
   SALES.US.ORACLE.COM
   MKTG.US.ORACLE.COM
   2 rows selected.

3. Drop the desired link using the DROP DATABASE LINK statement. For example,
   enter:

   DROP DATABASE LINK sales.us.oracle.com;

Procedure for Dropping a Public Database Link
1. Connect to the local database as a user with the DROP PUBLIC DATABASE LINK
   privilege. For example, enter:

   CONNECT SYSTEM/password@local_db AS SYSDBA

2. Query DBA_DB_LINKS to view the public links. For example, enter:

   SELECT DB_LINK FROM DBA_DB_LINKS
   WHERE OWNER = 'PUBLIC';

   DB_LINK
   -----------------------------------
   DBL1.US.ORACLE.COM
   SALES.US.ORACLE.COM
   INST2.US.ORACLE.COM
   RMAN2.US.ORACLE.COM
   4 rows selected.

3. Drop the desired link using the DROP PUBLIC DATABASE LINK statement. For
   example, enter:

   DROP PUBLIC DATABASE LINK sales.us.oracle.com;

Limiting the Number of Active Database Link Connections
You can limit the number of connections from a user process to remote databases using
the static initialization parameter OPEN_LINKS. This parameter controls the number of
remote connections that a single user session can use concurrently in distributed
transactions.
Note the following considerations for setting this parameter:
■ The value should be greater than or equal to the number of databases referred to
  in a single SQL statement that references multiple databases.
■ Increase the value if several distributed databases are accessed over time. Thus, if
  you regularly access three databases, set OPEN_LINKS to 3 or greater.
■ The default value for OPEN_LINKS is 4. If OPEN_LINKS is set to 0, then no
  distributed transactions are allowed.
See Also: Oracle Database Reference for more information about
the OPEN_LINKS initialization parameter
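
Because OPEN_LINKS is a static parameter, a new value takes effect at the next
instance startup. A minimal sketch, assuming the instance uses a server parameter
file:

ALTER SYSTEM SET OPEN_LINKS = 8 SCOPE = SPFILE;

After the instance restarts, each user session can hold up to eight concurrent open
database link connections.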

Viewing Information About Database Links
The data dictionary of each database stores the definitions of all the database links in
the database. You can use data dictionary tables and views to gain information about
the links. This section contains the following topics:
■ Determining Which Links Are in the Database
■ Determining Which Link Connections Are Open

Determining Which Links Are in the Database
The following views show the database links that have been defined at the local
database and stored in the data dictionary:
View            Purpose
--------------  ----------------------------------------------------------
DBA_DB_LINKS    Lists all database links in the database.
ALL_DB_LINKS    Lists all database links accessible to the connected user.
USER_DB_LINKS   Lists all database links owned by the connected user.

These data dictionary views contain the same basic information about database links,
with some exceptions:

Column     Which Views?       Description
---------  -----------------  ------------------------------------------------------
OWNER      All except USER_*  The user who created the database link. If the link is
                              public, then the user is listed as PUBLIC.
DB_LINK    All                The name of the database link.
USERNAME   All                If the link definition includes a fixed user, then this
                              column displays the username of the fixed user. If
                              there is no fixed user, the column is NULL.
PASSWORD   Only USER_*        Not used. Maintained for backward compatibility
                              only.
HOST       All                The net service name used to connect to the remote
                              database.
CREATED    All                Creation time of the database link.

Any user can query USER_DB_LINKS to determine which database links are available
to that user. Only those with additional privileges can use the ALL_DB_LINKS or
DBA_DB_LINKS view.
The following script queries the DBA_DB_LINKS view to access link information:
COL OWNER FORMAT a10
COL USERNAME FORMAT A8 HEADING "USER"
COL DB_LINK FORMAT A30
COL HOST FORMAT A7 HEADING "SERVICE"
SELECT * FROM DBA_DB_LINKS
/

Here, the script is invoked and the resulting output is shown:

SQL> @link_script

OWNER      DB_LINK                        USER     SERVICE CREATED
---------- ------------------------------ -------- ------- ---------
SYS        TARGET.US.ACME.COM             SYS      inst1   23-JUN-99
PUBLIC     DBL1.UK.ACME.COM               BLAKE    ora51   23-JUN-99
PUBLIC     RMAN2.US.ACME.COM                       inst2   23-JUN-99
PUBLIC     DEPT.US.ACME.COM                        inst2   23-JUN-99
JANE       DBL.UK.ACME.COM                BLAKE    ora51   23-JUN-99
SCOTT      EMP.US.ACME.COM                SCOTT    inst2   23-JUN-99

6 rows selected.

Determining Which Link Connections Are Open
You may find it useful to determine which database link connections are currently
open in your session. Note that if you connect as SYSDBA, you cannot query a view to
determine all the links open for all sessions; you can only access the link information
in the session within which you are working.
The following views show the database link connections that are currently open in
your current session:
View        Purpose
----------  --------------------------------------------------------------
V$DBLINK    Lists all open database links in your session, that is, all
            database links with the IN_TRANSACTION column set to YES.
GV$DBLINK   Lists all open database links in your session along with
            their corresponding instances. This view is useful in an
            Oracle Real Application Clusters configuration.

These data dictionary views contain the same basic information about database links,
with one exception:

Column                 Which Views?    Description
---------------------  --------------  ------------------------------------------------
DB_LINK                All             The name of the database link.
OWNER_ID               All             The owner of the database link.
LOGGED_ON              All             Whether the database link is currently logged on.
HETEROGENEOUS          All             Whether the database link is homogeneous (NO)
                                       or heterogeneous (YES).
PROTOCOL               All             The communication protocol for the database link.
OPEN_CURSORS           All             Whether cursors are open for the database link.
IN_TRANSACTION         All             Whether the database link is accessed in a
                                       transaction that has not yet been committed or
                                       rolled back.
UPDATE_SENT            All             Whether there was an update on the database link.
COMMIT_POINT_STRENGTH  All             The commit point strength of the transactions
                                       using the database link.
INST_ID                GV$DBLINK only  The instance from which the view information
                                       was obtained.

For example, you can create and execute the script below to determine which links are
open (sample output included):
COL DB_LINK FORMAT A25
COL OWNER_ID FORMAT 99999 HEADING "OWNID"
COL LOGGED_ON FORMAT A5 HEADING "LOGON"
COL HETEROGENEOUS FORMAT A5 HEADING "HETER"
COL PROTOCOL FORMAT A8
COL OPEN_CURSORS FORMAT 999 HEADING "OPN_CUR"
COL IN_TRANSACTION FORMAT A3 HEADING "TXN"
COL UPDATE_SENT FORMAT A6 HEADING "UPDATE"
COL COMMIT_POINT_STRENGTH FORMAT 99999 HEADING "C_P_S"

SELECT * FROM V$DBLINK
/

SQL> @dblink

DB_LINK                    OWNID LOGON HETER PROTOCOL OPN_CUR TXN UPDATE  C_P_S
------------------------- ------ ----- ----- -------- ------- --- ------ ------
INST2.ACME.COM                 0 YES   YES   UNKN           0 YES YES       255

Creating Location Transparency
After you have configured the necessary database links, you can use various tools to
hide the distributed nature of the database system from users. In other words, users
can access remote objects as if they were local objects. The following sections explain
how to hide distributed functionality from users:
■ Using Views to Create Location Transparency
■ Using Synonyms to Create Location Transparency
■ Using Procedures to Create Location Transparency

Using Views to Create Location Transparency
Local views can provide location transparency for local and remote tables in a
distributed database system.
For example, assume that table emp is stored in a local database and table dept is
stored in a remote database. To make these tables transparent to users of the system,
you can create a view in the local database that joins local and remote data:
CREATE VIEW company AS
SELECT a.empno, a.ename, b.dname
FROM scott.emp a, jward.dept@hq.acme.com b
WHERE a.deptno = b.deptno;
Figure 30–3 Views and Location Transparency

[Figure 30–3: the local SCOTT.EMP table on one database server is joined with the
remote JWARD.DEPT table in the HQ database to form the COMPANY view, which
presents EMPNO, ENAME, and DNAME to users of the sales database.]
When users access this view, they do not need to know where the data is physically
stored, or if data from more than one table is being accessed. Thus, it is easier for them
to get required information. For example, the following query provides data from both
the local and remote database table:
SELECT * FROM company;

The owner of the local view can grant only those object privileges on the local view
that have been granted by the remote user. (The remote user is implied by the type of
database link.) This is similar to privilege management for views that reference local
data.
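
For example, if the remote user implied by the database link holds a grantable
SELECT privilege on the remote dept table, the owner of the company view shown
above could share it locally (hr_clerk is an illustrative local user):

GRANT SELECT ON company TO hr_clerk;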

Using Synonyms to Create Location Transparency
Synonyms are useful in both distributed and non-distributed environments because
they hide the identity of the underlying object, including its location in a distributed
database system. If you must rename or move the underlying object, you only need to
redefine the synonym; applications based on the synonym continue to function
normally. Synonyms also simplify SQL statements for users in a distributed database
system.

Creating Synonyms
You can create synonyms for the following:
■ Tables
■ Types
■ Views
■ Materialized views
■ Sequences
■ Procedures
■ Functions
■ Packages

All synonyms are schema objects that are stored in the data dictionary of the database
in which they are created. To simplify remote table access through database links, a
synonym can allow single-word access to remote data, hiding the specific object name
and the location from users of the synonym.
The syntax to create a synonym is:

CREATE [PUBLIC] SYNONYM synonym_name
FOR [schema.]object_name[@database_link_name];

where:
■ PUBLIC is a keyword specifying that this synonym is available to all users.
  Omitting this parameter makes a synonym private, and usable only by the creator.
  Public synonyms can be created only by a user with the CREATE PUBLIC SYNONYM
  system privilege.
■ synonym_name specifies the alternate object name to be referenced by users and
  applications.
■ schema specifies the schema of the object specified in object_name. Omitting
  this parameter uses the schema of the creator as the schema of the object.
■ object_name specifies either a table, view, sequence, materialized view, type,
  procedure, function, or package as appropriate.
■ database_link_name specifies the database link identifying the remote
  database and schema in which the object specified in object_name is located.

A synonym must be a uniquely named object for its schema. If a schema contains a
schema object and a public synonym exists with the same name, then the database


always finds the schema object when the user that owns the schema references that
name.
Example: Creating a Public Synonym
Assume that in every database in a distributed database system, a public synonym is
defined for the scott.emp table stored in the hq database:
CREATE PUBLIC SYNONYM emp FOR scott.emp@hq.acme.com;

You can design an employee management application without regard to where the
application is used because the location of the table scott.emp@hq.acme.com is
hidden by the public synonyms. SQL statements in the application access the table by
referencing the public synonym emp.
Furthermore, if you move the emp table from the hq database to the hr database, then
you only need to change the public synonyms on the nodes of the system. The
employee management application continues to function properly on all nodes.
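For example, a minimal sketch of the redefinition on each node, assuming the table's new location is scott.emp in the hr database and that the hr database's global name is hr.acme.com:

CREATE OR REPLACE PUBLIC SYNONYM emp FOR scott.emp@hr.acme.com;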

Managing Privileges and Synonyms
A synonym is a reference to an actual object. A user who has access to a synonym for a
particular schema object must also have privileges on the underlying schema object
itself. For example, if the user attempts to access a synonym but does not have
privileges on the table it identifies, an error occurs indicating that the table or view
does not exist.
Assume scott creates a local synonym emp as an alias for the remote object scott.emp@sales.acme.com. scott cannot grant object privileges on the synonym to another local user, because doing so would amount to granting privileges on the remote emp table in the sales database, which is not allowed. This behavior is different from privilege management for synonyms that are aliases for local tables or views.
Therefore, you cannot manage local privileges when synonyms are used for location transparency. Security for the base object is controlled entirely at the remote node. For example, in this scenario scott cannot grant object privileges on the emp synonym to other local users.
Unlike a database link referenced in a view or procedure definition, a database link
referenced in a synonym is resolved by first looking for a private link owned by the
schema in effect at the time the reference to the synonym is parsed. Therefore, to
ensure the desired object resolution, it is especially important to specify the schema of
the underlying object in the definition of a synonym.
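For example, the following definition leaves nothing except the link itself to be resolved against the schema in effect at parse time (emp_syn is a hypothetical synonym name):

CREATE SYNONYM emp_syn FOR scott.emp@sales.acme.com;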

Using Procedures to Create Location Transparency
PL/SQL program units called procedures can provide location transparency. You have
these options:
■ Using Local Procedures to Reference Remote Data
■ Using Local Procedures to Call Remote Procedures
■ Using Local Synonyms to Reference Remote Procedures

Using Local Procedures to Reference Remote Data
Procedures or functions (either standalone or in packages) can contain SQL statements
that reference remote data. For example, consider the procedure created by the
following statement:
CREATE PROCEDURE fire_emp (enum NUMBER) AS
BEGIN
DELETE FROM emp@hq.acme.com
WHERE empno = enum;
END;

When a user or application calls the fire_emp procedure, it is not apparent that a
remote table is being modified.
A second layer of location transparency is possible when the statements in a procedure
indirectly reference remote data using local procedures, views, or synonyms. For
example, the following statement defines a local synonym:
CREATE SYNONYM emp FOR emp@hq.acme.com;

Given this synonym, you can create the fire_emp procedure using the following
statement:
CREATE PROCEDURE fire_emp (enum NUMBER) AS
BEGIN
DELETE FROM emp WHERE empno = enum;
END;

If you rename or move the table emp@hq, then you only need to modify the local
synonym that references the table. None of the procedures and applications that call
the procedure require modification.

Using Local Procedures to Call Remote Procedures
You can use a local procedure to call a remote procedure. The remote procedure can
then execute the required DML. For example, assume that scott connects to
local_db and creates the following procedure:
CONNECT scott/tiger@local_db
CREATE PROCEDURE fire_emp (enum NUMBER)
AS
BEGIN
term_emp@hq.acme.com(enum);
END;

Now, assume that scott connects to the remote database and creates the remote
procedure:
CONNECT scott/tiger@hq.acme.com
CREATE PROCEDURE term_emp (enum NUMBER)
AS
BEGIN
DELETE FROM emp WHERE empno = enum;
END;

When a user or application connected to local_db calls the fire_emp procedure,
this procedure in turn calls the remote term_emp procedure on hq.acme.com.

Using Local Synonyms to Reference Remote Procedures
For example, scott connects to the local sales.acme.com database and creates the
following procedure:
CREATE PROCEDURE fire_emp (enum NUMBER) AS
BEGIN
DELETE FROM emp@hq.acme.com
WHERE empno = enum;
END;

User peggy then connects to the supply.acme.com database and creates the
following synonym for the procedure that scott created on the remote sales
database:
SQL> CONNECT peggy/hill@supply
SQL> CREATE PUBLIC SYNONYM fire_emp FOR scott.fire_emp@sales.acme.com;

A local user on supply can use this synonym to execute the procedure on sales.
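For example, a local user on supply might run the remote procedure through the synonym in SQL*Plus (7934 is an illustrative employee number):

SQL> EXECUTE fire_emp(7934);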

Managing Procedures and Privileges
Assume a local procedure includes a statement that references a remote table or view.
The owner of the local procedure can grant the execute privilege to any user, thereby
giving that user the ability to execute the procedure and, indirectly, access remote data.
In general, procedures aid in security. Privileges for objects referenced within a
procedure do not need to be explicitly granted to the calling users.
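For example, a minimal sketch, assuming the fire_emp procedure shown earlier and an arbitrary local user ford:

GRANT EXECUTE ON fire_emp TO ford;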

Managing Statement Transparency
The database allows the following standard DML statements to reference remote
tables:
■ SELECT (queries)
■ INSERT
■ UPDATE
■ DELETE
■ SELECT...FOR UPDATE (not always supported in Heterogeneous Systems)
■ LOCK TABLE

Queries including joins, aggregates, subqueries, and SELECT...FOR UPDATE can
reference any number of local and remote tables and views. For example, the following
query joins information from two remote tables:
SELECT e.empno, e.ename, d.dname
FROM scott.emp@sales.division3.acme.com e, jward.dept@hq.acme.com d
WHERE e.deptno = d.deptno;

In a homogeneous environment, UPDATE, INSERT, DELETE, and LOCK TABLE
statements can reference both local and remote tables. No programming is necessary to
update remote data. For example, the following statement inserts new rows into the
remote table emp in the scott.sales schema by selecting rows from the emp table in
the jward schema in the local database:
INSERT INTO scott.emp@sales.division3.acme.com
SELECT * FROM jward.emp;
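Similarly, UPDATE and DELETE statements can target a remote table directly; the predicate values in this sketch are illustrative:

UPDATE scott.emp@sales.division3.acme.com
SET sal = sal * 1.1
WHERE deptno = 20;

DELETE FROM scott.emp@sales.division3.acme.com
WHERE empno = 7934;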

Restrictions for Statement Transparency:
Several restrictions apply to statement transparency.
■ Data manipulation language statements that update objects on a remote non-Oracle Database system cannot reference any objects on the local Oracle Database. For example, a statement such as the following will cause an error to be raised:

INSERT INTO remote_table@link SELECT * FROM local_table;
■ Within a single SQL statement, all referenced LONG and LONG RAW columns, sequences, updated tables, and locked tables must be located at the same node.
■ The database does not allow remote DDL statements (for example, CREATE, ALTER, and DROP) in homogeneous systems except through remote execution of procedures of the DBMS_SQL package, as in this example:

DBMS_SQL.PARSE@link_name(crs, 'drop table emp', v7);

Note that in Heterogeneous Systems, a pass-through facility lets you execute DDL.
■ The LIST CHAINED ROWS clause of an ANALYZE statement cannot reference remote tables.
■ In a distributed database system, the database always evaluates environmentally-dependent SQL functions such as SYSDATE, USER, UID, and USERENV with respect to the local server, no matter where the statement (or portion of a statement) executes.

Note: Oracle Database supports the USERENV function for queries only.
■ A number of performance restrictions relate to access of remote objects:
– Remote views do not have statistical data.
– Queries on partitioned tables may not be optimized.
– No more than 20 indexes are considered for a remote table.
– No more than 20 columns are used for a composite index.
■ There is a restriction in the Oracle Database implementation of distributed read consistency that can cause one node to be in the past with respect to another node. In accordance with read consistency, a query may end up retrieving consistent, but out-of-date data. See "Managing Read Consistency" on page 33-19 to learn how to manage this problem.
See Also: Oracle Database PL/SQL Packages and Types Reference for
more information about the DBMS_SQL package

Managing a Distributed Database: Examples
This section presents examples illustrating management of database links:
■ Example 1: Creating a Public Fixed User Database Link
■ Example 2: Creating a Public Fixed User Shared Database Link
■ Example 3: Creating a Public Connected User Database Link
■ Example 4: Creating a Public Connected User Shared Database Link
■ Example 5: Creating a Public Current User Database Link


Example 1: Creating a Public Fixed User Database Link
The following statements connect to the local database as jane and create a public
fixed user database link to database sales for scott. The database is accessed
through its net service name sldb:
CONNECT jane/doe@local
CREATE PUBLIC DATABASE LINK sales.division3.acme.com
CONNECT TO scott IDENTIFIED BY tiger
USING 'sldb';

After executing these statements, any user connected to the local database can use the
sales.division3.acme.com database link to connect to the remote database. Each
user connects to the schema scott in the remote database.
To access the emp table in scott's remote schema, a user can issue the following
SQL query:
SELECT * FROM emp@sales.division3.acme.com;

Note that each application or user session creates a separate connection to the common
account on the server. The connection to the remote database remains open for the
duration of the application or user session.

Example 2: Creating a Public Fixed User Shared Database Link
The following example connects to the local database as dana and creates a public link
to the sales database (using its net service name sldb). The link allows a connection
to the remote database as scott and authenticates this user as scott:
CONNECT dana/sculley@local
CREATE SHARED PUBLIC DATABASE LINK sales.division3.acme.com
CONNECT TO scott IDENTIFIED BY tiger
AUTHENTICATED BY scott IDENTIFIED BY tiger
USING 'sldb';

Now, any user connected to the local shared server can use this database link to
connect to the remote sales database through a shared server process. The user can
then query tables in the scott schema.
In the preceding example, each local shared server can establish one connection to the
remote server. Whenever a local shared server process needs to access the remote
server through the sales.division3.acme.com database link, the local shared
server process reuses established network connections.

Example 3: Creating a Public Connected User Database Link
The following example connects to the local database as larry and creates a public
link to the database with the net service name sldb:
CONNECT larry/oracle@local
CREATE PUBLIC DATABASE LINK redwood
USING 'sldb';

Any user connected to the local database can use the redwood database link. The
connected user in the local database who uses the database link determines the remote
schema.


If scott is the connected user and uses the database link, then the database link
connects to the remote schema scott. If fox is the connected user and uses the
database link, then the database link connects to remote schema fox.
The following statement fails for local user fox when the remote schema fox cannot resolve the emp schema object. That is, if the fox schema in the sales.division3.acme.com database does not have emp as a table, view, or (public) synonym, an error is returned:
CONNECT fox/mulder@local
SELECT * FROM emp@redwood;

Example 4: Creating a Public Connected User Shared Database Link
The following example connects to the local database as neil and creates a shared,
public link to the sales database (using its net service name sldb). The user is
authenticated by the userid/password of crazy/horse. The following statement
creates a public, connected user, shared database link:
CONNECT neil/young@local
CREATE SHARED PUBLIC DATABASE LINK sales.division3.acme.com
AUTHENTICATED BY crazy IDENTIFIED BY horse
USING 'sldb';

Each user connected to the local server can use this shared database link to connect to
the remote database and query the tables in the corresponding remote schema.
Each local, shared server process establishes one connection to the remote server.
Whenever a local server process needs to access the remote server through the
sales.division3.acme.com database link, the local process reuses established
network connections, even if the connected user is a different user.
If this database link is used frequently, eventually every shared server in the local
database will have a remote connection. At this point, no more physical connections
are needed to the remote server, even if new users use this shared database link.

Example 5: Creating a Public Current User Database Link
The following example connects to the local database as the connected user and creates
a public link to the sales database (using its net service name sldb). The following
statement creates a public current user database link:
CONNECT bart/simpson@local
CREATE PUBLIC DATABASE LINK sales.division3.acme.com
CONNECT TO CURRENT_USER
USING 'sldb';

Note: To use this link, the current user must be a global user.

The consequences of this database link are as follows:
Assume scott creates local procedure fire_emp that deletes a row from the remote
emp table, and grants execute privilege on fire_emp to ford.
CONNECT scott/tiger@local_db
CREATE PROCEDURE fire_emp (enum NUMBER)
AS
BEGIN
DELETE FROM emp@sales.division3.acme.com
WHERE empno=enum;
END;
GRANT EXECUTE ON fire_emp TO ford;

Now, assume that ford connects to the local database and runs scott's procedure:
CONNECT ford/fairlane@local_db
EXECUTE scott.fire_emp(10345);

When ford executes the procedure scott.fire_emp, the procedure runs under
scott's privileges. Because a current user database link is used, the connection is
established to scott's remote schema, not ford's remote schema. Note that scott
must be a global user while ford does not have to be a global user.
Note: If a connected user database link were used instead, the connection would be to ford's remote schema. For more information about invoker rights and privileges, see the Oracle Database PL/SQL User's Guide and Reference.

You can accomplish the same result by using a fixed user database link to scott's remote schema.


31
Developing Applications for a Distributed Database System
This chapter describes important considerations for developing an application to run in a distributed database system. It contains the following topics:
■ Managing the Distribution of Application Data
■ Controlling Connections Established by Database Links
■ Maintaining Referential Integrity in a Distributed System
■ Tuning Distributed Queries
■ Handling Errors in Remote Procedures

See Also: Oracle Database Application Developer's Guide - Fundamentals for more information about application development in an Oracle Database environment

Managing the Distribution of Application Data
In a distributed database environment, coordinate with the database administrator to
determine the best location for the data. Some issues to consider are:
■ Number of transactions posted from each location
■ Amount of data (portion of table) used by each node
■ Performance characteristics and reliability of the network
■ Speed of various nodes, capacities of disks
■ Importance of a node or link when it is unavailable
■ Need for referential integrity among tables

Controlling Connections Established by Database Links
When a global object name is referenced in a SQL statement or remote procedure call,
database links establish a connection to a session in the remote database on behalf of
the local user. The remote connection and session are only created if the connection has
not already been established previously for the local user session.
The connections and sessions established to remote databases persist for the duration
of the local user's session, unless the application or user explicitly terminates them.
Note that when you issue a SELECT statement across a database link, a transaction lock is placed on the undo segments. To release the segment, you must issue a COMMIT or ROLLBACK statement.
Terminating remote connections established using database links is useful for
disconnecting high cost connections that are no longer required by the application.
You can terminate a remote connection and session using the ALTER SESSION
statement with the CLOSE DATABASE LINK clause. For example, assume you issue
the following transactions:
SELECT * FROM emp@sales;
COMMIT;

The following statement terminates the session in the remote database pointed to by
the sales database link:
ALTER SESSION CLOSE DATABASE LINK sales;

To close a database link connection in your user session, you must have the ALTER
SESSION system privilege.
Note: Before closing a database link, first close all cursors that use the link and then end your current transaction if it uses the link.

See Also: Oracle Database SQL Reference for more information about the ALTER SESSION statement

Maintaining Referential Integrity in a Distributed System
If a part of a distributed statement fails, for example, due to an integrity constraint
violation, the database returns error number ORA-02055. Subsequent statements or
procedure calls return error number ORA-02067 until a ROLLBACK or ROLLBACK TO
SAVEPOINT is issued.
Design your application to check for any returned error messages that indicate that a
portion of the distributed update has failed. If you detect a failure, you should roll
back the entire transaction before allowing the application to proceed.
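A minimal PL/SQL sketch of this pattern, assuming a hypothetical sales database link and illustrative table and values:

DECLARE
  dist_stmt_failed EXCEPTION;
  PRAGMA EXCEPTION_INIT(dist_stmt_failed, -2055);
BEGIN
  UPDATE emp@sales SET deptno = 20 WHERE empno = 7934;
  COMMIT;
EXCEPTION
  WHEN dist_stmt_failed THEN
    /* roll back the entire distributed transaction before proceeding */
    ROLLBACK;
    RAISE;
END;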
The database does not permit declarative referential integrity constraints to be defined
across nodes of a distributed system. In other words, a declarative referential integrity
constraint on one table cannot specify a foreign key that references a primary or
unique key of a remote table. Nevertheless, you can maintain parent/child table
relationships across nodes using triggers.
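For example, a sketch of such a trigger on a child emp table whose parent dept table resides in the hq database (all names and the link are assumed; a complete solution must also protect the parent table against deletes and key updates):

CREATE OR REPLACE TRIGGER emp_dept_fk_trg
BEFORE INSERT OR UPDATE OF deptno ON emp
FOR EACH ROW
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO cnt
  FROM dept@hq.acme.com
  WHERE deptno = :NEW.deptno;
  IF cnt = 0 THEN
    /* reject the child row when no matching parent row exists at hq */
    RAISE_APPLICATION_ERROR(-20001, 'parent department not found at hq');
  END IF;
END;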
If you decide to define referential integrity across the nodes of a distributed database
using triggers, be aware that network failures can limit the accessibility of not only the
parent table, but also the child table. For example, assume that the child table is in the
sales database and the parent table is in the hq database. If the network connection
between the two databases fails, some DML statements against the child table (those
that insert rows into the child table or update a foreign key value in the child table)
cannot proceed because the referential integrity triggers must have access to the parent
table in the hq database.
See Also: Oracle Database Application Developer's Guide Fundamentals for more information about using triggers to enforce
referential integrity


Tuning Distributed Queries
When you issue a distributed query, the local Oracle Database server breaks the query into a corresponding number of remote queries, which it then sends to the remote nodes for execution. The
remote nodes execute the queries and send the results back to the local node. The local
node then performs any necessary post-processing and returns the results to the user
or application.
You have several options for designing your application to optimize query processing.
This section contains the following topics:
■ Using Collocated Inline Views
■ Using Cost-Based Optimization
■ Using Hints
■ Analyzing the Execution Plan

Using Collocated Inline Views
The most effective way of optimizing distributed queries is to access the remote
databases as little as possible and to retrieve only the required data.
For example, assume you reference five remote tables from two different remote
databases in a distributed query and have a complex filter (for example, WHERE
r1.salary + r2.salary > 50000). You can improve the performance of the query
by rewriting the query to access the remote databases once and to apply the filter at
the remote site. This rewrite causes less data to be transferred to the query execution
site.
Rewriting your query to access the remote database once is achieved by using
collocated inline views. The following terms need to be defined:
■ Collocated
Two or more tables located in the same database.
■ Inline view
A SELECT statement that is substituted for a table in a parent SELECT statement. The embedded SELECT statement, shown within parentheses, is an example of an inline view:

SELECT e.empno, e.ename, d.deptno, d.dname
FROM (SELECT empno, ename FROM emp@orc1.world) e, dept d;

■ Collocated inline view
An inline view that selects data from multiple tables from a single database only. It reduces the number of times that the remote database is accessed, improving the performance of a distributed query.

Oracle recommends that you form your distributed query using collocated inline
views to increase the performance of your distributed query. Oracle Database
cost-based optimization can transparently rewrite many of your distributed queries to
take advantage of the performance gains offered by collocated inline views.


Using Cost-Based Optimization
In addition to rewriting your queries with collocated inline views, the cost-based
optimization method optimizes distributed queries according to the gathered statistics
of the referenced tables and the computations performed by the optimizer.
For example, cost-based optimization analyzes the following query. The example assumes that table statistics are available. Note that it analyzes the query inside a CREATE TABLE statement (the table name new_table and the column aliases b2 and c2 are placeholders added here so that the statement is valid):
CREATE TABLE new_table AS (
  SELECT l.a, l.b, r1.c, r1.d, r1.e, r2.b AS b2, r2.c AS c2
  FROM local l, remote1 r1, remote2 r2
  WHERE l.c = r1.c
  AND r1.c = r2.c
  AND r1.e > 300
);

and rewrites it as:
CREATE TABLE new_table AS (
  SELECT l.a, l.b, v.c, v.d, v.e, v.b2, v.c2
  FROM (
    SELECT r1.c, r1.d, r1.e, r2.b AS b2, r2.c AS c2
    FROM remote1 r1, remote2 r2
    WHERE r1.c = r2.c
    AND r1.e > 300
  ) v, local l
  WHERE l.c = v.c
);

The alias v is assigned to the inline view, which can then be referenced as a table in the preceding SELECT statement. Creating a collocated inline view reduces the number of queries performed at a remote site, thereby reducing costly network traffic.

How Does Cost-Based Optimization Work?
The main task of optimization is to rewrite a distributed query to use collocated inline
views. This optimization is performed in three steps:
1. All mergeable views are merged.
2. The optimizer performs a collocated query block test.
3. The optimizer rewrites the query using collocated inline views.

After the query is rewritten, it is executed and the data set is returned to the user.
While cost-based optimization is performed transparently to the user, it is unable to
improve the performance of several distributed query scenarios. Specifically, if your
distributed query contains any of the following, cost-based optimization is not
effective:
■ Aggregates
■ Subqueries
■ Complex SQL

If your distributed query contains one of these elements, see "Using Hints" on
page 31-6 to learn how you can modify your query and use hints to improve the
performance of your distributed query.


Setting Up Cost-Based Optimization
After you have set up your system to use cost-based optimization to improve the
performance of distributed queries, the operation is transparent to the user. In other
words, the optimization occurs automatically when the query is issued.
You need to complete the following tasks to set up your system to take advantage of
cost-based optimization:
■ Setting Up the Environment
■ Analyzing Tables

Setting Up the Environment To enable cost-based optimization, set the OPTIMIZER_MODE
initialization parameter to CHOOSE or COST. You can set this parameter by:
■ Modifying the OPTIMIZER_MODE parameter in the initialization parameter file
■ Setting it at the session level by issuing an ALTER SESSION statement

Issue one of the following statements to set the OPTIMIZER_MODE initialization parameter at the session level:
ALTER SESSION SET OPTIMIZER_MODE = CHOOSE;
ALTER SESSION SET OPTIMIZER_MODE = COST;

See Also: Oracle Database Performance Tuning Guide for
information on setting the OPTIMIZER_MODE initialization
parameter in the parameter file and for configuring your system to
use a cost-based optimization method

Analyzing Tables For cost-based optimization to select the most efficient path for a
distributed query, you must provide accurate statistics for the tables involved. You do
this using the DBMS_STATS package or the ANALYZE statement.
Note: You must connect locally with respect to the tables when executing the DBMS_STATS procedure or ANALYZE statement. For example, you cannot execute the following:

ANALYZE TABLE remote@remote.com COMPUTE STATISTICS;

You must first connect to the remote site and then execute the preceding ANALYZE statement, or the equivalent DBMS_STATS procedure.
The following DBMS_STATS procedures enable the gathering of certain classes of
optimizer statistics:
■ GATHER_INDEX_STATS
■ GATHER_TABLE_STATS
■ GATHER_SCHEMA_STATS
■ GATHER_DATABASE_STATS

For example, assume that distributed transactions routinely access the scott.dept
table. To ensure that the cost-based optimizer is still picking the best plan, execute the
following:
BEGIN
DBMS_STATS.GATHER_TABLE_STATS ('scott', 'dept');
END;


See Also:
■ Oracle Database Performance Tuning Guide for information about generating statistics
■ Oracle Database PL/SQL Packages and Types Reference for additional information on using the DBMS_STATS package

Using Hints
If a statement is not sufficiently optimized, then you can use hints to extend the
capability of cost-based optimization. Specifically, if you write your own query to
utilize collocated inline views, instruct the cost-based optimizer not to rewrite your
distributed query.
Additionally, if you have special knowledge about the database environment (such as
statistics, load, network and CPU limitations, distributed queries, and so forth), you
can specify a hint to guide cost-based optimization. For example, if you have written
your own optimized query using collocated inline views that are based on your
knowledge of the database environment, specify the NO_MERGE hint to prevent the
optimizer from rewriting your query.
This technique is especially helpful if your distributed query contains an aggregate,
subquery, or complex SQL. Because this type of distributed query cannot be rewritten
by the optimizer, specifying NO_MERGE causes the optimizer to skip the steps
described in "How Does Cost-Based Optimization Work?" on page 31-4.
The DRIVING_SITE hint lets you define a remote site to act as the query execution
site. In this way, the query executes on the remote site, which then returns the data to
the local site. This hint is especially helpful when the remote site contains the majority
of the data.
See Also: Oracle Database Performance Tuning Guide for more
information about using hints

Using the NO_MERGE Hint
The NO_MERGE hint prevents the database from merging an inline view into a
potentially non-collocated SQL statement (see "Using Hints" on page 31-6). This hint is
embedded in the SELECT statement and can appear either at the beginning of the
SELECT statement with the inline view as an argument or in the query block that
defines the inline view.
/* with argument */
SELECT /*+NO_MERGE(v)*/ t1.x, v.avg_y
FROM t1, (SELECT x, AVG(y) AS avg_y FROM t2 GROUP BY x) v
WHERE t1.x = v.x AND t1.y = 1;

/* in query block */
SELECT t1.x, v.avg_y
FROM t1, (SELECT /*+NO_MERGE*/ x, AVG(y) AS avg_y FROM t2 GROUP BY x) v
WHERE t1.x = v.x AND t1.y = 1;

Typically, you use this hint when you have developed an optimized query based on
your knowledge of your database environment.


Using the DRIVING_SITE Hint
The DRIVING_SITE hint lets you specify the site where the query execution is
performed. It is best to let cost-based optimization determine where the execution
should be performed, but if you prefer to override the optimizer, you can specify the
execution site manually.
Following is an example of a SELECT statement with a DRIVING_SITE hint:
SELECT /*+DRIVING_SITE(dept)*/ * FROM emp, dept@remote.com
WHERE emp.deptno = dept.deptno;

Analyzing the Execution Plan
An important aspect to tuning distributed queries is analyzing the execution plan. The
feedback that you receive from your analysis is an important element to testing and
verifying your database. Verification becomes especially important when you want to
compare plans. For example, you might compare the execution plan for a distributed query optimized by cost-based optimization with the plan for the same query manually optimized using hints, collocated inline views, and other techniques.
See Also: Oracle Database Performance Tuning Guide for detailed
information about execution plans, the EXPLAIN PLAN statement,
and how to interpret the results

Preparing the Database to Store the Plan
Before you can view the execution plan for the distributed query, prepare the database
to store the execution plan. You can perform this preparation by executing a script.
Execute the following script to prepare your database to store an execution plan:
SQL> @UTLXPLAN.SQL

Note: The utlxplan.sql file can be found in the
$ORACLE_HOME/rdbms/admin directory.

After you execute utlxplan.sql, a table, PLAN_TABLE, is created in the current
schema to temporarily store the execution plan.

Generating the Execution Plan
After you have prepared the database to store the execution plan, you are ready to
view the plan for a specified query. Instead of directly executing a SQL statement,
append the statement to the EXPLAIN PLAN FOR clause. For example, you can
execute the following:
EXPLAIN PLAN FOR
SELECT d.dname
FROM dept d
WHERE d.deptno
IN (SELECT deptno
FROM emp@orc2.world
GROUP BY deptno
HAVING COUNT (deptno) >3
)
/


Viewing the Execution Plan
After you have executed the preceding SQL statement, the execution plan is stored
temporarily in the PLAN_TABLE that you created earlier. To view the results of the
execution plan, execute the following script:
@UTLXPLS.SQL

Note: The utlxpls.sql file can be found in the $ORACLE_HOME/rdbms/admin directory.

Executing the utlxpls.sql script displays the execution plan for the SELECT
statement that you specified. The results are formatted as follows:
Plan Table
-------------------------------------------------------------------------------
| Operation                  | Name    | Rows | Bytes| Cost | Pstart| Pstop |
-------------------------------------------------------------------------------
| SELECT STATEMENT           |         |      |      |      |       |       |
|  NESTED LOOPS              |         |      |      |      |       |       |
|   VIEW                     |         |      |      |      |       |       |
|    REMOTE                  |         |      |      |      |       |       |
|   TABLE ACCESS BY INDEX RO |DEPT     |      |      |      |       |       |
|    INDEX UNIQUE SCAN       |PK_DEPT  |      |      |      |       |       |
-------------------------------------------------------------------------------

If you are manually optimizing distributed queries by writing your own collocated
inline views or using hints, it is best to generate an execution plan before and after
your manual optimization. With both execution plans, you can compare the
effectiveness of your manual optimization and make changes as necessary to improve
the performance of the distributed query.
To view the SQL statement that will be executed at the remote site, execute the
following select statement:
SELECT OTHER
FROM PLAN_TABLE
WHERE operation = 'REMOTE';

Following is sample output:
SELECT DISTINCT "A1"."DEPTNO" FROM "EMP" "A1"
GROUP BY "A1"."DEPTNO" HAVING COUNT("A1"."DEPTNO")>3

Note: If you are having difficulty viewing the entire contents of the OTHER column, execute the following SQL*Plus command:

SET LONG 9999999

Handling Errors in Remote Procedures
When the database executes a procedure locally or at a remote location, four types of
exceptions can occur:
■ PL/SQL user-defined exceptions, which must be declared using the keyword EXCEPTION
■ PL/SQL predefined exceptions such as NO_DATA_FOUND
■ SQL errors such as ORA-00900 and ORA-02015
■ Application exceptions generated using the RAISE_APPLICATION_ERROR() procedure

When using local procedures, you can trap these messages by writing an exception handler such as the following:
BEGIN
...
EXCEPTION
WHEN ZERO_DIVIDE THEN
/* ... handle the exception */
END;

Notice that the WHEN clause requires an exception name. If the exception does not have a name, for example, exceptions generated with RAISE_APPLICATION_ERROR, you can assign one using PRAGMA EXCEPTION_INIT. For example:
DECLARE
null_salary EXCEPTION;
PRAGMA EXCEPTION_INIT(null_salary, -20101);
BEGIN
...
RAISE_APPLICATION_ERROR(-20101, 'salary is missing');
...
EXCEPTION
WHEN null_salary THEN
...
END;

When calling a remote procedure, exceptions can be handled by an exception handler
in the local procedure. The remote procedure must return an error number to the local,
calling procedure, which then handles the exception as shown in the previous
example. Note that PL/SQL user-defined exceptions always return ORA-06510 to the
local procedure.
Therefore, it is not possible to distinguish between two different user-defined
exceptions based on the error number. All other remote exceptions can be handled in
the same manner as local exceptions.
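For example, a sketch that traps a remote user-defined exception through ORA-06510, assuming the term_emp procedure shown earlier exists at hq.acme.com:

DECLARE
  remote_user_exception EXCEPTION;
  PRAGMA EXCEPTION_INIT(remote_user_exception, -6510);
BEGIN
  term_emp@hq.acme.com(7934);
EXCEPTION
  WHEN remote_user_exception THEN
    /* some user-defined exception was raised at the remote site */
    NULL;
END;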
See Also: Oracle Database PL/SQL User's Guide and Reference for
more information about PL/SQL procedures


32
Distributed Transactions Concepts
This chapter describes what distributed transactions are and how Oracle Database
maintains their integrity. The following topics are contained in this chapter:
■ What Are Distributed Transactions?
■ Session Trees for Distributed Transactions
■ Two-Phase Commit Mechanism
■ In-Doubt Transactions
■ Distributed Transaction Processing: Case Study

What Are Distributed Transactions?
A distributed transaction includes one or more statements that, individually or as a
group, update data on two or more distinct nodes of a distributed database. For
example, assume the database configuration depicted in Figure 32–1:
Figure 32–1 Distributed System

[Figure 32–1 shows user SCOTT connected to the SALES database (emp table), which is connected through Oracle Net database links to the HQ database (dept table) and to the MAINT database (bldg table).]

The following distributed transaction executed by scott updates the local sales
database, the remote hq database, and the remote maint database:
UPDATE scott.dept@hq.us.acme.com
SET loc = 'REDWOOD SHORES'
WHERE deptno = 10;
UPDATE scott.emp
SET deptno = 11
WHERE deptno = 10;
UPDATE scott.bldg@maint.us.acme.com
SET room = 1225
WHERE room = 1163;
COMMIT;

Note: If all statements of a transaction reference only a single remote node, then the transaction is remote, not distributed.

There are two types of permissible operations in distributed transactions:
■ DML and DDL Transactions
■ Transaction Control Statements

DML and DDL Transactions
The following are the DML and DDL operations supported in a distributed
transaction:
■ CREATE TABLE AS SELECT
■ DELETE
■ INSERT (default and direct load)
■ LOCK TABLE
■ SELECT
■ SELECT FOR UPDATE

You can execute DML and DDL statements in parallel, and INSERT direct load
statements serially, but note the following restrictions:
■ All remote operations must be SELECT statements.
■ These statements must not be clauses in another distributed transaction.
■ If the table referenced in the table_expression_clause of an INSERT, UPDATE, or DELETE statement is remote, then execution is serial rather than parallel.
■ You cannot perform remote operations after issuing parallel DML/DDL or direct load INSERT.
■ If the transaction begins using XA or OCI, it executes serially.
■ No loopback operations can be performed on the transaction originating the parallel operation. For example, you cannot reference a remote object that is actually a synonym for a local object.
■ If you perform a distributed operation other than a SELECT in the transaction, no DML is parallelized.

Transaction Control Statements
The following are the supported transaction control statements:
■ COMMIT
■ ROLLBACK
■ SAVEPOINT

See Also: Oracle Database SQL Reference for more information about these SQL statements
Session Trees for Distributed Transactions
As the statements in a distributed transaction are issued, the database defines a
session tree of all nodes participating in the transaction. A session tree is a hierarchical
model that describes the relationships among sessions and their roles. Figure 32–2
illustrates a session tree:
Figure 32–2 Example of a Session Tree

[Figure 32–2 shows a client application issuing INSERT INTO orders..., UPDATE inventory@warehouse..., UPDATE accts_rec@finance..., and COMMIT against SALES.ACME.COM, which acts as global coordinator and commit point site. WAREHOUSE.ACME.COM and FINANCE.ACME.COM participate as database servers.]

All nodes participating in the session tree of a distributed transaction assume one or
more of the following roles:
Role                 Description
Client               A node that references information in a database belonging to a different node.
Database server      A node that receives a request for information from another node.
Global coordinator   The node that originates the distributed transaction.
Local coordinator    A node that is forced to reference data on other nodes to complete its part of the transaction.
Commit point site    The node that commits or rolls back the transaction as instructed by the global coordinator.

The role a node plays in a distributed transaction is determined by:
■ Whether the transaction is local or remote
■ The commit point strength of the node ("Commit Point Site" on page 32-5)
■ Whether all requested data is available at a node, or whether other nodes need to be referenced to complete the transaction
■ Whether the node is read-only

Clients
A node acts as a client when it references information from a database on another
node. The referenced node is a database server. In Figure 32–2, the node sales is a
client of the nodes that host the warehouse and finance databases.

Database Servers
A database server is a node that hosts a database from which a client requests data.
In Figure 32–2, an application at the sales node initiates a distributed transaction that
accesses data from the warehouse and finance nodes. Therefore,
sales.acme.com has the role of client node, and warehouse and finance are both
database servers. In this example, sales is a database server and a client because the
application also modifies data in the sales database.

Local Coordinators
A node that must reference data on other nodes to complete its part in the distributed
transaction is called a local coordinator. In Figure 32–2, sales is a local coordinator
because it coordinates the nodes it directly references: warehouse and finance.
The node sales also happens to be the global coordinator because it coordinates all
the nodes involved in the transaction.
A local coordinator is responsible for coordinating the transaction among the nodes it
communicates directly with by:
■ Receiving and relaying transaction status information to and from those nodes
■ Passing queries to those nodes
■ Receiving queries from those nodes and passing them on to other nodes
■ Returning the results of queries to the nodes that initiated them

Global Coordinator
The node where the distributed transaction originates is called the global coordinator.
The database application issuing the distributed transaction is directly connected to
the node acting as the global coordinator. For example, in Figure 32–2, the transaction
issued at the node sales references information from the database servers
warehouse and finance. Therefore, sales.acme.com is the global coordinator of
this distributed transaction.
The global coordinator becomes the parent or root of the session tree. The global
coordinator performs the following operations during a distributed transaction:
■ Sends all of the distributed transaction SQL statements, remote procedure calls, and so forth to the directly referenced nodes, thus forming the session tree
■ Instructs all directly referenced nodes other than the commit point site to prepare the transaction
■ Instructs the commit point site to initiate the global commit of the transaction if all nodes prepare successfully
■ Instructs all nodes to initiate a global rollback of the transaction if there is an abort response

Commit Point Site
The job of the commit point site is to initiate a commit or roll back operation as
instructed by the global coordinator. The system administrator always designates one
node to be the commit point site in the session tree by assigning all nodes a commit
point strength. The node selected as commit point site should be the node that stores
the most critical data.
Figure 32–3 illustrates an example of distributed system, with sales serving as the
commit point site:
Figure 32–3 Commit Point Site

[Figure 32–3 shows three nodes and their commit point strengths: SALES (COMMIT_POINT_STRENGTH = 100), WAREHOUSE (COMMIT_POINT_STRENGTH = 75), and FINANCE (COMMIT_POINT_STRENGTH = 50). SALES, with the highest strength, serves as the commit point site.]

The commit point site is distinct from all other nodes involved in a distributed
transaction in these ways:
■ The commit point site never enters the prepared state. Consequently, if the commit point site stores the most critical data, this data never remains in-doubt, even if a failure occurs. In failure situations, failed nodes remain in a prepared state, holding necessary locks on data until in-doubt transactions are resolved.
■ The commit point site commits before the other nodes involved in the transaction. In effect, the outcome of a distributed transaction at the commit point site determines whether the transaction at all nodes is committed or rolled back: the other nodes follow the lead of the commit point site. The global coordinator ensures that all nodes complete the transaction in the same manner as the commit point site.

How a Distributed Transaction Commits
A distributed transaction is considered committed after all non-commit-point sites are
prepared, and the transaction has been actually committed at the commit point site.
The redo log at the commit point site is updated as soon as the distributed transaction
is committed at this node.
Because the commit point log contains a record of the commit, the transaction is
considered committed even though some participating nodes may still be only in the


prepared state and the transaction not yet actually committed at these nodes. In the
same way, a distributed transaction is considered not committed if the commit has not
been logged at the commit point site.

Commit Point Strength
Every database server must be assigned a commit point strength. If a database server
is referenced in a distributed transaction, the value of its commit point strength
determines which role it plays in the two-phase commit. Specifically, the commit point
strength determines whether a given node is the commit point site in the distributed
transaction and thus commits before all of the other nodes. This value is specified
using the initialization parameter COMMIT_POINT_STRENGTH. This section explains
how the database determines the commit point site.
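For example, a node intended to hold the most critical data might be given a high strength in its initialization parameter file (the value is illustrative):

COMMIT_POINT_STRENGTH = 100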
The commit point site, which is determined at the beginning of the prepare phase, is
selected only from the nodes participating in the transaction. The following sequence
of events occurs:
1. Of the nodes directly referenced by the global coordinator, the database selects the node with the highest commit point strength as the commit point site.
2. The initially-selected node determines if any of the nodes from which it has to obtain information for this transaction has a higher commit point strength.
3. Either the node with the highest commit point strength directly referenced in the transaction or one of its servers with a higher commit point strength becomes the commit point site.
4. After the final commit point site has been determined, the global coordinator sends prepare messages to all nodes participating in the transaction.

Figure 32–4 shows a sample session tree with the commit point strength of each node in parentheses, and identifies the node chosen as the commit point site:
Figure 32–4 Commit Point Strengths and Determination of the Commit Point Site

[Figure 32–4 shows a session tree in which SALES.ACME.COM (45) is the global coordinator, with HQ.ACME.COM (165), WAREHOUSE.ACME.COM (140), FINANCE.ACME.COM (45), and HR.ACME.COM (45) as database servers and clients in the tree. HQ.ACME.COM, having the highest commit point strength, is the commit point site.]

The following conditions apply when determining the commit point site:
■ A read-only node cannot be the commit point site.
■ If multiple nodes directly referenced by the global coordinator have the same commit point strength, then the database designates one of these as the commit point site.
■ If a distributed transaction ends with a rollback, then the prepare and commit phases are not needed. Consequently, the database never determines a commit point site. Instead, the global coordinator sends a ROLLBACK statement to all nodes and ends the processing of the distributed transaction.

As Figure 32–4 illustrates, the commit point site and the global coordinator can be
different nodes of the session tree. The commit point strength of each node is
communicated to the coordinators when the initial connections are made. The
coordinators retain the commit point strengths of each node they are in direct
communication with so that commit point sites can be efficiently selected during
two-phase commits. Therefore, it is not necessary for the commit point strength to be
exchanged between a coordinator and a node each time a commit occurs.
See Also:
■ "Specifying the Commit Point Strength of a Node" on page 33-1 to learn how to set the commit point strength of a node
■ Oracle Database Reference for more information about the initialization parameter COMMIT_POINT_STRENGTH

Two-Phase Commit Mechanism
Unlike a transaction on a local database, a distributed transaction involves altering
data on multiple databases. Consequently, distributed transaction processing is more
complicated, because the database must coordinate the committing or rolling back of
the changes in a transaction as a self-contained unit. In other words, the entire
transaction commits, or the entire transaction rolls back.
The database ensures the integrity of data in a distributed transaction using the
two-phase commit mechanism. In the prepare phase, the initiating node in the
transaction asks the other participating nodes to promise to commit or roll back the
transaction. During the commit phase, the initiating node asks all participating nodes
to commit the transaction. If this outcome is not possible, then all nodes are asked to
roll back.
All participating nodes in a distributed transaction should perform the same action:
they should either all commit or all perform a rollback of the transaction. The database
automatically controls and monitors the commit or rollback of a distributed
transaction and maintains the integrity of the global database (the collection of
databases participating in the transaction) using the two-phase commit mechanism.
This mechanism is completely transparent, requiring no programming on the part of
the user or application developer.
The commit mechanism has the following distinct phases, which the database
performs automatically whenever a user commits a distributed transaction:
Phase           Description
Prepare phase   The initiating node, called the global coordinator, asks participating nodes other than the commit point site to promise to commit or roll back the transaction, even if there is a failure. If any node cannot prepare, the transaction is rolled back.
Commit phase    If all participants respond to the coordinator that they are prepared, then the coordinator asks the commit point site to commit. After it commits, the coordinator asks all other nodes to commit the transaction.
Forget phase    The global coordinator forgets about the transaction.

This section contains the following topics:
■ Prepare Phase
■ Commit Phase
■ Forget Phase

Prepare Phase
The first phase in committing a distributed transaction is the prepare phase. In this
phase, the database does not actually commit or roll back the transaction. Instead, all
nodes referenced in a distributed transaction (except the commit point site, described
in the "Commit Point Site" on page 32-5) are told to prepare to commit. By preparing, a
node:
■ Records information in the redo logs so that it can subsequently either commit or roll back the transaction, regardless of intervening failures
■ Places a distributed lock on modified tables, which prevents reads

When a node responds to the global coordinator that it is prepared to commit, the
prepared node promises to either commit or roll back the transaction later, but does not
make a unilateral decision on whether to commit or roll back the transaction. The
promise means that if an instance failure occurs at this point, the node can use the redo
records in the online log to recover the database back to the prepare phase.
Note: Queries that start after a node has prepared cannot access the associated locked data until all phases complete. The time is insignificant unless a failure occurs (see "Deciding How to Handle In-Doubt Transactions" on page 33-5).

Types of Responses in the Prepare Phase
When a node is told to prepare, it can respond in the following ways:
Response   Meaning
Prepared   Data on the node has been modified by a statement in the distributed transaction, and the node has successfully prepared.
Read-only  No data on the node has been, or can be, modified (only queried), so no preparation is necessary.
Abort      The node cannot successfully prepare.

Prepared Response When a node has successfully prepared, it issues a prepared
message. The message indicates that the node has records of the changes in the online
log, so it is prepared either to commit or perform a rollback. The message also
guarantees that locks held for the transaction can survive a failure.


Read-Only Response When a node is asked to prepare, and the SQL statements affecting
the database do not change any data on the node, the node responds with a read-only
message. The message indicates that the node will not participate in the commit
phase.
There are three cases in which all or part of a distributed transaction is read-only:
Case: Partially read-only
Conditions: Any of the following occurs:
■ Only queries are issued at one or more nodes.
■ No data is changed.
■ Changes are rolled back due to triggers firing or constraint violations.
Consequence: The read-only nodes recognize their status when asked to prepare. They give their local coordinators a read-only response. Thus, the commit phase completes faster because the database eliminates read-only nodes from subsequent processing.

Case: Completely read-only with prepare phase
Conditions: All of the following occur:
■ No data is changed.
■ The transaction is not started with a SET TRANSACTION READ ONLY statement.
Consequence: All nodes recognize that they are read-only during the prepare phase, so no commit phase is required. The global coordinator, not knowing whether all nodes are read-only, must still perform the prepare phase.

Case: Completely read-only without two-phase commit
Conditions: All of the following occur:
■ No data is changed.
■ The transaction is started with a SET TRANSACTION READ ONLY statement.
Consequence: Only queries are allowed in the transaction, so the global coordinator does not have to perform two-phase commit. Changes by other transactions do not degrade global transaction-level read consistency because of global SCN coordination among nodes. The transaction does not use undo segments.

Note that if a distributed transaction is set to read-only, then it does not use undo
segments. If many users connect to the database and their transactions are not set to
READ ONLY, then they allocate undo space even if they are only performing queries.
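For example, a session that performs only queries can start its transaction read-only and avoid two-phase commit entirely (the query is illustrative):

SET TRANSACTION READ ONLY;
SELECT * FROM scott.emp@hq.acme.com;
COMMIT;

The COMMIT simply ends the read-only transaction.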
Abort Response When a node cannot successfully prepare, it performs the following
actions:
1. Releases resources currently held by the transaction and rolls back the local portion of the transaction.
2. Responds to the node that referenced it in the distributed transaction with an abort message.

These actions then propagate to the other nodes involved in the distributed transaction
so that they can roll back the transaction and guarantee the integrity of the data in the
global database. This response enforces the primary rule of a distributed transaction:
all nodes involved in the transaction either all commit or all roll back the transaction at the
same logical time.

Steps in the Prepare Phase
To complete the prepare phase, each node excluding the commit point site performs
the following steps:
1. The node requests that its descendants, that is, the nodes subsequently referenced, prepare to commit.
2. The node checks to see whether the transaction changes data on itself or its descendants. If there is no change to the data, then the node skips the remaining steps and returns a read-only response (see "Read-Only Response" on page 32-9).
3. The node allocates the resources it needs to commit the transaction if data is changed.
4. The node saves redo records corresponding to changes made by the transaction to its redo log.
5. The node guarantees that locks held for the transaction are able to survive a failure.
6. The node responds to the initiating node with a prepared response (see "Prepared Response" on page 32-8) or, if its attempt or the attempt of one of its descendants to prepare was unsuccessful, with an abort response (see "Abort Response" on page 32-9).

These actions guarantee that the node can subsequently commit or roll back the
transaction on the node. The prepared nodes then wait until a COMMIT or ROLLBACK
request is received from the global coordinator.
After the nodes are prepared, the distributed transaction is said to be in-doubt (see "In-Doubt Transactions" on page 32-11). It retains in-doubt status until all changes are either committed or rolled back.

Commit Phase
The second phase in committing a distributed transaction is the commit phase. Before this phase occurs, all nodes referenced in the distributed transaction other than the commit point site have guaranteed that they are prepared, that is, they have the necessary resources to commit the transaction.

Steps in the Commit Phase
The commit phase consists of the following steps:
1. The global coordinator instructs the commit point site to commit.
2. The commit point site commits.
3. The commit point site informs the global coordinator that it has committed.
4. The global and local coordinators send a message to all nodes instructing them to commit the transaction.
5. At each node, the database commits the local portion of the distributed transaction and releases locks.
6. At each node, the database records an additional redo entry in the local redo log, indicating that the transaction has committed.
7. The participating nodes notify the global coordinator that they have committed.
When the commit phase is complete, the data on all nodes of the distributed system is
consistent.

Guaranteeing Global Database Consistency
Each committed transaction has an associated system change number (SCN) to
uniquely identify the changes made by the SQL statements within that transaction.
The SCN functions as an internal timestamp that uniquely identifies a committed
version of the database.


In a distributed system, the SCNs of communicating nodes are coordinated when all of the following actions occur:
■ A connection occurs using the path described by one or more database links
■ A distributed SQL statement executes
■ A distributed transaction commits

Among other benefits, the coordination of SCNs among the nodes of a distributed system ensures global read-consistency at both the statement and transaction level. If necessary, global time-based recovery can also be completed.
During the prepare phase, the database determines the highest SCN at all nodes involved in the transaction. The transaction then commits with this high SCN at the commit point site. The commit SCN is then sent to all prepared nodes with the commit decision.
See Also: "Managing Read Consistency" on page 33-19 for
information about managing time lag issues in read consistency
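As an illustration of this coordination, you can compare the local SCN before and after a distributed statement. The following sketch assumes a database link named sales_link and the privileges needed to query V$DATABASE; both are assumptions for this example:

SELECT current_scn FROM v$database;      -- local SCN before any distributed work
SELECT COUNT(*) FROM orders@sales_link;  -- a distributed statement synchronizes SCNs
SELECT current_scn FROM v$database;      -- local SCN now reflects coordination with the remote node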

Forget Phase
After the participating nodes notify the commit point site that they have committed,
the commit point site can forget about the transaction. The following steps occur:
1.

After receiving notice from the global coordinator that all nodes have committed,
the commit point site erases status information about this transaction.

2.

The commit point site informs the global coordinator that it has erased the status
information.

3.

The global coordinator erases its own information about the transaction.

In-Doubt Transactions
The two-phase commit mechanism ensures that all nodes either commit or perform a
rollback together. What happens if any of the three phases fails because of a system or
network error? The transaction becomes in-doubt.
Distributed transactions can become in-doubt in the following ways:
■ A server machine running Oracle Database software crashes
■ A network connection between two or more Oracle Databases involved in distributed processing is disconnected
■ An unhandled software error occurs

The RECO process automatically resolves in-doubt transactions when the machine,
network, or software problem is resolved. Until RECO can resolve the transaction, the
data is locked for both reads and writes. The database blocks reads because it cannot
determine which version of the data to display for a query.
This section contains the following topics:
■ Automatic Resolution of In-Doubt Transactions
■ Manual Resolution of In-Doubt Transactions
■ Relevance of System Change Numbers for In-Doubt Transactions


Automatic Resolution of In-Doubt Transactions
In the majority of cases, the database resolves the in-doubt transaction automatically.
Assume that there are two nodes, local and remote, in the following scenarios. The
local node is the commit point site. User scott connects to local and executes and
commits a distributed transaction that updates local and remote.

Failure During the Prepare Phase
Figure 32–5 illustrates the sequence of events when there is a failure during the
prepare phase of a distributed transaction:
Figure 32–5 Failure During Prepare Phase

[Figure: User SCOTT issues a distributed transaction at Local (COMMIT_POINT_STRENGTH = 200), which asks Remote (COMMIT_POINT_STRENGTH = 100) to prepare. Remote crashes before giving its prepare response, and all databases then perform a rollback.]

The following steps occur:
1. User scott connects to local and executes a distributed transaction.
2. The global coordinator, which in this example is also the commit point site, requests all databases other than the commit point site to promise to commit or roll back when told to do so.
3. The remote database crashes before issuing the prepare response back to local.
4. The transaction is ultimately rolled back on each database by the RECO process when the remote site is restored.

Failure During the Commit Phase
Figure 32–6 illustrates the sequence of events when there is a failure during the
commit phase of a distributed transaction:


Figure 32–6 Failure During Commit Phase

[Figure: User SCOTT issues a distributed transaction at Local (COMMIT_POINT_STRENGTH = 200). Local asks Remote (COMMIT_POINT_STRENGTH = 100) to prepare and receives the prepared message, then asks Remote to commit. Remote receives the commit message but cannot respond, and all databases commit after the network is restored.]

The following steps occur:
1. User scott connects to local and executes a distributed transaction.
2. The global coordinator, which in this case is also the commit point site, requests all databases other than the commit point site to promise to commit or roll back when told to do so.
3. The commit point site receives a prepared message from remote saying that it will commit.
4. The commit point site commits the transaction locally, then sends a commit message to remote asking it to commit.
5. The remote database receives the commit message, but cannot respond because of a network failure.
6. The transaction is ultimately committed on the remote database by the RECO process after the network is restored.
"Deciding How to Handle In-Doubt Transactions" on
page 33-5 for a description of failure situations and how the
database resolves intervening failures during two-phase commit

See Also:

Manual Resolution of In-Doubt Transactions
You should only need to resolve an in-doubt transaction in the following cases:
■ The in-doubt transaction has locks on critical data or undo segments.
■ The cause of the machine, network, or software failure cannot be repaired quickly.

Resolution of in-doubt transactions can be complicated. The procedure requires that you do the following:
■ Identify the transaction identification number for the in-doubt transaction.
■ Query the DBA_2PC_PENDING and DBA_2PC_NEIGHBORS views to determine whether the databases involved in the transaction have committed.
■ If necessary, force a commit using the COMMIT FORCE statement or a rollback using the ROLLBACK FORCE statement.

See Also: The following sections explain how to resolve in-doubt transactions:
■ "Deciding How to Handle In-Doubt Transactions" on page 33-5
■ "Manually Overriding In-Doubt Transactions" on page 33-8

Relevance of System Change Numbers for In-Doubt Transactions
A system change number (SCN) is an internal timestamp for a committed version of
the database. The Oracle Database server uses the SCN clock value to guarantee
transaction consistency. For example, when a user commits a transaction, the database
records an SCN for this commit in the redo log.
The database uses SCNs to coordinate distributed transactions among different databases. For example, the database uses SCNs in the following way:
1. An application establishes a connection using a database link.
2. The distributed transaction commits with the highest global SCN among all the databases involved.
3. The commit global SCN is sent to all databases involved in the transaction.

SCNs are important for distributed transactions because they function as a
synchronized commit timestamp of a transaction, even if the transaction fails. If a
transaction becomes in-doubt, an administrator can use this SCN to coordinate
changes made to the global database. The global SCN for the transaction commit can
also be used to identify the transaction later, for example, in distributed recovery.

Distributed Transaction Processing: Case Study
In this scenario, a company has separate Oracle Database servers, sales.acme.com
and warehouse.acme.com. As users insert sales records into the sales database,
associated records are being updated at the warehouse database.
This case study of distributed processing illustrates:
■ The definition of a session tree
■ How a commit point site is determined
■ When prepare messages are sent
■ When a transaction actually commits
■ What information is stored locally about the transaction

Stage 1: Client Application Issues DML Statements
At the Sales department, a salesperson uses SQL*Plus to enter a sales order and then
commit it. The application issues a number of SQL statements to enter the order into
the sales database and update the inventory in the warehouse database:
CONNECT scott/tiger@sales.acme.com ...;
INSERT INTO orders ...;
UPDATE inventory@warehouse.acme.com ...;
INSERT INTO orders ...;
UPDATE inventory@warehouse.acme.com ...;
COMMIT;

These SQL statements are part of a single distributed transaction, guaranteeing that all
issued SQL statements succeed or fail as a unit. Treating the statements as a unit
prevents the possibility of an order being placed and then inventory not being
updated to reflect the order. In effect, the transaction guarantees the consistency of
data in the global database.
As each of the SQL statements in the transaction executes, the session tree is defined,
as shown in Figure 32–7.
Figure 32–7 Defining the Session Tree

[Figure: A client issues the INSERT, UPDATE, and COMMIT statements shown above to SALES.ACME.COM, the global coordinator and commit point site, which in turn acts as a client of the database server WAREHOUSE.ACME.COM.]

Note the following aspects of the transaction:
■ An order entry application running on the sales database initiates the transaction. Therefore, sales.acme.com is the global coordinator for the distributed transaction.
■ The order entry application inserts a new sales record into the sales database and updates the inventory at the warehouse. Therefore, the nodes sales.acme.com and warehouse.acme.com are both database servers.
■ Because sales.acme.com updates the inventory, it is a client of warehouse.acme.com.

This stage completes the definition of the session tree for this distributed transaction.
Each node in the tree has acquired the necessary data locks to execute the SQL
statements that reference local data. These locks remain even after the SQL statements
have been executed until the two-phase commit is completed.

Stage 2: Oracle Database Determines Commit Point Site
The database determines the commit point site immediately following the COMMIT
statement. sales.acme.com, the global coordinator, is determined to be the commit
point site, as shown in Figure 32–8.
See Also: "Commit Point Strength" on page 32-6 for more
information about how the commit point site is determined


Figure 32–8 Determining the Commit Point Site

[Figure: The COMMIT is issued at SALES.ACME.COM, the global coordinator and commit point site, which is connected as a client to the database server WAREHOUSE.ACME.COM.]

Stage 3: Global Coordinator Sends Prepare Message
The prepare stage involves the following steps:
1. After the database determines the commit point site, the global coordinator sends the prepare message to all directly referenced nodes of the session tree, excluding the commit point site. In this example, warehouse.acme.com is the only node asked to prepare.
2. Node warehouse.acme.com tries to prepare. If a node can guarantee that it can commit the locally dependent part of the transaction and can record the commit information in its local redo log, then the node can successfully prepare. In this example, only warehouse.acme.com receives a prepare message because sales.acme.com is the commit point site.
3. Node warehouse.acme.com responds to sales.acme.com with a prepared message.

As each node prepares, it sends a message back to the node that asked it to prepare. Depending on the responses, one of the following can happen:
■ If any of the nodes asked to prepare responds with an abort message to the global coordinator, then the global coordinator tells all nodes to roll back the transaction, and the operation is completed.
■ If all nodes asked to prepare respond with a prepared or a read-only message to the global coordinator, that is, they have successfully prepared, then the global coordinator asks the commit point site to commit the transaction.


Figure 32–9 Sending and Acknowledging the Prepare Message

[Figure: SALES.ACME.COM, the global coordinator and commit point site, sends "Please prepare" to the database server WAREHOUSE.ACME.COM, which answers "Prepared".]

Stage 4: Commit Point Site Commits
The committing of the transaction by the commit point site involves the following steps:
1. Node sales.acme.com, receiving acknowledgment that warehouse.acme.com is prepared, instructs the commit point site to commit the transaction.
2. The commit point site now commits the transaction locally and records this fact in its local redo log.

Even if warehouse.acme.com has not yet committed, the outcome of this transaction
is predetermined. In other words, the transaction will be committed at all nodes even if
the ability of a given node to commit is delayed.

Stage 5: Commit Point Site Informs Global Coordinator of Commit
This stage involves the following steps:
1. The commit point site tells the global coordinator that the transaction has committed. Because the commit point site and global coordinator are the same node in this example, no operation is required. The commit point site knows that the transaction is committed because it recorded this fact in its online log.
2. The global coordinator confirms that the transaction has been committed on all other nodes involved in the distributed transaction.

Stage 6: Global and Local Coordinators Tell All Nodes to Commit
The committing of the transaction by all the nodes in the transaction involves the following steps:
1. After the global coordinator has been informed of the commit at the commit point site, it tells all other directly referenced nodes to commit.
2. In turn, any local coordinators instruct their servers to commit, and so on.


3. Each node, including the global coordinator, commits the transaction and records appropriate redo log entries locally. As each node commits, the resource locks that were being held locally for that transaction are released.

In Figure 32–10, sales.acme.com, which is both the commit point site and the global
coordinator, has already committed the transaction locally. sales now instructs
warehouse.acme.com to commit the transaction.
Figure 32–10 Instructing Nodes to Commit

[Figure: SALES.ACME.COM, the global coordinator and commit point site, sends "Commit" to the database server WAREHOUSE.ACME.COM.]

Stage 7: Global Coordinator and Commit Point Site Complete the Commit
The completion of the commit of the transaction occurs in the following steps:
1. After all referenced nodes and the global coordinator have committed the transaction, the global coordinator informs the commit point site of this fact.
2. The commit point site, which has been waiting for this message, erases the status information about this distributed transaction.
3. The commit point site informs the global coordinator that it is finished. In other words, the commit point site forgets about committing the distributed transaction. This action is permissible because all nodes involved in the two-phase commit have committed the transaction successfully, so they will never have to determine its status in the future.
4. The global coordinator finalizes the transaction by forgetting about the transaction itself.

After the completion of the COMMIT phase, the distributed transaction is itself
complete. The steps described are accomplished automatically and in a fraction of a
second.


33
Managing Distributed Transactions
This chapter describes how to manage and troubleshoot distributed transactions. The
following topics are included in this chapter:
■ Specifying the Commit Point Strength of a Node
■ Naming Transactions
■ Viewing Information About Distributed Transactions
■ Deciding How to Handle In-Doubt Transactions
■ Manually Overriding In-Doubt Transactions
■ Purging Pending Rows from the Data Dictionary
■ Manually Committing an In-Doubt Transaction: Example
■ Data Access Failures Due to Locks
■ Simulating Distributed Transaction Failure
■ Managing Read Consistency

Specifying the Commit Point Strength of a Node
The database with the highest commit point strength determines which node commits
first in a distributed transaction. When specifying a commit point strength for each
node, ensure that the most critical server will be non-blocking if a failure occurs
during a prepare or commit phase. The COMMIT_POINT_STRENGTH initialization
parameter determines the commit point strength of a node.
The default value is operating system-dependent. The range of values is any integer
from 0 to 255. For example, to set the commit point strength of a database to 200,
include the following line in the database initialization parameter file:
COMMIT_POINT_STRENGTH = 200

The commit point strength is only used to determine the commit point site in a
distributed transaction.
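To check the value currently in effect, you can query V$PARAMETER, as in the following sketch (the SQL*Plus command SHOW PARAMETER commit_point_strength gives the same answer):

SELECT name, value
FROM v$parameter
WHERE name = 'commit_point_strength';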
When setting the commit point strength for a database, note the following considerations:
■ Because the commit point site stores information about the status of the transaction, the commit point site should not be a node that is frequently unreliable or unavailable in case other nodes need information about transaction status.

■ Set the commit point strength for a database relative to the amount of critical shared data in the database. For example, a database on a mainframe computer usually shares more data among users than a database on a PC. Therefore, set the commit point strength of the mainframe to a higher value than the PC.
See Also: "Commit Point Site" on page 32-5 for a conceptual
overview of commit points

Naming Transactions
You can name a transaction. This is useful for identifying a specific distributed
transaction and replaces the use of the COMMIT COMMENT statement for this purpose.
To name a transaction, use the SET TRANSACTION...NAME statement. For example:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
NAME 'update inventory checkpoint 0';

This example shows that the user started a new transaction with isolation level equal
to SERIALIZABLE and named it 'update inventory checkpoint 0'.
For distributed transactions, the name is sent to participating sites when a transaction
is committed. If a COMMIT COMMENT exists, it is ignored when a transaction name
exists.
The transaction name is displayed in the NAME column of the V$TRANSACTION view,
and in the TRAN_COMMENT field of the DBA_2PC_PENDING view when the
transaction is committed.
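For example, while transactions are active, you can list their names by joining V$SESSION to V$TRANSACTION on the transaction address; this query is an illustrative sketch:

SELECT s.sid, t.name
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr;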

Viewing Information About Distributed Transactions
The data dictionary of each database stores information about all open distributed
transactions. You can use data dictionary tables and views to gain information about
the transactions. This section contains the following topics:
■ Determining the ID Number and Status of Prepared Transactions
■ Tracing the Session Tree of In-Doubt Transactions

Determining the ID Number and Status of Prepared Transactions
The following view shows the in-doubt distributed transactions recorded in the local data dictionary:

View: DBA_2PC_PENDING
Purpose: Lists all in-doubt distributed transactions. The view is empty until populated by an in-doubt transaction. After the transaction is resolved, the view is purged.

Use this view to determine the global commit number for a particular transaction ID.
You can use this global commit number when manually resolving an in-doubt
transaction.
The following table shows the most relevant columns (for a description of all the
columns in the view, see Oracle Database Reference):


Table 33–1 DBA_2PC_PENDING

Column: LOCAL_TRAN_ID
Description: Local transaction identifier in the format integer.integer.integer.
Note: When the LOCAL_TRAN_ID and the GLOBAL_TRAN_ID for a connection are the same, the node is the global coordinator of the transaction.

Column: GLOBAL_TRAN_ID
Description: Global database identifier in the format global_db_name.db_hex_id.local_tran_id, where db_hex_id is an eight-character hexadecimal value used to uniquely identify the database. This common transaction ID is the same on every node for a distributed transaction.
Note: When the LOCAL_TRAN_ID and the GLOBAL_TRAN_ID for a connection are the same, the node is the global coordinator of the transaction.

Column: STATE
Description: STATE can have the following values:
■ Collecting: This category normally applies only to the global coordinator or local coordinators. The node is currently collecting information from other database servers before it can decide whether it can prepare.
■ Prepared: The node has prepared and may or may not have acknowledged this to its local coordinator with a prepared message. However, no commit request has been received. The node remains prepared, holding any local resource locks necessary for the transaction to commit.
■ Committed: The node (any type) has committed the transaction, but other nodes involved in the transaction may not have done the same. That is, the transaction is still pending at one or more nodes.
■ Forced Commit: A pending transaction can be forced to commit at the discretion of a database administrator. This entry occurs if a transaction is manually committed at a local node.
■ Forced termination (rollback): A pending transaction can be forced to roll back at the discretion of a database administrator. This entry occurs if this transaction is manually rolled back at a local node.

Column: MIXED
Description: YES means that part of the transaction was committed on one node and rolled back on another node.

Column: TRAN_COMMENT
Description: Transaction comment or, if using transaction naming, the transaction name is placed here when the transaction is committed.

Column: HOST
Description: Name of the host machine.

Column: COMMIT#
Description: Global commit number for committed transactions.

Execute the following script, named pending_txn_script, to query pertinent
information in DBA_2PC_PENDING (sample output included):
COL LOCAL_TRAN_ID FORMAT A13
COL GLOBAL_TRAN_ID FORMAT A30
COL STATE FORMAT A8


COL MIXED FORMAT A3
COL HOST FORMAT A10
COL COMMIT# FORMAT A10
SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT#
FROM DBA_2PC_PENDING
/
SQL> @pending_txn_script

LOCAL_TRAN_ID GLOBAL_TRAN_ID                 STATE    MIX HOST       COMMIT#
------------- ------------------------------ -------- --- ---------- ----------
1.15.870      HQ.ACME.COM.ef192da4.1.15.870  commit   no  dlsun183   115499

This output indicates that local transaction 1.15.870 has been committed on this
node, but it may be pending on one or more other nodes. Because LOCAL_TRAN_ID
and the local part of GLOBAL_TRAN_ID are the same, the node is the global
coordinator of the transaction.

Tracing the Session Tree of In-Doubt Transactions
The following view shows which in-doubt transactions are incoming from a remote
client and which are outgoing to a remote server:
View: DBA_2PC_NEIGHBORS
Purpose: Lists all incoming (from remote client) and outgoing (to remote server) in-doubt distributed transactions. It also indicates whether the local node is the commit point site in the transaction. The view is empty until populated by an in-doubt transaction. After the transaction is resolved, the view is purged.

When a transaction is in-doubt, you may need to determine which nodes performed which roles in the session tree. Use this view to determine:
■ All the incoming and outgoing connections for a given transaction
■ Whether the node is the commit point site in a given transaction
■ Whether the node is a global coordinator in a given transaction (because its local transaction ID and global transaction ID are the same)

The following table shows the most relevant columns (for an account of all the
columns in the view, see Oracle Database Reference):
Table 33–2 DBA_2PC_NEIGHBORS

Column: LOCAL_TRAN_ID
Description: Local transaction identifier with the format integer.integer.integer.
Note: When LOCAL_TRAN_ID and DBA_2PC_PENDING.GLOBAL_TRAN_ID for a connection are the same, the node is the global coordinator of the transaction.

Column: IN_OUT
Description: IN for incoming transactions; OUT for outgoing transactions.

Column: DATABASE
Description: For incoming transactions, the name of the client database that requested information from this local node; for outgoing transactions, the name of the database link used to access information on a remote server.

Column: DBUSER_OWNER
Description: For incoming transactions, the local account used to connect by the remote database link; for outgoing transactions, the owner of the database link.

Column: INTERFACE
Description: C is a commit message; N is either a message indicating a prepared state or a request for a read-only commit.
When IN_OUT is OUT, C means that the child at the remote end of the connection is the commit point site and knows whether to commit or terminate. N means that the local node is informing the remote node that it is prepared.
When IN_OUT is IN, C means that the local node or a database at the remote end of an outgoing connection is the commit point site. N means that the remote node is informing the local node that it is prepared.

Execute the following script, named neighbors_script, to query pertinent information in DBA_2PC_NEIGHBORS (sample output included):
COL LOCAL_TRAN_ID FORMAT A13
COL IN_OUT FORMAT A6
COL DATABASE FORMAT A25
COL DBUSER_OWNER FORMAT A15
COL INTERFACE FORMAT A3
SELECT LOCAL_TRAN_ID, IN_OUT, DATABASE, DBUSER_OWNER, INTERFACE
FROM DBA_2PC_NEIGHBORS
/
SQL> CONNECT SYS/password@hq.acme.com
SQL> @neighbors_script

LOCAL_TRAN_ID IN_OUT DATABASE                  DBUSER_OWNER    INT
------------- ------ ------------------------- --------------- ---
1.15.870      out    SALES.ACME.COM            SYS             C

This output indicates that the local node sent an outgoing request to remote server
sales to commit transaction 1.15.870. If sales committed the transaction but no
other node did, then you know that sales is the commit point site, because the
commit point site always commits first.

Deciding How to Handle In-Doubt Transactions
A transaction is in-doubt when there is a failure during any aspect of the two-phase commit. Distributed transactions become in-doubt in the following ways:
■ A server machine running Oracle Database software crashes
■ A network connection between two or more Oracle Databases involved in distributed processing is disconnected
■ An unhandled software error occurs

See Also: "In-Doubt Transactions" on page 32-11 for a conceptual overview of in-doubt transactions


You can manually force the commit or rollback of a local, in-doubt distributed
transaction. Because this operation can generate consistency problems, perform it only
when specific conditions exist.
This section contains the following topics:
■ Discovering Problems with a Two-Phase Commit
■ Determining Whether to Perform a Manual Override
■ Analyzing the Transaction Data

Discovering Problems with a Two-Phase Commit
The user application that commits a distributed transaction is informed of a problem
by one of the following error messages:
ORA-02050: transaction ID rolled back, some remote dbs may be in-doubt
ORA-02053: transaction ID committed, some remote dbs may be in-doubt
ORA-02054: transaction ID in-doubt

A robust application should save information about a transaction if it receives any of
the preceding errors. This information can be used later if manual distributed
transaction recovery is desired.
No action is required by the administrator of any node that has one or more in-doubt
distributed transactions due to a network or system failure. The automatic recovery
features of the database transparently complete any in-doubt transaction so that the
same outcome occurs on all nodes of a session tree (that is, all commit or all roll back)
after the network or system failure is resolved.
In extended outages, however, you can force the commit or rollback of a transaction to
release any locked data. Applications must account for such possibilities.

Determining Whether to Perform a Manual Override
Override a specific in-doubt transaction manually only when one of the following
conditions exists:
■ The in-doubt transaction locks data that is required by other transactions. This situation occurs when the ORA-01591 error message interferes with user transactions.
■ An in-doubt transaction prevents the extents of an undo segment from being used by other transactions. The first portion of the local transaction ID of an in-doubt distributed transaction corresponds to the ID of the undo segment, as listed by the data dictionary view DBA_2PC_PENDING.
■ The failure preventing the two-phase commit phases from completing cannot be corrected in an acceptable time period. Examples of such cases include a telecommunication network that has been damaged or a damaged database that requires a long recovery time.

Normally, you should make a decision to locally force an in-doubt distributed
transaction in consultation with administrators at other locations. A wrong decision
can lead to database inconsistencies that can be difficult to trace and that you must
manually correct.


If none of these conditions apply, always allow the automatic recovery features of the
database to complete the transaction. If any of these conditions are met, however,
consider a local override of the in-doubt transaction.

Analyzing the Transaction Data
If you decide to force the transaction to complete, analyze available information with
the following goals in mind.

Find a Node that Committed or Rolled Back
Use the DBA_2PC_PENDING view to find a node that has either committed or rolled
back the transaction. If you can find a node that has already resolved the transaction,
then you can follow the action taken at that node.

Look for Transaction Comments
See if any information is given in the TRAN_COMMENT column of DBA_2PC_PENDING
for the distributed transaction. Comments are included in the COMMENT clause of the
COMMIT statement, or if transaction naming is used, the transaction name is placed in
the TRAN_COMMENT field when the transaction is committed.
For example, the comment of an in-doubt distributed transaction can indicate the
origin of the transaction and what type of transaction it is:
COMMIT COMMENT 'Finance/Accts_pay/Trans_type 10B';

The SET TRANSACTION...NAME statement could also have been used (and is
preferable) to provide this information in a transaction name.
See Also: "Naming Transactions" on page 33-2

Look for Transaction Advice
See if any information is given in the ADVICE column of DBA_2PC_PENDING for the
distributed transaction. An application can prescribe advice about whether to force the
commit or force the rollback of separate parts of a distributed transaction with the
ADVISE clause of the ALTER SESSION statement.
The advice sent during the prepare phase to each node is the advice in effect at the
time the most recent DML statement executed at that database in the current
transaction.
For example, consider a distributed transaction that moves an employee record from the emp table at one node to the emp table at another node. The transaction can protect the record, even when administrators independently force the in-doubt transaction at each node, by including the following sequence of SQL statements:
ALTER SESSION ADVISE COMMIT;
INSERT INTO emp@hq ... ;    /* advice to commit at HQ */
ALTER SESSION ADVISE ROLLBACK;
DELETE FROM emp@sales ... ; /* advice to roll back at SALES */
ALTER SESSION ADVISE NOTHING;

If you manually force the in-doubt transaction following the given advice, the worst
that can happen is that each node has a copy of the employee record; the record cannot
disappear.
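For example, to read the advice recorded at a node for an in-doubt transaction, you can query the ADVICE column directly; the transaction ID shown in this sketch is illustrative:

SELECT local_tran_id, advice
FROM dba_2pc_pending
WHERE local_tran_id = '1.21.17';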


Manually Overriding In-Doubt Transactions
Use the COMMIT or ROLLBACK statement with the FORCE option and a text string that
indicates either the local or global transaction ID of the in-doubt transaction to
commit.
Note: In all examples, the transaction is committed or rolled back on the local node, and the local pending transaction table records a value of forced commit or forced termination for the STATE column in the row for this transaction.
This section contains the following topics:
■ Manually Committing an In-Doubt Transaction
■ Manually Rolling Back an In-Doubt Transaction

Manually Committing an In-Doubt Transaction
Before attempting to commit the transaction, ensure that you have the proper
privileges. Note the following requirements:
User Committing the Transaction    Privilege Required
You                                FORCE TRANSACTION
Another user                       FORCE ANY TRANSACTION
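For example, to let another administrator force transactions started by any user, you could grant the broader privilege as in this sketch (the grantee name is hypothetical):

GRANT FORCE ANY TRANSACTION TO dba_smith;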

Committing Using Only the Transaction ID
The following SQL statement commits an in-doubt transaction:
COMMIT FORCE 'transaction_id';

The variable transaction_id is the identifier of the transaction as specified in either the
LOCAL_TRAN_ID or GLOBAL_TRAN_ID columns of the DBA_2PC_PENDING data
dictionary view.
For example, assume that you query DBA_2PC_PENDING and determine that LOCAL_TRAN_ID for a distributed transaction is 1.45.13.
You then issue the following SQL statement to force the commit of this in-doubt
transaction:
COMMIT FORCE '1.45.13';

Committing Using an SCN
Optionally, you can specify the SCN for the transaction when forcing a transaction to
commit. This feature lets you commit an in-doubt transaction with the SCN assigned
when it was committed at other nodes.
Consequently, you maintain the synchronized commit time of the distributed
transaction even if there is a failure. Specify an SCN only when you can determine the
SCN of the same transaction already committed at another node.
For example, assume you want to manually commit a transaction with the following
global transaction ID:
SALES.ACME.COM.55d1c563.1.93.29


First, query the DBA_2PC_PENDING view of a remote database also involved with the
transaction in question. Note the SCN used for the commit of the transaction at that
node. Specify the SCN when committing the transaction at the local node. For
example, if the SCN is 829381993, issue:
COMMIT FORCE 'SALES.ACME.COM.55d1c563.1.93.29', 829381993;

See Also: Oracle Database SQL Reference for more information about using the COMMIT statement

Manually Rolling Back an In-Doubt Transaction
Before attempting to roll back the in-doubt distributed transaction, ensure that you
have the proper privileges. Note the following requirements:
User Rolling Back the Transaction    Privilege Required
You                                  FORCE TRANSACTION
Another user                         FORCE ANY TRANSACTION

The following SQL statement rolls back an in-doubt transaction:
ROLLBACK FORCE 'transaction_id';

The variable transaction_id is the identifier of the transaction as specified in either the
LOCAL_TRAN_ID or GLOBAL_TRAN_ID columns of the DBA_2PC_PENDING data
dictionary view.
For example, to roll back the in-doubt transaction with the local transaction ID of
2.9.4, use the following statement:
ROLLBACK FORCE '2.9.4';

Note: You cannot roll back an in-doubt transaction to a savepoint.

See Also: Oracle Database SQL Reference for more information about using the ROLLBACK statement

Purging Pending Rows from the Data Dictionary
Before RECO recovers an in-doubt transaction, the transaction appears in
DBA_2PC_PENDING.STATE as COLLECTING, COMMITTED, or PREPARED. If you force
an in-doubt transaction using COMMIT FORCE or ROLLBACK FORCE, then the states
FORCED COMMIT or FORCED ROLLBACK may appear.
Automatic recovery normally deletes entries in these states. The only exception is
when recovery discovers a forced transaction that is in a state inconsistent with other
sites in the transaction. In this case, the entry can be left in the table and the MIXED
column in DBA_2PC_PENDING has a value of YES. These entries can be cleaned up
with the DBMS_TRANSACTION.PURGE_MIXED procedure.
If automatic recovery is not possible because a remote database has been permanently
lost, then recovery cannot identify the re-created database because it receives a new
database ID when it is re-created. In this case, you must use the
PURGE_LOST_DB_ENTRY procedure in the DBMS_TRANSACTION package to clean up
the entries. The entries do not hold up database resources, so there is no urgency in
cleaning them up.

See Also: Oracle Database PL/SQL Packages and Types Reference for
more information about the DBMS_TRANSACTION package

Executing the PURGE_LOST_DB_ENTRY Procedure
To manually remove an entry from the data dictionary, use the following syntax
(where trans_id is the identifier for the transaction):
DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('trans_id');

For example, to purge pending distributed transaction 1.44.99, enter the following
statement in SQL*Plus:
EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('1.44.99');

Execute this procedure only if significant reconfiguration has occurred so that automatic recovery cannot resolve the transaction. Examples include:
■ Total loss of the remote database
■ Reconfiguration in software resulting in loss of two-phase commit capability
■ Loss of information from an external transaction coordinator such as a TP monitor

Determining When to Use DBMS_TRANSACTION
The following table indicates what the various states reveal about the distributed transaction and what the administrator's action should be:

STATE Column: Collecting
State of Global Transaction: Rolled back
State of Local Transaction: Rolled back
Normal Action: None
Alternative Action: PURGE_LOST_DB_ENTRY (only if autorecovery cannot resolve transaction)

STATE Column: Committed
State of Global Transaction: Committed
State of Local Transaction: Committed
Normal Action: None
Alternative Action: PURGE_LOST_DB_ENTRY (only if autorecovery cannot resolve transaction)

STATE Column: Prepared
State of Global Transaction: Unknown
State of Local Transaction: Prepared
Normal Action: None
Alternative Action: Force commit or rollback

STATE Column: Forced commit
State of Global Transaction: Unknown
State of Local Transaction: Committed
Normal Action: None
Alternative Action: PURGE_LOST_DB_ENTRY (only if autorecovery cannot resolve transaction)

STATE Column: Forced rollback
State of Global Transaction: Unknown
State of Local Transaction: Rolled back
Normal Action: None
Alternative Action: PURGE_LOST_DB_ENTRY (only if autorecovery cannot resolve transaction)

STATE Column: Forced commit
State of Global Transaction: Mixed
State of Local Transaction: Committed
Normal Action: Manually remove inconsistencies, then use PURGE_MIXED
Alternative Action: -

STATE Column: Forced rollback
State of Global Transaction: Mixed
State of Local Transaction: Rolled back
Normal Action: Manually remove inconsistencies, then use PURGE_MIXED
Alternative Action: -


Manually Committing an In-Doubt Transaction: Example
Figure 33–1 illustrates a failure during the commit of a distributed transaction. In this
failure case, the prepare phase completes. During the commit phase, however, the
commit confirmation of the commit point site never reaches the global coordinator,
even though the commit point site committed the transaction. Inventory data is locked
and cannot be accessed because the in-doubt transaction is critical to other
transactions. Further, the locks must be held until the in-doubt transaction either
commits or rolls back.
Figure 33–1 Example of an In-Doubt Distributed Transaction

[Figure: A client's transaction spans SALES.ACME.COM (the global coordinator, prepared), the database server WAREHOUSE.ACME.COM (prepared), and the commit point site HQ.ACME.COM (committed). A communication break prevents the commit confirmation from reaching the global coordinator.]

You can manually force the local portion of the in-doubt transaction by following the
steps detailed in the following sections:
Step 1: Record User Feedback
Step 2: Query DBA_2PC_PENDING
Step 3: Query DBA_2PC_NEIGHBORS on Local Node
Step 4: Querying Data Dictionary Views on All Nodes
Step 5: Commit the In-Doubt Transaction
Step 6: Check for Mixed Outcome Using DBA_2PC_PENDING

Step 1: Record User Feedback
The users of the local database system that conflict with the locks of the in-doubt
transaction receive the following error message:
ORA-01591: lock held by in-doubt distributed transaction 1.21.17

In this case, 1.21.17 is the local transaction ID of the in-doubt distributed
transaction. You should request and record this ID number from users that report
problems to identify which in-doubt transactions should be forced.

Step 2: Query DBA_2PC_PENDING
After connecting with SQL*Plus to warehouse, query the local DBA_2PC_PENDING
data dictionary view to gain information about the in-doubt transaction:
CONNECT SYS/password@warehouse.acme.com
SELECT * FROM DBA_2PC_PENDING WHERE LOCAL_TRAN_ID = '1.21.17';

The database returns the following information:

Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.21.17
GLOBAL_TRAN_ID         SALES.ACME.COM.55d1c563.1.93.29
STATE                  prepared
MIXED                  no
ADVICE
TRAN_COMMENT           Sales/New Order/Trans_type 10B
FAIL_TIME              31-MAY-91
FORCE_TIME
RETRY_TIME             31-MAY-91
OS_USER                SWILLIAMS
OS_TERMINAL            TWA139:
HOST                   system1
DB_USER                SWILLIAMS
COMMIT#

Determining the Global Transaction ID
The global transaction ID is the common transaction ID that is the same on every node
for a distributed transaction. It is of the form:
global_database_name.hhhhhhhh.local_transaction_id
where:
■ global_database_name is the database name of the global coordinator.
■ hhhhhhhh is the internal database identifier of the global coordinator (in hexadecimal).
■ local_transaction_id is the corresponding local transaction ID assigned on the global coordinator.

Note that the last portion of the global transaction ID and the local transaction ID
match at the global coordinator. In the example, you can tell that warehouse is not the
global coordinator because these numbers do not match:
LOCAL_TRAN_ID     1.21.17
GLOBAL_TRAN_ID    ... 1.93.29

Determining the State of the Transaction
The transaction on this node is in a prepared state:

STATE    prepared

Therefore, warehouse waits for its coordinator to send either a commit or a rollback
request.

Looking for Comments or Advice
The transaction comment or advice can include information about this transaction. If
so, use this comment to your advantage. In this example, the origin and transaction
type is in the transaction comment:
TRAN_COMMENT    Sales/New Order/Trans_type 10B

It could also be provided as a transaction name with a SET TRANSACTION...NAME statement.
This information can reveal something that helps you decide whether to commit or roll back the local portion of the transaction. If useful comments do not accompany an
in-doubt transaction, you must complete some extra administrative work to trace the
session tree and find a node that has resolved the transaction.

Step 3: Query DBA_2PC_NEIGHBORS on Local Node
The purpose of this step is to climb the session tree so that you find coordinators,
eventually reaching the global coordinator. Along the way, you may find a coordinator
that has resolved the transaction. If not, you can eventually work your way to the
commit point site, which will always have resolved the in-doubt transaction. To trace
the session tree, query the DBA_2PC_NEIGHBORS view on each node.
In this case, you query this view on the warehouse database:
CONNECT SYS/password@warehouse.acme.com
SELECT * FROM DBA_2PC_NEIGHBORS
WHERE LOCAL_TRAN_ID = '1.21.17'
ORDER BY SESS#, IN_OUT;
Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.21.17
IN_OUT                 in
DATABASE               SALES.ACME.COM
DBUSER_OWNER           SWILLIAMS
INTERFACE              N
DBID                   000003F4
SESS#                  1
BRANCH                 0100

Obtaining Database Role and Database Link Information
The DBA_2PC_NEIGHBORS view provides information about connections associated
with an in-doubt transaction. Information for each connection is different, based on
whether the connection is inbound (IN_OUT = in) or outbound (IN_OUT = out):
IN_OUT: in
Meaning: Your node is a server of another node.
DATABASE: Lists the name of the client database that connected to your node.
DBUSER_OWNER: Lists the local account for the database link connection that corresponds to the in-doubt transaction.

IN_OUT: out
Meaning: Your node is a client of other servers.
DATABASE: Lists the name of the database link that connects to the remote node.
DBUSER_OWNER: Lists the owner of the database link for the in-doubt transaction.

In this example, the IN_OUT column reveals that the warehouse database is a server for the sales client, as specified in the DATABASE column:

IN_OUT    DATABASE
in        SALES.ACME.COM

The connection to warehouse was established through a database link from the swilliams account, as shown by the DBUSER_OWNER column:

DBUSER_OWNER    SWILLIAMS


Determining the Commit Point Site
Additionally, the INTERFACE column tells whether the local node or a subordinate node is the commit point site:

INTERFACE    N

Neither warehouse nor any of its descendants is the commit point site, as shown by the INTERFACE column.

Step 4: Querying Data Dictionary Views on All Nodes
At this point, you can contact the administrator at the located nodes and ask each
person to repeat Steps 2 and 3 using the global transaction ID.
Note: If you can directly connect to these nodes with another network, you can repeat Steps 2 and 3 yourself.

For example, the following results are returned when Steps 2 and 3 are performed at
sales and hq.

Checking the Status of Pending Transactions at sales
At this stage, the sales administrator queries the DBA_2PC_PENDING data dictionary
view:
SQL> CONNECT SYS/password@sales.acme.com
SQL> SELECT * FROM DBA_2PC_PENDING
> WHERE GLOBAL_TRAN_ID = 'SALES.ACME.COM.55d1c563.1.93.29';
Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.93.29
GLOBAL_TRAN_ID         SALES.ACME.COM.55d1c563.1.93.29
STATE                  prepared
MIXED                  no
ADVICE
TRAN_COMMENT           Sales/New Order/Trans_type 10B
FAIL_TIME              31-MAY-91
FORCE_TIME
RETRY_TIME             31-MAY-91
OS_USER                SWILLIAMS
OS_TERMINAL            TWA139:
HOST                   system1
DB_USER                SWILLIAMS
COMMIT#

Determining the Coordinators and Commit Point Site at sales
Next, the sales administrator queries DBA_2PC_NEIGHBORS to determine the global
and local coordinators as well as the commit point site:
SELECT * FROM DBA_2PC_NEIGHBORS
WHERE GLOBAL_TRAN_ID = 'SALES.ACME.COM.55d1c563.1.93.29'
ORDER BY SESS#, IN_OUT;

This query returns three rows:
■ The connection to warehouse
■ The connection to hq
■ The connection established by the user

Reformatted information corresponding to the rows for the warehouse connection appears below:

Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.93.29
IN_OUT                 OUT
DATABASE               WAREHOUSE.ACME.COM
DBUSER_OWNER           SWILLIAMS
INTERFACE              N
DBID                   55d1c563
SESS#                  1
BRANCH                 1

Reformatted information corresponding to the rows for the hq connection appears below:

Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.93.29
IN_OUT                 OUT
DATABASE               HQ.ACME.COM
DBUSER_OWNER           ALLEN
INTERFACE              C
DBID                   00000390
SESS#                  1
BRANCH                 1

The information from the previous queries reveals the following:
■ sales is the global coordinator because the local transaction ID and global transaction ID match.
■ Two outbound connections are established from this node, but no inbound connections. sales is not the server of another node.
■ hq or one of its servers is the commit point site.

Checking the Status of Pending Transactions at HQ
At this stage, the hq administrator queries the DBA_2PC_PENDING data dictionary
view:
SELECT * FROM DBA_2PC_PENDING@hq.acme.com
WHERE GLOBAL_TRAN_ID = 'SALES.ACME.COM.55d1c563.1.93.29';
Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.45.13
GLOBAL_TRAN_ID         SALES.ACME.COM.55d1c563.1.93.29
STATE                  COMMIT
MIXED                  NO
ACTION
TRAN_COMMENT           Sales/New Order/Trans_type 10B
FAIL_TIME              31-MAY-91
FORCE_TIME
RETRY_TIME             31-MAY-91
OS_USER                SWILLIAMS
OS_TERMINAL            TWA139:
HOST                   SYSTEM1
DB_USER                SWILLIAMS
COMMIT#                129314

At this point, you have found a node that resolved the transaction. As the view reveals, it has been committed and assigned a commit ID number:

STATE       COMMIT
COMMIT#     129314

Therefore, you can force the in-doubt transaction to commit at your local database. It is
a good idea to contact any other administrators you know that could also benefit from
your investigation.

Step 5: Commit the In-Doubt Transaction
You contact the administrator of the sales database, who manually commits the
in-doubt transaction using the global ID:
SQL> CONNECT SYS/password@sales.acme.com
SQL> COMMIT FORCE 'SALES.ACME.COM.55d1c563.1.93.29';

As administrator of the warehouse database, you manually commit the in-doubt
transaction using the global ID:
SQL> CONNECT SYS/password@warehouse.acme.com
SQL> COMMIT FORCE 'SALES.ACME.COM.55d1c563.1.93.29';

Step 6: Check for Mixed Outcome Using DBA_2PC_PENDING
After you manually force a transaction to commit or roll back, the corresponding row
in the pending transaction table remains. The state of the transaction is changed
depending on how you forced the transaction.
Every Oracle Database has a pending transaction table. This is a special table that
stores information about distributed transactions as they proceed through the
two-phase commit phases. You can query the pending transaction table of a database
through the DBA_2PC_PENDING data dictionary view (see Table 33–1).
Also of particular interest in the pending transaction table is the mixed outcome flag as
indicated in DBA_2PC_PENDING.MIXED. You can make the wrong choice if a pending
transaction is forced to commit or roll back. For example, the local administrator rolls
back the transaction, but the other nodes commit it. Incorrect decisions are detected
automatically, and the damage flag for the corresponding pending transaction record
is set (MIXED=yes).
The RECO (Recoverer) background process uses the information in the pending
transaction table to finalize the status of in-doubt transactions. You can also use the
information in the pending transaction table to manually override the automatic
recovery procedures for pending distributed transactions.
All transactions automatically resolved by RECO are removed from the pending
transaction table. Additionally, all information about in-doubt transactions correctly
resolved by an administrator (as checked when RECO reestablishes communication)
are automatically removed from the pending transaction table. However, all rows
resolved by an administrator that result in a mixed outcome across nodes remain in
the pending transaction table of all involved nodes until they are manually deleted
using DBMS_TRANSACTION.PURGE_MIXED.
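For example, after manually removing the inconsistencies at the affected nodes, you could purge such a row with a call like the following sketch, using the local transaction ID from this example:

EXECUTE DBMS_TRANSACTION.PURGE_MIXED('1.21.17');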


Data Access Failures Due to Locks
When you issue a SQL statement, the database attempts to lock the resources needed
to successfully execute the statement. If the requested data is currently held by
statements of other uncommitted transactions, however, and remains locked for a long
time, a timeout occurs.
Consider the following scenarios involving data access failure:
■ Transaction Timeouts
■ Locks from In-Doubt Transactions

Transaction Timeouts
A DML statement that requires locks on a remote database can be blocked if another transaction owns locks on the requested data. If these locks continue to block the requesting SQL statement, then the following sequence of events occurs:
1. A timeout occurs.
2. The database rolls back the statement.
3. The database returns this error message to the user:
   ORA-02049: time-out: distributed transaction waiting for lock

Because the transaction did not modify data, no actions are necessary as a result of the
timeout. Applications should proceed as if a deadlock has been encountered. The user
who executed the statement can try to reexecute the statement later. If the lock persists,
then the user should contact an administrator to report the problem.
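The length of the wait before the timeout is governed by the DISTRIBUTED_LOCK_TIMEOUT initialization parameter. You can check its current value in SQL*Plus, for example:

SHOW PARAMETER distributed_lock_timeout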

Locks from In-Doubt Transactions
A query or DML statement that requires locks on a local database can be blocked
indefinitely due to the locked resources of an in-doubt distributed transaction. In this
case, the database issues the following error message:
ORA-01591: lock held by in-doubt distributed transaction identifier

In this case, the database rolls back the SQL statement immediately. The user who
executed the statement can try to reexecute the statement later. If the lock persists, the
user should contact an administrator to report the problem, including the ID of the
in-doubt distributed transaction.
The chances of these situations occurring are small, considering the low probability of failures during the critical portions of the two-phase commit. Even if such a failure occurs, and assuming quick recovery from a network or system failure, problems are resolved automatically without manual intervention. Thus, problems usually resolve before they can be detected by users or database administrators.

Simulating Distributed Transaction Failure
You can force the failure of a distributed transaction for the following reasons:
■ To observe RECO automatically resolving the local portion of the transaction
■ To practice manually resolving in-doubt distributed transactions and observing the results

This section describes the features available and the steps necessary to perform such
operations.

Forcing a Distributed Transaction to Fail
You can include comments in the COMMENT parameter of the COMMIT statement. To
intentionally induce a failure during the two-phase commit phases of a distributed
transaction, include the following comment in the COMMENT parameter:
COMMIT COMMENT 'ORA-2PC-CRASH-TEST-n';

where n is one of the following integers:

n     Effect
1     Crash commit point after collect
2     Crash non-commit-point site after collect
3     Crash before prepare (non-commit-point site)
4     Crash after prepare (non-commit-point site)
5     Crash commit point site before commit
6     Crash commit point site after commit
7     Crash non-commit-point site before commit
8     Crash non-commit-point site after commit
9     Crash commit point site before forget
10    Crash non-commit-point site before forget

For example, the following statement returns the following messages if the local commit point strength is greater than the remote commit point strength and both nodes are updated:

COMMIT COMMENT 'ORA-2PC-CRASH-TEST-7';

ORA-02054: transaction 1.93.29 in-doubt
ORA-02059: ORA_CRASH_TEST_7 in commit comment

At this point, the in-doubt distributed transaction appears in the DBA_2PC_PENDING
view. If enabled, RECO automatically resolves the transaction.
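
For example, the following is a minimal sketch of inducing such a failure. The
database link sales.example.com and the tables dept (local) and emp (remote) are
hypothetical; the commit must follow updates on both nodes so that the transaction
is distributed:

UPDATE dept SET loc = 'BOSTON' WHERE deptno = 10;
UPDATE emp@sales.example.com SET deptno = 10 WHERE deptno = 20;
COMMIT COMMENT 'ORA-2PC-CRASH-TEST-7';

-- The in-doubt transaction now appears in the pending transaction table:
SELECT local_tran_id, state FROM dba_2pc_pending;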

Disabling and Enabling RECO
The RECO background process of an Oracle Database instance automatically resolves
failures involving distributed transactions. At exponentially growing time intervals,
the RECO background process of a node attempts to recover the local portion of an
in-doubt distributed transaction.
RECO can use an existing connection or establish a new connection to other nodes
involved in the failed transaction. When a connection is established, RECO
automatically resolves all in-doubt transactions. Rows corresponding to any resolved
in-doubt transactions are automatically removed from the pending transaction table of
each database.
You can enable and disable RECO using the ALTER SYSTEM statement with the
ENABLE/DISABLE DISTRIBUTED RECOVERY options. For example, you can
temporarily disable RECO to force the failure of a two-phase commit and manually
resolve the in-doubt transaction.
The following statement disables RECO:

ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;

Alternatively, the following statement enables RECO so that in-doubt transactions are
automatically resolved:
ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;

Note: Single-process instances (for example, a PC running
MS-DOS) have no separate background processes, and therefore no
RECO process. When a single-process instance that participates in a
distributed system is started, you must manually enable distributed
recovery using the preceding statement.

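For example, a minimal manual-resolution workflow might look as follows. The
transaction ID '1.93.29' is illustrative; substitute the LOCAL_TRAN_ID reported in
the DBA_2PC_PENDING view:

ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;

-- ...induce a two-phase commit failure, for example with
-- COMMIT COMMENT 'ORA-2PC-CRASH-TEST-7'...

SELECT local_tran_id, state FROM dba_2pc_pending;

COMMIT FORCE '1.93.29';  -- manually resolve the in-doubt transaction

ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
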
Managing Read Consistency
An important restriction exists in the Oracle Database implementation of distributed
read consistency. The problem arises because each system has its own SCN, which you
can view as the database internal timestamp. The Oracle Database server uses the SCN
to decide which version of data is returned from a query.
The SCNs in a distributed transaction are synchronized at the end of each remote SQL
statement and at the start and end of each transaction. Between two nodes that have
heavy traffic and especially distributed updates, the synchronization is frequent.
Nevertheless, no practical way exists to keep SCNs in a distributed system absolutely
synchronized: a window always exists in which one node may have an SCN that is
somewhat in the past with respect to the SCN of another node.
Because of the SCN gap, you can execute a query that uses a slightly old snapshot, so
that the most recent changes to the remote database are not seen. In accordance with
read consistency, a query can therefore retrieve consistent, but out-of-date data. Note
that all data retrieved by the query comes from the old SCN, so if a locally
executed transaction updates two tables at a remote node, then the data selected
from both tables in the next remote access reflects the state prior to the update.
One consequence of the SCN gap is that two consecutive SELECT statements can
retrieve different data even though no DML has been executed between the two
statements. For example, you can issue an update statement and then commit the
update on the remote database. When you issue a SELECT statement on a view based
on this remote table, the view does not show the update to the row. The next time that
you issue the SELECT statement, the update is present.
You can use the following techniques to ensure that the SCNs of the two machines are
synchronized just before a query:
■ Because SCNs are synchronized at the end of a remote query, precede each remote
  query with a dummy remote query to the same site, for example, SELECT *
  FROM DUAL@REMOTE.
■ Because SCNs are synchronized at the start of every remote transaction, commit or
  roll back the current transaction before issuing the remote query.
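
For example, the following is a minimal sketch of the first technique; the link name
REMOTE and the table emp are hypothetical:

-- Dummy query: synchronizes the local SCN with the remote node.
SELECT * FROM DUAL@REMOTE;

-- This query now sees all changes committed at the remote node up to
-- the synchronized SCN.
SELECT * FROM emp@REMOTE;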


Index
A
abort response, 32-9
two-phase commit, 32-9
accounts
DBA operating system account, 1-9
users SYS and SYSTEM, 1-9
ADD LOGFILE clause
ALTER DATABASE statement, 6-9
ADD LOGFILE MEMBER clause
ALTER DATABASE statement, 6-10
ADD PARTITION clause, 17-26
ADD SUBPARTITION clause, 17-27, 17-28
adding
templates to a disk group, 12-31
ADMIN_TABLES procedure
creating admin table, 21-3
DBMS_REPAIR package, 21-2
example, 21-6
ADMINISTER_RESOURCE_MANAGER system
privilege, 24-7
administering
disk groups, 12-14
the Scheduler, 28-1
administration
distributed databases, 30-1
AFTER SUSPEND system event, 14-12
AFTER SUSPEND trigger, 14-12
example of registering, 14-14
agent
Heterogeneous Services, definition of, 29-3
aggregate functions
statement transparency in distributed
databases, 30-23
alert log
about, 4-21
location of, 4-22
size of, 4-22
using, 4-21
when written, 4-22
alert thresholds
setting for locally managed tablespaces, 14-2
alerts
server-generated, 4-18
threshold-based, 4-18
viewing, 14-3

aliases
dropping from a disk group, 12-29
aliases, managing Automatic Storage
Management, 12-28
ALL_DB_LINKS view, 30-16
allocation
extents, 15-18
ALTER CLUSTER statement
ALLOCATE EXTENT clause, 18-6
using for hash clusters, 19-7
using for index clusters, 18-6
ALTER DATABASE ADD LOGFILE statement
using Oracle-managed files, 11-16
ALTER DATABASE statement
ADD LOGFILE clause, 6-9
ADD LOGFILE MEMBER clause, 6-10
ARCHIVELOG clause, 7-4
CLEAR LOGFILE clause, 6-14
CLEAR UNARCHIVED LOGFILE clause, 6-5
database partially available to users, 3-7
DATAFILE...OFFLINE DROP clause, 9-7
datafiles online or offline, 9-8
default temporary tablespace, specifying, 2-14
DROP LOGFILE clause, 6-12
DROP LOGFILE MEMBER clause, 6-13
MOUNT clause, 3-7
NOARCHIVELOG clause, 7-4
OPEN clause, 3-7
READ ONLY clause, 3-8
RENAME FILE clause, 9-10
tempfiles online or offline, 9-8
UNRECOVERABLE DATAFILE clause, 6-14
ALTER DISKGROUP command, 12-20
ALTER FUNCTION statement
COMPILE clause, 13-19
ALTER INDEX statement
COALESCE clause, 16-6
for maintaining partitioned indexes, 17-21
MONITORING USAGE clause, 16-13
ALTER PACKAGE statement
COMPILE clause, 13-19
ALTER PROCEDURE statement
COMPILE clause, 13-19
ALTER SEQUENCE statement, 20-13
ALTER SESSION
Enabling resumable space allocation, 14-11


ALTER SESSION statement
ADVISE clause, 33-7
CLOSE DATABASE LINK clause, 31-2
SET SQL_TRACE initialization parameter, 4-23
setting time zone, 2-17
ALTER SYSTEM statement
ARCHIVE LOG ALL clause, 7-5
DISABLE DISTRIBUTED RECOVERY
clause, 33-18
ENABLE DISTRIBUTED RECOVERY
clause, 33-18
ENABLE RESTRICTED SESSION clause, 3-8
enabling Database Resource Manager, 24-24
QUIESCE RESTRICTED, 3-12
RESUME clause, 3-13
SCOPE clause for SET, 2-38
SET RESOURCE_MANAGER_PLAN, 24-24
SET SHARED_SERVERS initialization
parameter, 4-6
setting initialization parameters, 2-38
SUSPEND clause, 3-13
SWITCH LOGFILE clause, 6-13
UNQUIESCE, 3-12
ALTER TABLE
MODIFY DEFAULT ATTRIBUTES FOR
PARTITION clause, 17-38
ALTER TABLE statement
ADD (column) clause, 15-19
ALLOCATE EXTENT clause, 15-18
DEALLOCATE UNUSED clause, 15-18
DISABLE ALL TRIGGERS clause, 13-9
DISABLE integrity constraint clause, 13-12
DROP COLUMN clause, 15-20
DROP integrity constraint clause, 13-13
DROP UNUSED COLUMNS clause, 15-20
ENABLE ALL TRIGGERS clause, 13-8
ENABLE integrity constraint clause, 13-12
external tables, 15-55
for maintaining partitions, 17-21
MODIFY (column) clause, 15-19
MODIFY DEFAULT ATTRIBUTES clause, 17-38
modifying index-organized table attributes, 15-48
MOVE clause, 15-18, 15-48
reasons for use, 15-17
RENAME COLUMN clause, 15-19
SET UNUSED clause, 15-20
ALTER TABLESPACE statement
adding an Oracle-managed datafile,
example, 11-13
adding an Oracle-managed tempfile,
example, 11-14
ONLINE clause, example, 8-15
READ ONLY clause, 8-16
READ WRITE clause, 8-17
RENAME DATAFILE clause, 9-9
RENAME TO clause, 8-19
taking datafiles/tempfiles online/offline, 9-8
ALTER TRIGGER statement
DISABLE clause, 13-9
ENABLE clause, 13-8


ALTER VIEW statement
COMPILE clause, 13-18
altering
(Scheduler) windows, 27-22
chain steps, 27-43
event schedule, 27-35
event-based job, 27-35
job classes, 27-19
jobs, 27-5
programs, 27-11
schedules, 27-13
altering indexes, 16-11, 16-13
ANALYZE statement
CASCADE clause, 13-3
corruption reporting, 21-3
listing chained rows, 13-4
remote tables, 31-5
validating structure, 13-3, 21-3
analyzing schema objects, 13-2
analyzing tables
distributed processing, 31-5
application development
distributed databases, 29-31, 31-1, 31-9
application development for distributed
databases, 31-1
analyzing execution plan, 31-7
database links, controlling connections, 31-1
handling errors, 31-2, 31-8
handling remote procedure errors, 31-8
managing distribution of data, 31-1
managing referential integrity constraints, 31-2
terminating remote connections, 31-2
tuning distributed queries, 31-3
tuning using collocated inline views, 31-3
using cost-based optimization, 31-4
using hints to tune queries, 31-6
application services
configuring, 2-42
defining, 2-41
deploying, 2-42
using, 2-43
using, client side, 2-43
using, server side, 2-43
archive log files
creating in Automatic Storage
Management, 12-45
ARCHIVE_LAG_TARGET initialization
parameter, 6-8
archived redo logs
archiving modes, 7-4
destination availability state, controlling, 7-9
destination status, 7-8
destinations, specifying, 7-6
failed destinations and, 7-11
mandatory destinations, 7-11
minimum number of destinations, 7-11
multiplexing, 7-6
normal transmission of, 7-9
re-archiving to failed destination, 7-13
sample destination scenarios, 7-12

standby transmission of, 7-9
status information, 7-15
transmitting, 7-9
ARCHIVELOG mode, 7-3
advantages, 7-3
archiving, 7-2
automatic archiving in, 7-3
definition of, 7-3
distributed databases, 7-3
enabling, 7-4
manual archiving in, 7-3
running in, 7-3
switching to, 7-4
taking datafiles offline and online in, 9-7
archiver process
trace output (controlling), 7-13
archiver process (ARCn), 4-13
archiving
changing archiving mode, 7-4
controlling number of processes, 7-5
destination availability state, controlling, 7-9
destination failure, 7-11
destination status, 7-8
manual, 7-5
NOARCHIVELOG vs. ARCHIVELOG mode, 7-2
setting initial mode, 7-4
to failed destinations, 7-13
trace output, controlling, 7-13
viewing information on, 7-15
ASM
see Automatic Storage Management
ASM_DISKGROUPS, 12-8
ASM_DISKSTRING, 12-8
ASM_POWER_LIMIT, 12-8
ASMLib, 12-6
auditing
database links, 29-21
authentication
database links, 29-17
operating system, 1-14
selecting a method, 1-13
using password file, 1-15
AUTO_TASK_CONSUMER_GROUP
of Resource Manager, 23-3
AUTOEXTEND clause
for bigfile tablespaces, 8-8
automatic segment space management, 8-5
Automatic Storage Management
accessing files with the XML DB virtual
folder, 12-46
administering, 12-4
aliases, 12-28
authentication, 12-6
creating a database in, 12-42
creating archive log files in, 12-45
creating control file in, 12-44
creating files in the database, 12-40
creating redo logs in, 12-43
creating tablespaces in, 12-42
disk discovery, 12-3, 12-12

disk failures in, 12-15
filenames, 12-34
initialization files and, 3-3
initialization parameters, 12-7, 12-8
installation tips, 12-5
installing, 12-5
migrating a database to, 12-46
operating system authentication for, 12-6
overview, 12-1
overview of components, 12-3
password file authentication for, 12-7
shutting down, 12-13
starting up, 12-10
using in database, 12-32
views, 12-48
XML DB virtual folder, 12-46
automatic undo management, 2-13, 10-2

B
background processes, 4-13
FMON, 9-17
BACKGROUND_DUMP_DEST initialization
parameter, 4-22
backups
after creating new databases, 2-10
effects of archiving on, 7-2
batch jobs, authenticating users in, 2-46
bigfile tablespaces
creating, 8-7
creating temporary, 8-9
description, 8-6
setting database default, 2-16
BLANK_TRIMMING initialization parameter, 15-19
BLOCKSIZE clause
of CREATE TABLESPACE, 8-12
BUFFER_POOL parameter
description, 14-6
buffers
buffer cache in SGA, 2-30


C
CACHE option
CREATE SEQUENCE statement, 20-16
caches
sequence numbers, 20-15
calendaring expressions, 27-14
calls
remote procedure, 29-33
capacity planning
space management, 14-35
capacity, managing in disk groups, 12-17
CASCADE clause
when dropping unique or primary keys, 13-12
CATBLOCK.SQL script, 4-23
centralized user management
distributed systems, 29-19
chain rules, 27-39


chain steps
defining, 27-38
chained rows
eliminating from table, procedure, 13-4
CHAINED_ROWS table
used by ANALYZE statement, 13-4
chains
creating, 27-38
creating jobs for, 27-41
disabling, 27-42
dropping, 27-41
dropping rules from, 27-42
enabling, 27-40
monitoring, 28-19
overview, 26-6
running, 27-42
stalled, 27-44
using, 27-37
change vectors, 6-2
CHAR datatype
increasing column length, 15-19
character set
choosing, 2-3
CHECK_OBJECT procedure
DBMS_REPAIR package, 21-2
example, 21-7
finding extent of corruption, 21-4
checkpoint process (CKPT), 4-13
checksums
for data blocks, 9-12
redo log blocks, 6-13
CLEAR LOGFILE clause
ALTER DATABASE statement, 6-14
clearing redo log files, 6-5, 6-14
client/server architectures
distributed databases, 29-4
globalization support, 29-34
cloning
a database, 1-6
an Oracle home, 1-6
CLOSE DATABASE LINK clause
ALTER SESSION statement, 31-2
closing database links, 30-14
closing windows, 27-23
clusters
about, 18-1
allocating extents, 18-6
altering, 18-6
analyzing, 13-2
cluster indexes, 18-7
cluster keys, 18-1, 18-3, 18-4
clustered tables, 18-1, 18-3, 18-5, 18-6, 18-8
columns for cluster key, 18-3
creating, 18-4
deallocating extents, 18-6
dropping, 18-7
estimating space, 18-3, 18-4
guidelines for managing, 18-2, 18-4
hash clusters, 19-1
location, 18-4


privileges, 18-4, 18-6, 18-8
selecting tables, 18-3
single-table hash clusters, 19-4
sorted hash, 19-3
truncating, 13-5
validating structure, 13-3
COALESCE PARTITION clause, 17-29
coalescing indexes
costs, 16-5
collocated inline views
tuning distributed queries, 31-3
column encryption, 2-45
columns
adding, 15-19
displaying information about, 15-58
dropping, 15-20, 15-21
increasing length, 15-19
modifying definition, 15-19
renaming, 15-19
COMMENT statement, 15-57
COMMIT COMMENT statement
used with distributed transactions, 33-2, 33-7
commit phase, 32-8, 32-17
in two-phase commit, 32-10, 32-11
commit point site, 32-5
commit point strength, 32-6, 33-1
determining, 32-6
distributed transactions, 32-5, 32-6
how the database determines, 32-6
commit point strength
definition, 32-6
specifying, 33-1
COMMIT statement
FORCE clause, 33-8, 33-9
forcing, 33-6
two-phase commit and, 29-25
COMMIT_POINT_STRENGTH initialization
parameter, 32-6, 33-1
committing transactions
commit point site for distributed
transactions, 32-5
composite partitioning
default partition, 17-7
range-hash, 17-5, 17-11
range-list, 17-6, 17-12
subpartition template, modifying, 17-41
CONNECT command
starting an instance, 3-4
CONNECT INTERNAL
desupported, 1-13
connected user database links, 30-9
advantages and disadvantages, 29-12
definition, 29-11
example, 29-13
REMOTE_OS_AUTHENT initialization
parameter, 29-12
connection qualifiers
database links and, 30-10
connections
terminating remote, 31-2

constraints
See also integrity constraints
disabling at table creation, 13-11
distributed system application development
issues, 31-2
dropping integrity constraints, 13-13
enable novalidate state, 13-10
enabling example, 13-11
enabling when violations exist, 13-10
exceptions, 13-10, 13-14
exceptions to integrity constraints, 13-14
integrity constraint states, 13-9
keeping index when disabling, 13-12
keeping index when dropping, 13-12
ORA-02055 constraint violation, 31-2
renaming, 13-13
setting at table creation, 13-11
when to disable, 13-10
control file
creating in Automatic Storage
Management, 12-44
control files
adding, 5-4
changing size, 5-3
conflicts with data dictionary, 5-7
creating, 5-1, 5-3, 5-5
creating as Oracle-managed files, 11-14
creating as Oracle-managed files, examples, 11-20
default name, 2-22, 5-3
dropping, 5-9
errors during creation, 5-7
guidelines for, 5-2
importance of multiplexed, 5-2
initial creation, 5-3
location of, 5-2
log sequence numbers, 6-3
mirroring, 2-23, 5-2
moving, 5-4
multiplexed, 5-2
names, 5-2
number of, 5-2
overwriting existing, 2-22
relocating, 5-4
renaming, 5-4
requirement of one, 5-1
size of, 5-3
specifying names before database creation, 2-22
troubleshooting, 5-7
unavailable during startup, 3-5
CONTROL_FILES initialization parameter
overwriting existing control files, 2-22
specifying file names, 5-2
warning about setting, 2-23
when creating a database, 2-22, 5-3
copying jobs, 27-5
coraenv and oraenv, 1-7
corruption
repairing data block, 21-1
cost-based optimization, 31-4
distributed databases, 29-33

hints, 31-6
using for distributed queries, 31-4
CPU_COUNT initialization parameter, 24-31
CREATE BIGFILE TABLESPACE statement, 8-7
CREATE BIGFILE TEMPORARY TABLESPACE
statement, 8-9
CREATE CLUSTER statement
creating clusters, 18-5
example, 18-5
for hash clusters, 19-2
HASH IS clause, 19-3, 19-5
HASHKEYS clause, 19-3, 19-6
SIZE clause, 19-5
CREATE CONTROLFILE statement
about, 5-5
checking for inconsistencies, 5-7
creating as Oracle-managed files,
examples, 11-15, 11-20
NORESETLOGS clause, 5-6
Oracle-managed files, using, 11-14
RESETLOGS clause, 5-6
CREATE DATABASE LINK statement, 30-7
CREATE DATABASE statement
CONTROLFILE REUSE clause, 5-3
DEFAULT TEMPORARY TABLESPACE
clause, 2-8, 2-14
example of database creation, 2-7
EXTENT MANAGEMENT LOCAL clause, 2-11
MAXLOGFILES parameter, 6-7
MAXLOGMEMBERS parameter, 6-7
password for SYS, 2-10
password for SYSTEM, 2-10
setting time zone, 2-17
specifying FORCE LOGGING, 2-18
SYSAUX DATAFILE clause, 2-8
UNDO TABLESPACE clause, 2-8, 2-13
used to create an undo tablespace, 10-7
using Oracle-managed files, 11-7
using Oracle-managed files, examples, 11-10,
11-19, 11-22
CREATE INDEX statement
NOLOGGING, 16-5
ON CLUSTER clause, 18-6
partitioned indexes, 17-9
using, 16-7
with a constraint, 16-8
CREATE SCHEMA statement
multiple tables and views, 13-1
CREATE SEQUENCE statement, 20-13
CACHE option, 20-16
examples, 20-16
NOCACHE option, 20-16
CREATE SPFILE statement, 2-37
CREATE SYNONYM statement, 20-17
CREATE TABLE statement
AS SELECT clause, 15-4, 15-8
AS SELECT vs. direct-path INSERT, 15-13
CLUSTER clause, 18-5
COMPRESS clause, 15-47
creating partitioned tables, 17-8


creating temporary table, 15-6
INCLUDING clause, 15-46
index-organized tables, 15-44
MONITORING clause, 15-16
NOLOGGING clause, 15-4
ORGANIZATION EXTERNAL clause, 15-53
OVERFLOW clause, 15-45
parallelizing, 15-8
PCTTHRESHOLD clause, 15-45
TABLESPACE clause, specifying, 15-3
use of, 15-6
CREATE TABLESPACE statement
BLOCKSIZE CLAUSE, using, 8-12
FORCE LOGGING clause, using, 8-13
using Oracle-managed files, 11-12
using Oracle-managed files, examples, 11-12
CREATE TEMPORARY TABLESPACE
statement, 8-9
using Oracle-managed files, 11-13
using Oracle-managed files, example, 11-14
CREATE UNDO TABLESPACE statement
using Oracle-managed files, 11-12
using Oracle-Managed files, example, 11-13
using to create an undo tablespace, 10-7
CREATE UNIQUE INDEX statement
using, 16-7
CREATE VIEW statement
about, 20-2
OR REPLACE clause, 20-4
WITH CHECK OPTION, 20-2, 20-5
CREATE_SIMPLE_PLAN procedure
Database Resource Manager, 24-8
creating
chains, 27-38
control files, 5-3
database using Automatic Storage
Management, 12-42
disk group, 12-19
event schedule, 27-35
event-based job, 27-34
job classes, 27-19
jobs, 27-2
programs, 27-9
Scheduler windows, 27-21
schedules, 27-13
sequences, 20-16
window groups, 27-28
creating database links, 30-6
connected user, 30-9
connected user scenarios, 30-25
current user, 30-9
current user scenario, 30-26
examples, 29-13
fixed user, 30-8
fixed user scenario, 30-25
obtaining necessary privileges, 30-6
private, 30-7
public, 30-7
service names within link names, 30-10
shared, 30-10


shared connected user scenario, 30-26
specifying types, 30-7
creating databases, 2-1
backing up the new database, 2-10
default temporary tablespace, specifying, 2-14
example, 2-7
manually from a script, 2-1
overriding default tablespace type, 2-17
planning, 2-2
preparing to, 2-2
prerequisites for, 2-4
problems encountered while, 2-35
setting default tablespace type, 2-16
specifying bigfile tablespaces, 2-16, 2-17
UNDO TABLESPACE clause, 2-13
upgrading to a new release, 2-1
using Database Configuration Assistant, 2-1
using Oracle-managed files, 2-15, 11-7
with locally managed tablespaces, 2-11
creating datafiles, 9-4
creating indexes
after inserting table data, 16-2
associated with integrity constraints, 16-7
NOLOGGING, 16-5
USING INDEX clause, 16-8
creating sequences, 20-13
creating synonyms, 20-17
creating views, 20-2
current user database links
advantages and disadvantages, 29-12
cannot access in shared schema, 29-20
definition, 29-11
example, 29-13
schema independence, 29-19
CURRVAL pseudo-column, 20-14
restrictions, 20-15
cursors
and closing database links, 31-2

D
data
loading using external tables, 15-53
data block corruption
repairing, 21-1
data blocks
altering size of, 2-23
managing space in, 14-4
nonstandard block size, 2-24
shared in clusters, 18-1
specifying size of, 2-23
standard block size, 2-23
transaction entry settings, 14-4
verifying, 9-12
data dictionary
conflicts with control files, 5-7
purging pending rows from, 33-9, 33-10
schema object views, 13-21, 14-31
data encryption
distributed systems, 29-21

data manipulation language
statements allowed in distributed
transactions, 29-23
database
cloning, 1-6
monitoring, 4-18
starting up, 3-1
database administrators
DBA role, 1-10
operating system account, 1-9
password files for, 1-13
responsibilities of, 1-2
security and privileges of, 1-9
security officer versus, 22-1
SYS and SYSTEM accounts, 1-9
task definitions, 1-3
utilities for, 1-22
Database Configuration Assistant, 2-1
shared server configuration, 4-8
database links
advantages, 29-8
auditing, 29-21
authentication, 29-17
authentication without passwords, 29-18
closing, 30-14, 31-2
connected user, 29-11, 29-12, 30-9, 30-25
connections, determining open, 30-17
controlling connections, 31-1
creating, 30-6, 30-25, 30-26
creating shared, 30-12
creating, examples, 29-13
creating, scenarios, 30-24
current user, 29-11, 29-12, 30-9
data dictionary USER views, 30-16
definition, 29-6
distributed queries, 29-24
distributed transactions, 29-24
dropping, 30-15
enforcing global naming, 30-2
enterprise users and, 29-19
fixed user, 29-11, 29-12, 30-25
global, 29-10
global names, 29-8
global object names, 29-25
handling errors, 31-2
limiting number of connections, 30-16
listing, 30-16, 33-2, 33-4
managing, 30-14
minimizing network connections, 30-11
name resolution, 29-25
names for, 29-9
private, 29-10
public, 29-10
referential integrity in, 31-2
remote transactions, 29-23, 29-24
resolution, 29-25
restrictions, 29-16
roles on remote database, 29-16
schema objects and, 29-14
service names used within link names, 30-10

shared, 29-7, 30-11, 30-12, 30-13
shared SQL, 29-24
synonyms for schema objects, 29-15
tuning distributed queries, 31-3
tuning queries with hints, 31-6
tuning using collocated inline views, 31-3
types of links, 29-10
types of users, 29-11
users, specifying, 30-8
using cost-based optimization, 31-4
viewing, 30-16
database objects
obtaining growth trends for, 14-36
Database Resource Manager
active session pool with queuing, 24-5
administering system privilege, 24-7
and operating system control, 24-30
automatic consumer group switching, 24-6
CREATE_SIMPLE_PLAN procedure, 24-8
description, 24-1
enabling, 24-24
execution time limit, 24-6
pending area, 24-10
resource allocation methods, 24-3, 24-12, 24-14
resource consumer groups, 24-3, 24-13, 24-17
resource plan directives, 24-3, 24-10, 24-14
resource plans, 24-3, 24-4, 24-5, 24-8, 24-14, 24-24,
24-27, 24-32
specifying a parallel degree limit, 24-5
undo pool, 24-6
used for quiescing a database, 3-11
validating plan schema changes, 24-10
views, 24-31
database writer process
calculating checksums for data blocks, 9-12
database writer process (DBWn), 4-13
DATABASE_PROPERTIES view
name of default temporary tablespace, 2-14
rename of default temporary tablespace, 8-19
databases
administering, 1-1
administration of distributed, 30-1
altering availability, 3-7
backing up, 2-10
control files of, 5-2
creating manually, 2-2
default temporary tablespace, specifying, 2-14
dropping, 2-35
global database names in distributed
systems, 2-22
mounting a database, 3-5
mounting to an instance, 3-7
names, about, 2-21
names, conflicts in, 2-21
opening a closed database, 3-7
planning, 1-4
planning creation, 2-2
quiescing, 3-11
read-only, opening, 3-8
recovery, 3-7


renaming, 5-4, 5-5, 5-6
restricting access, 3-8
resuming, 3-13
shutting down, 3-9
specifying control files, 2-22
starting up, 3-2
suspending, 3-13
troubleshooting creation problems, 2-35
undo management, 2-13
upgrading, 2-1
with locally managed tablespaces, 2-11
datafile headers
when renaming tablespaces, 8-19
datafiles
adding to a tablespace, 9-4
bringing online and offline, 9-6
checking associated tablespaces, 8-41
copying using database, 9-12
creating, 9-4
creating Oracle-managed files, 11-5, 11-17
database administrators access, 1-9
default directory, 9-4
definition, 9-1
deleting, 8-20
dropping, 9-7, 9-11
dropping Oracle-managed files, 11-18
file numbers, 9-1
fully specifying filenames, 9-4
guidelines for managing, 9-1
headers when renaming tablespaces, 8-19
identifying OS filenames, 9-10
location, 9-4
mapping files to physical devices, 9-15
minimum number of, 9-2
MISSING, 5-7
monitoring using views, 9-25
online, 9-7
Oracle-managed, 11-1
relocating, 9-8
renaming, 9-8
reusing, 9-4
size of, 9-3
statements to create, 9-4
storing separately from redo log files, 9-4
unavailable when database is opened, 3-5
verifying data blocks, 9-12
DB_BLOCK_CHECKING initialization
parameter, 21-3
DB_BLOCK_CHECKSUM initialization
parameter, 9-12
enabling redo block checking with, 6-13
DB_BLOCK_SIZE initialization parameter
and nonstandard block sizes, 8-12
setting, 2-23
DB_CACHE_SIZE initialization parameter
setting, 2-30
specifying multiple block sizes, 8-12
DB_DOMAIN initialization parameter
setting for database creation, 2-21, 2-22
DB_FILES initialization parameter


determining value for, 9-2
DB_NAME initialization parameter
setting before database creation, 2-21
DB_nK_CACHE_SIZE initialization parameter
setting, 2-31
specifying multiple block sizes, 8-12
using with transportable tablespaces, 8-36
DBA role, 1-10
DBA. See database administrators.
DBA_2PC_NEIGHBORS view, 33-4
using to trace session tree, 33-4
DBA_2PC_PENDING view, 33-2, 33-9, 33-16
using to list in-doubt transactions, 33-2
DBA_DB_LINKS view, 30-16
DBA_RESUMABLE view, 14-12
DBA_UNDO_EXTENTS view
undo tablespace extents, 10-10
DBCA. See Database Configuration Assistant
DBMS_FILE_TRANSFER package
copying datafiles, 9-12
DBMS_METADATA package
GET_DDL function, 13-21
using for object definition, 13-21
DBMS_REDEFINITION package
performing online redefinition with, 15-22
required privileges, 15-35
DBMS_REPAIR
logical corruptions, 21-4
DBMS_REPAIR package
examples, 21-5
limitations, 21-2
procedures, 21-2
using, 21-2, 21-10
DBMS_RESOURCE_MANAGER package, 24-3,
24-8, 24-17, 24-18
procedures (table of), 24-7
DBMS_RESOURCE_MANAGER_PRIVS
package, 24-8, 24-18
procedures (table of), 24-7
DBMS_RESUMABLE package, 14-13
DBMS_SERVER_ALERT package
setting alert thresholds, 14-2
DBMS_SESSION package, 24-19
DBMS_SPACE package, 14-31
example for unused space, 14-32
FREE_BLOCK procedure, 14-32
SPACE_USAGE procedure, 14-32
UNUSED_SPACE procedure, 14-31
DBMS_STATS package, 13-3
MONITORING clause of CREATE TABLE, 15-16
DBMS_STORAGE_MAP package
invoking for file mapping, 9-21
views detailing mapping information, 9-22
DBMS_TRANSACTION package
PURGE_LOST_DB_ENTRY procedure, 33-10
DBVERIFY utility, 21-3
DEALLOCATE UNUSED clause, 14-31
deallocating unused space, 14-15
DBMS_SPACE package, 14-31
DEALLOCATE UNUSED clause, 14-31

declarative referential integrity constraints, 31-2
dedicated server processes, 4-1
trace files for, 4-21
DEFAULT keyword
list partitioning, 17-11
default partitions, 17-5
default subpartition, 17-7
default temporary tablespace
renaming, 8-19
default temporary tablespaces
specifying at database creation, 2-8, 2-14
specifying bigfile tempfile, 2-17
DEFAULT_CONSUMER_GROUP for Database
Resource Manager, 24-14, 24-20
defining
chain steps, 27-38
dependencies
between schema objects, 13-16
displaying, 13-23
dictionary-managed tablespaces
migrating SYSTEM to locally managed, 8-24
Digital POLYCENTER Manager on NetView, 29-23
directories
managing disk group, 12-26
direct-path INSERT
benefits, 15-13
how it works, 15-14
index maintenance, 15-15
locking considerations, 15-15
logging mode, 15-14
parallel INSERT, 15-13
parallel load compared with parallel
INSERT, 15-13
serial INSERT, 15-13
space considerations, 15-15
DISABLE ROW MOVEMENT clause, 17-8
disabling
chains, 27-42
jobs, 27-8
programs, 27-12
window groups, 27-29
windows, 27-24
disabling recoverer process, 33-18
disk discovery
in Automatic Storage Management, 12-3, 12-12
disk failure
in Automatic Storage Management, 12-15
disk group
adding templates to, 12-31
altering membership of, 12-20
creating, 12-19
dropping, 12-26
dropping disks from, 12-23
managing capacity in, 12-17
manually rebalancing, 12-24
mounting and dismounting, 12-25
resizing disks in, 12-24
undropping disks in, 12-24
disk groups, administering, 12-14
dispatcher process (Dnnn), 4-14

dispatcher processes, 4-9, 4-12
DISPATCHERS initialization parameter
setting attributes of, 4-8
setting initially, 4-9
distributed applications
distributing data, 31-1
distributed databases
administration overview, 29-16
application development, 29-31, 31-1, 31-9
client/server architectures, 29-4
commit point strength, 32-6
cost-based optimization, 29-33
direct and indirect connections, 29-5
distributed processing, 29-2
distributed queries, 29-24
distributed updates, 29-24
forming global database names, 30-1
global object names, 29-15, 30-1
globalization support, 29-33
location transparency, 29-31, 30-18
management tools, 29-22
managing read consistency, 33-19
nodes of, 29-4
overview, 29-1
remote object security, 30-20
remote queries and updates, 29-23
replicated databases and, 29-3
resumable space allocation, 14-10
running in ARCHIVELOG mode, 7-3
running in NOARCHIVELOG mode, 7-3
scenarios, 30-24
schema object name resolution, 29-27
schema-dependent global users, 29-19
schema-independent global users, 29-19
security, 29-17
site autonomy of, 29-16
SQL transparency, 29-32
starting a remote instance, 3-7
transaction processing, 29-23
transparency, 29-31
distributed processing
distributed databases, 29-2
distributed queries, 29-24
analyzing tables, 31-5
application development issues, 31-3
cost-based optimization, 31-4
optimizing, 29-33
distributed systems
data encryption, 29-21
distributed transactions, 29-24
case study, 32-14
commit point site, 32-5
commit point strength, 32-6, 33-1
committing, 32-6
database server role, 32-4
defined, 32-1
DML and DDL, 32-2
failure during, 33-17
global coordinator, 32-4
local coordinator, 32-4


lock timeout interval, 33-17
locked resources, 33-17
locks for in-doubt, 33-17
manually overriding in-doubt, 33-6
naming, 33-2, 33-7
session trees, 32-3, 32-4, 32-5, 33-4
setting advice, 33-7
transaction control statements, 32-2
transaction timeouts, 33-17
two-phase commit, 32-14, 33-6
viewing database links, 33-2
distributed updates, 29-24
DML error logging, inserting data with, 15-9
DML. See data manipulation language
DRIVING_SITE hint, 31-7
DROP CLUSTER statement
CASCADE CONSTRAINTS clause, 18-7
dropping cluster, 18-7
dropping cluster index, 18-7
dropping hash cluster, 19-7
INCLUDING TABLES clause, 18-7
DROP DATABASE statement, 2-35
DROP LOGFILE clause
ALTER DATABASE statement, 6-12
DROP LOGFILE MEMBER clause
ALTER DATABASE statement, 6-13
DROP PARTITION clause, 17-30
DROP SYNONYM statement, 20-18
DROP TABLE statement
about, 15-37
CASCADE CONSTRAINTS clause, 15-38
for clustered tables, 18-8
DROP TABLESPACE statement, 8-20
dropping
aliases from a disk group, 12-29
Automatic Storage Management template, 12-32
chain steps, 27-43
chains, 27-41
datafiles, 9-11
disk groups, 12-26
disks from a disk group, 12-23
files from a disk group, 12-29
job classes, 27-19
jobs, 27-7, 28-16
programs, 27-11
rules from chains, 27-42
running jobs, 28-16
schedules, 27-14
tempfiles, 9-11
window groups, 27-28
windows, 27-23
dropping columns from tables, 15-20
marking unused, 15-20
remove unused columns, 15-20
dropping database links, 30-15
dropping datafiles
Oracle-managed, 11-18
dropping partitioned tables, 17-52
dropping tables
CASCADE clause, 15-38


consequences of, 15-37
dropping tempfiles
Oracle-managed, 11-18
DUMP_ORPHAN_KEYS procedure, 21-4
checking sync, 21-4
DBMS_REPAIR package, 21-2
example, 21-9
recovering data, 21-5

E
EMPHASIS resource allocation method, 24-12
ENABLE ROW MOVEMENT clause, 17-8
enabling
chains, 27-40
jobs, 27-8
programs, 27-12
window groups, 27-29
windows, 27-24
enabling recoverer process
distributed transactions, 33-18
encryption, transparent data, 2-45
enterprise users
definition, 29-19
environment variables
selecting an instance with, 1-7
error logging, DML
inserting data with, 15-9
errors
alert log and, 4-21
assigning names with
PRAGMA_EXCEPTION_INIT, 31-9
exception handler, 31-9
integrity constrain violation, 31-2
ORA-00028, 4-17
ORA-01090, 3-9
ORA-01173, 5-7
ORA-01176, 5-7
ORA-01177, 5-7
ORA-01578, 9-12
ORA-01591, 33-17
ORA-02049, 33-17
ORA-02050, 33-6
ORA-02051, 33-6
ORA-02054, 33-6
ORA-1215, 5-7
ORA-1216, 5-7
RAISE_APPLICATION_ERROR()
procedure, 31-9
remote procedure, 31-8
rollback required, 31-2
trace files and, 4-21
when creating a database, 2-35
when creating control file, 5-7
while starting a database, 3-6
while starting an instance, 3-6
event message
passing to event-based job, 27-36
event schedule
altering, 27-35

creating, 27-35
event-based job
altering, 27-35
creating, 27-34
passing event messages to, 27-36
events (Scheduler)
overview, 26-5
using, 27-30
exception handler, 31-9
EXCEPTION keyword, 31-8
exceptions
assigning names with
PRAGMA_EXCEPTION_INIT, 31-9
integrity constraints, 13-14
user-defined, 31-9
EXCHANGE PARTITION clause, 17-33, 17-34
execution plans
analyzing for distributed queries, 31-7
export operations
restricted mode and, 3-6
export utilities
about, 1-22
expressions, calendaring, 27-14
EXTENT MANAGEMENT LOCAL clause
CREATE DATABASE, 2-11
extents
allocating cluster extents, 18-6
allocating for tables, 15-18
data dictionary views for, 14-32
deallocating cluster extents, 18-6
displaying free extents, 14-34
external jobs
running, 27-6
external procedures
managing processes for, 4-16
external tables
altering, 15-55
creating, 15-53
defined, 15-52
dropping, 15-56
privileges required, 15-56
uploading data example, 15-53

F
failure groups, 12-3, 12-16
features
new, xxxvii
file mapping
examples, 9-23
how it works, 9-16
how to use, 9-20
overview, 9-16
structures, 9-18
views, 9-22
file system
used for Oracle-managed files, 11-2
FILE_MAPPING initialization parameter, 9-20
filenames
Automatic Storage Management, 12-34

Oracle-managed files, 11-6
files
creating Oracle-managed files, 11-5, 11-17
FIX_CORRUPT_BLOCKS procedure
DBMS_REPAIR, 21-2
example, 21-8
marking blocks corrupt, 21-5
fixed user database links
advantages and disadvantages, 29-12
creating, 30-8
definition, 29-11
example, 29-13
flash recovery area
initialization parameters to specify, 2-22
Flashback Drop
about, 15-38
purging recycle bin, 15-40
querying recycle bin, 15-40
recycle bin, 15-38
restoring objects, 15-41
Flashback Table
overview, 15-37
Flashback Transaction Query, 15-36
FMON background process, 9-17
FMPUTL external process
used for file mapping, 9-17
FOR PARTITION clause, 17-39
FORCE clause
COMMIT statement, 33-8
ROLLBACK statement, 33-8
FORCE LOGGING clause
CREATE CONTROLFILE, 2-19
CREATE DATABASE, 2-18
CREATE TABLESPACE, 8-13
performance considerations, 2-19
FORCE LOGGING mode, 15-15
forcing
COMMIT or ROLLBACK, 33-3, 33-6
forcing a log switch, 6-13
using ARCHIVE_LAG_TARGET, 6-8
with the ALTER SYSTEM statement, 6-13
forget phase
in two-phase commit, 32-11
free space
listing free extents, 14-34
tablespaces and, 8-42
function-based indexes, 16-10
functions
recompiling, 13-19

G
generic connectivity
definition, 29-4
global cache service (LMS), 4-14
global coordinators, 32-4
distributed transactions, 32-4
global database consistency
distributed databases and, 32-10
global database links, 29-10


creating, 30-8
global database names
changing the domain, 30-3
database links, 29-8
enforcing for database links, 29-10
enforcing global naming, 30-2
forming distributed database names, 30-1
impact of changing, 29-30
querying, 30-3
global object names
database links, 29-25
distributed databases, 30-1
global users, 30-26
schema-dependent in distributed systems, 29-19
schema-independent in distributed
systems, 29-19
GLOBAL_NAME view
using to determine global database name, 30-3
GLOBAL_NAMES initialization parameter
database links, 29-9
globalization support
client/server architectures, 29-34
distributed databases, 29-33
GRANT statement
SYSOPER/SYSDBA privileges, 1-19
granting privileges and roles
SYSOPER/SYSDBA privileges, 1-19
growth trends
of database objects, 14-36
GV$DBLINK view, 30-17

H
hash clusters
advantages and disadvantages, 19-1
altering, 19-7
choosing key, 19-4
contrasted with index clusters, 19-1
controlling space use of, 19-4
creating, 19-2
dropping, 19-7
estimating storage, 19-7
examples, 19-6
hash function, 19-1, 19-2, 19-3, 19-5
HASH IS clause, 19-3, 19-5
HASHKEYS clause, 19-3, 19-6
single-table, 19-4
SIZE clause, 19-5
sorted, 19-3
hash functions
for hash cluster, 19-1
hash partitioning
creating tables using, 17-9
index-organized tables, 17-20
multicolumn partitioning keys, 17-15
heterogeneous distributed systems
definition, 29-3
Heterogeneous Services
overview, 29-3
hints, 31-6


DRIVING_SITE, 31-7
NO_MERGE, 31-6
using to tune distributed queries, 31-6
historical tables
moving time window, 17-52
HP OpenView, 29-23

I
IBM NetView/6000, 29-23
import operations
restricted mode and, 3-6
import utilities
about, 1-22
index clusters. See clusters.
indexes
altering, 16-11
analyzing, 13-2
choosing columns to index, 16-3
cluster indexes, 18-5, 18-6, 18-7
coalescing, 16-5, 16-13
column order for performance, 16-3
creating, 16-6
disabling and dropping constraints cost, 16-6
dropping, 16-4, 16-14
estimating size, 16-4
estimating space use, 14-36
explicitly creating a unique index, 16-7
function-based, 16-10
guidelines for managing, 16-1
keeping when disabling constraint, 13-12
keeping when dropping constraint, 13-12
key compression, 16-11
limiting for a table, 16-3
monitoring space use of, 16-13
monitoring usage, 16-13
parallelizing index creation, 16-5
partitioned, 17-1
rebuilding, 16-5, 16-12, 16-13
rebuilt after direct-path INSERT, 15-15
setting storage parameters for, 16-4
shrinking, 14-29
space used by, 16-13
statement for creating, 16-7
tablespace for, 16-4
temporary segments and, 16-2
updating global indexes, 17-24
validating structure, 13-3
when to create, 16-3
index-organized tables
analyzing, 15-50
AS subquery, 15-47
converting to heap, 15-51
creating, 15-44
described, 15-43
INCLUDING clause, 15-46
key compression, 15-47
maintaining, 15-48
ORDER BY clause, using, 15-51
overflow clause, 15-45

parallel creation, 15-47
partitioning, 17-8, 17-18
partitioning secondary indexes, 17-19
rebuilding with MOVE clause, 15-48
storing nested tables, 15-45
storing object types, 15-44
threshold value, 15-45
in-doubt transactions, 32-11
after a system failure, 33-6
automatic resolution, 32-12
deciding how to handle, 33-5
deciding whether to perform manual
override, 33-6
defined, 32-10
manual resolution, 32-13
manually committing, 33-8
manually committing, example, 33-11
manually overriding, 33-6, 33-8
manually overriding, scenario, 33-11
manually rolling back, 33-9
overview, 32-11
pending transactions table, 33-16
purging rows from data dictionary, 33-9, 33-10
recoverer process and, 33-18
rolling back, 33-8, 33-9
SCNs and, 32-14
simulating, 33-17
tracing session tree, 33-4
viewing database links, 33-2
INITIAL parameter
cannot alter, 14-7, 15-18
description, 14-6
initialization parameter file
and Automatic Storage Management, 3-3
creating, 2-6
creating for database creation, 2-6
editing before database creation, 2-21
individual parameter names, 2-21
server parameter file, 2-35
understanding, 3-2
initialization parameters
ARCHIVE_LAG_TARGET, 6-8
BACKGROUND_DUMP_DEST, 4-22
COMMIT_POINT_STRENGTH, 32-6, 33-1
CONTROL_FILES, 2-22, 2-23, 5-2, 5-3
DB_BLOCK_CHECKING, 21-3
DB_BLOCK_CHECKSUM, 6-13, 9-12
DB_BLOCK_SIZE, 2-23, 8-12
DB_CACHE_SIZE, 2-30, 8-12
DB_DOMAIN, 2-21, 2-22
DB_FILES, 9-2
DB_NAME, 2-21
DB_nK_CACHE_SIZE, 2-31, 8-12, 8-36
DISPATCHERS, 4-9
FILE_MAPPING, 9-20
for Automatic Storage Management, 12-7
for Automatic Storage Management
instance, 12-8
for buffer cache, 2-30

GLOBAL_NAMES, 29-9
LOG_ARCHIVE_DEST, 7-6
LOG_ARCHIVE_DEST_n, 7-6, 7-13
LOG_ARCHIVE_DEST_STATE_n, 7-9
LOG_ARCHIVE_MAX_PROCESSES, 7-5
LOG_ARCHIVE_MIN_SUCCEED_DEST, 7-11
LOG_ARCHIVE_TRACE, 7-13
MAX_DUMP_FILE_SIZE, 4-22
OPEN_LINKS, 30-16
PROCESSES, 2-33
REMOTE_LOGIN_PASSWORDFILE, 1-18
REMOTE_OS_AUTHENT, 29-12
RESOURCE_MANAGER_PLAN, 24-24
server parameter file and, 2-35, 2-40
SET SQL_TRACE, 4-23
SGA_MAX_SIZE, 2-24
shared server and, 4-4
SHARED_SERVERS, 4-6
SORT_AREA_SIZE, 16-2
SPFILE, 2-38, 3-3
SQL_TRACE, 4-21
STATISTICS_LEVEL, 15-16
UNDO_MANAGEMENT, 2-13, 10-2
UNDO_TABLESPACE, 2-33, 10-2
USER_DUMP_DEST, 4-22
INITRANS parameter
altering, 15-17
guidelines for setting, 14-5
INSERT statement
with DML error logging, 15-9
installing
patches, 1-6
installing Automatic Storage Management, 12-5
instance
selecting with environment variables, 1-7
INSTANCE_TYPE, 12-8
instances
aborting, 3-10
shutting down immediately, 3-9
shutting down normally, 3-9
transactional shutdown, 3-10
integrity constraints
See also constraints
cost of disabling, 16-6
cost of dropping, 16-6
creating indexes associated with, 16-7
dropping tablespaces and, 8-20
ORA-02055 constraint violation, 31-2
INTERNAL username
connecting for shutdown, 3-9
IOT. See index-organized tables.

J
job classes
altering, 27-19
creating, 27-19
dropping, 27-19
overview, 26-7
using, 27-18


job coordinator, 26-11, 28-10
job recovery (Scheduler), 28-18
jobs
altering, 27-5
copying, 27-5
creating, 27-2
creating for chains, 27-41
disabling, 27-8
dropping, 27-7, 28-16
dropping running, 28-16
enabling, 27-8
overview, 26-4
priorities, 28-18
running, 27-5
stopping, 27-7
using, 27-2
viewing information on running, 28-9
join views
definition, 20-2
DELETE statements, 20-8
key-preserved tables in, 20-7
modifying, 20-5
rules for modifying, 20-7
updating, 20-5
joins
statement transparency in distributed
databases, 30-23

K
key compression, 15-47
indexes, 16-11
key-preserved tables
in join views, 20-7
in outer joins, 20-10
keys
cluster, 18-1, 18-4

L
links
See database links
LIST CHAINED ROWS clause
of ANALYZE statement, 13-4
list partitioning
adding values to value list, 17-40
creating tables using, 17-10
DEFAULT keyword, 17-11
dropping values from value-list, 17-40
when to use, 17-4
listing database links, 30-16, 33-2, 33-4
loading data
using external tables, 15-53
LOBs
storage parameters for, 14-7
local coordinators, 32-4
distributed transactions, 32-4
locally managed tablespaces, 8-3
automatic segment space management in, 8-5
DBMS_SPACE_ADMIN package, 8-22


detecting and repairing defects, 8-22
migrating SYSTEM from
dictionary-managed, 8-24
tempfiles, 8-9
temporary, creating, 8-9
location transparency in distributed databases
creating using synonyms, 30-20
creating using views, 30-19
restrictions, 30-23
using procedures, 30-22
lock timeout interval
distributed transactions, 33-17
locks
in-doubt distributed transactions, 33-17
monitoring, 4-23
log
window (Scheduler), 27-20
log sequence number
control files, 6-3
log switches
description, 6-3
forcing, 6-13
log sequence numbers, 6-3
multiplexed redo log files and, 6-5
privileges, 6-13
using ARCHIVE_LAG_TARGET, 6-8
waiting for archiving to complete, 6-5
log writer process (LGWR), 4-13
multiplexed redo log files and, 6-5
online redo logs available for use, 6-2
trace file monitoring, 4-22
trace files and, 6-5
writing to online redo log files, 6-2
LOG_ARCHIVE_DEST initialization parameter
specifying destinations using, 7-6
LOG_ARCHIVE_DEST_n initialization
parameter, 7-6
REOPEN attribute, 7-13
LOG_ARCHIVE_DEST_STATE_n initialization
parameter, 7-9
LOG_ARCHIVE_DUPLEX_DEST initialization
parameter
specifying destinations using, 7-6
LOG_ARCHIVE_MAX_PROCESSES initialization
parameter, 7-5
LOG_ARCHIVE_MIN_SUCCEED_DEST initialization
parameter, 7-11
LOG_ARCHIVE_TRACE initialization
parameter, 7-13
LOGGING clause
CREATE TABLESPACE, 8-13
logging mode
direct-path INSERT, 15-14
NOARCHIVELOG mode and, 15-15
logical corruptions from DBMS_REPAIR, 21-4
logical volume managers
mapping files to physical devices, 9-15, 9-24
used for Oracle-managed files, 11-2
LOGON trigger
setting resumable mode, 14-12

logs
job, 28-10
window (Scheduler), 27-20, 28-10
LONG columns, 30-24
LONG RAW columns, 30-24
LOW_GROUP for Database Resource
Manager, 24-14, 24-27

M
maintenance windows
Scheduler, 23-1
managing
Automatic Storage Management templates, 12-29
undo tablespace, 10-1
managing capacity in disk groups, 12-17
managing datafiles, 9-1
managing sequences, 20-12
managing synonyms, 20-17
managing tables, 15-1
managing views, 20-1
manual archiving
in ARCHIVELOG mode, 7-5
manual overrides
in-doubt transactions, 33-8
MAX_DUMP_FILE_SIZE initialization
parameter, 4-22
MAXDATAFILES parameter
changing, 5-5
MAXINSTANCES, 5-5
MAXLOGFILES parameter
changing, 5-5
CREATE DATABASE statement, 6-7
MAXLOGHISTORY parameter
changing, 5-5
MAXLOGMEMBERS parameter
changing, 5-5
CREATE DATABASE statement, 6-7
MAXTRANS parameter
altering, 15-17
media recovery
effects of archiving on, 7-2
migrated rows
eliminating from table, procedure, 13-4
migrating a database to Automatic Storage
Management, 12-46
MINEXTENTS parameter
cannot alter, 14-7, 15-18
description, 14-6
mirrored files
control files, 2-23, 5-2
online redo log, 6-5
online redo log location, 6-6
online redo log size, 6-7
MISSING datafiles, 5-7
MODIFY DEFAULT ATTRIBUTES clause, 17-39
using for partitioned tables, 17-38
MODIFY DEFAULT ATTRIBUTES FOR PARTITION
clause
of ALTER TABLE, 17-38

MODIFY PARTITION clause, 17-38, 17-39, 17-42,
17-44
MODIFY SUBPARTITION clause, 17-39
monitoring
chains, 28-19
MONITORING clause
CREATE TABLE, 15-16
monitoring datafiles, 9-25
MONITORING USAGE clause
of ALTER INDEX statement, 16-13
mounting a database, 3-5
mounting and dismounting disk groups, 12-25
MOVE PARTITION clause, 17-38, 17-42
MOVE SUBPARTITION clause, 17-38, 17-43
moving control files, 5-4
multiple temporary tablespaces, 8-11, 8-12
multiplexed control files
importance of, 5-2
multiplexing
archived redo logs, 7-6
control files, 5-2
redo log file groups, 6-4
redo log files, 6-4

N
name resolution in distributed databases
database links, 29-25
impact of global name changes, 29-30
procedures, 29-29
schema objects, 29-15, 29-27
synonyms, 29-29
views, 29-29
when global database name is complete, 29-26
when global database name is partial, 29-26
when no global database name is specified, 29-26
named user limits
setting initially, 2-34
nested tables
storage parameters for, 14-7
networks
connections, minimizing, 30-11
distributed databases use of, 29-1
NEXT parameter
altering, 14-7, 15-18
NEXTVAL pseudo-column, 20-14
restrictions, 20-15
NO_DATA_FOUND keyword, 31-8
NO_MERGE hint, 31-6
NOARCHIVELOG mode
archiving, 7-2
definition, 7-2
dropping datafiles, 9-7
LOGGING mode and, 15-15
media failure, 7-2
no hot backups, 7-2
running in, 7-2
switching to, 7-4
taking datafiles offline in, 9-7
NOCACHE option


CREATE SEQUENCE statement, 20-16
NOLOGGING clause
CREATE TABLESPACE, 8-13
NOLOGGING mode
direct-path INSERT, 15-14
NOMOUNT clause
STARTUP command, 3-5
normal transmission mode
definition, 7-9
Novell NetWare Management System, 29-23
NOWAIT keyword
in REBALANCE clause, 12-20

O
object privileges
for external tables, 15-56
objects
See also schema objects
offline tablespaces
priorities, 8-13
taking offline, 8-13
online redefinition of tables, 15-21
abort and cleanup, 15-27
examples, 15-30
features of, 15-22
intermediate synchronization, 15-27
redefining a single partition, 15-28
rules for, 15-29
restrictions, 15-27
with DBMS_REDEFINITION, 15-22
online redo log files
See also online redo logs
online redo logs
See also redo log files
creating groups, 6-9
creating members, 6-10
dropping groups, 6-12
dropping members, 6-12
forcing a log switch, 6-13
guidelines for configuring, 6-4
INVALID members, 6-13
location of, 6-6
managing, 6-1
moving files, 6-11
number of files in the, 6-7
optimum configuration for the, 6-7
renaming files, 6-11
renaming members, 6-10
specifying ARCHIVE_LAG_TARGET, 6-8
STALE members, 6-13
viewing information about, 6-15
online segment shrink, 14-29
OPEN_LINKS initialization parameter, 30-16
opening windows, 27-22
operating system authentication, 1-14
for Automatic Storage Management, 12-6
operating systems
database administrators requirements for, 1-9
renaming and relocating files, 9-8


ORA_TZFILE environment variable
specifying time zone file for database, 2-18
ORA-01555 error
snapshot too old, 10-3
ORA-02055 error
integrity constraint violation, 31-2
ORA-02067 error
rollback required, 31-2
ORA-04068
existing state of package has been
discarded, 13-17
Oracle Call Interface. See OCI
Oracle Database
release numbers, 1-8
Oracle Database users
types of, 1-1
Oracle Enterprise Manager, 3-2
Oracle home
cloning, 1-6
Oracle Managed Files feature
See also Oracle-managed files
Oracle Net
service names in, 7-10
transmitting archived logs via, 7-10
Oracle Universal Installer, 2-1
Oracle-managed files
adding to an existing database, 11-23
behavior, 11-17
benefits, 11-3
CREATE DATABASE statement, 11-7
creating, 11-5
creating control files, 11-14
creating datafiles, 11-12
creating online redo log files, 11-16
creating tempfiles, 11-13
described, 11-1
dropping datafile, 11-18
dropping online redo log files, 11-18
dropping tempfile, 11-18
initialization parameters, 11-3
introduction, 2-15
naming, 11-6
renaming, 11-18
scenarios for using, 11-18
Oracle-managed files feature
See also Oracle-managed files
oraenv and coraenv, 1-7
ORAPWD utility, 1-16
ORGANIZATION EXTERNAL clause
of CREATE TABLE, 15-53
orphan key table
example of building, 21-6
OSDBA group, 1-14
OSOPER group, 1-14
OTHER_GROUPS for Database Resource
Manager, 24-5, 24-11, 24-14, 24-16, 24-27
outer joins, 20-9
key-preserved tables in, 20-10
overlapping windows, 27-25

P
package state discarded error, 13-17
packages
DBMS_FILE_TRANSFER, 9-12
DBMS_METADATA, 13-21
DBMS_REDEFINITION, 15-22, 15-35
DBMS_REPAIR, 21-1
DBMS_RESOURCE_MANAGER, 24-3, 24-7, 24-8,
24-17, 24-18
DBMS_RESOURCE_MANAGER_PRIVS, 24-7,
24-8, 24-18
DBMS_RESUMABLE, 14-13
DBMS_SESSION, 24-19
DBMS_SPACE, 14-31
DBMS_STATS, 13-3, 15-16
DBMS_STORAGE_MAP, 9-21, 9-22
privileges for recompiling, 13-19
recompiling, 13-19
parallel execution
managing, 4-14
parallel hints, 4-14
parallelizing index creation, 16-5
resumable space allocation, 14-10
parallel hints, 4-14
PARALLEL_DEGREE_LIMIT_ABSOLUTE resource
allocation method, 24-12
parallelizing table creation, 15-4, 15-8
parameter files
See also initialization parameter file.
PARTITION BY HASH clause, 17-9
PARTITION BY LIST clause, 17-10
PARTITION BY RANGE clause, 17-8
for composite-partitioned tables, 17-11, 17-12
PARTITION clause
for composite-partitioned tables, 17-11, 17-12
for hash partitions, 17-9
for list partitions, 17-10
for range partitions, 17-8
partitioned indexes, 17-1
adding partitions, 17-28
creating local index on composite partitioned
table, 17-12
creating local index on hash partitioned
table, 17-10
creating range partitions, 17-9
description, 17-1
dropping partitions, 17-32
global, 17-2
local, 17-2
maintenance operations, 17-21
maintenance operations, table of, 17-23
modifying partition default attributes, 17-38
modifying real attributes of partitions, 17-39
moving partitions, 17-43
rebuilding index partitions, 17-43
renaming index partitions/subpartitions, 17-44
secondary indexes on index-organized
tables, 17-19
splitting partitions, 17-49
partitioned tables, 17-1

adding partitions, 17-26
adding subpartitions, 17-27, 17-28
coalescing partitions, 17-29
creating hash partitions, 17-9
creating list partitions, 17-10
creating range partitions, 17-8, 17-9
creating range-hash partitions, 17-11
creating range-list partitions, 17-12
description, 17-1
DISABLE ROW MOVEMENT, 17-8
dropping, 17-52
dropping partitions, 17-30
ENABLE ROW MOVEMENT, 17-8
exchanging partitions, 17-32
exchanging subpartitions, 17-34
global indexes on, 17-2
index-organized tables, 17-8, 17-19, 17-20
local indexes on, 17-2
maintenance operations, 17-21
maintenance operations, table of, 17-21
marking indexes UNUSABLE, 17-26, 17-29, 17-30,
17-32, 17-33, 17-34, 17-38, 17-39, 17-42, 17-45,
17-50
merging partitions, 17-34
modifying default attributes, 17-38
modifying real attributes of partitions, 17-38
modifying real attributes of subpartitions, 17-39
moving partitions, 17-42
moving subpartitions, 17-43
multicolumn partitioning keys, 17-15
rebuilding index partitions, 17-43
redefining partitions online, 15-28, 17-43
rules for, 15-29
renaming partitions, 17-44
renaming subpartitions, 17-44
splitting partitions, 17-45
truncating partitions, 17-50
truncating subpartitions, 17-51
updating global indexes automatically, 17-24
partitioning
See also partitioned tables
creating partitions, 17-7
default partition, 17-5
default subpartition, 17-7
indexes, 17-1
index-organized tables, 17-8, 17-19, 17-20
list, 17-4, 17-40
maintaining partitions, 17-21
methods, 17-2
range-hash, 17-5, 17-11
range-list, 17-6, 17-12
subpartition templates, 17-13
tables, 17-1
partitions
See also partitioned tables.
See also partitioned indexes.
See also partitioning.
PARTITIONS clause
for hash partitions, 17-9
password file


adding users, 1-18
creating, 1-16
ORAPWD utility, 1-16
removing, 1-20
setting REMOTE_LOGIN_PASSWORD, 1-18
viewing members, 1-19
password file authentication, 1-15
for Automatic Storage Management, 12-7
passwords
default for SYS and SYSTEM, 1-9
password file, 1-18
setting REMOTE_LOGIN_PASSWORD
parameter, 1-18
patches, installing, 1-6
PCTINCREASE parameter, 15-18
altering, 14-7
pending area for Database Resource Manager
plans, 24-10, 24-11
validating plan schema changes, 24-10
pending transaction tables, 33-16
performance
index column order, 16-3
location of datafiles and, 9-4
plan schemas for Database Resource Manager, 24-4,
24-5, 24-10, 24-13, 24-24, 24-32
examples, 24-24
validating plan changes, 24-10
PL/SQL
replaced views and program units, 20-4
PRAGMA_EXCEPTION_INIT procedure
assigning exception names, 31-9
prepare phase
abort response, 32-9
in two-phase commit, 32-8
prepared response, 32-8
read-only response, 32-9
recognizing read-only nodes, 32-9
steps, 32-9
prepare/commit phases
effects of failure, 33-17
failures during, 33-6
locked resources, 33-17
pending transaction table, 33-16
prepared response
two-phase commit, 32-8
prerequisites
for creating a database, 2-4
PRIMARY KEY constraints
associated indexes, 16-8
dropping associated indexes, 16-14
enabling on creation, 16-7
foreign key references when dropped, 13-12
indexes associated with, 16-7
priorities
job, 28-18
private database links, 29-10
private synonyms, 20-17
privileges
adding redo log groups, 6-9
altering indexes, 16-11


altering tables, 15-16
closing a database link, 31-2
creating database links, 30-6
creating tables, 15-6
creating tablespaces, 8-2
database administrator, 1-9
dropping indexes, 16-14
dropping online redo log members, 6-12
dropping redo log groups, 6-12
dropping tables, 15-37
enabling and disabling triggers, 13-8
for external tables, 15-56
forcing a log switch, 6-13
managing with procedures, 30-23
managing with synonyms, 30-21
managing with views, 30-20
manually archiving, 7-5
recompiling packages, 13-19
recompiling procedures, 13-19
recompiling views, 13-18
renaming objects, 13-16
renaming redo log members, 6-10
RESTRICTED SESSION system privilege, 3-6
Scheduler, 28-14
sequences, 20-13, 20-16
synonyms, 20-17, 20-18
taking tablespaces offline, 8-13
truncating, 13-6
using a view, 20-4
using sequences, 20-14
views, 20-1, 20-3, 20-12
procedures
external, 4-16
location transparency in distributed
databases, 30-21
name resolution in distributed databases, 29-29
recompiling, 13-19
remote calls, 29-33
process monitor (PMON), 4-13
processes
See also server processes
PROCESSES initialization parameter
setting before database creation, 2-33
PRODUCT_COMPONENT_VERSION view, 1-8
programs
altering, 27-11
creating, 27-9
disabling, 27-12
dropping, 27-11
enabling, 27-12
overview, 26-3
using, 27-9
public database links, 29-10
connected user, 30-25
fixed user, 30-25
public fixed user database links, 30-25
public synonyms, 20-17
PURGE_LOST_DB_ENTRY procedure
DBMS_TRANSACTION package, 33-10

Q
queries
distributed, 29-24
distributed application development issues, 31-3
location transparency and, 29-32
remote, 29-23
quiescing a database, 3-11
quotas
tablespace, 8-2

R
RAISE_APPLICATION_ERROR() procedure, 31-9
range partitioning
creating tables using, 17-8
index-organized tables, 17-19
multicolumn partitioning keys, 17-15
range-hash partitioning
creating tables using, 17-11
subpartitioning template, 17-13
when to use, 17-5
range-list partitioning
creating tables using, 17-12
subpartitioning template, 17-14
when to use, 17-6
read consistency
managing in distributed databases, 33-19
read-only database
opening, 3-8
read-only response
two-phase commit, 32-9
read-only tablespaces
datafile headers when renaming, 8-19
delaying opening of datafiles, 8-18
making read-only, 8-16
making writable, 8-17
WORM devices, 8-18
Real Application Clusters
allocating extents for cluster, 18-6
sequence numbers and, 20-13
threads of online redo log, 6-1
rebalance
tuning, 12-9
REBALANCE NOWAIT clause, 12-20
REBALANCE WAIT clause, 12-20
rebalancing a disk group, 12-24
REBUILD PARTITION clause, 17-43, 17-44
REBUILD UNUSABLE LOCAL INDEXES
clause, 17-44
rebuilding indexes, 16-12
costs, 16-5
online, 16-13
reclaiming unused space, 14-15
RECOVER clause
STARTUP command, 3-7
recoverer process
disabling, 33-18
distributed transaction recovery, 33-18
enabling, 33-18
pending transaction table, 33-18
recoverer process (RECO), 4-14
recovering
Scheduler jobs, 28-18
recovery
creating new control files, 5-5
Recovery Manager
starting a database, 3-2
starting an instance, 3-2
recycle bin
about, 15-38
purging, 15-40
renamed objects, 15-39
restoring objects from, 15-41
viewing, 15-40
redefining tables online
See online redefinition of tables
redo log files
See also online redo logs
active (current), 6-3
archiving, 7-2
available for use, 6-2
circular use of, 6-2
clearing, 6-5, 6-14
contents of, 6-2
creating as Oracle-managed files, 11-16
creating as Oracle-managed files, example, 11-21
creating groups, 6-9
creating members, 6-9, 6-10
distributed transaction information in, 6-2
dropping groups, 6-12
dropping members, 6-12
group members, 6-4
groups, defined, 6-4
how many in redo log, 6-7
inactive, 6-3
instance recovery use of, 6-1
legal and illegal configurations, 6-5
LGWR and the, 6-2
log switches, 6-3
maximum number of members, 6-7
members, 6-4
mirrored, log switches and, 6-5
multiplexed, 6-4, 6-5
online, defined, 6-1
planning the, 6-4, 6-8
redo entries, 6-2
requirements, 6-5
storing separately from datafiles, 9-4
threads, 6-1
unavailable when database is opened, 3-5
verifying blocks, 6-13
redo logs
See also online redo log
See also redo log files
creating in Automatic Storage Management, 12-43
redo records, 6-2
LOGGING and NOLOGGING, 8-13
REDUNDANCY_LOWERED column
in V$ASM_FILE, 12-17
referential integrity
distributed database application development, 31-2
release number format, 1-8
releases, 1-8
checking the Oracle Database release number, 1-8
relocating control files, 5-4
remote connections
connecting as SYSOPER/SYSDBA, 1-11
password files, 1-18
remote data
querying, 30-23
updating, 30-23
remote procedure calls, 29-33
distributed databases and, 29-33
remote queries
distributed databases and, 29-23
remote transactions, 29-24
defined, 29-24
REMOTE_LOGIN_PASSWORDFILE initialization parameter, 1-18
REMOTE_OS_AUTHENT initialization parameter
connected user database links, 29-12
RENAME PARTITION clause, 17-44
RENAME statement, 13-16
renaming control files, 5-4
renaming files
Oracle-managed files, 11-18
REOPEN attribute
LOG_ARCHIVE_DEST_n initialization parameter, 7-13
repair table
example of building, 21-6
repairing data block corruption
DBMS_REPAIR, 21-1
repeat interval, schedule, 27-14
RESIZE clause
for single-file tablespace, 8-8
resizing disks in disk groups, 12-24
resource allocation methods, 24-3
active session pool, 24-12
ACTIVE_SESS_POOL_MTH, 24-12
CPU resource, 24-12
EMPHASIS, 24-12
limit on degree of parallelism, 24-12
PARALLEL_DEGREE_LIMIT_ABSOLUTE, 24-12
PARALLEL_DEGREE_LIMIT_MTH, 24-12
QUEUEING_MTH, 24-12
queuing resource allocation method, 24-12
ROUND-ROBIN, 24-14
resource consumer groups, 24-3
changing, 24-18
creating, 24-13
DEFAULT_CONSUMER_GROUP, 24-14, 24-20
deleting, 24-14
granting the switch privilege, 24-20
LOW_GROUP, 24-14, 24-27
managing, 24-17, 24-19
OTHER_GROUPS, 24-5, 24-11, 24-14, 24-16, 24-27
parameters, 24-13
revoking the switch privilege, 24-20
setting initial, 24-18
switching a session, 24-18
switching sessions for a user, 24-18
SYS_GROUP, 24-14, 24-27
updating, 24-14
Resource Manager
AUTO_TASK_CONSUMER_GROUP consumer group, 23-3
resource plan directives, 24-3, 24-10
deleting, 24-17
specifying, 24-14
updating, 24-16
resource plans, 24-3, 24-4
creating, 24-8
DELETE_PLAN_CASCADE, 24-13
deleting, 24-12
examples, 24-3, 24-24
parameters, 24-12
plan schemas, 24-4, 24-5, 24-10, 24-13, 24-24, 24-32
subplans, 24-4, 24-5, 24-13
SYSTEM_PLAN, 24-12, 24-14, 24-27
top plan, 24-10, 24-24
updating, 24-12
validating, 24-10
RESOURCE_MANAGER_PLAN initialization parameter, 24-24
RESTRICTED SESSION system privilege
restricted mode and, 3-6
resumable space allocation
correctable errors, 14-10
detecting suspended statements, 14-12
disabling, 14-10
distributed databases, 14-10
enabling, 14-10
example, 14-14
how resumable statements work, 14-8
naming statements, 14-12
parallel execution and, 14-10
resumable operations, 14-9
setting as default for session, 14-12
timeout interval, 14-11, 14-12
RESUMABLE_TIMEOUT initialization parameter, 14-8
setting, 14-11
retention guarantee (for undo), 10-3
RMAN. See Recovery Manager.
roles
DBA role, 1-10
obtained through database links, 29-16
ROLLBACK statement
FORCE clause, 33-8, 33-9
forcing, 33-6
rollbacks
ORA-02, 31-2
ROUND-ROBIN resource allocation method, 24-14
row movement clause for partitioned tables, 17-8
rows
listing chained or migrated, 13-4
rules
adding to a chain, 27-39
dropping from chains, 27-42
running
chains, 27-42
jobs, 27-5

S
Sample Schemas
description, 2-46
savepoints
in-doubt transactions, 33-8, 33-9
Scheduler
administering, 28-1
architecture, 26-10
configuring, 28-1
examples of using, 28-20
GATHER_STATS_JOB job, 23-2
GATHER_STATS_PROG program, 23-2
import and export, 28-20
maintenance windows, 23-1
monitoring and managing, 28-6
overview, 26-1
privileges, 28-8, 28-14
security, 28-20
statistics collection, 23-2
using, 27-1
using in RAC, 26-12
views, 28-7
Scheduler objects, naming, 27-1
schedules
altering, 27-13
creating, 27-13
dropping, 27-14
overview, 26-3
using, 27-12
schema objects
analyzing, 13-2
creating multiple objects, 13-1
defining using DBMS_METADATA package, 13-21
dependencies between, 13-16
distributed database naming conventions for, 29-15
global names, 29-15
listing by type, 13-22
name resolution in distributed databases, 29-15, 29-27
name resolution in SQL statements, 13-19
privileges to rename, 13-16
referencing with synonyms, 30-20
renaming, 13-16
validating structure, 13-3
viewing information, 13-21, 14-31
SCN. See system change number.
SCOPE clause
ALTER SYSTEM SET, 2-38
scripts, authenticating users in, 2-46
security
accessing a database, 22-1
administrator of, 22-1
centralized user management in distributed databases, 29-19
database security, 22-1
distributed databases, 29-17
establishing policies, 22-1
privileges, 22-1
remote objects, 30-20
Scheduler, 28-20
using synonyms, 30-21
Segment Advisor, 14-16
configuring Scheduler job, 14-27
invoking with Enterprise Manager, 14-18
invoking with PL/SQL, 14-19
running manually, 14-17
viewing results, 14-21
views, 14-28
SEGMENT_FIX_STATUS procedure
DBMS_REPAIR, 21-2
segments
available space, 14-31
data dictionary views for, 14-32
deallocating unused space, 14-15
displaying information on, 14-33
shrinking, 14-29
storage parameters for temporary, 14-8
SELECT statement
FOR UPDATE clause and location transparency, 30-23
selecting an instance with environment variables, 1-7
SEQUENCE_CACHE_ENTRIES parameter, 20-16
sequences
accessing, 20-14
altering, 20-13
caching sequence numbers, 20-15
creating, 20-13, 20-16
CURRVAL, 20-15
dropping, 20-16
managing, 20-12
NEXTVAL, 20-14
Oracle Real Applications Clusters and, 20-13
SERVER parameter
net service name, 30-13
server parameter file
and Automatic Storage Management, 3-3
creating, 2-37
defined, 2-36
error recovery, 2-40
exporting, 2-39
migrating to, 2-36
RMAN backup, 2-40
setting initialization parameter values, 2-38
SPFILE initialization parameter, 2-38
STARTUP command behavior, 2-36
viewing parameter settings, 2-40
server processes
archiver (ARCn), 4-13
background, 4-13
checkpoint (CKPT), 4-13
database writer (DBWn), 4-13
dedicated, 4-1
dispatcher (Dnnn), 4-14
dispatchers, 4-9
global cache service (LMS), 4-14
log writer (LGWR), 4-13
monitoring, 4-18
monitoring locks, 4-23
process monitor (PMON), 4-13
recoverer (RECO), 4-14
shared server, 4-2
system monitor (SMON), 4-13
trace files for, 4-21
server-generated alerts, 4-18
servers
role in two-phase commit, 32-4
service names
database links and, 30-10
services
application, 2-41
application, configuring, 2-42
application, deploying, 2-42
application, using, 2-43
session trees for distributed transactions
clients, 32-4
commit point site, 32-5, 32-6
database servers, 32-4
definition, 32-3
global coordinators, 32-4
local coordinators, 32-4
tracing transactions, 33-4
sessions
active, 4-17
inactive, 4-18
setting advice for transactions, 33-7
terminating, 4-16
SET TIME_ZONE clause
ALTER SESSION, 2-17
CREATE DATABASE, 2-17
SET TRANSACTION statement
naming transactions, 33-2
SGA. See system global area.
SGA_MAX_SIZE initialization parameter, 2-24
setting size, 2-25
shared database links
configuring, 30-12
creating, 30-12
dedicated servers, creating links to, 30-12
determining whether to use, 30-11
example, 29-14
shared servers, creating links to, 30-13
SHARED keyword
CREATE DATABASE LINK statement, 30-12
shared server, 4-2
configuring dispatchers, 4-7
disabling, 4-6, 4-12
initialization parameters, 4-4
interpreting trace output, 4-23
setting minimum number of servers, 4-6
trace files for processes, 4-21
views, 4-12
shared SQL
for remote and distributed statements, 29-24
shrinking segments online, 14-29
SHUTDOWN command
ABORT clause, 3-10
IMMEDIATE clause, 3-9
NORMAL clause, 3-9
TRANSACTIONAL clause, 3-10
Simple Network Management Protocol (SNMP) support
database management, 29-23
single-file tablespaces
description, 8-6
single-table hash clusters, 19-4
site autonomy
distributed databases, 29-16
SKIP_CORRUPT_BLOCKS procedure, 21-5
DBMS_REPAIR, 21-2
example, 21-9
snapshot too old error, 10-3
SORT_AREA_SIZE initialization parameter
index creation and, 16-2
space
deallocating unused, 14-31
reclaiming unused, 14-15
space allocation
resumable, 14-8
space management
data blocks, 14-4
datatypes, space requirements, 14-31
deallocating unused space, 14-15
Segment Advisor, 14-15
setting storage parameters, 14-5, 14-7
shrink segment, 14-15
SPACE_ERROR_INFO procedure, 14-12
SPFILE initialization parameter, 2-38
specifying from client machine, 3-3
SPLIT PARTITION clause, 17-26, 17-45
SQL statements
distributed databases and, 29-23
SQL*Loader
about, 1-22
SQL*Plus
starting, 3-4
starting a database, 3-1
starting an instance, 3-1
SQL_TRACE initialization parameter
trace files and, 4-21
STALE status
of redo log members, 6-13
stalled chain (Scheduler), 27-44
standby transmission mode
definition of, 7-10
Oracle Net and, 7-10
RFS processes and, 7-10
starting
Automatic Storage Management instance, 12-39
starting a database, 3-1
forcing, 3-6
Oracle Enterprise Manager, 3-2
recovery and, 3-7
Recovery Manager, 3-2
restricted mode, 3-5
SQL*Plus, 3-1
when control files unavailable, 3-5
when redo logs unavailable, 3-5
starting an instance
automatically at system startup, 3-7
database closed and mounted, 3-5
database name conflicts and, 2-21
forcing, 3-6
mounting and opening the database, 3-5
normally, 3-5
Oracle Enterprise Manager, 3-2
recovery and, 3-7
Recovery Manager, 3-2
remote instance startup, 3-7
restricted mode, 3-5
SQL*Plus, 3-1
when control files unavailable, 3-5
when redo logs unavailable, 3-5
without mounting a database, 3-5
STARTUP command
default behavior, 2-36
NOMOUNT clause, 2-7, 3-5
RECOVER clause, 3-7
starting a database, 3-1, 3-4
statement transparency in distributed databases
managing, 30-23
statistics
automatically collecting for tables, 15-16
statistics collection
using Scheduler, 23-2
STATISTICS_LEVEL initialization parameter
automatic statistics collection, 15-16
steps, chain
altering, 27-43
dropping, 27-43
stopping
jobs, 27-7
STORAGE clause
See also storage parameters
storage parameters
applicable objects, 14-5
BUFFER POOL, 14-6
INITIAL, 14-6, 15-18
INITRANS, altering, 15-17
MAXTRANS, altering, 15-17
MINEXTENTS, 14-6, 15-18
NEXT, 15-18
PCTINCREASE, 15-18
precedence of, 14-7
setting, 14-5
temporary segments, 14-8
storage subsystems
mapping files to physical devices, 9-15, 9-24
STORE IN clause, 17-11
stored procedures
managing privileges, 30-23
privileges for recompiling, 13-19
remote object security, 30-23
SUBPARTITION BY HASH clause
for composite-partitioned tables, 17-11
SUBPARTITION BY LIST clause
for composite-partitioned tables, 17-12
SUBPARTITION clause, 17-27, 17-28, 17-47
for composite-partitioned tables, 17-11, 17-12
subpartition templates, 17-13
modifying, 17-41
subpartitions, 17-1
SUBPARTITIONS clause, 17-27, 17-47
for composite-partitioned tables, 17-11
subqueries
in remote updates, 29-23
statement transparency in distributed databases, 30-23
SunSoft SunNet Manager, 29-23
SWITCH LOGFILE clause
ALTER SYSTEM statement, 6-13
synonyms, 20-18
creating, 20-17, 30-20
definition and creation, 30-20
displaying dependencies of, 13-23
dropping, 20-18
examples, 30-21
location transparency in distributed databases, 30-20
managing, 20-17, 20-18
managing privileges in remote database, 30-21
name resolution in distributed databases, 29-29
private, 20-17
public, 20-17
remote object security, 30-21
SYS account
default password, 1-9
objects owned, 1-10
privileges, 1-10
specifying password for CREATE DATABASE statement, 2-10
SYS_GROUP for Database Resource Manager, 24-14, 24-27
SYSAUX tablespace, 8-2
about, 2-12
cannot rename, 8-19
creating at database creation, 2-8, 2-12
DATAFILE clause, 2-12
monitoring occupants, 8-21
moving occupants, 8-21
SYSDBA system privilege
adding users to the password file, 1-18
connecting to database, 1-11
determining who has privileges, 1-19
granting and revoking, 1-19
SYSOPER system privilege
adding users to the password file, 1-18
connecting to database, 1-11
determining who has privileges, 1-19
granting and revoking, 1-19
SYSTEM account
default password, 1-9
objects owned, 1-10
specifying password for CREATE DATABASE, 2-10
system change numbers
coordination in a distributed database system, 32-11
in-doubt transactions, 33-8
using V$DATAFILE to view information about, 9-25
when assigned, 6-2
system global area
holds sequence number cache
initialization parameters affecting size, 2-24
specifying buffer cache sizes, 2-30
system monitor process (SMON), 4-13
system privileges
ADMINISTER_RESOURCE_MANAGER, 24-7
for external tables, 15-56
SYSTEM tablespace
cannot rename, 8-19
creating at database creation, 2-8
creating locally managed, 2-8, 2-11
restrictions on taking offline, 9-6
when created, 8-2
SYSTEM_PLAN for Database Resource Manager, 24-12, 24-14, 24-27

T
tables
about, 15-1
adding columns, 15-19
allocating extents, 15-18
altering, 15-17
altering physical attributes, 15-17
analyzing, 13-2
clustered (hash). See hash clusters
creating, 15-6
designing before creating, 15-2
dropping, 15-37
dropping columns, 15-20
estimating size, 15-5
estimating space use, 14-36
external, 15-52
Flashback Drop, 15-38
Flashback Table, 15-36
Flashback Transaction Query, 15-36
guidelines for managing, 15-2
hash clustered. See hash clusters
increasing column length, 15-19
index-organized, 15-42
index-organized, partitioning, 17-18
key-preserved, 20-7
limiting indexes on, 16-3
managing, 15-1
modifying column definition, 15-19
moving, 15-18
moving time windows in historical, 17-52
parallelizing creation, 15-4, 15-8
partitioned, 17-1
redefining online, 15-21
renaming columns, 15-19
restrictions when creating, 15-5
setting storage parameters, 15-5
shrinking, 14-29
specifying location, 15-3
statistics collection, automatic, 15-16
temporary, 15-6
truncating, 13-5
unrecoverable (NOLOGGING), 15-4
validating structure, 13-3
views, 15-57
tablespace set, 8-30
tablespaces
adding datafiles, 9-4
assigning user quotas, 8-2
automatic segment space management, 8-5
bigfile, 2-16, 8-6
checking default storage parameters, 8-41
containing XMLTypes, 8-27
creating in Automatic Storage Management, 12-42
creating undo tablespace at database creation, 2-13, 2-17
DBMS_SPACE_ADMIN package, 8-22
default temporary tablespace, creating, 2-14, 2-17
detecting and repairing defects, 8-22
diagnosing and repairing problems in locally managed, 8-22
dictionary managed, 8-8
dropping, 8-20
guidelines for managing, 8-1
listing files of, 8-41
listing free space in, 8-42
locally managed, 8-3
locally managed SYSTEM, 2-11
locally managed temporary, 8-9
location, 9-4
migrating SYSTEM to locally managed, 8-24
multiple block sizes, 8-36
on a WORM device, 8-18
Oracle-managed files, managing, 11-21, 11-23
overriding default type, 2-17
quotas, assigning, 8-2
read-only, 8-15
renaming, 8-19
setting default type, 2-16
single-file, 2-16, 2-17, 8-6, 8-8
specifying nonstandard block sizes, 8-12
SYSAUX, 8-2, 8-19
SYSAUX creation, 2-12
SYSAUX, managing, 8-20
SYSTEM, 8-2, 8-4, 8-16, 8-24
taking offline normal, 8-13
taking offline temporarily, 8-14
tempfiles in locally managed, 8-9
temporary, 8-8, 8-12
temporary bigfile, 8-9
temporary for creating large indexes, 16-9
transportable. See transportable tablespaces
undo, 10-1
using multiple, 8-1
using Oracle-managed files, 11-12
tempfiles, 8-9
creating as Oracle-managed, 11-13
dropping, 9-11
dropping Oracle-managed tempfiles, 11-18
template
dropping an Automatic Storage Management, 12-32
managing Automatic Storage Management, 12-29
modifying an Automatic Storage Management, 12-32
temporary segments
index creation and, 16-2
temporary tables
creating, 15-6
temporary tablespaces
altering, 8-10
bigfile, 8-9
creating, 8-9
groups, 8-11
renaming default, 8-19
terminating user sessions
active sessions, 4-17
identifying sessions, 4-17
inactive session, example, 4-18
inactive sessions, 4-18
threads
online redo log, 6-1
threshold-based alerts
managing with Oracle Enterprise Manager, 4-19
server-generated, 4-18
thresholds
setting alert, 14-2
time zone
files, 2-18
setting for database, 2-17
TNSNAMES.ORA file, 7-7
trace files
location of, 4-22
log writer process and, 6-5
size of, 4-22
using, 4-21, 4-22
when written, 4-22
tracing
archivelog process, 7-13
transaction control statements
distributed transactions and, 32-2
transaction failures
simulating, 33-17
transaction management
overview, 32-7
transaction processing
distributed systems, 29-23
transactions
closing database links, 31-2
distributed and two-phase commit, 29-25
in-doubt, 32-10, 32-11, 32-14, 33-5
naming distributed, 33-2, 33-7
remote, 29-24
transmitting archived redo logs, 7-9
transparent data encryption, 2-45
transportable set
See transportable tablespace set
transportable tablespace set
defined, 8-29
transportable tablespaces, 8-25
compatibility considerations, 8-28
from backup, 8-25
introduction, 8-25
limitations, 8-27
multiple block sizes, 8-36
procedure, 8-29
when to use, 8-37
wizard in Enterprise Manager, 8-26
XMLTypes in, 8-27
transporting tablespaces between databases
See transportable tablespaces
triggers
disabling, 13-8
enabling, 13-8
TRUNCATE PARTITION clause, 17-50
TRUNCATE statement, 13-6
DROP STORAGE clause, 13-7
REUSE STORAGE clause, 13-7
vs. dropping table, 15-38
TRUNCATE SUBPARTITION clause, 17-51
tuning
analyzing tables, 31-5
cost-based optimization, 31-4
two-phase commit
case study, 32-14
commit phase, 32-10, 32-17
described, 29-25
discovering problems with, 33-6
distributed transactions, 32-7
example, 32-14
forget phase, 32-11
in-doubt transactions, 32-11, 32-14
phases, 32-7
prepare phase, 32-8, 32-10
recognizing read-only nodes, 32-9
specifying commit point strength, 33-1
steps in commit phase, 32-10
tracing session tree in distributed transactions, 33-4
viewing database links, 33-2

U
undo retention, 10-3
automatic tuning of, 10-4
guaranteeing, 10-3
setting, 10-5
undo segments
in-doubt distributed transactions, 33-6
undo space management
automatic undo management mode, 10-2
described, 10-1
undo tablespace
initialization parameters for, 10-2
managing, 10-1
undo tablespaces
altering, 10-8
creating, 10-7
dropping, 10-8
monitoring, 10-11
PENDING OFFLINE status, 10-9
renaming, 8-19
specifying at database creation, 2-8, 2-13, 2-17
starting an instance using, 10-2
statistics for, 10-10
switching, 10-9
user quotas, 10-9
viewing information about, 10-10
UNDO_MANAGEMENT initialization parameter, 2-13
starting instance as AUTO, 10-2
UNDO_TABLESPACE initialization parameter
for undo tablespaces, 2-33
starting an instance using, 10-2
undropping disks in disk groups, 12-24
UNIQUE key constraints
associated indexes, 16-8
dropping associated indexes, 16-14
enabling on creation, 16-7
foreign key references when dropped, 13-12
indexes associated with, 16-7
UNRECOVERABLE DATAFILE clause
ALTER DATABASE statement, 6-14
UPDATE GLOBAL INDEX clause
of ALTER TABLE, 17-24
updates
location transparency and, 29-32
upgrading a database, 2-1
USER_DB_LINKS view, 30-16
USER_DUMP_DEST initialization parameter, 4-22
USER_RESUMABLE view, 14-12
usernames
SYS and SYSTEM, 1-9
users
assigning tablespace quotas, 8-2
in a newly created database, 2-44
limiting number of, 2-34
session, terminating, 4-18
utilities
export, 1-22
for the database administrator, 1-22
import, 1-22
SQL*Loader, 1-22
UTLCHAIN.SQL script
listing chained rows, 13-4
UTLCHN1.SQL script
listing chained rows, 13-4
UTLLOCKT.SQL script, 4-23

V
V$ARCHIVE view, 7-14
V$ARCHIVE_DEST view
obtaining destination status, 7-8
V$BLOCKING_QUIESCE view, 3-12, 24-32
V$DATABASE view, 7-15
V$DBLINK view, 30-17
V$DISPATCHER view
monitoring shared server dispatchers, 4-10
V$DISPATCHER_RATE view
monitoring shared server dispatchers, 4-10
V$INSTANCE view
for database quiesce state, 3-13
V$LOG view, 7-14
displaying archiving status, 7-14
online redo log, 6-15
viewing redo data with, 6-15
V$LOG_HISTORY view
viewing redo data, 6-15
V$LOGFILE view
log file status, 6-13
viewing redo data, 6-15
V$OBJECT_USAGE view
for monitoring index usage, 16-13
V$PWFILE_USERS view, 1-19
V$QUEUE view
monitoring shared server dispatchers, 4-10
V$ROLLSTAT view
undo segments, 10-10
V$SESSION view, 4-18
V$SYSAUX_OCCUPANTS view
occupants of SYSAUX tablespace, 8-21
V$THREAD view, 6-15
V$TIMEZONE_NAMES view
time zone table information, 2-18
V$TRANSACTION view
undo tablespaces information, 10-10
V$UNDOSTAT view
statistics for undo tablespaces, 10-10
V$VERSION view, 1-9
VALIDATE STRUCTURE clause
of ANALYZE statement, 13-3
VALIDATE STRUCTURE ONLINE clause
of ANALYZE statement, 13-3
varrays
storage parameters for, 14-7
verifying blocks
redo log files, 6-13
viewing
alerts, 14-3
views, 6-15
creating, 20-2
creating with errors, 20-3
Database Resource Manager, 24-31
DATABASE_PROPERTIES, 2-14
DBA_2PC_NEIGHBORS, 33-4
DBA_2PC_PENDING, 33-2
DBA_DB_LINKS, 30-16
DBA_RESUMABLE, 14-12
displaying dependencies of, 13-23
dropping, 20-12
file mapping views, 9-22
for monitoring datafiles, 9-25
FOR UPDATE clause and, 20-2
invalid, 20-5
join. See join views.
location transparency in distributed databases, 30-19
managing, 20-1, 20-4
managing privileges with, 30-20
name resolution in distributed databases, 29-29
ORDER BY clause and, 20-2
remote object security, 30-20
restrictions, 20-5
tables, 15-57
tablespace information, 8-40
USER_RESUMABLE, 14-12
using, 20-4
V$ARCHIVE, 7-14
V$ARCHIVE_DEST, 7-8
V$DATABASE, 7-15
V$LOG, 6-15, 7-14
V$LOG_HISTORY, 6-15
V$LOGFILE, 6-13, 6-15
V$OBJECT_USAGE, 16-13
wildcards in, 20-3
WITH CHECK OPTION, 20-2

W
WAIT keyword, in REBALANCE clause, 12-20
wildcards
in views, 20-3
window groups
creating, 27-28
disabling, 27-29
dropping, 27-28
dropping a member from, 27-29
enabling, 27-29
overview, 26-9
using, 27-27
window logs, 27-20
windows (Scheduler)
altering, 27-22
closing, 27-23
creating, 27-21
disabling, 27-24
dropping, 27-23
enabling, 27-24
opening, 27-22
overlapping, 27-25
overview, 26-8
using, 27-19
WORM devices
and read-only tablespaces, 8-18
WRH$_UNDOSTAT view, 10-11

X
XML DB
virtual folder for Automatic Storage Management, 12-46
XMLTypes
in transportable tablespaces, 8-27