Oracle Database Administrator's Guide, 10g Release 2 (10.2)
Oracle® Database Administrator's Guide, 10g Release 2 (10.2)

B14231-02

May 2006

Copyright © 2001, 2006, Oracle. All rights reserved.

Primary Author: Steve Fogel

Contributing Author: Paul Lane

Contributors: David Austin, Prasad Bagal, Cathy Baird, Mark Bauer, Eric Belden, Bill Bridge, Allen Brumm, Sudip Datta, Mark Dilman, Jacco Draaijer, Harvey Eneman, Amit Ganesh, Vira Goorah, Carolyn Gray, Joan Gregoire, Daniela Hansell, Lilian Hobbs, Wei Huang, Robert Jenkins, Bhushan Khaladkar, Balaji Krishnan, Vasudha Krishnaswamy, Sushil Kumar, Bill Lee, Sue K. Lee, Yunrui Li, Ilya Listvinsky, Bryn Llewellyn, Rich Long, Catherine Luu, Scott Lynn, Raghu Mani, Colin McGregor, Mughees Minhas, Valarie Moore, Niloy Mukherjee, Sujatha Muthulingam, Gary Ngai, Waleed Ojeil, Rod Payne, Hanlin Qian, Ananth Raghavan, Ravi Ramkissoon, Ann Rhee, Tom Sepez, Vikram Shukla, Bipul Sinha, Wayne Smith, Jags Srinivasan, Deborah Steiner, Janet Stern, Michael Stewart, Mahesh Subramaniam, Anh-Tuan Tran, Alex Tsukerman, Kothanda Umamageswaran, Eric Voss, Daniel M. Wong, Wanli Yang, Wei Zhang

The Programs (which include both the software and documentation) contain proprietary information; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited.

The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose.

If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the Programs, including documentation and technical data, shall be subject to the licensing restrictions set forth in the applicable Oracle license agreement, and, to the extent applicable, the additional rights set forth in FAR 52.227-19, Commercial Computer Software--Restricted Rights (June 1987). Oracle USA, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup, redundancy and other measures to ensure the safe use of such applications if the Programs are used for such purposes, and we disclaim liability for any damages caused by such use of the Programs.

Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners. The Programs may provide links to Web sites and access to content, products, and services from third parties. Oracle is not responsible for the availability of, or any content provided on, third-party Web sites. You bear all risks associated with the use of such content. If you choose to purchase any products or services from a third party, the relationship is directly between you and the third party. Oracle is not responsible for: (a) the quality of third-party products or services; or (b) fulfilling any of the terms of the agreement with the third party, including delivery of products or services and warranty obligations related to purchased products or services. Oracle is not responsible for any loss or damage of any sort that you may incur from dealing with any third party. Contents Send Us Your Comments .................................................................................................................... xxv Preface ............................................................................................................................................................ xxvii Audience.................................................................................................................................................. Documentation Accessibility ................................................................................................................ Structure ................................................................................................................................................. Related Documents ................................................................................................................................. Conventions ............................................................................................................................................ xxvii xxvii xxviii xxxi xxxii What's New in Oracle Database Administrator's Guide? .............................................. xxxvii Oracle Database 10g Release 2 (10.2) New Features in the Administrator's Guide.................... xxxvii Oracle Database 10g Release 1 (10.1) New Features in the Administrator's Guide.......................... xli Part I 1 Basic Database Administration Overview of Administering an Oracle Database Types of Oracle Database Users ............................................................................................................ Database Administrators .................................................................................................................. Security Officers ................................................................................................................................. Network Administrators................................................................................................................... Application Developers..................................................................................................................... Application Administrators.............................................................................................................. Database Users ................................................................................................................................... Tasks of a Database Administrator ....................................................................................................... 
Task 1: Evaluate the Database Server Hardware .......................................................................... Task 2: Install the Oracle Database Software ................................................................................. Task 3: Plan the Database.................................................................................................................. Task 4: Create and Open the Database ........................................................................................... Task 5: Back Up the Database........................................................................................................... Task 6: Enroll System Users.............................................................................................................. Task 7: Implement the Database Design......................................................................................... Task 8: Back Up the Fully Functional Database ............................................................................ Task 9: Tune Database Performance ............................................................................................... Task 10: Download and Install Patches .......................................................................................... Task 11: Roll Out to Additional Hosts ............................................................................................ 1-1 1-2 1-2 1-2 1-2 1-3 1-3 1-3 1-4 1-4 1-4 1-5 1-5 1-5 1-5 1-5 1-6 1-6 1-6 iii Selecting an Instance with Environment Variables........................................................................... 1-7 Identifying Your Oracle Database Software Release ........................................................................ 1-7 Release Number Format.................................................................................................................... 1-8 Checking Your Current Release Number....................................................................................... 1-8 Database Administrator Security and Privileges............................................................................... 1-9 The Database Administrator's Operating System Account ......................................................... 1-9 Database Administrator Usernames ............................................................................................... 1-9 Database Administrator Authentication .......................................................................................... 1-11 Administrative Privileges .............................................................................................................. 1-11 Selecting an Authentication Method............................................................................................ 1-13 Using Operating System Authentication..................................................................................... 1-14 Using Password File Authentication............................................................................................ 1-15 Creating and Maintaining a Password File...................................................................................... 1-16 Using ORAPWD.............................................................................................................................. 1-16 Setting REMOTE_LOGIN_ PASSWORDFILE ............................................................................ 
1-18 Adding Users to a Password File.................................................................................................. 1-18 Maintaining a Password File ......................................................................................................... 1-19 Server Manageability ........................................................................................................................... 1-20 Automatic Manageability Features .............................................................................................. 1-20 Data Utilities .................................................................................................................................... 1-22 2 Creating an Oracle Database Deciding How to Create an Oracle Database ..................................................................................... 2-1 Manually Creating an Oracle Database ............................................................................................... 2-2 Considerations Before Creating the Database ............................................................................... 2-2 Creating the Database........................................................................................................................ 2-4 Understanding the CREATE DATABASE Statement ..................................................................... 2-10 Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM .................... 2-10 Creating a Locally Managed SYSTEM Tablespace..................................................................... 2-11 Creating the SYSAUX Tablespace ................................................................................................ 2-12 Using Automatic Undo Management: Creating an Undo Tablespace.................................... 2-13 Creating a Default Permanent Tablespace .................................................................................. 2-14 Creating a Default Temporary Tablespace.................................................................................. 2-14 Specifying Oracle-Managed Files at Database Creation ........................................................... 2-15 Supporting Bigfile Tablespaces During Database Creation...................................................... 2-16 Specifying the Database Time Zone and Time Zone File.......................................................... 2-17 Specifying FORCE LOGGING Mode ........................................................................................... 2-18 Understanding Initialization Parameters ......................................................................................... 2-19 Determining the Global Database Name..................................................................................... 2-21 Specifying a Flash Recovery Area ................................................................................................ 2-22 Specifying Control Files ................................................................................................................ 2-22 Specifying Database Block Sizes ................................................................................................... 2-23 Managing the System Global Area (SGA) .................................................................................. 2-24 Specifying the Maximum Number of Processes......................................................................... 
2-33 Specifying the Method of Undo Space Management ................................................................ 2-33 The COMPATIBLE Initialization Parameter and Irreversible Compatibility ........................ 2-34 Setting the License Parameter ....................................................................................................... 2-34 iv Troubleshooting Database Creation .................................................................................................. Dropping a Database ............................................................................................................................ Managing Initialization Parameters Using a Server Parameter File ........................................... What Is a Server Parameter File? .................................................................................................. Migrating to a Server Parameter File ........................................................................................... Creating a Server Parameter File .................................................................................................. The SPFILE Initialization Parameter ............................................................................................ Using ALTER SYSTEM to Change Initialization Parameter Values ....................................... Exporting the Server Parameter File ............................................................................................ Backing Up the Server Parameter File ......................................................................................... Errors and Recovery for the Server Parameter File.................................................................... Viewing Parameter Settings .......................................................................................................... Defining Application Services for Oracle Database 10g .............................................................. Deploying Services ......................................................................................................................... Configuring Services ..................................................................................................................... Using Services ................................................................................................................................. Considerations After Creating a Database....................................................................................... Some Security Considerations....................................................................................................... Enabling Transparent Data Encryption ....................................................................................... Creating a Secure External Password Store ................................................................................ Installing the Oracle Database Sample Schemas ........................................................................ Viewing Information About the Database ....................................................................................... 3 2-35 2-35 2-35 2-36 2-36 2-37 2-38 2-38 2-39 2-40 2-40 2-40 2-41 2-42 2-42 2-43 2-44 2-44 2-45 2-46 2-46 2-46 Starting Up and Shutting Down Starting Up a Database............................................................................................................................ 
3-1 Options for Starting Up a Database................................................................................................. 3-1 Understanding Initialization Parameter Files................................................................................ 3-2 Preparing to Start Up an Instance.................................................................................................... 3-4 Starting Up an Instance ..................................................................................................................... 3-4 Altering Database Availability .............................................................................................................. 3-7 Mounting a Database to an Instance ............................................................................................... 3-7 Opening a Closed Database.............................................................................................................. 3-7 Opening a Database in Read-Only Mode....................................................................................... 3-8 Restricting Access to an Open Database......................................................................................... 3-8 Shutting Down a Database..................................................................................................................... 3-9 Shutting Down with the NORMAL Clause ................................................................................... 3-9 Shutting Down with the IMMEDIATE Clause .............................................................................. 3-9 Shutting Down with the TRANSACTIONAL Clause ............................................................... 3-10 Shutting Down with the ABORT Clause ..................................................................................... 3-10 Shutdown Timeout ......................................................................................................................... 3-11 Quiescing a Database ........................................................................................................................... 3-11 Placing a Database into a Quiesced State .................................................................................... 3-12 Restoring the System to Normal Operation ................................................................................ 3-12 Viewing the Quiesce State of an Instance .................................................................................... 3-13 Suspending and Resuming a Database ............................................................................................ 3-13 v 4 Managing Oracle Database Processes About Dedicated and Shared Server Processes.................................................................................. 4-1 Dedicated Server Processes .............................................................................................................. 4-1 Shared Server Processes .................................................................................................................... 4-2 Configuring Oracle Database for Shared Server ............................................................................... 4-4 Initialization Parameters for Shared Server ................................................................................... 4-4 Enabling Shared Server ..................................................................................................................... 
4-5 Configuring Dispatchers ................................................................................................................... 4-7 Monitoring Shared Server.............................................................................................................. 4-12 About Oracle Database Background Processes............................................................................... 4-13 Managing Processes for Parallel SQL Execution ............................................................................ 4-14 About Parallel Execution Servers ................................................................................................. 4-15 Altering Parallel Execution for a Session..................................................................................... 4-15 Managing Processes for External Procedures .................................................................................. 4-16 Terminating Sessions............................................................................................................................ 4-16 Identifying Which Session to Terminate...................................................................................... 4-17 Terminating an Active Session...................................................................................................... 4-17 Terminating an Inactive Session ................................................................................................... 4-18 Monitoring the Operation of Your Database ................................................................................... 4-18 Server-Generated Alerts................................................................................................................. 4-18 Monitoring the Database Using Trace Files and the Alert Log ................................................ 4-21 Monitoring Locks ............................................................................................................................ 4-23 Monitoring Wait Events ................................................................................................................. 4-24 Process and Session Views ............................................................................................................ 4-24 Part II 5 Oracle Database Structure and Storage Managing Control Files What Is a Control File? ............................................................................................................................ Guidelines for Control Files .................................................................................................................. Provide Filenames for the Control Files ......................................................................................... Multiplex Control Files on Different Disks .................................................................................... Back Up Control Files ........................................................................................................................ Manage the Size of Control Files ..................................................................................................... Creating Control Files ............................................................................................................................. Creating Initial Control Files ............................................................................................................ 
Creating Additional Copies, Renaming, and Relocating Control Files .................................... Creating New Control Files .............................................................................................................. Troubleshooting After Creating Control Files.................................................................................... Checking for Missing or Extra Files ................................................................................................ Handling Errors During CREATE CONTROLFILE ..................................................................... Backing Up Control Files........................................................................................................................ Recovering a Control File Using a Current Copy .............................................................................. Recovering from Control File Corruption Using a Control File Copy....................................... Recovering from Permanent Media Failure Using a Control File Copy.................................... Dropping Control Files ........................................................................................................................... vi 5-1 5-2 5-2 5-2 5-3 5-3 5-3 5-3 5-4 5-4 5-7 5-7 5-7 5-7 5-8 5-8 5-8 5-9 Displaying Control File Information ................................................................................................... 5-9 6 Managing the Redo Log What Is the Redo Log?............................................................................................................................. 6-1 Redo Threads ...................................................................................................................................... 6-1 Redo Log Contents............................................................................................................................. 6-2 How Oracle Database Writes to the Redo Log .............................................................................. 6-2 Planning the Redo Log ............................................................................................................................ 6-4 Multiplexing Redo Log Files ............................................................................................................ 6-4 Placing Redo Log Members on Different Disks............................................................................. 6-6 Setting the Size of Redo Log Members ........................................................................................... 6-7 Choosing the Number of Redo Log Files ....................................................................................... 6-7 Controlling Archive Lag .................................................................................................................. 6-8 Creating Redo Log Groups and Members........................................................................................... 6-9 Creating Redo Log Groups ............................................................................................................... 6-9 Creating Redo Log Members......................................................................................................... 6-10 Relocating and Renaming Redo Log Members .............................................................................. 6-10 Dropping Redo Log Groups and Members ..................................................................................... 
6-12 Dropping Log Groups .................................................................................................................... 6-12 Dropping Redo Log Members....................................................................................................... 6-12 Forcing Log Switches............................................................................................................................ 6-13 Verifying Blocks in Redo Log Files ................................................................................................... 6-13 Clearing a Redo Log File...................................................................................................................... 6-14 Viewing Redo Log Information ......................................................................................................... 6-15 7 Managing Archived Redo Logs What Is the Archived Redo Log?........................................................................................................... 7-1 Choosing Between NOARCHIVELOG and ARCHIVELOG Mode .............................................. 7-2 Running a Database in NOARCHIVELOG Mode ........................................................................ 7-2 Running a Database in ARCHIVELOG Mode ............................................................................... 7-3 Controlling Archiving ............................................................................................................................. 7-4 Setting the Initial Database Archiving Mode................................................................................. 7-4 Changing the Database Archiving Mode ...................................................................................... 7-4 Performing Manual Archiving......................................................................................................... 7-5 Adjusting the Number of Archiver Processes ............................................................................... 7-5 Specifying the Archive Destination ..................................................................................................... 7-6 Specifying Archive Destinations...................................................................................................... 7-6 Understanding Archive Destination Status ................................................................................... 7-8 Specifying the Mode of Log Transmission.......................................................................................... 7-9 Normal Transmission Mode............................................................................................................. 7-9 Standby Transmission Mode ........................................................................................................ 7-10 Managing Archive Destination Failure ............................................................................................ 7-11 Specifying the Minimum Number of Successful Destinations................................................. 7-11 Rearchiving to a Failed Destination ............................................................................................. 7-13 Controlling Trace Output Generated by the Archivelog Process ................................................ 7-13 Viewing Information About the Archived Redo Log .................................................................... 
7-14 vii Dynamic Performance Views ........................................................................................................ 7-14 The ARCHIVE LOG LIST Command........................................................................................... 7-15 8 Managing Tablespaces Guidelines for Managing Tablespaces................................................................................................. 8-1 Using Multiple Tablespaces.............................................................................................................. 8-1 Assigning Tablespace Quotas to Users ........................................................................................... 8-2 Creating Tablespaces ............................................................................................................................... 8-2 Locally Managed Tablespaces.......................................................................................................... 8-3 Bigfile Tablespaces ............................................................................................................................. 8-6 Temporary Tablespaces..................................................................................................................... 8-8 Multiple Temporary Tablespaces: Using Tablespace Groups .................................................. 8-11 Specifying Nonstandard Block Sizes for Tablespaces ................................................................... 8-12 Controlling the Writing of Redo Records......................................................................................... 8-13 Altering Tablespace Availability ....................................................................................................... 8-13 Taking Tablespaces Offline............................................................................................................ 8-13 Bringing Tablespaces Online ......................................................................................................... 8-15 Using Read-Only Tablespaces ............................................................................................................ 8-15 Making a Tablespace Read-Only .................................................................................................. 8-16 Making a Read-Only Tablespace Writable ................................................................................. 8-17 Creating a Read-Only Tablespace on a WORM Device ............................................................ 8-18 Delaying the Opening of Datafiles in Read-Only Tablespaces ................................................ 8-18 Renaming Tablespaces ......................................................................................................................... 8-19 Dropping Tablespaces ......................................................................................................................... 8-20 Managing the SYSAUX Tablespace ................................................................................................... 8-20 Monitoring Occupants of the SYSAUX Tablespace ................................................................... 8-21 Moving Occupants Out Of or Into the SYSAUX Tablespace.................................................... 8-21 Controlling the Size of the SYSAUX Tablespace ........................................................................ 
8-21 Diagnosing and Repairing Locally Managed Tablespace Problems........................................... 8-22 Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap).............. 8-23 Scenario 2: Dropping a Corrupted Segment ............................................................................... 8-23 Scenario 3: Fixing Bitmap Where Overlap is Reported ............................................................. 8-23 Scenario 4: Correcting Media Corruption of Bitmap Blocks..................................................... 8-24 Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace........ 8-24 Migrating the SYSTEM Tablespace to a Locally Managed Tablespace ...................................... 8-24 Transporting Tablespaces Between Databases ................................................................................ 8-25 Introduction to Transportable Tablespaces................................................................................. 8-25 About Transporting Tablespaces Across Platforms................................................................... 8-26 Limitations on Transportable Tablespace Use............................................................................ 8-27 Compatibility Considerations for Transportable Tablespaces ................................................. 8-28 Transporting Tablespaces Between Databases: A Procedure and Example .......................... 8-29 Using Transportable Tablespaces: Scenarios .............................................................................. 8-37 Moving Databases Across Platforms Using Transportable Tablespaces ................................ 8-40 Viewing Tablespace Information ....................................................................................................... 8-40 Example 1: Listing Tablespaces and Default Storage Parameters ........................................... 8-41 Example 2: Listing the Datafiles and Associated Tablespaces of a Database......................... 8-41 Example 3: Displaying Statistics for Free Space (Extents) of Each Tablespace ...................... 8-42 viii 9 Managing Datafiles and Tempfiles Guidelines for Managing Datafiles...................................................................................................... 9-1 Determine the Number of Datafiles ................................................................................................ 9-2 Determine the Size of Datafiles ........................................................................................................ 9-3 Place Datafiles Appropriately .......................................................................................................... 9-4 Store Datafiles Separate from Redo Log Files................................................................................ 9-4 Creating Datafiles and Adding Datafiles to a Tablespace .............................................................. 9-4 Changing Datafile Size ........................................................................................................................... 9-5 Enabling and Disabling Automatic Extension for a Datafile....................................................... 9-5 Manually Resizing a Datafile ........................................................................................................... 
9-6 Altering Datafile Availability ................................................................................................................ 9-6 Bringing Datafiles Online or Taking Offline in ARCHIVELOG Mode...................................... 9-7 Taking Datafiles Offline in NOARCHIVELOG Mode.................................................................. 9-7 Altering the Availability of All Datafiles or Tempfiles in a Tablespace .................................... 9-8 Renaming and Relocating Datafiles..................................................................................................... 9-8 Procedures for Renaming and Relocating Datafiles in a Single Tablespace ............................. 9-9 Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces ......................... 9-10 Dropping Datafiles ............................................................................................................................... 9-11 Verifying Data Blocks in Datafiles .................................................................................................... 9-12 Copying Files Using the Database Server ........................................................................................ 9-12 Copying a File on a Local File System.......................................................................................... 9-13 Third-Party File Transfer ............................................................................................................... 9-14 File Transfer and the DBMS_SCHEDULER Package................................................................. 9-14 Advanced File Transfer Mechanisms........................................................................................... 9-15 Mapping Files to Physical Devices .................................................................................................... 9-15 Overview of Oracle Database File Mapping Interface .............................................................. 9-16 How the Oracle Database File Mapping Interface Works......................................................... 9-16 Using the Oracle Database File Mapping Interface.................................................................... 9-20 File Mapping Examples.................................................................................................................. 9-23 Viewing Datafile Information ...................................................................................................... 9-25 10 Managing the Undo Tablespace What Is Undo?........................................................................................................................................ Introduction to Automatic Undo Management .............................................................................. Overview of Automatic Undo Management .............................................................................. Undo Retention ............................................................................................................................... Setting the Undo Retention Period.................................................................................................... Sizing the Undo Tablespace ................................................................................................................ Using Auto-Extensible Tablespaces ............................................................................................. 
Sizing Fixed-Size Undo Tablespaces ............................................................................................ Managing Undo Tablespaces .............................................................................................................. Creating an Undo Tablespace ....................................................................................................... Altering an Undo Tablespace ........................................................................................................ Dropping an Undo Tablespace ..................................................................................................... Switching Undo Tablespaces......................................................................................................... Establishing User Quotas for Undo Space................................................................................... 10-1 10-2 10-2 10-3 10-5 10-5 10-5 10-5 10-6 10-7 10-8 10-8 10-9 10-9 ix Migrating to Automatic Undo Management.................................................................................. 10-10 Viewing Information About Undo .................................................................................................. 10-10 Part III 11 Automated File and Storage Management Using Oracle-Managed Files What Are Oracle-Managed Files?....................................................................................................... Who Can Use Oracle-Managed Files?.......................................................................................... Benefits of Using Oracle-Managed Files...................................................................................... Oracle-Managed Files and Existing Functionality ..................................................................... Enabling the Creation and Use of Oracle-Managed Files ............................................................. Setting the DB_CREATE_FILE_DEST Initialization Parameter............................................... Setting the DB_RECOVERY_FILE_DEST Parameter................................................................. Setting the DB_CREATE_ONLINE_LOG_DEST_n Initialization Parameter ........................ Creating Oracle-Managed Files .......................................................................................................... How Oracle-Managed Files Are Named ..................................................................................... Creating Oracle-Managed Files at Database Creation............................................................... Creating Datafiles for Tablespaces Using Oracle-Managed Files .......................................... Creating Tempfiles for Temporary Tablespaces Using Oracle-Managed Files ................... Creating Control Files Using Oracle-Managed Files ............................................................... Creating Redo Log Files Using Oracle-Managed Files............................................................ Creating Archived Logs Using Oracle-Managed Files ............................................................ Behavior of Oracle-Managed Files................................................................................................... Dropping Datafiles and Tempfiles ............................................................................................. Dropping Redo Log Files ............................................................................................................. 
Renaming Files .............................................................................................................................. Managing Standby Databases .................................................................................................... Scenarios for Using Oracle-Managed Files .................................................................................... Scenario 1: Create and Manage a Database with Multiplexed Redo Logs ........................... Scenario 2: Create and Manage a Database with Database and Flash Recovery Areas ..... Scenario 3: Adding Oracle-Managed Files to an Existing Database...................................... 12 Using Automatic Storage Management What Is Automatic Storage Management? ....................................................................................... Overview of the Components of Automatic Storage Management ............................................ Administering an Automatic Storage Management Instance ...................................................... Installing ASM ................................................................................................................................. Authentication for Accessing an ASM Instance ......................................................................... Setting Initialization Parameters for an ASM Instance.............................................................. Starting Up an ASM Instance ...................................................................................................... Shutting Down an ASM Instance................................................................................................ Administering Automatic Storage Management Disk Groups ................................................. Considerations and Guidelines for Configuring Disk Groups .............................................. Creating a Disk Group.................................................................................................................. Altering the Disk Membership of a Disk Group ...................................................................... Mounting and Dismounting Disk Groups ................................................................................ x 11-1 11-2 11-3 11-3 11-3 11-4 11-5 11-5 11-5 11-6 11-7 11-12 11-13 11-14 11-16 11-17 11-17 11-18 11-18 11-18 11-18 11-18 11-19 11-22 11-23 12-1 12-3 12-4 12-5 12-6 12-7 12-10 12-13 12-14 12-14 12-19 12-20 12-25 Checking Internal Consistency of Disk Group Metadata ...................................................... Dropping Disk Groups................................................................................................................. Managing Disk Group Directories ............................................................................................ Managing Alias Names for ASM Filenames ............................................................................. Dropping Files and Associated Aliases from a Disk Group................................................... Managing Disk Group Templates .............................................................................................. Using Automatic Storage Management in the Database............................................................. What Types of Files Does ASM Support?.................................................................................. 
About ASM Filenames.................................................................................................................. Starting the ASM and Database Instances................................................................................. Creating and Referencing ASM Files in the Database ............................................................. Creating a Database in ASM........................................................................................................ Creating Tablespaces in ASM...................................................................................................... Creating Redo Logs in ASM ........................................................................................................ Creating a Control File in ASM................................................................................................... Creating Archive Log Files in ASM............................................................................................ Recovery Manager (RMAN) and ASM ...................................................................................... Migrating a Database to Automatic Storage Management ......................................................... Accessing Automatic Storage Management Files with the XML DB Virtual Folder ............. Inside /sys/asm ............................................................................................................................ Sample FTP Session ...................................................................................................................... Viewing Information About Automatic Storage Management.................................................. Part IV 13 12-26 12-26 12-26 12-28 12-29 12-29 12-32 12-33 12-34 12-39 12-40 12-42 12-42 12-43 12-44 12-45 12-45 12-46 12-46 12-47 12-48 12-48 Schema Objects Managing Schema Objects Creating Multiple Tables and Views in a Single Operation......................................................... Analyzing Tables, Indexes, and Clusters.......................................................................................... Using DBMS_STATS to Collect Table and Index Statistics....................................................... Validating Tables, Indexes, Clusters, and Materialized Views ................................................ Listing Chained Rows of Tables and Clusters ............................................................................ Truncating Tables and Clusters .......................................................................................................... Using DELETE................................................................................................................................. Using DROP and CREATE ............................................................................................................ Using TRUNCATE.......................................................................................................................... Enabling and Disabling Triggers ....................................................................................................... Enabling Triggers ............................................................................................................................ Disabling Triggers........................................................................................................................... 
Managing Integrity Constraints ......................................................................................................... Integrity Constraint States ............................................................................................................ Setting Integrity Constraints Upon Definition.......................................................................... Modifying, Renaming, or Dropping Existing Integrity Constraints ..................................... Deferring Constraint Checks ....................................................................................................... Reporting Constraint Exceptions ................................................................................................ Viewing Constraint Information................................................................................................. Renaming Schema Objects................................................................................................................ 13-1 13-2 13-2 13-3 13-4 13-5 13-6 13-6 13-6 13-7 13-8 13-8 13-9 13-9 13-11 13-12 13-13 13-14 13-15 13-16 xi Managing Object Dependencies ...................................................................................................... About Dependencies Among Schema Objects.......................................................................... Manually Recompiling Views ..................................................................................................... Manually Recompiling Procedures and Functions .................................................................. Manually Recompiling Packages................................................................................................ Managing Object Name Resolution ................................................................................................ Switching to a Different Schema ..................................................................................................... Displaying Information About Schema Objects .......................................................................... Using a PL/SQL Package to Display Information About Schema Objects .......................... Using Views to Display Information About Schema Objects................................................. 14 Managing Space for Schema Objects Managing Tablespace Alerts ............................................................................................................... Setting Alert Thresholds ................................................................................................................ Viewing Alerts ................................................................................................................................. Limitations ....................................................................................................................................... Managing Space in Data Blocks......................................................................................................... Specifying the INITRANS Parameter........................................................................................... Managing Storage Parameters ............................................................................................................ Identifying the Storage Parameters .............................................................................................. 
Specifying Storage Parameters at Object Creation..................................................................... Setting Storage Parameters for Clusters ...................................................................................... Setting Storage Parameters for Partitioned Tables..................................................................... Setting Storage Parameters for Index Segments ......................................................................... Setting Storage Parameters for LOBs, Varrays, and Nested Tables ........................................ Changing Values of Storage Parameters ..................................................................................... Understanding Precedence in Storage Parameters .................................................................... Managing Resumable Space Allocation ........................................................................................... Resumable Space Allocation Overview ....................................................................................... Enabling and Disabling Resumable Space Allocation............................................................. Using a LOGON Trigger to Set Default Resumable Mode ..................................................... Detecting Suspended Statements................................................................................................ Operation-Suspended Alert......................................................................................................... Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger ............ Reclaiming Wasted Space .................................................................................................................. Understanding Reclaimable Unused Space .............................................................................. Using the Segment Advisor......................................................................................................... Shrinking Database Segments Online ........................................................................................ Deallocating Unused Space ......................................................................................................... Understanding Space Usage of Datatypes ..................................................................................... Displaying Information About Space Usage for Schema Objects ............................................ Using PL/SQL Packages to Display Information About Schema Object Space Usage ...... Using Views to Display Information About Space Usage in Schema Objects ..................... Capacity Planning for Database Objects ....................................................................................... Estimating the Space Use of a Table .......................................................................................... Estimating the Space Use of an Index ....................................................................................... Obtaining Object Growth Trends .............................................................................................. 
15 Managing Tables
    About Tables
    Guidelines for Managing Tables
        Design Tables Before Creating Them
        Consider Your Options for the Type of Table to Create
        Specify the Location of Each Table
        Consider Parallelizing Table Creation
        Consider Using NOLOGGING When Creating Tables
        Consider Using Table Compression when Creating Tables
        Estimate Table Size and Plan Accordingly
        Restrictions to Consider When Creating Tables
    Creating Tables
        Creating a Table
        Creating a Temporary Table
        Parallelizing Table Creation
    Loading Tables
        Inserting Data with DML Error Logging
        Inserting Data Into Tables Using Direct-Path INSERT
    Automatically Collecting Statistics on Tables
    Altering Tables
        Reasons for Using the ALTER TABLE Statement
        Altering Physical Attributes of a Table
        Moving a Table to a New Segment or Tablespace
        Manually Allocating Storage for a Table
        Modifying an Existing Column Definition
        Adding Table Columns
        Renaming Table Columns
        Dropping Table Columns
    Redefining Tables Online
        Features of Online Table Redefinition
        Performing Online Redefinition with DBMS_REDEFINITION
        Results of the Redefinition Process
        Performing Intermediate Synchronization
        Aborting Online Table Redefinition and Cleaning Up After Errors
        Restrictions for Online Redefinition of Tables
        Online Redefinition of a Single Partition
        Online Table Redefinition Examples
        Privileges Required for the DBMS_REDEFINITION Package
    Auditing Table Changes Using Flashback Transaction Query
    Recovering Tables Using the Flashback Table Feature
    Dropping Tables
    Using Flashback Drop and Managing the Recycle Bin
        What Is the Recycle Bin?
        Enabling and Disabling the Recycle Bin
        Viewing and Querying Objects in the Recycle Bin
        Purging Objects in the Recycle Bin
        Restoring Tables from the Recycle Bin
    Managing Index-Organized Tables
        What Are Index-Organized Tables?
        Creating Index-Organized Tables
        Maintaining Index-Organized Tables
        Creating Secondary Indexes on Index-Organized Tables
        Analyzing Index-Organized Tables
        Using the ORDER BY Clause with Index-Organized Tables
        Converting Index-Organized Tables to Regular Tables
    Managing External Tables
        Creating External Tables
        Altering External Tables
        Dropping External Tables
        System and Object Privileges for External Tables
    Viewing Information About Tables

16 Managing Indexes
    About Indexes
    Guidelines for Managing Indexes
        Create Indexes After Inserting Table Data
        Index the Correct Tables and Columns
        Order Index Columns for Performance
        Limit the Number of Indexes for Each Table
        Drop Indexes That Are No Longer Required
        Estimate Index Size and Set Storage Parameters
        Specify the Tablespace for Each Index
        Consider Parallelizing Index Creation
        Consider Creating Indexes with NOLOGGING
        Consider Costs and Benefits of Coalescing or Rebuilding Indexes
        Consider Cost Before Disabling or Dropping Constraints
    Creating Indexes
        Creating an Index Explicitly
        Creating a Unique Index Explicitly
        Creating an Index Associated with a Constraint
        Collecting Incidental Statistics when Creating an Index
        Creating a Large Index
        Creating an Index Online
        Creating a Function-Based Index
        Creating a Key-Compressed Index
    Altering Indexes
        Altering Storage Characteristics of an Index
        Rebuilding an Existing Index
        Monitoring Index Usage
    Monitoring Space Use of Indexes
    Dropping Indexes
    Viewing Index Information

17 Managing Partitioned Tables and Indexes
    About Partitioned Tables and Indexes
    Partitioning Methods
        When to Use Range Partitioning
        When to Use Hash Partitioning
        When to Use List Partitioning
        When to Use Composite Range-Hash Partitioning
        When to Use Composite Range-List Partitioning
    Creating Partitioned Tables
        Creating Range-Partitioned Tables and Global Indexes
        Creating Hash-Partitioned Tables and Global Indexes
        Creating List-Partitioned Tables
        Creating Composite Range-Hash Partitioned Tables
        Creating Composite Range-List Partitioned Tables
        Using Subpartition Templates to Describe Composite Partitioned Tables
        Using Multicolumn Partitioning Keys
        Using Table Compression with Partitioned Tables
        Using Key Compression with Partitioned Indexes
        Creating Partitioned Index-Organized Tables
        Partitioning Restrictions for Multiple Block Sizes
    Maintaining Partitioned Tables
        Updating Indexes Automatically
        Adding Partitions
        Coalescing Partitions
        Dropping Partitions
        Exchanging Partitions
        Merging Partitions
        Modifying Default Attributes
        Modifying Real Attributes of Partitions
        Modifying List Partitions: Adding Values
        Modifying List Partitions: Dropping Values
        Modifying a Subpartition Template
        Moving Partitions
        Redefining Partitions Online
        Rebuilding Index Partitions
        Renaming Partitions
        Splitting Partitions
        Truncating Partitions
    Dropping Partitioned Tables
    Partitioned Tables and Indexes Example
    Viewing Information About Partitioned Tables and Indexes
18 Managing Clusters
    About Clusters
    Guidelines for Managing Clusters
        Choose Appropriate Tables for the Cluster
        Choose Appropriate Columns for the Cluster Key
        Specify the Space Required by an Average Cluster Key and Its Associated Rows
        Specify the Location of Each Cluster and Cluster Index Rows
        Estimate Cluster Size and Set Storage Parameters
    Creating Clusters
        Creating Clustered Tables
        Creating Cluster Indexes
    Altering Clusters
        Altering Clustered Tables
        Altering Cluster Indexes
    Dropping Clusters
        Dropping Clustered Tables
        Dropping Cluster Indexes
    Viewing Information About Clusters

19 Managing Hash Clusters
    About Hash Clusters
    When to Use Hash Clusters
        Situations Where Hashing Is Useful
        Situations Where Hashing Is Not Advantageous
    Creating Hash Clusters
        Creating a Sorted Hash Cluster
        Creating Single-Table Hash Clusters
        Controlling Space Use Within a Hash Cluster
        Estimating Size Required by Hash Clusters
    Altering Hash Clusters
    Dropping Hash Clusters
    Viewing Information About Hash Clusters

20 Managing Views, Sequences, and Synonyms
    Managing Views
        About Views
        Creating Views
        Replacing Views
        Using Views in Queries
        Updating a Join View
        Altering Views
        Dropping Views
    Managing Sequences
        About Sequences
        Creating Sequences
        Altering Sequences
        Using Sequences
        Dropping Sequences
    Managing Synonyms
        About Synonyms
        Creating Synonyms
        Using Synonyms in DML Statements
        Dropping Synonyms
    Viewing Information About Views, Synonyms, and Sequences

21 Using DBMS_REPAIR to Repair Data Block Corruption
    Options for Repairing Data Block Corruption
    About the DBMS_REPAIR Package
        DBMS_REPAIR Procedures
        Limitations and Restrictions
    Using the DBMS_REPAIR Package
        Task 1: Detect and Report Corruptions
        Task 2: Evaluate the Costs and Benefits of Using DBMS_REPAIR
        Task 3: Make Objects Usable
        Task 4: Repair Corruptions and Rebuild Lost Data
    DBMS_REPAIR Examples
        Examples: Building a Repair Table or Orphan Key Table
        Example: Detecting Corruption
        Example: Fixing Corrupt Blocks
        Example: Finding Index Entries Pointing to Corrupt Data Blocks
        Example: Skipping Corrupt Blocks

Part V Database Security

22 Managing Users and Securing the Database
    The Importance of Establishing a Security Policy for Your Database
    Managing Users and Resources
    Managing User Privileges and Roles
    Auditing Database Use

Part VI Database Resource Management and Task Scheduling

23 Managing Automatic System Tasks Using the Maintenance Window
    Maintenance Windows
    Predefined Automatic System Tasks
        Automatic Statistics Collection Job
        Automatic Segment Advisor Job
    Resource Management for Automatic System Tasks

24 Using the Database Resource Manager
    What Is the Database Resource Manager?
        What Problems Does the Database Resource Manager Address?
        How Does the Database Resource Manager Address These Problems?
        What Are the Elements of the Database Resource Manager?
        Understanding Resource Plans
    Administering the Database Resource Manager
    Creating a Simple Resource Plan
    Creating Complex Resource Plans
        Using the Pending Area for Creating Plan Schemas
        Creating Resource Plans
        Creating Resource Consumer Groups
        Specifying Resource Plan Directives
    Managing Resource Consumer Groups
        Assigning an Initial Resource Consumer Group
        Changing Resource Consumer Groups
        Managing the Switch Privilege
        Automatically Assigning Resource Consumer Groups to Sessions
    Enabling the Database Resource Manager
    Putting It All Together: Database Resource Manager Examples
        Multilevel Schema Example
        Example of Using Several Resource Allocation Methods
        An Oracle-Supplied Plan
    Monitoring and Tuning the Database Resource Manager
        Creating the Environment
        Why Is This Necessary to Produce Expected Results?
        Monitoring Results
    Interaction with Operating-System Resource Control
        Guidelines for Using Operating-System Resource Control
        Dynamic Reconfiguration
    Viewing Database Resource Manager Information
        Viewing Consumer Groups Granted to Users or Roles
        Viewing Plan Schema Information
        Viewing Current Consumer Groups for Sessions
        Viewing the Currently Active Plans

25 Moving from DBMS_JOB to DBMS_SCHEDULER
    Moving from DBMS_JOB to DBMS_SCHEDULER
        Creating a Job
        Altering a Job
        Removing a Job from the Job Queue

26 Scheduler Concepts
    Overview of the Scheduler
        What Can the Scheduler Do?
    Basic Scheduler Concepts
        Programs
        Schedules
        Jobs
        Events
        Chains
        How Programs, Jobs, and Schedules are Related
    Advanced Scheduler Concepts
        Job Classes
        Windows
        Window Groups
    Scheduler Architecture
        The Job Table
        The Job Coordinator
        How Jobs Execute
        Job Slaves
        Using the Scheduler in Real Application Clusters Environments

27 Using the Scheduler
    Scheduler Objects and Their Naming
    Using Jobs
        Job Tasks and Their Procedures
        Creating Jobs
        Copying Jobs
        Altering Jobs
        Running Jobs
        Stopping Jobs
        Dropping Jobs
        Disabling Jobs
        Enabling Jobs
    Using Programs
        Program Tasks and Their Procedures
        Creating Programs
        Altering Programs
        Dropping Programs
        Disabling Programs
        Enabling Programs
    Using Schedules
        Schedule Tasks and Their Procedures
        Creating Schedules
        Altering Schedules
        Dropping Schedules
        Setting the Repeat Interval
    Using Job Classes
        Job Class Tasks and Their Procedures
        Creating Job Classes
        Altering Job Classes
        Dropping Job Classes
    Using Windows
        Window Tasks and Their Procedures
        Creating Windows
        Altering Windows
        Opening Windows
        Closing Windows
        Dropping Windows
        Disabling Windows
        Enabling Windows
        Overlapping Windows
    Using Window Groups
        Window Group Tasks and Their Procedures
        Creating Window Groups
        Dropping Window Groups
        Adding a Member to a Window Group
        Dropping a Member from a Window Group
        Enabling a Window Group
        Disabling a Window Group
    Using Events
        Using Events Raised by the Scheduler
        Using Events Raised by an Application
    Using Chains
        Chain Tasks and Their Procedures
        Creating Chains
        Defining Chain Steps
        Adding Rules to a Chain
        Enabling Chains
        Creating Jobs for Chains
        Dropping Chains
        Running Chains
        Dropping Rules from a Chain
        Disabling Chains
        Dropping Chain Steps
        Altering Chain Steps
        Handling Stalled Chains
    Allocating Resources Among Jobs
        Allocating Resources Among Jobs Using Resource Manager
        Example of Resource Allocation for Jobs

28 Administering the Scheduler
    Configuring the Scheduler
    Monitoring and Managing the Scheduler
        How to View Scheduler Information
        How to View the Currently Active Window and Resource Plan
        How to View Scheduler Privileges
        How to Find Information About Currently Running Jobs
        How the Job Coordinator Works
        How to Monitor and Manage Window and Job Logs
        How to Manage Scheduler Privileges
        How to Drop a Job
        How to Drop a Running Job
        Why Does a Job Fail to Run?
        Job Recovery After a Failure
        How to Change Job Priorities
        How to Monitor Running Chains
        Why Does a Program Become Disabled?
        Why Does a Window Fail to Take Effect?
        How the Scheduler Guarantees Availability
        How to Handle Scheduler Security
        How to Manage the Scheduler in a RAC Environment
    Import/Export and the Scheduler
    Examples of Using the Scheduler
        Examples of Creating Jobs
        Examples of Creating Job Classes
        Examples of Creating Programs
        Examples of Creating Windows
        Example of Creating Window Groups
        Examples of Setting Attributes
        Examples of Creating Chains
        Examples of Creating Jobs and Schedules Based on Events

Part VII Distributed Database Management

29 Distributed Database Concepts
    Distributed Database Architecture
        Homogenous Distributed Database Systems
        Heterogeneous Distributed Database Systems
        Client/Server Database Architecture
    Database Links
        What Are Database Links?
        What Are Shared Database Links?
        Why Use Database Links?
        Global Database Names in Database Links
        Names for Database Links
        Types of Database Links
        Users of Database Links
        Creation of Database Links: Examples
        Schema Objects and Database Links
        Database Link Restrictions
    Distributed Database Administration
        Site Autonomy
        Distributed Database Security
        Auditing Database Links
        Administration Tools
    Transaction Processing in a Distributed System
        Remote SQL Statements
        Distributed SQL Statements
        Shared SQL for Remote and Distributed Statements
        Remote Transactions
        Distributed Transactions
        Two-Phase Commit Mechanism
        Database Link Name Resolution
        Schema Object Name Resolution
        Global Name Resolution in Views, Synonyms, and Procedures
    Distributed Database Application Development
        Transparency in a Distributed Database System
        Remote Procedure Calls (RPCs)
        Distributed Query Optimization
    Character Set Support for Distributed Environments
        Client/Server Environment
        Homogeneous Distributed Environment
        Heterogeneous Distributed Environment

30 Managing a Distributed Database
    Managing Global Names in a Distributed System
        Understanding How Global Database Names Are Formed
        Determining Whether Global Naming Is Enforced
        Viewing a Global Database Name
        Changing the Domain in a Global Database Name
        Changing a Global Database Name: Scenario
    Creating Database Links
        Obtaining Privileges Necessary for Creating Database Links
        Specifying Link Types
        Specifying Link Users
        Using Connection Qualifiers to Specify Service Names Within Link Names
    Using Shared Database Links
        Determining Whether to Use Shared Database Links
        Creating Shared Database Links
        Configuring Shared Database Links
    Managing Database Links
        Closing Database Links
        Dropping Database Links
        Limiting the Number of Active Database Link Connections
    Viewing Information About Database Links
        Determining Which Links Are in the Database
        Determining Which Link Connections Are Open
    Creating Location Transparency
        Using Views to Create Location Transparency
        Using Synonyms to Create Location Transparency
        Using Procedures to Create Location Transparency
    Managing Statement Transparency
    Managing a Distributed Database: Examples
        Example 1: Creating a Public Fixed User Database Link
        Example 2: Creating a Public Fixed User Shared Database Link
        Example 3: Creating a Public Connected User Database Link
        Example 4: Creating a Public Connected User Shared Database Link
        Example 5: Creating a Public Current User Database Link

31 Developing Applications for a Distributed Database System
    Managing the Distribution of Application Data
    Controlling Connections Established by Database Links
    Maintaining Referential Integrity in a Distributed System
    Tuning Distributed Queries
        Using Collocated Inline Views
Using Cost-Based Optimization.................................................................................................... Using Hints ...................................................................................................................................... Analyzing the Execution Plan ....................................................................................................... Handling Errors in Remote Procedures ............................................................................................ 32 Distributed Transactions Concepts What Are Distributed Transactions? ................................................................................................. DML and DDL Transactions ......................................................................................................... Transaction Control Statements.................................................................................................... Session Trees for Distributed Transactions...................................................................................... Clients .............................................................................................................................................. Database Servers ............................................................................................................................. Local Coordinators ......................................................................................................................... Global Coordinator ......................................................................................................................... Commit Point Site .......................................................................................................................... Two-Phase Commit Mechanism ........................................................................................................ Prepare Phase................................................................................................................................... Commit Phase................................................................................................................................ Forget Phase................................................................................................................................... In-Doubt Transactions........................................................................................................................ Automatic Resolution of In-Doubt Transactions...................................................................... Manual Resolution of In-Doubt Transactions........................................................................... Relevance of System Change Numbers for In-Doubt Transactions ...................................... Distributed Transaction Processing: Case Study .......................................................................... Stage 1: Client Application Issues DML Statements ................................................................ Stage 2: Oracle Database Determines Commit Point Site ....................................................... Stage 3: Global Coordinator Sends Prepare Response ............................................................ Stage 4: Commit Point Site Commits ......................................................................................... Stage 5: Commit Point Site Informs Global Coordinator of Commit .................................... 
Stage 6: Global and Local Coordinators Tell All Nodes to Commit...................................... Stage 7: Global Coordinator and Commit Point Site Complete the Commit ....................... 33 31-1 31-1 31-2 31-3 31-3 31-4 31-6 31-7 31-8 32-1 32-2 32-2 32-3 32-4 32-4 32-4 32-4 32-5 32-7 32-8 32-10 32-11 32-11 32-12 32-13 32-14 32-14 32-14 32-15 32-16 32-17 32-17 32-17 32-18 Managing Distributed Transactions Specifying the Commit Point Strength of a Node .......................................................................... Naming Transactions ............................................................................................................................ Viewing Information About Distributed Transactions ................................................................. Determining the ID Number and Status of Prepared Transactions ........................................ Tracing the Session Tree of In-Doubt Transactions ................................................................... Deciding How to Handle In-Doubt Transactions ........................................................................... Discovering Problems with a Two-Phase Commit .................................................................... 33-1 33-2 33-2 33-2 33-4 33-5 33-6 xxiii Determining Whether to Perform a Manual Override .............................................................. Analyzing the Transaction Data ................................................................................................... Manually Overriding In-Doubt Transactions.................................................................................. Manually Committing an In-Doubt Transaction........................................................................ Manually Rolling Back an In-Doubt Transaction ....................................................................... Purging Pending Rows from the Data Dictionary.......................................................................... Executing the PURGE_LOST_DB_ENTRY Procedure ............................................................ Determining When to Use DBMS_TRANSACTION ............................................................... Manually Committing an In-Doubt Transaction: Example ........................................................ Step 1: Record User Feedback ..................................................................................................... Step 2: Query DBA_2PC_PENDING.......................................................................................... Step 3: Query DBA_2PC_NEIGHBORS on Local Node .......................................................... Step 4: Querying Data Dictionary Views on All Nodes .......................................................... Step 5: Commit the In-Doubt Transaction................................................................................. Step 6: Check for Mixed Outcome Using DBA_2PC_PENDING........................................... Data Access Failures Due to Locks .................................................................................................. Transaction Timeouts ................................................................................................................... Locks from In-Doubt Transactions ............................................................................................. Simulating Distributed Transaction Failure .................................................................................. 
    Forcing a Distributed Transaction to Fail
    Disabling and Enabling RECO
    Managing Read Consistency

Index

Send Us Your Comments

Oracle Database Administrator's Guide 10g Release 2 (10.2)
B14231-02

Oracle welcomes your comments and suggestions on the quality and usefulness of this publication. Your input is an important part of the information used for revision.
■ Did you find any errors?
■ Is the information clearly presented?
■ Do you need more information? If so, where?
■ Are the examples correct? Do you need more examples?
■ What features did you like most about this manual?
If you find any errors or have any other suggestions for improvement, please indicate the title and part number of the documentation and the chapter, section, and page number (if available). You can send comments to us in the following ways:
■ Electronic mail: infodev_us@oracle.com
■ FAX: (650) 506-7227. Attn: Server Technologies Documentation Manager
■ Postal service: Oracle Corporation, Server Technologies Documentation Manager, 500 Oracle Parkway, Mailstop 4op11, Redwood Shores, CA 94065, USA
If you would like a reply, please give your name, address, telephone number, and electronic mail address (optional).
If you have problems with the software, please contact your local Oracle Support Services.

Preface

This document describes how to create and administer an Oracle Database.
This preface contains these topics:
■ Audience
■ Documentation Accessibility
■ Structure
■ Related Documents
■ Conventions

Audience

This document is intended for database administrators who perform the following tasks:
■ Create an Oracle Database
■ Ensure the smooth operation of an Oracle Database
■ Monitor the operation of an Oracle Database
To use this document, you need to be familiar with relational database concepts. You should also be familiar with the operating system environment under which you are running the Oracle Database.

Documentation Accessibility

Our goal is to make Oracle products, services, and supporting documentation accessible, with good usability, to the disabled community. To that end, our documentation includes features that make information available to users of assistive technology. This documentation is available in HTML format, and contains markup to facilitate access by the disabled community. Accessibility standards will continue to evolve over time, and Oracle is actively engaged with other market-leading technology vendors to address technical obstacles so that our documentation can be accessible to all of our customers. For more information, visit the Oracle Accessibility Program Web site at http://www.oracle.com/accessibility/

Accessibility of Code Examples in Documentation

Screen readers may not always correctly read the code examples in this document. The conventions for writing code require that closing braces should appear on an otherwise empty line; however, some screen readers may not always read a line of text that consists solely of a bracket or brace.
Accessibility of Links to External Web Sites in Documentation This documentation may contain links to Web sites of other companies or organizations that Oracle does not own or control. Oracle neither evaluates nor makes any representations regarding the accessibility of these Web sites. TTY Access to Oracle Support Services Oracle provides dedicated Text Telephone (TTY) access to Oracle Support Services within the United States of America 24 hours a day, seven days a week. For TTY support, call 800.446.2398. Structure This document contains: Part I, "Basic Database Administration" This part contains information about creating a database, starting and shutting down a database, and managing Oracle processes. Chapter 1, "Overview of Administering an Oracle Database" This chapter serves as an introduction to typical tasks performed by database administrators, such as installing software and planning a database. Chapter 2, "Creating an Oracle Database" This chapter describes how to create a database. Consult this chapter when you are planning a database. Chapter 3, "Starting Up and Shutting Down" This chapter describes how to start a database, alter its availability, and shut it down. It also describes the parameter files related to starting up and shutting down. Chapter 4, "Managing Oracle Database Processes" This chapter describes how to identify different Oracle Database processes, such as dedicated server processes and shared server processes. Consult this chapter when configuring, modifying, tracking and managing processes. Part II, "Oracle Database Structure and Storage" This part describes the structure and management of the Oracle Database and its storage. Chapter 5, "Managing Control Files" This chapter describes how to manage control files, including the following tasks: naming, creating, troubleshooting, and dropping control files. Chapter 6, "Managing the Redo Log" This chapter describes how to manage the online redo log, including the following tasks: planning, creating, renaming, dropping, or clearing redo log files. xxviii Chapter 7, "Managing Archived Redo Logs" This chapter describes archiving. Chapter 8, "Managing Tablespaces" This chapter provides guidelines for managing tablespaces. It describes how to create, manage, alter, and drop tablespaces and how to move data between tablespaces. Chapter 9, "Managing Datafiles and Tempfiles" This chapter provides guidelines for managing datafiles. It describes how to create, change, alter, and rename datafiles and how to view information about datafiles. Chapter 10, "Managing the Undo Tablespace" This chapter describes how to manage undo space using an undo tablespace. Part III, "Automated File and Storage Management" This part describes how to use Oracle-Managed Files and Automatic Storage Management. Chapter 11, "Using Oracle-Managed Files" This chapter describes how to use the Oracle Database server to create and manage database files. Chapter 12, "Using Automatic Storage Management" This chapter describes how to use Automatic Storage Management. Part IV, "Schema Objects" This section describes how to manage schema objects, including the following: tables, indexes, clusters, hash clusters, views, sequences, and synonyms. Chapter 13, "Managing Schema Objects" This chapter describes management of schema objects. It contains information about analyzing objects, truncation of tables and clusters, database triggers, integrity constraints, and object dependencies. 
Chapter 14, "Managing Space for Schema Objects" This chapter describes common tasks such as setting storage parameters, deallocating space, and managing space. Chapter 15, "Managing Tables" This chapter contains table management guidelines, as well as information about creating, altering, maintaining and dropping tables. Chapter 16, "Managing Indexes" This chapter contains guidelines about indexes, including creating, altering, monitoring and dropping indexes. Chapter 17, "Managing Partitioned Tables and Indexes" This chapter describes partitioned tables and indexes and how to create and manage them. xxix Chapter 18, "Managing Clusters" This chapter contains guidelines for creating, altering, or dropping clusters. Chapter 19, "Managing Hash Clusters" This chapter contains guidelines for creating, altering, or dropping hash clusters. Chapter 20, "Managing Views, Sequences, and Synonyms" This chapter describes how to manage views, sequences and synonyms. Chapter 21, "Using DBMS_REPAIR to Repair Data Block Corruption" This chapter describes methods for detecting and repairing data block corruption. Part V, "Database Security" This part discusses the importance of establishing a security policy for your database and users. Chapter 22, "Managing Users and Securing the Database" This chapter discusses the importance of establishing a security policy for your database and users. Part VI, "Database Resource Management and Task Scheduling" This part describes database resource management and task scheduling. Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window" This chapter describes how to use automatic system tasks. Chapter 24, "Using the Database Resource Manager" This chapter describes how to use the Database Resource Manager to allocate resources. Chapter 25, "Moving from DBMS_JOB to DBMS_SCHEDULER" This chapter describes how to take statements created with DBMS_JOB and rewrite them using DBMS_SCHEDULER. Chapter 26, "Scheduler Concepts" Oracle Database provides advanced scheduling capabilities through the database Scheduler. This chapter introduces you to its concepts. Chapter 27, "Using the Scheduler" This chapter describes how to use the Scheduler. Chapter 28, "Administering the Scheduler" This chapter describes the tasks a database administrator needs to perform so end users can schedule jobs using the Scheduler. Part VII, "Distributed Database Management" This part describes distributed database management. xxx Chapter 29, "Distributed Database Concepts" This chapter describes the basic concepts and terminology of Oracle's distributed database architecture. Chapter 30, "Managing a Distributed Database" This chapter describes how to manage and maintain a distributed database system. Chapter 31, "Developing Applications for a Distributed Database System" This chapter describes the considerations for developing an application to run in a distributed database system. Chapter 32, "Distributed Transactions Concepts" This chapter describes what distributed transactions are and how the Oracle Databases maintains their integrity. Chapter 33, "Managing Distributed Transactions" This chapter describes how to manage and troubleshoot distributed transactions. 
Related Documents For more information, see these Oracle resources: ■ Oracle Database 2 Day DBA ■ Oracle Database Concepts ■ Oracle Database SQL Reference ■ Oracle Database Reference ■ Oracle Database PL/SQL Packages and Types Reference ■ Oracle Database Error Messages ■ Oracle Database Net Services Administrator's Guide ■ Oracle Database Backup and Recovery Basics ■ Oracle Database Backup and Recovery Advanced User's Guide ■ Oracle Database Performance Tuning Guide ■ Oracle Database Application Developer's Guide - Fundamentals ■ Oracle Database PL/SQL User's Guide and Reference ■ SQL*Plus User's Guide and Reference Many of the examples in this book use the sample schemas, which are installed by default when you select the Basic Installation option with an Oracle Database installation. Refer to Oracle Database Sample Schemas for information on how these schemas were created and how you can use them yourself. Printed documentation is available for sale in the Oracle Store at http://oraclestore.oracle.com/ To download free release notes, installation documentation, white papers, or other collateral, please visit the Oracle Technology Network (OTN). You must register online before using OTN; registration is free and can be done at http://www.oracle.com/technology/membership/ xxxi If you already have a username and password for OTN, then you can go directly to the documentation section of the OTN Web site at http://www.oracle.com/technology/documentation/ Conventions This section describes the conventions used in the text and code examples of this documentation set. It describes: ■ Conventions in Text ■ Conventions in Code Examples ■ Conventions for Windows Operating Systems Conventions in Text We use various conventions in text to help you more quickly identify special terms. The following table describes those conventions and provides examples of their use. Convention Meaning Bold Bold typeface indicates terms that are When you specify this clause, you create an defined in the text or terms that appear in a index-organized table. glossary, or both. Italics Italic typeface indicates book titles or emphasis. Oracle Database Concepts Uppercase monospace typeface indicates elements supplied by the system. Such elements include parameters, privileges, datatypes, Recovery Manager keywords, SQL keywords, SQL*Plus or utility commands, packages and methods, as well as system-supplied column names, database objects and structures, usernames, and roles. You can specify this clause only for a NUMBER column. Lowercase monospace typeface indicates executable programs, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names and connect identifiers, user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values. Enter sqlplus to start SQL*Plus. UPPERCASE monospace (fixed-width) font lowercase monospace (fixed-width) font Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown. lowercase italic monospace (fixed-width) font xxxii Example Ensure that the recovery catalog and target database do not reside on the same disk. You can back up the database by using the BACKUP command. Query the TABLE_NAME column in the USER_TABLES data dictionary view. Use the DBMS_STATS.GENERATE_STATS procedure. The password is specified in the orapwd file. Back up the datafiles and control files in the /disk1/oracle/dbs directory. 
The department_id, department_name, and location_id columns are in the hr.departments table. Set the QUERY_REWRITE_ENABLED initialization parameter to true. Connect as oe user. The JRepUtil class implements these methods. Lowercase italic monospace font represents You can specify the parallel_clause. placeholders or variables. Run old_release.SQL where old_release refers to the release you installed prior to upgrading. Conventions in Code Examples Code examples illustrate SQL, PL/SQL, SQL*Plus, or other command-line statements. They are displayed in a monospace (fixed-width) font and separated from normal text as shown in this example: SELECT username FROM dba_users WHERE username = 'MIGRATE'; The following table describes typographic conventions used in code examples and provides examples of their use. Convention Meaning Example [ ] Brackets enclose one or more optional items. Do not enter the brackets. DECIMAL (digits [ , precision ]) { } Braces enclose two or more items, one of which is required. Do not enter the braces. {ENABLE | DISABLE} | A vertical bar represents a choice of two or more options within brackets or braces. Enter one of the options. Do not enter the vertical bar. {ENABLE | DISABLE} [COMPRESS | NOCOMPRESS] ... Horizontal ellipsis points indicate either: ■ ■ CREATE TABLE ... AS subquery; That we have omitted parts of the code that are not directly related to the SELECT col1, col2, ... , coln FROM example employees; That you can repeat a portion of the code Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example. SQL> SELECT NAME FROM V$DATAFILE; NAME -----------------------------------/fsl/dbs/tbs_01.dbf /fs1/dbs/tbs_02.dbf . . . /fsl/dbs/tbs_09.dbf 9 rows selected. Other notation You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown. acctbal NUMBER(11,2); acct CONSTANT NUMBER(4) := 3; Italics Italicized text indicates placeholders or variables for which you must supply particular values. CONNECT SYSTEM/system_password DB_NAME = database_name UPPERCASE Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase in order to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase. SELECT last_name, employee_id FROM employees; SELECT * FROM USER_TABLES; DROP TABLE hr.employees; . . . xxxiii Convention Meaning Example lowercase Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files. SELECT last_name, employee_id FROM employees; sqlplus hr/hr CREATE USER mjones IDENTIFIED BY ty3MU9; Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown. Conventions for Windows Operating Systems The following table describes conventions for Windows operating systems and provides examples of their use. Convention Meaning Example Choose Start > menu item How to start a program. The '>' character indicates a hierarchical menu (submenu). To start the Database Configuration Assistant, choose Start > Programs > Oracle HOME_NAME > Configuration and Migration Tools > Database Configuration Assistant. File and directory names c:\winnt"\"system32 is the same as File and directory names are not case sensitive. 
The following special characters C:\WINNT\SYSTEM32 are not allowed: left angle bracket (<), right angle bracket (>), colon (:), double quotation marks ("), slash (/), pipe (|), and dash (-). The special character backslash (\) is treated as an element separator, even when it appears in quotes. If the filename begins with \\, then Windows assumes it uses the Universal Naming Convention. C:\> Represents the Windows command prompt of the current hard disk drive. The escape character in a command prompt is the caret (^). Your prompt reflects the subdirectory in which you are working. Referred to as the command prompt in this manual. Special characters The backslash (\) special character is C:\> exp HR/HR TABLES=emp QUERY=\"WHERE sometimes required as an escape character job='REP'\" for the double quotation mark (") special character at the Windows command prompt. Parentheses and the single quotation mark (') do not require an escape character. Refer to your Windows operating system documentation for more information on escape and special characters. HOME_NAME Represents the Oracle home name. The home name can be up to 16 alphanumeric characters. The only special character allowed in the home name is the underscore. xxxiv C:\oracle\oradata> C:\> net start OracleHOME_NAMETNSListener Convention Meaning Example ORACLE_HOME and ORACLE_BASE In releases prior to Oracle8i release 8.1.3, when you installed Oracle components, all subdirectories were located under a top level ORACLE_HOME directory. The default for Windows NT was C:\orant. Go to the ORACLE_BASE\ORACLE_HOME\rdbms\admin directory. This release complies with Optimal Flexible Architecture (OFA) guidelines. All subdirectories are not under a top level ORACLE_HOME directory. There is a top level directory called ORACLE_BASE that by default is C:\oracle\product\10.1.0. If you install the latest Oracle release on a computer with no other Oracle software installed, then the default setting for the first Oracle home directory is C:\oracle\product\10.1.0\db_n, where n is the latest Oracle home number. The Oracle home directory is located directly under ORACLE_BASE. All directory path examples in this guide follow OFA conventions. Refer to Oracle Database Installation Guide for Microsoft Windows (32-Bit) for additional information about OFA compliances and for information about installing Oracle products in non-OFA compliant directories. xxxv xxxvi What's New in Oracle Database Administrator's Guide? This section describes new features of the Oracle Database 10g Release 2 (10.2) and provides pointers to additional information. New features information from previous releases is also retained to help those users migrating to the current release. The following sections describe the new features in Oracle Database: ■ Oracle Database 10g Release 2 (10.2) New Features in the Administrator's Guide ■ Oracle Database 10g Release 1 (10.1) New Features in the Administrator's Guide Oracle Database 10g Release 2 (10.2) New Features in the Administrator's Guide ■ Transparent data encryption With the CREATE TABLE or ALTER TABLE commands, you can specify table columns for which data is encrypted before being stored in the datafile. If users attempt to circumvent the database access control mechanisms by looking inside datafiles directly with operating system tools, encryption prevents such users from viewing sensitive data. Oracle Database Security Guide for information on Transparent Data Encryption. 
See Also: ■ Increased maximum number of partitions per schema object The maximum number of partitions and subpartitions that can be defined for a table or index has been increased to 1024K - 1. ■ DML error logging A new error logging clause for all DML statements enables certain types of errors (for example, constraint violations or data conversion errors) to be logged to an error logging table, allowing the statement to continue instead of terminating and rolling back. See Also: ■ "Inserting Data with DML Error Logging" on page 15-9 Enhancements to Automatic Shared Memory Management The Streams Pool is now also automatically tuned when automatic shared memory management is enabled. A new view, V$SGA_TARGET_ADVICE, provides information to help with tuning SGA_TARGET. xxxvii See Also: "Using Automatic Shared Memory Management" on page 2-25 ■ Improved automatic tuning of undo retention results in fewer "ORA-01555: snapshot too old" messages. Automatic tuning of undo retention now always tunes for the maximum possible retention for the undo tablespace based on tablespace size and current system activity. This improves the success rate of Oracle Flashback operations and long-running queries. This new tuning method for maximum possible retention applies only to fixed size undo tablespaces. For AUTOEXTEND tablespaces, the behavior is unchanged. Note: See Also: ■ The Segment Advisor now reports tables with excessive row chaining. See Also: ■ "Undo Retention" on page 10-3 "Using the Segment Advisor" on page 14-16 The Segment Advisor now runs automatically during the maintenance window. Upon installation, a Scheduler job is automatically created to run the Segment Advisor during the maintenance window. You can also continue to run the Segment Advisor manually. The automatically run Segment Advisor (the "Automatic Segment Advisor") examines statistics on the entire database and selects segments and tablespaces to analyze. You can view recommendations with Enterprise Manager, with the DBA_ADVISOR_* views, or with the new DBMS_SPACE.ASA_RECOMMENDATIONS procedure. See Also: ■ "Using the Segment Advisor" on page 14-16 Enhancements to the online segment shrink capability Online segment shrink now supports: – LOB segments – IOT overflow segments (both standalone and as a dependent object of an IOT index segment) See Also: ■ "Shrinking Database Segments Online" on page 14-29 Enhancements to online table redefinition Online redefinition of tables now supports: xxxviii – Redefining a single partition of a partitioned table – Clustered tables, Advanced Queuing queue tables, and materialized view logs – Object data types (objects, VARRAYs, and nested tables, including nested table dependent objects) – Check and NOT NULL constraints – Preservation of table statistics – Parallel execution for Long-to-LOB migration In addition, dependent PL/SQL package recompilation is no longer required when the redefined table has the same number, types, and order of columns as the original table. See Also: ■ "Redefining Tables Online" on page 15-21 Support for XMLTypes in the transportable tablespace facility See Also: "Transporting Tablespaces Between Databases" on page 8-25 ■ Enhancements to space management The DBMS_SPACE_ADMIN package provides new tools to troubleshoot space management problems. 
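As a minimal sketch of the online redefinition workflow summarized above (assuming an interim table with the desired new definition already exists, and omitting the optional COPY_TABLE_DEPENDENTS and SYNC_INTERIM_TABLE steps), redefinition is driven through the DBMS_REDEFINITION package; the schema and table names here are hypothetical:

BEGIN
  -- Confirm that the table is a candidate for online redefinition
  DBMS_REDEFINITION.CAN_REDEF_TABLE('hr', 'emp_audit');

  -- Start the redefinition, linking the original and interim tables
  DBMS_REDEFINITION.START_REDEF_TABLE(
      uname      => 'hr',
      orig_table => 'emp_audit',
      int_table  => 'emp_audit_interim');

  -- Swap the definitions; the table remains available to applications throughout
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('hr', 'emp_audit', 'emp_audit_interim');
END;
/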
See Also: "Diagnosing and Repairing Locally Managed Tablespace Problems" on page 8-22 ■ Control files no longer need to be recreated when changing certain configuration parameters Control files can now dynamically grow in size when you increase the values of the following parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES. See Also: ■ "When to Create New Control Files" on page 5-4 Tablespace low-space alert thresholds by free space remaining Low-space alert thresholds for locally managed tablespaces can now be by percent full or by free space remaining (in KB). Free-space-remaining thresholds are more useful for very large tablespaces. See Also: ■ "Managing Tablespace Alerts" on page 14-1 Fast partition split is now supported for partitioned index-organized tables. See Also: "Optimizing SPLIT PARTITION and SPLIT SUBPARTITION Operations" on page 17-49 ■ Automatically enabled resource manager The Resource Manager no longer needs to be enabled at startup to quiesce the database or to activate a resource plan when a Scheduler window opens. See Also: "Quiescing a Database" on page 3-11 and "Using Windows" on page 27-19 ■ Enhanced Resource Manager monitoring The Resource Manager now provides more extensive statistics on sessions, plans, and consumer groups, enabling you to better monitor and tune your Resource Manager settings. xxxix See Also: ■ "Monitoring Results" on page 24-29 Automatic Storage Management (ASM) files are now accessible through an XML DB virtual folder ASM files can now be accessed through the XML DB repository, either programmatically or with protocols like FTP and HTTP/WebDAV. See Also: "Accessing Automatic Storage Management Files with the XML DB Virtual Folder" on page 12-46 and Oracle XML DB Developer's Guide ■ Automatic Storage Management now has a command-line utility (ASMCMD) With ASMCMD you can easily view and manipulate files and directories within disk groups. ASMCMD can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more. See Also: ■ Oracle Database Utilities Automatic Storage Management now supports high-redundancy (3-way mirrored) files in normal redundancy disk groups Individual files in a normal redundancy disk group can be specified as high redundancy (with 3-way mirroring) by creating the file with a template that has the redundancy attribute set to HIGH. This applies when explicitly naming a template when creating a file, or when modifying a system default template for the disk group. In addition, the system default template CONTROLFILE in normal redundancy disk groups now has the redundancy attribute set to HIGH. See Also: ■ Chapter 12, "Using Automatic Storage Management" Automatic Storage Management (ASM) supports multiple database versions ASM maintains forward and backward compatibility between most 10.x versions of Oracle Database and 10.x versions of ASM. That is, any combination of versions 10.1.x.y and 10.2.x.y for either the ASM instance or the database instance works correctly, with this caveat: For a 10.1.x.y database instance to connect to a 10.2.x.y ASM instance, the database must be version 10.1.0.3 or later. "Using Automatic Storage Management in the Database" on page 12-32 See Also: ■ The DBMS_FILE_TRANSFER package can now copy files between a local file system and an Automatic Storage Management (ASM) disk group. The DBMS_FILE_TRANSFER package can use a local file system or an ASM disk group as the source or destination for a file transfer. 
You can now copy files from ASM to the file system or from the file system to ASM. See Also: ■ "Copying Files Using the Database Server" on page 9-12 The ALTER DISKGROUP command has a new REBALANCE WAIT clause. ALTER DISKGROUP commands that cause a rebalance of an ASM disk group—commands that add, drop, or resize disks, or the command that starts a manual rebalance operation—can now wait until the rebalance operation completes before returning. This is especially useful in scripts. xl See Also: "Altering the Disk Membership of a Disk Group" on page 12-20 ■ Enhancements to the Scheduler The Scheduler supports a new type of object called a chain, which is a grouping of programs that are linked together for a single, combined objective. Scheduler jobs can now be started when a specified event occurs, and the Scheduler can raise events when a job state changes (for example, from running to complete). See Also: "Using Events" on page 27-30 and "Using Chains" on page 27-37 The Scheduler also has an expanded calendaring syntax that enables you to define more complex schedules, such as "the last work day of each fiscal quarter." Existing schedules can be combined to create composite schedules. See Also: ■ "Using Schedules" on page 27-12 Resource optimized DROP TABLE...PURGE for partitioned tables To avoid running into resource constraints, the DROP TABLE...PURGE command for a partitioned table drops the table in multiple transactions, where each transaction drops a subset of the partitions or subpartitions and then commits. If the DROP TABLE...PURGE command fails, you can take corrective action, if any, and then restart the command. See Also: "Dropping Partitioned Tables" on page 17-52 Oracle Database 10g Release 1 (10.1) New Features in the Administrator's Guide ■ ■ ■ Server manageability features are introduced in "Server Manageability" on page 1-20. Automatic system task maintenance is discussed in Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window". Bigfile tablespaces Oracle Database lets you create single-file tablespaces, called bigfile tablespaces, which can contain up to 232 or 4G blocks. The benefits of bigfile tablespaces are the following: – They significantly enhance the storage capacity of an Oracle Database. – They reduce the number of datafiles needed for an ultra large database. – They simplify database management by providing datafile transparency. See"Supporting Bigfile Tablespaces During Database Creation" on page 2-16 and "Bigfile Tablespaces" on page 8-6. ■ Multiple default temporary tablespace support for SQL operations You can create a temporary tablespace group that can be specifically assigned to users in the same way that a single temporary tablespace is assigned. A tablespace group can also be specified as the default temporary tablespace for the database. xli See "Multiple Temporary Tablespaces: Using Tablespace Groups" on page 8-11. ■ Rename tablespace The RENAME TO clause of the ALTER TABLESPACE statement enables you to rename tablespaces. See "Renaming Tablespaces" on page 8-19. ■ Cross-platform transportable tablespaces Tablespaces can be transported from one platform to another. The RMAN CONVERT command is used to do the conversion. See "Transporting Tablespaces Between Databases: A Procedure and Example" on page 8-29. ■ SYSAUX tablespace Oracle Database creates an auxiliary system tablespace called SYSAUX at database creation. 
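For example (a hedged sketch; the tablespace names, file paths, and sizes are hypothetical), two temporary tablespaces can be placed in a group and the group made the database default:

CREATE TEMPORARY TABLESPACE temp1
    TEMPFILE '/u02/oracle/data/temp01.dbf' SIZE 500M
    TABLESPACE GROUP temp_grp;

CREATE TEMPORARY TABLESPACE temp2
    TEMPFILE '/u03/oracle/data/temp02.dbf' SIZE 500M
    TABLESPACE GROUP temp_grp;

-- The group, rather than a single tablespace, becomes the default
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;

A user assigned to temp_grp can then use space in whichever member tablespace has room.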
This tablespace can be used by various Oracle Database features and products, rather than saving their data in separate tablespaces or in the SYSTEM tablespace. See "Creating the SYSAUX Tablespace" on page 2-12 and "Managing the SYSAUX Tablespace" on page 8-20. ■ Automatic Storage Management Automatic Storage Management provides a logical volume manager integrated with Oracle Database, eliminating the need for you to purchase a third-party product. Oracle Database creates Oracle-managed files within user-defined disk groups that provide redundancy and striping. See Chapter 12, "Using Automatic Storage Management". ■ Drop database The new DROP DATABASE statement lets you delete a database and all of its files that are listed in the control file. See "Dropping a Database" on page 2-35. ■ Oracle Flashback Transaction Query This feature introduces the FLASHBACK_TRANSACTION_QUERY view, which lets you examine changes to the database at the transaction level. As a result, you can diagnose problems, perform analysis, and audit transactions. See "Auditing Table Changes Using Flashback Transaction Query" on page 15-36. ■ Oracle Flashback Version Query Using undo data stored in the database, you can now view multiple changes to one or more rows, along with the metadata for the changes. ■ Oracle Flashback Table A new FLASHBACK TABLE statement lets you quickly recover a table to a point in time in the past without restoring a backup. See "Recovering Tables Using the Flashback Table Feature" on page 15-36. ■ Oracle Flashback Drop Oracle Database now provides a way to restore accidentally dropped tables. When tables are dropped, they are placed into a recycle bin from which they can later be recovered. xlii See "Using Flashback Drop and Managing the Recycle Bin" on page 15-38. ■ Enhanced online redefinition New procedures have been added to the DBMS_REDEFINITION package that automate the cloning of dependent objects such as indexes, triggers, privileges, and constraints. Some restrictions have been lifted, allowing more types of tables to be redefined. See "Redefining Tables Online" on page 15-21. ■ Automatic statistics collection You no longer need to specify the MONITORING keyword in the CREATE TABLE or ALTER TABLE statement to enable the automatic collecting of statistics for a table. Statistics are now collected automatically as controlled by the STATISTICS_LEVEL initialization parameter. Automatic statistics collection is the default. See "Automatically Collecting Statistics on Tables" on page 15-16. ■ Scheduler Oracle Database provides advanced scheduling capabilities through the database Scheduler. See Part VI, "Database Resource Management and Task Scheduling". ■ Database Resource Manager enhancement The following are enhancements to the Database Resource Manager: – Adaptive consumer group mapping You can configure the Database Resource Manager to automatically assign consumer groups to sessions by providing mappings between session attributes and consumer groups. See "Automatically Assigning Resource Consumer Groups to Sessions" on page 24-21 – New plan directives New resource plan directives let you set idle timeouts, cancel long-running SQL statements, terminate long-running sessions, and restore sessions to their original consumer group at the end of a top call. See "Specifying Resource Plan Directives" on page 24-14. – New policies Two new resource manager policies have also been added: the RATIO CPU allocation policy and the RUN_TO_COMPLETION scheduling policy. 
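As a hedged sketch of the new RATIO CPU allocation policy just mentioned (the plan name, consumer group names, and weights are hypothetical; changes are staged in a pending area as usual):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('OLTP_GROUP',   'online users');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('REPORT_GROUP', 'reporting users');

  -- CPU_MTH => 'RATIO' makes CPU_P1 a weight rather than an emphasis level
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
      plan    => 'SALES_RATIO_PLAN',
      comment => 'CPU shared 3:1:1 between OLTP, reporting, and other sessions',
      cpu_mth => 'RATIO');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
      plan => 'SALES_RATIO_PLAN', group_or_subplan => 'OLTP_GROUP',
      comment => 'weight 3', cpu_p1 => 3);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
      plan => 'SALES_RATIO_PLAN', group_or_subplan => 'REPORT_GROUP',
      comment => 'weight 1', cpu_p1 => 1);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
      plan => 'SALES_RATIO_PLAN', group_or_subplan => 'OTHER_GROUPS',
      comment => 'weight 1', cpu_p1 => 1);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/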
■ New initialization parameter RESUMABLE_TIMEOUT The RESUMABLE_TIMEOUT initialization parameter lets you enable resumable space allocation and set a timeout period across all sessions. See "Enabling and Disabling Resumable Space Allocation" on page 14-10. ■ Application services Tuning by "service and SQL" augments tuning by "session and SQL" in the majority of systems where all sessions are anonymous and shared. See "Defining Application Services for Oracle Database 10g" and Oracle Database Performance Tuning Guide for more information. xliii ■ Simplified recovery through resetlogs The format for archive log file naming, as specified by the ARCHIVE_LOG_FORMAT initialization parameter, now includes the resetlogs ID, which allows for easier recovery of a database from a previous backup. See "Specifying Archive Destinations" on page 7-6. ■ Automated shared server configuration and simplified shared server configuration parameters. You no longer need to preconfigure initialization parameters for shared server. Parameters can be configured dynamically, and most parameters are now limiting parameters to control resources. The recommended method for enabling shared server now is by setting SHARED_SERVERS initialization parameter, rather than the DISPATCHERS initialization parameter. See "Configuring Oracle Database for Shared Server" on page 4-4. ■ Consolidation of session-specific trace output For shared server sessions, the trcsess command-line utility consolidates in one place the trace pertaining to a user session. See "Reading the Trace File for Shared Server Sessions" on page 4-23. ■ Block remote access to restricted instances Remote access to a restricted instance through an Oracle Net listener is blocked. See "Restricting Access to an Instance at Startup" on page 3-5. ■ Dynamic SGA enhancements The JAVA_POOL_SIZE initialization parameter is now dynamic. There is a new STREAMS_POOL_SIZE initialization parameter, which is also dynamic. A new view, V$SGAINFO, provides a consolidated and concise summary of SGA information. See "Managing the System Global Area (SGA)" on page 2-24. ■ Irreversible database compatibility In previous releases you were allowed to lower the compatibility setting for your database. Now, when you advance the compatibility of the database with the COMPATIBLE initialization parameter, you can no longer start the database using a lower compatibility setting, except by doing a point-in-time recovery to a time before the compatibility was advanced. See "The COMPATIBLE Initialization Parameter and Irreversible Compatibility" on page 2-34. ■ Flash recovery area You can create a flash recovery area in your database where Oracle Database can store and manage files related to backup and recovery. See "Specifying a Flash Recovery Area" on page 2-22. ■ Sorted hash clusters Sorted hash clusters are new data structures that allow faster retrieval of data for applications where data is consumed in the order in which it was inserted. See "Creating a Sorted Hash Cluster" on page 19-3. ■ xliv Copying Files Using the Database Server You do not have to use the operating system to copy database files. You can use the DBMS_FILE_TRANSFER package to copy files. See "Copying Files Using the Database Server" on page 9-12 ■ Deprecation of MAXTRANS physical attribute parameter The MAXTRANS physical attribute parameter for database objects has been deprecated. 
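Returning to the sorted hash clusters item above, a minimal hedged sketch (the cluster, table, and column names are hypothetical) might look like the following; rows for a given telephone number are then returned in call_timestamp order without a separate sort operation:

CREATE CLUSTER call_cluster (
    telephone_number NUMBER,
    call_timestamp   NUMBER SORT,
    call_duration    NUMBER SORT)
  HASHKEYS 10000
  HASH IS telephone_number
  SIZE 256;

CREATE TABLE call_detail (
    telephone_number NUMBER,
    call_timestamp   NUMBER SORT,
    call_duration    NUMBER SORT,
    call_info        VARCHAR2(50))
  CLUSTER call_cluster (telephone_number, call_timestamp, call_duration);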
Oracle Database now automatically allows up to 255 concurrent update transactions for any data block, depending on the available space in the block. See "Specifying the INITRANS Parameter" on page 14-4. ■ Deprecation of use of rollback segments (manual undo management mode) Manual undo management mode has been deprecated and is no longer documented in this book. Use an undo tablespace and automatic undo management instead. See Chapter 10, "Managing the Undo Tablespace". ■ Deprecation of the UNDO_SUPPRESS_ERRORS initialization parameter When operating in automatic undo management mode, the database now ignores any manual undo management mode SQL statements instead of returning error messages. See "Overview of Automatic Undo Management" on page 10-2. ■ Deprecation of the PARALLEL_AUTOMATIC_TUNING initialization parameter Oracle Database provides defaults for the parallel execution initialization parameters that are adequate and tuned for most situations. The PARALLEL_AUTOMATIC_TUNING initialization parameter is now redundant and has been deprecated. ■ Removal of LogMiner chapter The chapter on LogMiner has been moved to Oracle Database Utilities ■ Removal of job queues chapter on using the DBMS_JOB package Using the DBMS_JOB package to submit jobs has been replaced by Scheduler functionality. See Part VI, "Database Resource Management and Task Scheduling". xlv xlvi Part I Basic Database Administration Part I provides an overview of the responsibilities of a database administrator. This part describes the creation of a database and how to start up and shut down an instance of the database. Part I contains the following chapters: ■ Chapter 1, "Overview of Administering an Oracle Database" ■ Chapter 2, "Creating an Oracle Database" ■ Chapter 3, "Starting Up and Shutting Down" ■ Chapter 4, "Managing Oracle Database Processes" 1 Overview of Administering an Oracle Database This chapter presents an overview of the environment and tasks of an Oracle Database administrator (DBA). It also discusses DBA security and how you obtain the necessary administrative privileges. The following topics are discussed: ■ Types of Oracle Database Users ■ Tasks of a Database Administrator ■ Selecting an Instance with Environment Variables ■ Identifying Your Oracle Database Software Release ■ Database Administrator Security and Privileges ■ Database Administrator Authentication ■ Creating and Maintaining a Password File ■ Server Manageability Types of Oracle Database Users The types of users and their roles and responsibilities depend on the database site. A small site can have one database administrator who administers the database for application developers and users. A very large site can find it necessary to divide the duties of a database administrator among several people and among several areas of specialization. This section contains the following topics: ■ Database Administrators ■ Security Officers ■ Network Administrators ■ Application Developers ■ Application Administrators ■ Database Users Overview of Administering an Oracle Database 1-1 Types of Oracle Database Users Database Administrators Each database requires at least one database administrator (DBA). An Oracle Database system can be large and can have many users. Therefore, database administration is sometimes not a one-person job, but a job for a group of DBAs who share responsibility. 
A database administrator's responsibilities can include the following tasks: ■ ■ ■ ■ ■ Installing and upgrading the Oracle Database server and application tools Allocating system storage and planning future storage requirements for the database system Creating primary database storage structures (tablespaces) after application developers have designed an application Creating primary objects (tables, views, indexes) once application developers have designed an application Modifying the database structure, as necessary, from information given by application developers ■ Enrolling users and maintaining system security ■ Ensuring compliance with Oracle license agreements ■ Controlling and monitoring user access to the database ■ Monitoring and optimizing the performance of the database ■ Planning for backup and recovery of database information ■ Maintaining archived data on tape ■ Backing up and restoring the database ■ Contacting Oracle for technical support Security Officers In some cases, a site assigns one or more security officers to a database. A security officer enrolls users, controls and monitors user access to the database, and maintains system security. As a DBA, you might not be responsible for these duties if your site has a separate security officer. Please refer to Oracle Database Security Guide for information about the duties of security officers. Network Administrators Some sites have one or more network administrators. A network administrator, for example, administers Oracle networking products, such as Oracle Net Services. Please refer to Oracle Database Net Services Administrator's Guide for information about the duties of network administrators. See Also: Part VII, "Distributed Database Management", for information on network administration in a distributed environment Application Developers Application developers design and implement database applications. Their responsibilities include the following tasks: ■ Designing and developing the database application 1-2 Oracle Database Administrator’s Guide Tasks of a Database Administrator ■ Designing the database structure for an application ■ Estimating storage requirements for an application ■ Specifying modifications of the database structure for an application ■ Relaying this information to a database administrator ■ Tuning the application during development ■ Establishing security measures for an application during development Application developers can perform some of these tasks in collaboration with DBAs. Please refer to Oracle Database Application Developer's Guide - Fundamentals for information about application development tasks. Application Administrators An Oracle Database site can assign one or more application administrators to administer a particular application. Each application can have its own administrator. Database Users Database users interact with the database through applications or utilities. 
A typical user's responsibilities include the following tasks: ■ Entering, modifying, and deleting data, where permitted ■ Generating reports from the data Tasks of a Database Administrator The following tasks present a prioritized approach for designing, implementing, and maintaining an Oracle Database: Task 1: Evaluate the Database Server Hardware Task 2: Install the Oracle Database Software Task 3: Plan the Database Task 4: Create and Open the Database Task 5: Back Up the Database Task 6: Enroll System Users Task 7: Implement the Database Design Task 8: Back Up the Fully Functional Database Task 9: Tune Database Performance Task 10: Download and Install Patches Task 11: Roll Out to Additional Hosts These tasks are discussed in the sections that follow. When upgrading to a new release, back up your existing production environment, both software and database, before installation. For information on preserving your existing production database, see Oracle Database Upgrade Guide. Note: Overview of Administering an Oracle Database 1-3 Tasks of a Database Administrator Task 1: Evaluate the Database Server Hardware Evaluate how Oracle Database and its applications can best use the available computer resources. This evaluation should reveal the following information: ■ How many disk drives are available to the Oracle products ■ How many, if any, dedicated tape drives are available to Oracle products ■ How much memory is available to the instances of Oracle Database you will run (see your system configuration documentation) Task 2: Install the Oracle Database Software As the database administrator, you install the Oracle Database server software and any front-end tools and database applications that access the database. In some distributed processing installations, the database is controlled by a central computer (database server) and the database tools and applications are executed on remote computers (clients). In this case, you must also install the Oracle Net components necessary to connect the remote machines to the computer that executes Oracle Database. For more information on what software to install, see "Identifying Your Oracle Database Software Release" on page 1-7. See Also: For specific requirements and instructions for installation, refer to the following documentation: ■ ■ The Oracle documentation specific to your operating system The installation guides for your front-end tools and Oracle Net drivers Task 3: Plan the Database As the database administrator, you must plan: ■ The logical storage structure of the database ■ The overall database design ■ A backup strategy for the database It is important to plan how the logical storage structure of the database will affect system performance and various database management operations. For example, before creating any tablespaces for your database, you should know how many datafiles will make up the tablespace, what type of information will be stored in each tablespace, and on which disk drives the datafiles will be physically stored. When planning the overall logical storage of the database structure, take into account the effects that this structure will have when the database is actually created and running. 
Consider how the logical storage structure of the database will affect:
■ The performance of the computer running Oracle Database
■ The performance of the database during data access operations
■ The efficiency of backup and recovery procedures for the database
Plan the relational design of the database objects and the storage characteristics for each of these objects. By planning the relationship between each object and its physical storage before creating it, you can directly affect the performance of the database as a unit. Be sure to plan for the growth of the database.
In distributed database environments, this planning stage is extremely important. The physical location of frequently accessed data dramatically affects application performance.
During the planning stage, develop a backup strategy for the database. You can alter the logical storage structure or design of the database to improve backup efficiency.
It is beyond the scope of this book to discuss relational and distributed database design. If you are not familiar with such design issues, please refer to accepted industry-standard documentation. Part II, "Oracle Database Structure and Storage", and Part IV, "Schema Objects", provide specific information on creating logical storage structures, objects, and integrity constraints for your database.

Task 4: Create and Open the Database
After you complete the database design, you can create the database and open it for normal use. You can create a database at installation time, using the Database Configuration Assistant, or you can supply your own scripts for creating a database. Please refer to Chapter 2, "Creating an Oracle Database", for information on creating a database and to Chapter 3, "Starting Up and Shutting Down" for guidance in starting up the database.

Task 5: Back Up the Database
After you create the database structure, carry out the backup strategy you planned for the database. Create any additional redo log files, take the first full database backup (online or offline), and schedule future database backups at regular intervals.
See Also:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

Task 6: Enroll System Users
After you back up the database structure, you can enroll the users of the database in accordance with your Oracle license agreement, and grant appropriate privileges and roles to these users. Please refer to Chapter 22, "Managing Users and Securing the Database" for guidance in this task.

Task 7: Implement the Database Design
After you create and start the database, and enroll the system users, you can implement the planned logical database structure by creating all necessary tablespaces. When you have finished creating tablespaces, you can create the database objects. Part II, "Oracle Database Structure and Storage" and Part IV, "Schema Objects" provide information on creating logical storage structures and objects for your database.

Task 8: Back Up the Fully Functional Database
When the database is fully implemented, again back up the database. In addition to regularly scheduled backups, you should always back up your database immediately after implementing changes to the database structure.

Task 9: Tune Database Performance
Optimizing the performance of the database is one of your ongoing responsibilities as a DBA.
Oracle Database provides a database resource management feature that helps you to control the allocation of resources among various user groups. The database resource manager is described in Chapter 24, "Using the Database Resource Manager". See Also: Oracle Database Performance Tuning Guide for information about tuning your database and applications Task 10: Download and Install Patches After installation and on a regular basis, download and install patches. Patches are available as single interim patches and as patchsets (or patch releases). Interim patches address individual software bugs and may or may not be needed at your installation. Patch releases are collections of bug fixes that are applicable for all customers. Patch releases have release numbers. For example, if you installed Oracle Database 10.2.0.0, the first patch release will have a release number of 10.2.0.1. See Also: Oracle Database Installation Guide for your platform for instructions on downloading and installing patches. Task 11: Roll Out to Additional Hosts After you have an Oracle Database installation properly configured, tuned, patched, and tested, you may want to roll that exact installation out to other hosts. Reasons to do this include the following: ■ ■ You have multiple production database systems. You want to create development and test systems that are identical to your production system. Instead of installing, tuning, and patching on each additional host, you can clone your tested Oracle Database installation to other hosts, saving time and eliminating inconsistencies. There are two types of cloning available to you: ■ Cloning an Oracle home—Just the configured and patched binaries from the Oracle home directory and subdirectories are copied to the destination host and "fixed" to match the new environment. You can then start an instance with this cloned home and create a database. You can use the Enterprise Manager Clone Oracle Home tool to clone an Oracle home to one or more destination hosts. You can also manually clone an Oracle home using a set of provided scripts and Oracle Universal Installer. ■ Cloning a database—The tuned database, including database files, initialization parameters, and so on, are cloned to an existing Oracle home (possibly a cloned home). You can use the Enterprise Manager Clone Database tool to clone an Oracle database instance to an existing Oracle home. See Also: ■ ■ Oracle Universal Installer and OPatch User's Guide and Enterprise Manager online help for details on how to clone an Oracle home. Enterprise Manager online help for instructions for cloning a database. 1-6 Oracle Database Administrator’s Guide Identifying Your Oracle Database Software Release Selecting an Instance with Environment Variables Before you attempt to use SQL*Plus to connect locally to an Oracle instance, you must ensure that environment variables are set properly. When multiple database instances exist on one server, or when an Automatic Storage Management (ASM) instance exists on the same server as one or more database instances, the environment variables determine which instance SQL*Plus connects to. (This is also true when there is only one Oracle instance on a server.) For example, each Oracle instance (database or ASM) has a unique system identifier (SID). To connect to an instance, you must at a minimum set the ORACLE_SID environment variable to the SID of that instance. 
Depending on the operating system, you may need to set other environment variables to properly change from one instance to another. Refer to the Oracle Database Installation Guide or administration guide for your operating system for details on environment variables and for information on switching instances.
This discussion applies only when you make a local connection—that is, when you initiate a SQL*Plus connection from the same machine on which the target instance resides, without specifying an Oracle Net Services connect identifier. When you make a connection through Oracle Net Services, either with SQL*Plus on the local or a remote machine, or with Enterprise Manager, the environment is automatically set for you.
Note: For more information on connect identifiers, see Oracle Database Net Services Administrator's Guide.
Solaris Example
The following Solaris example sets the environment variables that are required for selecting an instance. When switching between instances with different Oracle homes, the ORACLE_HOME environment variable must be changed.
% setenv ORACLE_SID SAL1
% setenv ORACLE_HOME /u01/app/oracle/product/10.1.0/db_1
% setenv LD_LIBRARY_PATH /usr/lib:/usr/dt/lib:/usr/openwin/lib:/usr/ccs/lib
Most UNIX installations come with two scripts, oraenv and coraenv, that can be used to easily set these environment variables. For more information, see Administrator's Reference for UNIX Systems.
Windows Example
On Windows, you must set only the ORACLE_SID environment variable to select an instance before starting SQL*Plus.
SET ORACLE_SID=SAL1
Identifying Your Oracle Database Software Release
Because Oracle Database continues to evolve and can require maintenance, Oracle periodically produces new releases. Not all customers initially subscribe to a new release or require specific maintenance for their existing release. As a result, multiple releases of the product exist simultaneously.
As many as five numbers may be required to fully identify a release. The significance of these numbers is discussed in the sections that follow.
Release Number Format
To understand the release nomenclature used by Oracle, examine the following example of an Oracle Database server labeled "Release 10.1.0.1.0".
Figure 1–1 Example of an Oracle Database Release Number
The figure annotates the release number 10.1.0.1.0, reading left to right: major database release number, database maintenance release number, Application Server release number, component-specific release number, and platform-specific release number.
Note: Starting with release 9.2, maintenance releases of Oracle Database are denoted by a change to the second digit of a release number. In previous releases, the third digit indicated a particular maintenance release.
Major Database Release Number
The first digit is the most general identifier. It represents a major new version of the software that contains significant new functionality.
Database Maintenance Release Number
The second digit represents a maintenance release level. Some new features may also be included.
Application Server Release Number
The third digit reflects the release level of the Oracle Application Server (OracleAS).
Component-Specific Release Number
The fourth digit identifies a release level specific to a component. Different components can have different numbers in this position depending upon, for example, component patch sets or interim releases.
Platform-Specific Release Number
The fifth digit identifies a platform-specific release.
Usually this is a patch set. When different platforms require the equivalent patch set, this digit will be the same across the affected platforms.
Checking Your Current Release Number
To identify the release of Oracle Database that is currently installed and to see the release levels of other database components you are using, query the data dictionary view PRODUCT_COMPONENT_VERSION. A sample query follows. (You can also query the V$VERSION view to see component-level information.) Other product release levels may increment independent of the database server.
COL PRODUCT FORMAT A35
COL VERSION FORMAT A15
COL STATUS FORMAT A15
SELECT * FROM PRODUCT_COMPONENT_VERSION;

PRODUCT                                  VERSION         STATUS
---------------------------------------- --------------- ---------------
NLSRTL                                   10.2.0.1.0      Production
Oracle Database 10g Enterprise Edition   10.2.0.1.0      Prod
PL/SQL                                   10.2.0.1.0      Production
...
It is important to convey to Oracle the results of this query when you report problems with the software.
Database Administrator Security and Privileges
To perform the administrative tasks of an Oracle Database DBA, you need specific privileges within the database and possibly in the operating system of the server on which the database runs. Access to a database administrator's account should be tightly controlled.
This section contains the following topics:
■ The Database Administrator's Operating System Account
■ Database Administrator Usernames
The Database Administrator's Operating System Account
To perform many of the administrative duties for a database, you must be able to execute operating system commands. Depending on the operating system on which Oracle Database is running, you might need an operating system account or ID to gain access to the operating system. If so, your operating system account might require operating system privileges or access rights that other database users do not require (for example, to perform Oracle Database software installation). Although you do not need the Oracle Database files to be stored in your account, you should have access to them. The method of creating the account of the database administrator is specific to the operating system.
See Also: Your operating system specific Oracle documentation.
Database Administrator Usernames
Two user accounts are automatically created when Oracle Database is installed:
■ SYS (default password: CHANGE_ON_INSTALL)
■ SYSTEM (default password: MANAGER)
Both Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) now prompt for SYS and SYSTEM passwords and do not accept the default passwords "change_on_install" or "manager", respectively.
Note: If you create the database manually, Oracle strongly recommends that you specify passwords for SYS and SYSTEM at database creation time, rather than using these default passwords. Please refer to "Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM" on page 2-10 for more information.
Create at least one additional administrative user and grant to that user an appropriate administrative role to use when performing daily administrative tasks. Do not use SYS and SYSTEM for these purposes.
Note Regarding Security Enhancements: In this release of Oracle Database and in subsequent releases, several enhancements are being made to ensure the security of default database user accounts.
You can find a security checklist for this release in Oracle Database Security Guide. Oracle recommends that you read this checklist and configure your database accordingly.
SYS
When you create an Oracle Database, the user SYS is automatically created and granted the DBA role.
All of the base tables and views for the database data dictionary are stored in the schema SYS. These base tables and views are critical for the operation of Oracle Database. To maintain the integrity of the data dictionary, tables in the SYS schema are manipulated only by the database. They should never be modified by any user or database administrator, and no one should create any tables in the schema of user SYS. (However, you can change the storage parameters of the data dictionary settings if necessary.)
Ensure that most database users are never able to connect to Oracle Database using the SYS account.
SYSTEM
When you create an Oracle Database, the user SYSTEM is also automatically created and granted the DBA role.
The SYSTEM username is used to create additional tables and views that display administrative information, and internal tables and views used by various Oracle Database options and tools. Never use the SYSTEM schema to store tables of interest to non-administrative users.
The DBA Role
A predefined DBA role is automatically created with every Oracle Database installation. This role contains most database system privileges. Therefore, the DBA role should be granted only to actual database administrators.
Note: The DBA role does not include the SYSDBA or SYSOPER system privileges. These are special administrative privileges that allow an administrator to perform basic database administration tasks, such as creating the database and instance startup and shutdown. These system privileges are discussed in "Administrative Privileges" on page 1-11.
Database Administrator Authentication
As a DBA, you often perform special operations such as shutting down or starting up a database. Because only a DBA should perform these operations, the database administrator usernames require a secure authentication scheme.
This section contains the following topics:
■ Administrative Privileges
■ Selecting an Authentication Method
■ Using Operating System Authentication
■ Using Password File Authentication
Administrative Privileges
Administrative privileges that are required for an administrator to perform basic database operations are granted through two special system privileges, SYSDBA and SYSOPER. You must have one of these privileges granted to you, depending upon the level of authorization you require.
Note: The SYSDBA and SYSOPER system privileges allow access to a database instance even when the database is not open. Control of these privileges is totally outside of the database itself.
The SYSDBA and SYSOPER privileges can also be thought of as types of connections that enable you to perform certain database operations for which privileges cannot be granted in any other fashion. For example, if you have the SYSDBA privilege, you can connect to the database by specifying CONNECT AS SYSDBA.
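For example, the following SQL*Plus session is a minimal sketch of this behavior; the hr account and its password are illustrative values, and the example assumes hr has already been granted the SYSDBA privilege. Although the connection is made with the hr credentials, the session runs as SYS:
CONNECT hr/hr AS SYSDBA
SHOW USER
USER is "SYS"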
SYSDBA and SYSOPER
The following operations are authorized by the SYSDBA and SYSOPER system privileges:
SYSDBA
■ Perform STARTUP and SHUTDOWN operations
■ ALTER DATABASE: open, mount, back up, or change character set
■ CREATE DATABASE
■ DROP DATABASE
■ CREATE SPFILE
■ ALTER DATABASE ARCHIVELOG
■ ALTER DATABASE RECOVER
■ Includes the RESTRICTED SESSION privilege
Effectively, this system privilege allows a user to connect as user SYS.
SYSOPER
■ Perform STARTUP and SHUTDOWN operations
■ CREATE SPFILE
■ ALTER DATABASE OPEN/MOUNT/BACKUP
■ ALTER DATABASE ARCHIVELOG
■ ALTER DATABASE RECOVER (Complete recovery only. Any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA.)
■ Includes the RESTRICTED SESSION privilege
This privilege allows a user to perform basic operational tasks, but without the ability to look at user data.
The manner in which you are authorized to use these privileges depends upon the method of authentication that you use.
When you connect with SYSDBA or SYSOPER privileges, you connect with a default schema, not with the schema that is generally associated with your username. For SYSDBA this schema is SYS; for SYSOPER the schema is PUBLIC.
Connecting with Administrative Privileges: Example
This example illustrates that a user is assigned another schema (SYS) when connecting with the SYSDBA system privilege. Assume that the sample user oe has been granted the SYSDBA system privilege and has issued the following statements:
CONNECT oe/oe
CREATE TABLE admin_test(name VARCHAR2(20));
Later, user oe issues these statements:
CONNECT oe/oe AS SYSDBA
SELECT * FROM admin_test;
User oe now receives the following error:
ORA-00942: table or view does not exist
Having connected as SYSDBA, user oe now references the SYS schema, but the table was created in the oe schema.
See Also:
■ "Using Operating System Authentication" on page 1-14
■ "Using Password File Authentication" on page 1-15
Selecting an Authentication Method
The following methods are available for authenticating database administrators:
■ Operating system (OS) authentication
■ A password file
Notes:
■ These methods replace the CONNECT INTERNAL syntax provided with earlier versions of Oracle Database. CONNECT INTERNAL is no longer supported.
■ Operating system authentication takes precedence over password file authentication. If you meet the requirements for operating system authentication, then even if you use a password file, you will be authenticated by operating system authentication.
Your choice will be influenced by whether you intend to administer your database locally on the same machine where the database resides, or whether you intend to administer many different databases from a single remote client. Figure 1–2 illustrates the choices you have for database administrator authentication schemes.
Figure 1–2 Database Administrator Authentication Methods
The figure is a decision chart: for local database administration, use OS authentication if you want to, and otherwise use a password file; for remote database administration, use OS authentication only if you have a secure connection and want to use OS authentication, and otherwise use a password file.
If you are performing remote database administration, consult your Oracle Net documentation to determine whether you are using a secure connection. Most popular connection protocols, such as TCP/IP and DECnet, are not secure.
See Also: Oracle Database Net Services Administrator's Guide Nonsecure Remote Connections To connect to Oracle Database as a privileged user over a nonsecure connection, you must be authenticated by a password file. When using password file authentication, the database uses a password file to keep track of database usernames that have been granted the SYSDBA or SYSOPER system privilege. This form of authentication is discussed in "Using Password File Authentication" on page 1-15. Overview of Administering an Oracle Database 1-13 Database Administrator Authentication Local Connections and Secure Remote Connections You can connect to Oracle Database as a privileged user over a local connection or a secure remote connection in two ways: ■ ■ If the database has a password file and you have been granted the SYSDBA or SYSOPER system privilege, then you can connect and be authenticated by a password file. If the server is not using a password file, or if you have not been granted SYSDBA or SYSOPER privileges and are therefore not in the password file, you can use operating system authentication. On most operating systems, authentication for database administrators involves placing the operating system username of the database administrator in a special group, generically referred to as OSDBA. Users in that group are granted SYSDBA privileges. A similar group, OSOPER, is used to grant SYSOPER privileges to users. Using Operating System Authentication This section describes how to authenticate an administrator using the operating system. OSDBA and OSOPER Two special operating system groups control database administrator connections when using operating system authentication. These groups are generically referred to as OSDBA and OSOPER. The groups are created and assigned specific names as part of the database installation process. The specific names vary depending upon your operating system and are listed in the following table: Operating System Group UNIX User Group Windows User Group OSDBA dba ORA_DBA OSOPER oper ORA_OPER The default names assumed by the Oracle Universal Installer can be overridden. How you create the OSDBA and OSOPER groups is operating system specific. Membership in the OSDBA or OSOPER group affects your connection to the database in the following ways: ■ ■ ■ If you are a member of the OSDBA group and you specify AS SYSDBA when you connect to the database, then you connect to the database with the SYSDBA system privilege. If you are a member of the OSOPER group and you specify AS SYSOPER when you connect to the database, then you connect to the database with the SYSOPER system privilege. If you are not a member of either of these operating system groups and you attempt to connect as SYSDBA or SYSOPER, the CONNECT command fails. See Also: Your operating system specific Oracle documentation for information about creating the OSDBA and OSOPER groups Preparing to Use Operating System Authentication To enable operating system authentication of an administrative user: 1. Create an operating system account for the user. 1-14 Oracle Database Administrator’s Guide Database Administrator Authentication 2. Add the account to the OSDBA or OSOPER operating system defined groups. 
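For example, on a typical Linux installation where the OSDBA group is named dba, these two steps might look like the following when run as root (the jsmith account name is an illustrative value, and the exact commands and group names vary by platform and installation):
# useradd jsmith
# usermod -a -G dba jsmith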
Connecting Using Operating System Authentication A user can be authenticated, enabled as an administrative user, and connected to a local database by typing one of the following SQL*Plus commands: CONNECT / AS SYSDBA CONNECT / AS SYSOPER For a remote database connection over a secure connection, the user must also specify the net service name of the remote database: CONNECT /@net_service_name AS SYSDBA CONNECT /@net_service_name AS SYSOPER SQL*Plus User's Guide and Reference for syntax of the CONNECT command See Also: Using Password File Authentication This section describes how to authenticate an administrative user using password file authentication. Preparing to Use Password File Authentication To enable authentication of an administrative user using password file authentication you must do the following: 1. If not already created, create the password file using the ORAPWD utility: ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users 2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE. (This is the default). Note: REMOTE_LOGIN_PASSWORDFILE is a static initialization parameter and therefore cannot be changed without restarting the database. 3. Connect to the database as user SYS (or as another user with the administrative privileges). 4. If the user does not already exist in the database, create the user. 5. Grant the SYSDBA or SYSOPER system privilege to the user: GRANT SYSDBA to oe; This statement adds the user to the password file, thereby enabling connection AS SYSDBA. See Also: "Creating and Maintaining a Password File" on page 1-16 for instructions for creating and maintaining a password file. Connecting Using Password File Authentication Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus CONNECT command. They must connect using their username Overview of Administering an Oracle Database 1-15 Creating and Maintaining a Password File and password and the AS SYSDBA or AS SYSOPER clause. For example, user oe has been granted the SYSDBA privilege, so oe can connect as follows: CONNECT oe/oe AS SYSDBA However, user oe has not been granted the SYSOPER privilege, so the following command will fail: CONNECT oe/oe AS SYSOPER Operating system authentication takes precedence over password file authentication. Specifically, if you are a member of the OSDBA or OSOPER group for the operating system, and you connect as SYSDBA or SYSOPER, you will be connected with associated administrative privileges regardless of the username/password that you specify. Note: If you are not in the OSDBA or OSOPER groups, and you are not in the password file, then attempting to connect as SYSDBA or as SYSOPER fails. SQL*Plus User's Guide and Reference for syntax of the CONNECT command See Also: Creating and Maintaining a Password File You can create a password file using the password file creation utility, ORAPWD. For some operating systems, you can create this file as part of your standard installation. 
This section contains the following topics:
■ Using ORAPWD
■ Setting REMOTE_LOGIN_PASSWORDFILE
■ Adding Users to a Password File
■ Maintaining a Password File
Using ORAPWD
When you invoke this password file creation utility without supplying any parameters, you receive a message indicating the proper use of the command as shown in the following sample output:
> orapwd
Usage: orapwd file=<fname> password=<password> entries=<users> force=<y/n>
where
file - name of password file (mand),
password - password for SYS (mand),
entries - maximum number of distinct DBAs and OPERs (opt),
force - whether to overwrite existing file (opt)
There are no spaces around the equal-to (=) character.
The following command creates a password file named acct.pwd that allows up to 30 privileged users with different passwords. In this example, the file is initially created with the password secret for users connecting as SYS.
orapwd FILE=acct.pwd PASSWORD=secret ENTRIES=30
The parameters in the ORAPWD utility are described in the sections that follow.
FILE
This parameter sets the name of the password file being created. You must specify the full path name for the file. The contents of this file are encrypted, and the file cannot be read directly. This parameter is mandatory.
The types of filenames allowed for the password file are operating system specific. Some operating systems require the password file to adhere to a specific format and be located in a specific directory. Other operating systems allow the use of environment variables to specify the name and location of the password file. For name and location information for the Unix and Linux operating systems, see Administrator's Reference for UNIX-Based Operating Systems. For Windows, see Platform Guide for Microsoft Windows. For other operating systems, see your operating system documentation.
If you are running multiple instances of Oracle Database using Oracle Real Application Clusters, the environment variable for each instance should point to the same password file.
Caution: It is critically important to the security of your system that you protect your password file and the environment variables that identify the location of the password file. Any user with access to these could potentially compromise the security of the connection.
PASSWORD
This parameter sets the password for user SYS. If you issue the ALTER USER statement to change the password for SYS after connecting to the database, both the password stored in the data dictionary and the password stored in the password file are updated. This parameter is mandatory.
Note: You cannot change the password for SYS if REMOTE_LOGIN_PASSWORDFILE is set to SHARED. An error message is issued if you attempt to do so.
ENTRIES
This parameter specifies the number of entries that you require the password file to accept. This number corresponds to the number of distinct users allowed to connect to the database as SYSDBA or SYSOPER. The actual number of allowable entries can be higher than the number of users, because the ORAPWD utility continues to assign password entries until an operating system block is filled. For example, if your operating system block size is 512 bytes, it holds four password entries. The number of password entries allocated is always a multiple of four. Entries can be reused as users are added to and removed from the password file.
If you intend to specify REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE, and to allow the granting of SYSDBA and SYSOPER privileges to users, this parameter is required.
Caution: When you exceed the allocated number of password entries, you must create a new password file. To avoid this necessity, allocate a number of entries that is larger than you think you will ever need.
FORCE
This parameter, if set to Y, enables you to overwrite an existing password file. An error is returned if a password file of the same name already exists and this parameter is omitted or set to N.
Setting REMOTE_LOGIN_PASSWORDFILE
In addition to creating the password file, you must also set the initialization parameter REMOTE_LOGIN_PASSWORDFILE to the appropriate value. The values recognized are:
■ NONE: Setting this parameter to NONE causes Oracle Database to behave as if the password file does not exist. That is, no privileged connections are allowed over nonsecure connections.
■ EXCLUSIVE: (The default) An EXCLUSIVE password file can be used with only one instance of one database. Only an EXCLUSIVE file can be modified. Using an EXCLUSIVE password file enables you to add, modify, and delete users. It also enables you to change the SYS password with the ALTER USER command.
■ SHARED: A SHARED password file can be used by multiple databases running on the same server, or multiple instances of a Real Application Clusters (RAC) database. A SHARED password file cannot be modified. This means that you cannot add users to a SHARED password file. Any attempt to do so or to change the password of SYS or other users with the SYSDBA or SYSOPER privileges generates an error. All users needing SYSDBA or SYSOPER system privileges must be added to the password file when REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE. After all users are added, you can change REMOTE_LOGIN_PASSWORDFILE to SHARED, and then share the file. This option is useful if you are administering multiple databases or a RAC database.
If REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE or SHARED and the password file is missing, this is equivalent to setting REMOTE_LOGIN_PASSWORDFILE to NONE.
Adding Users to a Password File
When you grant SYSDBA or SYSOPER privileges to a user, that user's name and privilege information are added to the password file. If the server does not have an EXCLUSIVE password file (that is, if the initialization parameter REMOTE_LOGIN_PASSWORDFILE is NONE or SHARED, or the password file is missing), Oracle Database issues an error if you attempt to grant these privileges.
A user's name remains in the password file only as long as that user has at least one of these two privileges. If you revoke both of these privileges, Oracle Database removes the user from the password file.
Creating a Password File and Adding New Users to It
Use the following procedure to create a password file and add new users to it:
1. Follow the instructions for creating a password file as explained in "Using ORAPWD" on page 1-16.
2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE. (This is the default.)
Note: REMOTE_LOGIN_PASSWORDFILE is a static initialization parameter and therefore cannot be changed without restarting the database.
3. Connect with SYSDBA privileges as shown in the following example:
CONNECT SYS/password AS SYSDBA
4.
Start up the instance and create the database if necessary, or mount and open an existing database. 5. Create users as necessary. Grant SYSDBA or SYSOPER privileges to yourself and other users as appropriate. See "Granting and Revoking SYSDBA and SYSOPER Privileges", later in this section. Granting and Revoking SYSDBA and SYSOPER Privileges If your server is using an EXCLUSIVE password file, use the GRANT statement to grant the SYSDBA or SYSOPER system privilege to a user, as shown in the following example: GRANT SYSDBA TO oe; Use the REVOKE statement to revoke the SYSDBA or SYSOPER system privilege from a user, as shown in the following example: REVOKE SYSDBA FROM oe; Because SYSDBA and SYSOPER are the most powerful database privileges, the WITH ADMIN OPTION is not used in the GRANT statement. That is, the grantee cannot in turn grant the SYSDBA or SYSOPER privilege to another user. Only a user currently connected as SYSDBA can grant or revoke another user's SYSDBA or SYSOPER system privileges. These privileges cannot be granted to roles, because roles are available only after database startup. Do not confuse the SYSDBA and SYSOPER database privileges with operating system roles. Oracle Database Security Guide for more information on system privileges See Also: Viewing Password File Members Use the V$PWFILE_USERS view to see the users who have been granted SYSDBA or SYSOPER system privileges for a database. The columns displayed by this view are as follows: Column Description USERNAME This column contains the name of the user that is recognized by the password file. SYSDBA If the value of this column is TRUE, then the user can log on with SYSDBA system privileges. SYSOPER If the value of this column is TRUE, then the user can log on with SYSOPER system privileges. Maintaining a Password File This section describes how to: Overview of Administering an Oracle Database 1-19 Server Manageability ■ Expand the number of password file users if the password file becomes full ■ Remove the password file Expanding the Number of Password File Users If you receive the file full error (ORA-1996) when you try to grant SYSDBA or SYSOPER system privileges to a user, you must create a larger password file and regrant the privileges to the users. Replacing a Password File Use the following procedure to replace a password file: 1. Identify the users who have SYSDBA or SYSOPER privileges by querying the V$PWFILE_USERS view. 2. Delete the existing password file. 3. Follow the instructions for creating a new password file using the ORAPWD utility in "Using ORAPWD" on page 1-16. Ensure that the ENTRIES parameter is set to a number larger than you think you will ever need. 4. Follow the instructions in "Adding Users to a Password File" on page 1-18. Removing a Password File If you determine that you no longer require a password file to authenticate users, you can delete the password file and then optionally reset the REMOTE_LOGIN_PASSWORDFILE initialization parameter to NONE. After you remove this file, only those users who can be authenticated by the operating system can perform SYSDBA or SYSOPER database administration operations. Server Manageability Oracle Database is a sophisticated self-managing database that automatically monitors, adapts, and repairs itself. It automates routine DBA tasks and reduces the complexity of space, memory, and resource administration. Several advisors are provided to help you analyze specific objects. 
The advisors report on a variety of aspects of the object and describe recommended actions. Oracle Database proactively sends alerts when a problem is anticipated or when any of the user-selected metrics exceeds its threshold.
In addition to its self-managing features, Oracle Database provides utilities to help you move data in and out of the database.
This section describes these server manageability topics:
■ Automatic Manageability Features
■ Data Utilities
Automatic Manageability Features
Oracle Database has a self-management infrastructure that allows the database to learn about itself and use this information to adapt to workload variations and to automatically remedy any potential problem. This section discusses the automatic manageability features of Oracle Database.
Automatic Workload Repository
Automatic Workload Repository (AWR) is a built-in repository in every Oracle Database. At regular intervals, the database makes a snapshot of all its vital statistics and workload information and stores them in AWR. By default, the snapshots are made every 60 minutes, but you can change this frequency. The snapshots are stored in the AWR for a period of time (seven days by default) after which they are automatically purged.
See Also:
■ Oracle Database Concepts and Oracle Database 2 Day DBA for overviews of the Automatic Workload Repository
■ Oracle Database Performance Tuning Guide for details on using Automatic Workload Repository for statistics collection
Automatic Maintenance Tasks
Oracle Database uses the information stored in AWR to identify the need to perform routine maintenance tasks, such as optimizer statistics refresh and rebuilding indexes. Then the database uses the Scheduler to run such tasks in a predefined maintenance window.
See Also:
■ Oracle Database Concepts for an overview of the maintenance window
■ Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window" for detailed information on using the predefined maintenance window
Server-Generated Alerts
Some problems cannot be resolved automatically and require the database administrator's attention. For these problems, such as space shortage, Oracle Database provides server-generated alerts to notify you when the problem arises. The alerts also provide recommendations on how to resolve the problem.
See Also: "Server-Generated Alerts" on page 4-18 in this book for detailed information on using APIs to administer server-generated alerts
Advisors
Oracle Database provides advisors to help you optimize a number of subsystems in the database. An advisory framework ensures consistency in the way in which advisors are invoked and results are reported. The advisors are used primarily by the database to optimize its own performance. However, they can also be invoked by administrators to get more insight into the functioning of a particular subcomponent.
See Also:
■ Oracle Database Concepts for an overview of the advisors
■ Oracle Database 2 Day DBA for more detailed information on using the advisors
Data Utilities
Several utilities are available to help you maintain the data in your Oracle Database. This section introduces two of these utilities:
■ SQL*Loader
■ Export and Import Utilities
See Also: Oracle Database Utilities for detailed information about these utilities
SQL*Loader
SQL*Loader is used both by database administrators and by other users of Oracle Database.
It loads data from standard operating system files (such as files in text or C data format) into database tables.
Export and Import Utilities
Oracle export and import utilities enable you to move existing data in Oracle format between one Oracle Database and another. For example, export files can archive database data or move data among different databases that run on the same or different operating systems.
2 Creating an Oracle Database
This chapter discusses the process of creating an Oracle Database, and contains the following topics:
■ Deciding How to Create an Oracle Database
■ Manually Creating an Oracle Database
■ Understanding the CREATE DATABASE Statement
■ Understanding Initialization Parameters
■ Troubleshooting Database Creation
■ Dropping a Database
■ Managing Initialization Parameters Using a Server Parameter File
■ Defining Application Services for Oracle Database 10g
■ Considerations After Creating a Database
■ Viewing Information About the Database
See Also:
■ Part III, "Automated File and Storage Management", for information about creating a database whose underlying operating system files are automatically created and managed by the Oracle Database server
■ Oracle Real Application Clusters Installation and Configuration Guide for additional information specific to an Oracle Real Application Clusters environment
Deciding How to Create an Oracle Database
You can create an Oracle Database in three ways:
■ Use the Database Configuration Assistant (DBCA). DBCA can be launched by the Oracle Universal Installer, depending upon the type of install that you select, and provides a graphical user interface (GUI) that guides you through the creation of a database. You can also launch DBCA as a standalone tool at any time after Oracle Database installation to create or make a copy (clone) of a database. Refer to Oracle Database 2 Day DBA for detailed information on creating a database using DBCA.
■ Use the CREATE DATABASE statement. You can use the CREATE DATABASE SQL statement to create a database. If you do so, you must complete additional actions before you have an operational database. These actions include creating users and temporary tablespaces, building views of the data dictionary tables, and installing Oracle built-in packages. These actions can be performed by executing prepared scripts, many of which are supplied for you. If you have existing scripts for creating your database, consider editing those scripts to take advantage of new Oracle Database features. Oracle provides a sample database creation script and a sample initialization parameter file with the Oracle Database software files. Both the script and the file can be edited to suit your needs. See "Manually Creating an Oracle Database" on page 2-2.
■ Upgrade an existing database. If you are already using an earlier release of Oracle Database, database creation is required only if you want an entirely new database. You can upgrade your existing Oracle Database and use it with the new release of the database software. The Oracle Database Upgrade Guide manual contains information about upgrading an existing Oracle Database.
The remainder of this chapter discusses creating a database manually.
Manually Creating an Oracle Database
This section takes you through the planning stage and the actual creation of the database.
Considerations Before Creating the Database Database creation prepares several operating system files to work together as an Oracle Database. You need only create a database once, regardless of how many datafiles it has or how many instances access it. You can create a database to erase information in an existing database and create a new database with the same name and physical structure. The following topics can help prepare you for database creation. ■ Planning for Database Creation ■ Meeting Creation Prerequisites Planning for Database Creation Prepare to create the database by research and careful planning. Table 2–1 lists some recommended actions: Table 2–1 Database Planning Tasks Action Additional Information Plan the database tables and indexes and estimate the amount of space they will require. Part II, "Oracle Database Structure and Storage" Part IV, "Schema Objects" 2-2 Oracle Database Administrator’s Guide Manually Creating an Oracle Database Table 2–1 (Cont.) Database Planning Tasks Action Additional Information Plan the layout of the underlying operating system files your database will comprise. Proper distribution of files can improve database performance dramatically by distributing the I/O during file access. You can distribute I/O in several ways when you install Oracle software and create your database. For example, you can place redo log files on separate disks or use striping. You can situate datafiles to reduce contention. And you can control data density (number of rows to a data block). Oracle Database Performance Tuning Guide Consider using Oracle-managed files and Automatic Storage Management to create and manage the operating system files that make up your database storage. Part III, "Automated File and Storage Management" Select the global database name, which is the name and location of the database within the network structure. Create the global database name by setting both the DB_NAME and DB_DOMAIN initialization parameters. "Determining the Global Database Name" on page 2-21 Familiarize yourself with the initialization parameters contained in the initialization parameter file. Become familiar with the concept and operation of a server parameter file. A server parameter file lets you store and manage your initialization parameters persistently in a server-side disk file. "Understanding Initialization Parameters" on page 2-19 Your Oracle operating system specific documentation "What Is a Server Parameter File?" on page 2-36 Oracle Database Reference Oracle Database Globalization Support All character data, including data in the data dictionary, is stored in Guide the database character set. You must specify the database character set when you create the database. Select the database character set. If clients using different character sets will access the database, then choose a superset that includes all client character sets. Otherwise, character conversions may be necessary at the cost of increased overhead and potential data loss. You can also specify an alternate character set. Caution: AL32UTF8 is the Oracle Database character set that is appropriate for XMLType data. It is equivalent to the IANA registered standard UTF-8 encoding, which supports all valid XML characters. Do not confuse Oracle Database database character set UTF8 (no hyphen) with database character set AL32UTF8 or with character encoding UTF-8. Database character set UTF8 has been superseded by AL32UTF8. Do not use UTF8 for XML data. 
UTF8 supports only Unicode version 3.1 and earlier; it does not support all valid XML characters. AL32UTF8 has no such limitation. Using database character set UTF8 for XML data could potentially cause a fatal error or affect security negatively. If a character that is not supported by the database character set appears in an input-document element name, a replacement character (usually "?") is substituted for it. This will terminate parsing and raise an exception. Consider what time zones your database must support. Oracle Database uses one of two time zone files as the source of valid time zones. The default time zone file is timezonelrg.dat. It contains more time zones than the other time zone file, timezone.dat. "Specifying the Database Time Zone File" on page 2-17 Creating an Oracle Database 2-3 Manually Creating an Oracle Database Table 2–1 (Cont.) Database Planning Tasks Action Additional Information Select the standard database block size. This is specified at database "Specifying Database creation by the DB_BLOCK_SIZE initialization parameter and Block Sizes" on page 2-23 cannot be changed after the database is created. The SYSTEM tablespace and most other tablespaces use the standard block size. Additionally, you can specify up to four nonstandard block sizes when creating tablespaces. Determine the appropriate initial sizing for the SYSAUX tablespace. "Creating the SYSAUX Tablespace" on page 2-12 Plan to use a default tablespace for non-SYSTEM users to prevent inadvertent saving of database objects in the SYSTEM tablespace. "Creating a Default Permanent Tablespace" on page 2-14 Plan to use an undo tablespace to manage your undo data. Chapter 10, "Managing the Undo Tablespace" Develop a backup and recovery strategy to protect the database from failure. It is important to protect the control file by multiplexing, to choose the appropriate backup mode, and to manage the online and archived redo logs. Chapter 6, "Managing the Redo Log" Chapter 7, "Managing Archived Redo Logs" Chapter 5, "Managing Control Files" Oracle Database Backup and Recovery Basics Familiarize yourself with the principles and options of starting up and shutting down an instance and mounting and opening a database. Chapter 3, "Starting Up and Shutting Down" Meeting Creation Prerequisites Before you can create a new database, the following prerequisites must be met: ■ ■ ■ ■ The desired Oracle software must be installed. This includes setting various environment variables unique to your operating system and establishing the directory structure for software and database files. You must have the operating system privileges associated with a fully operational database administrator. You must be specially authenticated by your operating system or through a password file, allowing you to start up and shut down an instance before the database is created or opened. This authentication is discussed in "Database Administrator Authentication" on page 1-11. Sufficient memory must be available to start the Oracle Database instance. Sufficient disk storage space must be available for the planned database on the computer that runs Oracle Database. All of these are discussed in the Oracle Database Installation Guide specific to your operating system. If you use the Oracle Universal Installer, it will guide you through your installation and provide help in setting environment variables and establishing directory structure and authorizations. 
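Before continuing, you can quickly confirm that the basic environment is in place by checking your operating system group membership, the Oracle software location, and the available disk space. The following UNIX commands are only a sketch; the /u01 mount point is an illustrative value:
% id
% echo $ORACLE_HOME
% df -k /u01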
Creating the Database This section presents the steps involved when you create a database manually. These steps should be followed in the order presented. The prerequisites described in the preceding section must already have been completed. That is, you have established the 2-4 Oracle Database Administrator’s Guide Manually Creating an Oracle Database environment for creating your Oracle Database, including most operating system dependent environmental variables, as part of the Oracle software installation process. Step 1: Decide on Your Instance Identifier (SID) Step 2: Establish the Database Administrator Authentication Method Step 3: Create the Initialization Parameter File Step 4: Connect to the Instance Step 5: Create a Server Parameter File (Recommended) Step 6: Start the Instance Step 7: Issue the CREATE DATABASE Statement Step 8: Create Additional Tablespaces Step 9: Run Scripts to Build Data Dictionary Views Step 10: Run Scripts to Install Additional Options (Optional) Step 11: Back Up the Database. The examples shown in these steps create an example database mynewdb. Notes: ■ ■ The steps in this section contain cross-references to other parts of this book and to other books. These cross-references take you to material that will help you to learn about and understand the initialization parameters and database structures with which you are not yet familiar. If you are using Oracle Automatic Storage Management to manage your disk storage, you must start the ASM instance and configure your disk groups before performing the following steps. For information about Automatic Storage Management, see Chapter 12, "Using Automatic Storage Management". Step 1: Decide on Your Instance Identifier (SID) An instance is made up of the system global area (SGA) and the background processes of an Oracle Database. Decide on a unique Oracle system identifier (SID) for your instance and set the ORACLE_SID environment variable accordingly. This identifier is used to distinguish this instance from other Oracle Database instances that you may create later and run concurrently on your system. The following example for UNIX operating systems sets the SID for the instance that you will connect to in Step 4: Connect to the Instance: % setenv ORACLE_SID mynewdb Step 2: Establish the Database Administrator Authentication Method You must be authenticated and granted appropriate system privileges in order to create a database. You can use the password file or operating system authentication method. Database administrator authentication and authorization is discussed in the following sections of this book: ■ "Database Administrator Security and Privileges" on page 1-9 Creating an Oracle Database 2-5 Manually Creating an Oracle Database ■ "Database Administrator Authentication" on page 1-11 ■ "Creating and Maintaining a Password File" on page 1-16 Step 3: Create the Initialization Parameter File When an Oracle instance starts, it reads an initialization parameter file. This file can be a read-only text file, which must be modified with a text editor, or a read/write binary file, which can be modified dynamically by the database (for tuning) or with SQL commands that you submit. The binary file, which is preferred, is called a server parameter file. In this step, you create a text initialization parameter file. In a later step, you can optionally create a server parameter file from the text file. 
One way to create the text initialization parameter file is to edit a copy of the sample initialization parameter file that Oracle provides on the distribution media, or the sample presented in this book. On Unix operating systems, the Oracle Universal Installer installs a sample text initialization parameter file in the following location: Note: $ORACLE_HOME/dbs/init.ora For convenience, store your initialization parameter file in the Oracle Database default location, using the default name. Then when you start your database, it will not be necessary to specify the PFILE clause of the STARTUP command, because Oracle Database automatically looks in the default location for the initialization parameter file. For name, location, and sample content for the initialization parameter file, and for a discussion of how to set initialization parameters, see "Understanding Initialization Parameters" on page 2-19. Step 4: Connect to the Instance Start SQL*Plus and connect to your Oracle Database instance AS SYSDBA. $ SQLPLUS /nolog CONNECT SYS/password AS SYSDBA Step 5: Create a Server Parameter File (Recommended) Oracle recommends that you create a server parameter file. The server parameter file enables you to change initialization parameters with database commands and persist the changes across a shutdown and startup. You create the server parameter file from your edited text initialization file. For more information, see "Managing Initialization Parameters Using a Server Parameter File" on page 2-35. The following script creates a server parameter file from the text initialization parameter file and writes it to the default location. The script can be executed before or after instance startup, but after you connect as SYSDBA. The database must be restarted before the server parameter file takes effect. -- create the server parameter file CREATE SPFILE='/u01/oracle/dbs/spfilemynewdb.ora' FROM PFILE='/u01/oracle/admin/initmynewdb/scripts/init.ora'; SHUTDOWN -- the next startup will use the server parameter file EXIT 2-6 Oracle Database Administrator’s Guide Manually Creating an Oracle Database Step 6: Start the Instance Start an instance without mounting a database. Typically, you do this only during database creation or while performing maintenance on the database. Use the STARTUP command with the NOMOUNT clause. In this example, because the server parameter file is stored in the default location, you are not required to specify the PFILE clause: STARTUP NOMOUNT At this point, the SGA is created and background processes are started in preparation for the creation of a new database. The database itself does not yet exist. See Also: ■ ■ "Managing Initialization Parameters Using a Server Parameter File" on page 2-35 Chapter 3, "Starting Up and Shutting Down", to learn how to use the STARTUP command Step 7: Issue the CREATE DATABASE Statement To create the new database, use the CREATE DATABASE statement. 
The following statement creates database mynewdb: CREATE DATABASE mynewdb USER SYS IDENTIFIED BY pz6r58 USER SYSTEM IDENTIFIED BY y1tz5p LOGFILE GROUP 1 ('/u01/oracle/oradata/mynewdb/redo01.log') SIZE 100M, GROUP 2 ('/u01/oracle/oradata/mynewdb/redo02.log') SIZE 100M, GROUP 3 ('/u01/oracle/oradata/mynewdb/redo03.log') SIZE 100M MAXLOGFILES 5 MAXLOGMEMBERS 5 MAXLOGHISTORY 1 MAXDATAFILES 100 MAXINSTANCES 1 CHARACTER SET US7ASCII NATIONAL CHARACTER SET AL16UTF16 DATAFILE '/u01/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE EXTENT MANAGEMENT LOCAL SYSAUX DATAFILE '/u01/oracle/oradata/mynewdb/sysaux01.dbf' SIZE 325M REUSE DEFAULT TABLESPACE tbs_1 DEFAULT TEMPORARY TABLESPACE tempts1 TEMPFILE '/u01/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE UNDO TABLESPACE undotbs DATAFILE '/u01/oracle/oradata/mynewdb/undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; A database is created with the following characteristics: ■ ■ ■ The database is named mynewdb. Its global database name is mynewdb.us.oracle.com. See "DB_NAME Initialization Parameter" and "DB_DOMAIN Initialization Parameter" on page 2-22. Three control files are created as specified by the CONTROL_FILES initialization parameter, which was set before database creation in the initialization parameter file. See "Sample Initialization Parameter File" on page 2-20 and "Specifying Control Files" on page 2-22. The password for user SYS is pz6r58 and the password for SYSTEM is y1tz5p. The two clauses that specify the passwords for SYS and SYSTEM are not Creating an Oracle Database 2-7 Manually Creating an Oracle Database mandatory in this release of Oracle Database. However, if you specify either clause, you must specify both clauses. For further information about the use of these clauses, see "Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM" on page 2-10. ■ ■ The new database has three redo log files as specified in the LOGFILE clause. MAXLOGFILES, MAXLOGMEMBERS, and MAXLOGHISTORY define limits for the redo log. See Chapter 6, "Managing the Redo Log". MAXDATAFILES specifies the maximum number of datafiles that can be open in the database. This number affects the initial sizing of the control file. You can set several limits during database creation. Some of these limits are limited by and affected by operating system limits. For example, if you set MAXDATAFILES, Oracle Database allocates enough space in the control file to store MAXDATAFILES filenames, even if the database has only one datafile initially. However, because the maximum control file size is limited and operating system dependent, you might not be able to set all CREATE DATABASE parameters at their theoretical maximums. Note: For more information about setting limits during database creation, see the Oracle Database SQL Reference and your operating system specific Oracle documentation. ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ MAXINSTANCES specifies that only one instance can have this database mounted and open. The US7ASCII character set is used to store data in this database. The AL16UTF16 character set is specified as the NATIONAL CHARACTER SET, used to store data in columns specifically defined as NCHAR, NCLOB, or NVARCHAR2. The SYSTEM tablespace, consisting of the operating system file /u01/oracle/oradata/mynewdb/system01.dbf is created as specified by the DATAFILE clause. If a file with that name already exists, it is overwritten. The SYSTEM tablespace is a locally managed tablespace. See "Creating a Locally Managed SYSTEM Tablespace" on page 2-11. 
A SYSAUX tablespace is created, consisting of the operating system file /u01/oracle/oradata/mynewdb/sysaux01.dbf as specified in the SYSAUX DATAFILE clause. See "Creating the SYSAUX Tablespace" on page 2-12. The DEFAULT TABLESPACE clause creates and names a default permanent tablespace for this database. The DEFAULT TEMPORARY TABLESPACE clause creates and names a default temporary tablespace for this database. See "Creating a Default Temporary Tablespace" on page 2-14. The UNDO TABLESPACE clause creates and names an undo tablespace that is used to store undo data for this database if you have specified UNDO_MANAGEMENT=AUTO in the initialization parameter file. See "Using Automatic Undo Management: Creating an Undo Tablespace" on page 2-13. Redo log files will not initially be archived, because the ARCHIVELOG clause is not specified in this CREATE DATABASE statement. This is customary during database creation. You can later use an ALTER DATABASE statement to switch to 2-8 Oracle Database Administrator’s Guide Manually Creating an Oracle Database ARCHIVELOG mode. The initialization parameters in the initialization parameter file for mynewdb relating to archiving are LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT. See Chapter 7, "Managing Archived Redo Logs". See Also: ■ ■ "Understanding the CREATE DATABASE Statement" on page 2-10 Oracle Database SQL Reference for more information about specifying the clauses and parameter values for the CREATE DATABASE statement Step 8: Create Additional Tablespaces To make the database functional, you need to create additional files and tablespaces for users. The following sample script creates some additional tablespaces: CONNECT SYS/password AS SYSDBA -- create a user tablespace to be assigned as the default tablespace for users CREATE TABLESPACE users LOGGING DATAFILE '/u01/oracle/oradata/mynewdb/users01.dbf' SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL; -- create a tablespace for indexes, separate from user tablespace CREATE TABLESPACE indx LOGGING DATAFILE '/u01/oracle/oradata/mynewdb/indx01.dbf' SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL; For information about creating tablespaces, see Chapter 8, "Managing Tablespaces". Step 9: Run Scripts to Build Data Dictionary Views Run the scripts necessary to build views, synonyms, and PL/SQL packages: CONNECT SYS/password AS SYSDBA @/u01/oracle/rdbms/admin/catalog.sql @/u01/oracle/rdbms/admin/catproc.sql EXIT The following table contains descriptions of the scripts: Script Description CATALOG.SQL Creates the views of the data dictionary tables, the dynamic performance views, and public synonyms for many of the views. Grants PUBLIC access to the synonyms. CATPROC.SQL Runs all scripts required for or used with PL/SQL. Step 10: Run Scripts to Install Additional Options (Optional) You may want to run other scripts. The scripts that you run are determined by the features and options you choose to use or install. Many of the scripts available to you are described in the Oracle Database Reference. If you plan to install other Oracle products to work with this database, see the installation instructions for those products. Some products require you to create additional data dictionary tables. Usually, command files are provided to create and load these tables into the database data dictionary. 
Creating an Oracle Database 2-9 Understanding the CREATE DATABASE Statement See your Oracle documentation for the specific products that you plan to install for installation and administration instructions. Step 11: Back Up the Database. Take a full backup of the database to ensure that you have a complete set of files from which to recover if a media failure occurs. For information on backing up a database, see Oracle Database Backup and Recovery Basics. Understanding the CREATE DATABASE Statement When you execute a CREATE DATABASE statement, Oracle Database performs (at least) a number of operations. The actual operations performed depend on the clauses that you specify in the CREATE DATABASE statement and the initialization parameters that you have set. Oracle Database performs at least these operations: ■ Creates the datafiles for the database ■ Creates the control files for the database ■ Creates the redo log files for the database and establishes the ARCHIVELOG mode. ■ Creates the SYSTEM tablespace ■ Creates the SYSAUX tablespace ■ Creates the data dictionary ■ Sets the character set that stores data in the database ■ Sets the database time zone ■ Mounts and opens the database for use This section discusses several of the clauses of the CREATE DATABASE statement. It expands upon some of the clauses discussed in "Step 7: Issue the CREATE DATABASE Statement" on page 2-7 and introduces additional ones. Many of the CREATE DATABASES clauses discussed here can be used to simplify the creation and management of your database. The following topics are contained in this section: ■ Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM ■ Creating a Locally Managed SYSTEM Tablespace ■ Creating the SYSAUX Tablespace ■ Using Automatic Undo Management: Creating an Undo Tablespace ■ Creating a Default Temporary Tablespace ■ Specifying Oracle-Managed Files at Database Creation ■ Supporting Bigfile Tablespaces During Database Creation ■ Specifying the Database Time Zone and Time Zone File ■ Specifying FORCE LOGGING Mode Protecting Your Database: Specifying Passwords for Users SYS and SYSTEM The clauses of the CREATE DATABASE statement used for specifying the passwords for users SYS and SYSTEM are: ■ USER SYS IDENTIFIED BY password 2-10 Oracle Database Administrator’s Guide Understanding the CREATE DATABASE Statement ■ USER SYSTEM IDENTIFIED BY password If you omit these clauses, these users are assigned the default passwords change_on_install and manager, respectively. A record is written to the alert log indicating that the default passwords were used. To protect your database, you should change these passwords using the ALTER USER statement immediately after database creation. Oracle strongly recommends that you specify these clauses, even though they are optional in this release of Oracle Database. The default passwords are commonly known, and if you neglect to change them later, you leave database vulnerable to attack by malicious users. See Also: "Some Security Considerations" on page 2-44 Creating a Locally Managed SYSTEM Tablespace Specify the EXTENT MANAGEMENT LOCAL clause in the CREATE DATABASE statement to create a locally managed SYSTEM tablespace. The COMPATIBLE initialization parameter must be set to 9.2 or higher for this statement to be successful. If you do not specify the EXTENT MANAGEMENT LOCAL clause, by default the database creates a dictionary-managed SYSTEM tablespace. Dictionary-managed tablespaces are deprecated. 
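Once the database exists, you can confirm that the SYSTEM tablespace was in fact created locally managed. The following query is an illustrative check, not part of the original procedure, and assumes you are connected as a user with access to the DBA views:
SELECT tablespace_name, extent_management
FROM DBA_TABLESPACES
WHERE tablespace_name = 'SYSTEM';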
A locally managed SYSTEM tablespace has AUTOALLOCATE enabled by default, which means that the system determines and controls the number and size of extents. You may notice an increase in the initial size of objects created in a locally managed SYSTEM tablespace because of the autoallocate policy. It is not possible to create a locally managed SYSTEM tablespace and specify UNIFORM extent size.
When you create your database with a locally managed SYSTEM tablespace, ensure that the following conditions are met:
■ A default temporary tablespace must exist, and that tablespace cannot be the SYSTEM tablespace. To meet this condition, you can specify the DEFAULT TEMPORARY TABLESPACE clause in the CREATE DATABASE statement, or you can omit the clause and let Oracle Database create the tablespace for you using a default name and in a default location.
■ You can include the UNDO TABLESPACE clause in the CREATE DATABASE statement to create a specific undo tablespace. If you omit that clause, Oracle Database creates a locally managed undo tablespace for you using the default name and in a default location.
See Also:
■ Oracle Database SQL Reference for more specific information about the use of the DEFAULT TEMPORARY TABLESPACE and UNDO TABLESPACE clauses when EXTENT MANAGEMENT LOCAL is specified for the SYSTEM tablespace
■ "Locally Managed Tablespaces" on page 8-3
■ "Migrating the SYSTEM Tablespace to a Locally Managed Tablespace" on page 8-24
Creating the SYSAUX Tablespace
The SYSAUX tablespace is always created at database creation. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle Database features and products that previously required their own tablespaces, it reduces the number of tablespaces required by the database and that you must maintain. Other functionality or features that previously used the SYSTEM tablespace can now use the SYSAUX tablespace, thus reducing the load on the SYSTEM tablespace.
You can specify only datafile attributes for the SYSAUX tablespace, using the SYSAUX DATAFILE clause in the CREATE DATABASE statement. Mandatory attributes of the SYSAUX tablespace are set by Oracle Database and include:
■ PERMANENT
■ READ WRITE
■ EXTENT MANAGEMENT LOCAL
■ SEGMENT SPACE MANAGEMENT AUTO
You cannot alter these attributes with an ALTER TABLESPACE statement, and any attempt to do so will result in an error. You cannot drop or rename the SYSAUX tablespace.
The size of the SYSAUX tablespace is determined by the size of the database components that occupy SYSAUX. See Table 2–2 for a list of all SYSAUX occupants. Based on the initial sizes of these components, the SYSAUX tablespace needs to be at least 240 MB at the time of database creation. The space requirements of the SYSAUX tablespace will increase after the database is fully deployed, depending on the nature of its use and workload. For more information on how to manage the space consumption of the SYSAUX tablespace on an ongoing basis, see "Managing the SYSAUX Tablespace" on page 8-20.
If you include a DATAFILE clause for the SYSTEM tablespace, then you must specify the SYSAUX DATAFILE clause as well, or the CREATE DATABASE statement will fail. This requirement does not exist if the Oracle-managed files feature is enabled (see "Specifying Oracle-Managed Files at Database Creation" on page 2-15).
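To see which components currently occupy the SYSAUX tablespace and how much space each uses, you can query the V$SYSAUX_OCCUPANTS view. This query is offered as an illustrative sketch rather than part of the original text:
SELECT occupant_name, schema_name, space_usage_kbytes
FROM V$SYSAUX_OCCUPANTS
ORDER BY space_usage_kbytes DESC;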
If you issue the CREATE DATABASE statement with no other clauses, then the software creates a default database with datafiles for the SYSTEM and SYSAUX tablespaces stored in system-determined default locations, or where specified by an Oracle-managed files initialization parameter. The SYSAUX tablespace has the same security attributes as the SYSTEM tablespace.
Note: This book discusses the creation of the SYSAUX tablespace at database creation. When upgrading from a release of Oracle Database that did not require the SYSAUX tablespace, you must create the SYSAUX tablespace as part of the upgrade process. This is discussed in Oracle Database Upgrade Guide.
Table 2–2 lists the components that use the SYSAUX tablespace as their default tablespace during installation, and the tablespace in which they were stored in earlier releases:
Table 2–2 Database Components and the SYSAUX Tablespace
Component Using SYSAUX: Tablespace in Earlier Releases
Analytical Workspace Object Table: SYSTEM
Enterprise Manager Repository: OEM_REPOSITORY
LogMiner: SYSTEM
Logical Standby: SYSTEM
OLAP API History Tables: CWMLITE
Oracle Data Mining: ODM
Oracle Spatial: SYSTEM
Oracle Streams: SYSTEM
Oracle Text: DRSYS
Oracle Ultra Search: DRSYS
Oracle interMedia ORDPLUGINS Components: SYSTEM
Oracle interMedia ORDSYS Components: SYSTEM
Oracle interMedia SI_INFORMTN_SCHEMA Components: SYSTEM
Server Manageability Components: New in Oracle Database 10g
Statspack Repository: User-defined
Oracle Scheduler: New in Oracle Database 10g
Workspace Manager: SYSTEM
The installation procedures for these components provide the means of establishing their occupancy of the SYSAUX tablespace.
See Also: "Managing the SYSAUX Tablespace" on page 8-20 for information about managing the SYSAUX tablespace
Using Automatic Undo Management: Creating an Undo Tablespace
Automatic undo management uses an undo tablespace. To enable automatic undo management, set the UNDO_MANAGEMENT initialization parameter to AUTO in your initialization parameter file. In this mode, undo data is stored in an undo tablespace and is managed by Oracle Database. If you want to define and name the undo tablespace yourself, you must also include the UNDO TABLESPACE clause in the CREATE DATABASE statement at database creation time. If you omit this clause, and automatic undo management is enabled (by setting the UNDO_MANAGEMENT initialization parameter to AUTO), the database creates a default undo tablespace named SYS_UNDOTBS.
See Also:
■ "Specifying the Method of Undo Space Management" on page 2-33
■ Chapter 10, "Managing the Undo Tablespace", for information about the creation and use of undo tablespaces
Creating a Default Permanent Tablespace
The DEFAULT TABLESPACE clause of the CREATE DATABASE statement specifies a default permanent tablespace for the database. Oracle Database assigns to this tablespace any non-SYSTEM users for whom you do not explicitly specify a different permanent tablespace. If you do not specify this clause, then the SYSTEM tablespace is the default permanent tablespace for non-SYSTEM users. Oracle strongly recommends that you create a default permanent tablespace.
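As an illustrative check that is not part of the original text, you can verify which tablespace is currently the default permanent tablespace by querying DATABASE_PROPERTIES; the property name DEFAULT_PERMANENT_TABLESPACE shown here is assumed:
SELECT property_value
FROM DATABASE_PROPERTIES
WHERE property_name = 'DEFAULT_PERMANENT_TABLESPACE';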
Oracle Database SQL Reference for the syntax of the DEFAULT TABLESPACE clause of CREATE DATABASE and ALTER DATABASE See Also: Creating a Default Temporary Tablespace The DEFAULT TEMPORARY TABLESPACE clause of the CREATE DATABASE statement creates a default temporary tablespace for the database. Oracle Database assigns this tablespace as the temporary tablespace for users who are not explicitly assigned a temporary tablespace. You can explicitly assign a temporary tablespace or tablespace group to a user in the CREATE USER statement. However, if you do not do so, and if no default temporary tablespace has been specified for the database, then by default these users are assigned the SYSTEM tablespace as their temporary tablespace. It is not good practice to store temporary data in the SYSTEM tablespace, and it is cumbersome to assign every user a temporary tablespace individually. Therefore, Oracle recommends that you use the DEFAULT TEMPORARY TABLESPACE clause of CREATE DATABASE. When you specify a locally managed SYSTEM tablespace, the SYSTEM tablespace cannot be used as a temporary tablespace. In this case the database creates a default temporary tablespace. This behavior is explained in "Creating a Locally Managed SYSTEM Tablespace" on page 2-11. Note: You can add or change the default temporary tablespace after database creation. You do this by creating a new temporary tablespace or tablespace group with a CREATE TEMPORARY TABLESPACE statement, and then assign it as the temporary tablespace using the ALTER DATABASE DEFAULT TEMPORARY TABLESPACE statement. Users will automatically be switched (or assigned) to the new default temporary tablespace. The following statement assigns a new default temporary tablespace: ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tempts2; The new default temporary tablespace must already exist. When using a locally managed SYSTEM tablespace, the new default temporary tablespace must also be locally managed. You cannot drop or take offline a default temporary tablespace, but you can assign a new default temporary tablespace and then drop or take offline the former one. You cannot change a default temporary tablespace to a permanent tablespace. Users can obtain the name of the current default temporary tablespace by querying the PROPERTY_NAME and PROPERTY_VALUE columns of the DATABASE_PROPERTIES view. These columns contain the values "DEFAULT_TEMP_TABLESPACE" and the default temporary tablespace name, respectively. 2-14 Oracle Database Administrator’s Guide Understanding the CREATE DATABASE Statement See Also: ■ ■ ■ Oracle Database SQL Reference for the syntax of the DEFAULT TEMPORARY TABLESPACE clause of CREATE DATABASE and ALTER DATABASE "Temporary Tablespaces" on page 8-8 for information about creating and using temporary tablespaces "Multiple Temporary Tablespaces: Using Tablespace Groups" on page 8-11 for information about creating and using temporary tablespace groups Specifying Oracle-Managed Files at Database Creation You can minimize the number of clauses and parameters that you specify in your CREATE DATABASE statement by using the Oracle-managed files feature. You do this either by specifying a directory in which your files are created and managed by Oracle Database, or by using Automatic Storage Management. When you use Automatic Storage Management, you specify a disk group in which the database creates and manages your files, including file redundancy and striping. 
By including any of the initialization parameters DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, or DB_RECOVERY_FILE_DEST in your initialization parameter file, you instruct Oracle Database to create and manage the underlying operating system files of your database. Oracle Database will automatically create and manage the operating system files for the following database structures, depending on which initialization parameters you specify and how you specify clauses in your CREATE DATABASE statement:
■ Tablespaces
■ Temporary tablespaces
■ Control files
■ Redo log files
■ Archive log files
■ Flashback logs
■ Block change tracking files
■ RMAN backups
See Also: "Specifying a Flash Recovery Area" on page 2-22 for information about setting initialization parameters that create a flash recovery area
The following CREATE DATABASE statement shows briefly how the Oracle-managed files feature works, assuming you have specified required initialization parameters:
CREATE DATABASE rbdb1
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
UNDO TABLESPACE undotbs
DEFAULT TEMPORARY TABLESPACE tempts1;
■ No DATAFILE clause is specified, so the database creates an Oracle-managed datafile for the SYSTEM tablespace.
■ No LOGFILE clauses are included, so the database creates two Oracle-managed redo log file groups.
■ No SYSAUX DATAFILE is included, so the database creates an Oracle-managed datafile for the SYSAUX tablespace.
■ No DATAFILE subclause is specified for the UNDO TABLESPACE clause, so the database creates an Oracle-managed datafile for the undo tablespace.
■ No TEMPFILE subclause is specified for the DEFAULT TEMPORARY TABLESPACE clause, so the database creates an Oracle-managed tempfile.
■ If no CONTROL_FILES initialization parameter is specified in the initialization parameter file, then the database also creates an Oracle-managed control file.
■ If you are using a server parameter file (see "Managing Initialization Parameters Using a Server Parameter File" on page 2-35), the database automatically sets the appropriate initialization parameters.
See Also:
■ Chapter 11, "Using Oracle-Managed Files", for information about the Oracle-managed files feature and how to use it
■ Chapter 12, "Using Automatic Storage Management", for information about Automatic Storage Management
Supporting Bigfile Tablespaces During Database Creation
Oracle Database simplifies management of tablespaces and enables support for ultra-large databases by letting you create bigfile tablespaces. Bigfile tablespaces can contain only one file, but that file can have up to 4G blocks. The maximum number of datafiles in an Oracle Database is limited (usually to 64K files). Therefore, bigfile tablespaces can significantly enhance the storage capacity of an Oracle Database.
This section discusses the clauses of the CREATE DATABASE statement that let you include support for bigfile tablespaces.
See Also: "Bigfile Tablespaces" on page 8-6 for more information about bigfile tablespaces
Specifying the Default Tablespace Type
The SET DEFAULT...TABLESPACE clause of the CREATE DATABASE statement determines the default type of tablespace for this database in subsequent CREATE TABLESPACE statements. Specify either SET DEFAULT BIGFILE TABLESPACE or SET DEFAULT SMALLFILE TABLESPACE. If you omit this clause, the default is a smallfile tablespace, which is the traditional type of Oracle Database tablespace.
A smallfile tablespace can contain up to 1022 files with up to 4M blocks each.
The use of bigfile tablespaces further enhances the Oracle-managed files feature, because bigfile tablespaces make datafiles completely transparent for users. SQL syntax for the ALTER TABLESPACE statement has been extended to allow you to perform operations on tablespaces, rather than the underlying datafiles.
The CREATE DATABASE statement shown in "Specifying Oracle-Managed Files at Database Creation" on page 2-15 can be modified as follows to specify that the default type of tablespace is a bigfile tablespace:
CREATE DATABASE rbdb1
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
SET DEFAULT BIGFILE TABLESPACE
UNDO TABLESPACE undotbs
DEFAULT TEMPORARY TABLESPACE tempts1;
To dynamically change the default tablespace type after database creation, use the SET DEFAULT TABLESPACE clause of the ALTER DATABASE statement:
ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE;
You can determine the current default tablespace type for the database by querying the DATABASE_PROPERTIES data dictionary view as follows:
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE';
Overriding the Default Tablespace Type
The SYSTEM and SYSAUX tablespaces are always created with the default tablespace type. However, you can explicitly override the default tablespace type for the UNDO and DEFAULT TEMPORARY tablespace during the CREATE DATABASE operation.
For example, you can create a bigfile UNDO tablespace in a database with the default tablespace type of smallfile as follows:
CREATE DATABASE rbdb1
...
BIGFILE UNDO TABLESPACE undotbs
DATAFILE '/u01/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
You can create a smallfile DEFAULT TEMPORARY tablespace in a database with the default tablespace type of bigfile as follows:
CREATE DATABASE rbdb1
SET DEFAULT BIGFILE TABLESPACE
...
SMALLFILE DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/u01/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
...
Specifying the Database Time Zone and Time Zone File
You can specify the database time zone and the supporting time zone file.
Setting the Database Time Zone
Set the database time zone when the database is created by using the SET TIME_ZONE clause of the CREATE DATABASE statement. If you do not set the database time zone, then it defaults to the time zone of the server's operating system. You can change the time zone for a session by using the SET TIME_ZONE clause of the ALTER SESSION statement.
See Also: Oracle Database Globalization Support Guide for more information about setting the database time zone
Specifying the Database Time Zone File
Two time zone files are included in the Oracle home directory. The default time zone file is $ORACLE_HOME/oracore/zoneinfo/timezonelrg.dat. A smaller time zone file can be found in $ORACLE_HOME/oracore/zoneinfo/timezone.dat.
If you are already using the smaller time zone file and you want to continue to use it in an Oracle Database 10g environment, or if you want to use the smaller time zone file instead of the default time zone file, then complete the following tasks:
1. Shut down the database.
2. Set the ORA_TZFILE environment variable to the full path name of the timezone.dat file.
3. Restart the database.
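A minimal sketch of these tasks on a UNIX system follows. It assumes a Bourne-style shell and that the instance is restarted from the same environment; the exact commands are illustrative and not part of the original text:
SHUTDOWN IMMEDIATE
EXIT
$ export ORA_TZFILE=$ORACLE_HOME/oracore/zoneinfo/timezone.dat
$ SQLPLUS /nolog
CONNECT SYS/password AS SYSDBA
STARTUP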
If you are already using the default time zone file, then it is not practical to change to the smaller time zone file, because the database may contain data with time zones that are not part of the smaller time zone file. All databases that share information must use the same time zone file.
The time zone files contain the valid time zone names. The following information is also included for each time zone:
■ Offset from Coordinated Universal Time (UTC)
■ Transition times for Daylight Saving Time
■ Abbreviations for standard time and Daylight Saving Time
To view the time zone names in the file being used by your database, use the following query:
SELECT * FROM V$TIMEZONE_NAMES;
Specifying FORCE LOGGING Mode
Some data definition language statements (such as CREATE TABLE) allow the NOLOGGING clause, which causes some database operations not to generate redo records in the database redo log. The NOLOGGING setting can speed up operations that can be easily recovered outside of the database recovery mechanisms, but it can negatively affect media recovery and standby databases.
Oracle Database lets you force the writing of redo records even when NOLOGGING has been specified in DDL statements. The database never generates redo records for temporary tablespaces and temporary segments, so forced logging has no effect for these objects.
See Also: Oracle Database SQL Reference for information about operations that can be done in NOLOGGING mode
Using the FORCE LOGGING Clause
To put the database into FORCE LOGGING mode, use the FORCE LOGGING clause in the CREATE DATABASE statement. If you do not specify this clause, the database is not placed into FORCE LOGGING mode.
Use the ALTER DATABASE statement to place the database into FORCE LOGGING mode after database creation. This statement can take a considerable time for completion, because it waits for all unlogged direct writes to complete.
You can cancel FORCE LOGGING mode using the following SQL statement:
ALTER DATABASE NO FORCE LOGGING;
Independent of specifying FORCE LOGGING for the database, you can selectively specify FORCE LOGGING or NO FORCE LOGGING at the tablespace level. However, if FORCE LOGGING mode is in effect for the database, it takes precedence over the tablespace setting. If it is not in effect for the database, then the individual tablespace settings are enforced. Oracle recommends that either the entire database be placed into FORCE LOGGING mode, or individual tablespaces be placed into FORCE LOGGING mode, but not both.
The FORCE LOGGING mode is a persistent attribute of the database. That is, if the database is shut down and restarted, it remains in the same logging mode. However, if you re-create the control file, the database is not restarted in FORCE LOGGING mode unless you specify the FORCE LOGGING clause in the CREATE CONTROLFILE statement.
See Also: "Controlling the Writing of Redo Records" on page 8-13 for information about using the FORCE LOGGING clause for tablespace creation
Performance Considerations of FORCE LOGGING Mode
FORCE LOGGING mode results in some performance degradation. If the primary reason for specifying FORCE LOGGING is to ensure complete media recovery, and there is no standby database active, then consider the following:
■ How many media failures are likely to happen?
■ How serious is the damage if unlogged direct writes cannot be recovered?
■ Is the performance degradation caused by forced logging tolerable?
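If you decide to use it, the statement to place an existing database into FORCE LOGGING mode, and a query to confirm the current setting, are shown below as a small illustration (not part of the original text):
ALTER DATABASE FORCE LOGGING;
SELECT force_logging FROM V$DATABASE;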
If the database is running in NOARCHIVELOG mode, then generally there is no benefit to placing the database in FORCE LOGGING mode. Media recovery is not possible in NOARCHIVELOG mode, so if you combine it with FORCE LOGGING, the result may be performance degradation with little benefit.
Understanding Initialization Parameters
When an Oracle instance starts, it reads initialization parameters from an initialization parameter file. This file can be either a read-only text file, or a read/write binary file. The binary file is called a server parameter file, and it always resides on the server. A server parameter file enables you to change initialization parameters with ALTER SYSTEM commands and to persist the changes across a shutdown and startup. It also provides a basis for self-tuning by the Oracle Database server. For these reasons, it is recommended that you use a server parameter file. You can create one from your edited text initialization file or by using the Database Configuration Assistant.
Before you create a server parameter file, you can start an instance with a text initialization parameter file. Upon startup, the Oracle instance first searches for a server parameter file in a default location, and if it does not find one, searches for a text initialization parameter file. You can also override an existing server parameter file by naming a text initialization parameter file as an argument of the STARTUP command.
For more information on server parameter files, see "Managing Initialization Parameters Using a Server Parameter File" on page 2-35. For more information on the STARTUP command, see "Understanding Initialization Parameter Files" on page 3-2.
Default file names and locations for the text initialization parameter file are shown in the following table:
Platform: UNIX and Linux
Default Name: init$ORACLE_SID.ora (for example, the initialization parameter file for the mynewdb database is named initmynewdb.ora)
Default Location: $ORACLE_HOME/dbs (for example, the initialization parameter file for the mynewdb database is stored in /u01/oracle/dbs/initmynewdb.ora)
Platform: Windows
Default Name: init%ORACLE_SID%.ora
Default Location: %ORACLE_HOME%\database
Sample Initialization Parameter File
The following is an example of an initialization parameter file used to create a database on a UNIX system.
control_files = (/u0d/lcg03/control.001.dbf,
                 /u0d/lcg03/control.002.dbf,
                 /u0d/lcg03/control.003.dbf)
db_name = lcg03
db_domain = us.oracle.com
log_archive_dest_1 = "LOCATION=/net/fstlcg03/private/yaliu/testlog/log.lcg03.fstlcg03/lcg03/arch"
log_archive_dest_state_1 = enable
db_block_size = 8192
pga_aggregate_target = 2500M
processes = 1000
sessions = 1200
open_cursors = 1024
undo_management = AUTO
shared_servers = 3
remote_listener = tnsfstlcg03
undo_tablespace = smu_nd1
compatible = 10.2.0
sga_target = 1500M
nls_language = AMERICAN
nls_territory = AMERICA
db_recovery_file_dest = /net/fstlcg03/private/yaliu/testlog/log.lcg03.fstlcg03/lcg03/arch
db_recovery_file_dest_size = 100G
Oracle Database provides generally appropriate values in the sample initialization parameter file provided with your database software or created for you by the Database Configuration Assistant. You can edit these Oracle-supplied initialization parameters and add others, depending upon your configuration and options and how you plan to tune the database.
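A simple way to see the value a running instance is actually using for a given parameter, and whether that value came from the parameter file or is an Oracle-supplied default, is shown below; db_block_size is used only as an example and these statements are illustrative, not part of the original text:
SHOW PARAMETER db_block_size
SELECT name, value, isdefault
FROM V$PARAMETER
WHERE name = 'db_block_size';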
For any relevant initialization parameters not specifically included in the initialization parameter file, the database supplies defaults. If you are creating an Oracle Database for the first time, Oracle suggests that you minimize the number of parameter values that you alter. As you become more familiar with your database and environment, you can dynamically tune many initialization 2-20 Oracle Database Administrator’s Guide Understanding Initialization Parameters parameters using the ALTER SYSTEM statement. If you are using a text initialization parameter file, your changes are effective only for the current instance. To make them permanent, you must update them manually in the initialization parameter file, or they will be lost over the next shutdown and startup of the database. If you are using a server parameter file, initialization parameter file changes made by the ALTER SYSTEM statement can persist across shutdown and startup. This is discussed in "Managing Initialization Parameters Using a Server Parameter File". This section introduces you to some of the basic initialization parameters you can add or edit before you create your new database. The following topics are contained in this section: ■ Determining the Global Database Name ■ Specifying a Flash Recovery Area ■ Specifying Control Files ■ Specifying Database Block Sizes ■ Managing the System Global Area (SGA) ■ Specifying the Maximum Number of Processes ■ Specifying the Method of Undo Space Management ■ The COMPATIBLE Initialization Parameter and Irreversible Compatibility ■ Setting the License Parameter Oracle Database Reference for descriptions of all initialization parameters including their default settings See Also: Determining the Global Database Name The global database name consists of the user-specified local database name and the location of the database within a network structure. The DB_NAME initialization parameter determines the local name component of the database name, and the DB_DOMAIN parameter indicates the domain (logical location) within a network structure. The combination of the settings for these two parameters must form a database name that is unique within a network. For example, to create a database with a global database name of test.us.acme.com, edit the parameters of the new parameter file as follows: DB_NAME = test DB_DOMAIN = us.acme.com You can rename the GLOBAL_NAME of your database using the ALTER DATABASE RENAME GLOBAL_NAME statement. However, you must also shut down and restart the database after first changing the DB_NAME and DB_DOMAIN initialization parameters and re-creating the control file. Oracle Database Utilities for information about using the DBNEWID utility, which is another means of changing a database name See Also: DB_NAME Initialization Parameter DB_NAME must be set to a text string of no more than eight characters. During database creation, the name provided for DB_NAME is recorded in the datafiles, redo log files, and control file of the database. If during database instance startup the value of the Creating an Oracle Database 2-21 Understanding Initialization Parameters DB_NAME parameter (in the parameter file) and the database name in the control file are not the same, the database does not start. DB_DOMAIN Initialization Parameter DB_DOMAIN is a text string that specifies the network domain where the database is created. This is typically the name of the organization that owns the database. 
If the database you are about to create will ever be part of a distributed database system, give special attention to this initialization parameter before database creation.
See Also: Part VII, "Distributed Database Management" for more information about distributed databases
Specifying a Flash Recovery Area
A flash recovery area is a location in which Oracle Database can store and manage files related to backup and recovery. It is distinct from the database area, which is a location for the Oracle-managed current database files (datafiles, control files, and online redo logs).
You specify a flash recovery area with the following initialization parameters:
■ DB_RECOVERY_FILE_DEST: Location of the flash recovery area. This can be a directory, file system, or Automatic Storage Management (ASM) disk group. It cannot be a raw file system. In a RAC environment, this location must be on a cluster file system, ASM disk group, or a shared directory configured through NFS.
■ DB_RECOVERY_FILE_DEST_SIZE: Specifies the maximum total bytes to be used by the flash recovery area. This initialization parameter must be specified before DB_RECOVERY_FILE_DEST is enabled.
In a RAC environment, the settings for these two parameters must be the same on all instances.
You cannot enable these parameters if you have set values for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters. You must disable those parameters before setting up the flash recovery area. You can instead set values for the LOG_ARCHIVE_DEST_n parameters. If you do not set values for local LOG_ARCHIVE_DEST_n, then setting up the flash recovery area will implicitly set LOG_ARCHIVE_DEST_10 to the flash recovery area.
Oracle recommends using a flash recovery area, because it can simplify backup and recovery operations for your database.
See Also: Oracle Database Backup and Recovery Basics to learn how to create and use a flash recovery area
Specifying Control Files
The CONTROL_FILES initialization parameter specifies one or more control filenames for the database. When you execute the CREATE DATABASE statement, the control files listed in the CONTROL_FILES parameter are created.
If you do not include CONTROL_FILES in the initialization parameter file, then Oracle Database creates a control file using a default operating system dependent filename or, if you have enabled Oracle-managed files, creates Oracle-managed control files.
If you want the database to create new operating system files when creating database control files, the filenames listed in the CONTROL_FILES parameter must not match any filenames that currently exist on your system. If you want the database to reuse or overwrite existing files when creating database control files, ensure that the filenames listed in the CONTROL_FILES parameter match the filenames that are to be reused.
Caution: Use extreme caution when specifying filenames in the CONTROL_FILES parameter. If you inadvertently specify a file that already exists and execute the CREATE DATABASE statement, the previous contents of that file will be overwritten.
Oracle strongly recommends you use at least two control files stored on separate physical disk drives for each database.
See Also:
■ Chapter 5, "Managing Control Files"
■ "Specifying Oracle-Managed Files at Database Creation" on page 2-15
Specifying Database Block Sizes
The DB_BLOCK_SIZE initialization parameter specifies the standard block size for the database.
This block size is used for the SYSTEM tablespace and by default in other tablespaces. Oracle Database can support up to four additional nonstandard block sizes. DB_BLOCK_SIZE Initialization Parameter The most commonly used block size should be picked as the standard block size. In many cases, this is the only block size that you need to specify. Typically, DB_BLOCK_SIZE is set to either 4K or 8K. If you do not set a value for this parameter, the default data block size is operating system specific, which is generally adequate. You cannot change the block size after database creation except by re-creating the database. If the database block size is different from the operating system block size, ensure that the database block size is a multiple of the operating system block size. For example, if your operating system block size is 2K (2048 bytes), the following setting for the DB_BLOCK_SIZE initialization parameter is valid: DB_BLOCK_SIZE=4096 A larger data block size provides greater efficiency in disk and memory I/O (access and storage of data). Therefore, consider specifying a block size larger than your operating system block size if the following conditions exist: ■ ■ Oracle Database is on a large computer system with a large amount of memory and fast disk drives. For example, databases controlled by mainframe computers with vast hardware resources typically use a data block size of 4K or greater. The operating system that runs Oracle Database uses a small operating system block size. For example, if the operating system block size is 1K and the default data block size matches this, the database may be performing an excessive amount of disk I/O during normal operation. For best performance in this case, a database block should consist of multiple operating system blocks. See Also: Your operating system specific Oracle documentation for details about the default block size. Creating an Oracle Database 2-23 Understanding Initialization Parameters Nonstandard Block Sizes Tablespaces of nonstandard block sizes can be created using the CREATE TABLESPACE statement and specifying the BLOCKSIZE clause. These nonstandard block sizes can have any of the following power-of-two values: 2K, 4K, 8K, 16K or 32K. Platform-specific restrictions regarding the maximum block size apply, so some of these sizes may not be allowed on some platforms. To use nonstandard block sizes, you must configure subcaches within the buffer cache area of the SGA memory for all of the nonstandard block sizes that you intend to use. The initialization parameters used for configuring these subcaches are described in the next section, "Managing the System Global Area (SGA)". The ability to specify multiple block sizes for your database is especially useful if you are transporting tablespaces between databases. You can, for example, transport a tablespace that uses a 4K block size from an OLTP environment to a data warehouse environment that uses a standard block size of 8K. See Also: ■ "Creating Tablespaces" on page 8-2 ■ "Transporting Tablespaces Between Databases" on page 8-25 Managing the System Global Area (SGA) This section discusses the initialization parameters that affect the amount of memory allocated to the System Global Area (SGA). Except for the SGA_MAX_SIZE initialization parameter, they are dynamic parameters whose values can be changed by the ALTER SYSTEM statement. The size of the SGA is dynamic, and can grow or shrink by dynamically altering these parameters. 
This section contains the following topics: ■ Components and Granules in the SGA ■ Limiting the Size of the SGA ■ Using Automatic Shared Memory Management ■ Setting the Buffer Cache Initialization Parameters ■ Using Manual Shared Memory Management ■ Viewing Information About the SGA See Also: ■ ■ Oracle Database Performance Tuning Guide for information about tuning the components of the SGA Oracle Database Concepts for a conceptual discussion of automatic shared memory management Components and Granules in the SGA The SGA comprises a number of memory components, which are pools of memory used to satisfy a particular class of memory allocation requests. Examples of memory components include the shared pool (used to allocate memory for SQL and PL/SQL execution), the java pool (used for java objects and other java execution memory), and the buffer cache (used for caching disk blocks). All SGA components allocate and deallocate space in units of granules. Oracle Database tracks SGA memory use in internal numbers of granules for each SGA component. 2-24 Oracle Database Administrator’s Guide Understanding Initialization Parameters The memory for dynamic components in the SGA is allocated in the unit of granules. Granule size is determined by total SGA size. Generally speaking, on most platforms, if the total SGA size is equal to or less than 1 GB, then granule size is 4 MB. For SGAs larger than 1 GB, granule size is 16 MB. Some platform dependencies may arise. For example, on 32-bit Windows NT, the granule size is 8 MB for SGAs larger than 1 GB. Consult your operating system specific documentation for more details. You can query the V$SGAINFO view to see the granule size that is being used by an instance. The same granule size is used for all dynamic components in the SGA. If you specify a size for a component that is not a multiple of granule size, Oracle Database rounds the specified size up to the nearest multiple. For example, if the granule size is 4 MB and you specify DB_CACHE_SIZE as 10 MB, the database actually allocates 12 MB. Limiting the Size of the SGA The SGA_MAX_SIZE initialization parameter specifies the maximum size of the System Global Area for the lifetime of the instance. You can dynamically alter the initialization parameters affecting the size of the buffer caches, shared pool, large pool, Java pool, and streams pool but only to the extent that the sum of these sizes and the sizes of the other components of the SGA (fixed SGA, variable SGA, and redo log buffers) does not exceed the value specified by SGA_MAX_SIZE. If you do not specify SGA_MAX_SIZE, then Oracle Database selects a default value that is the sum of all components specified or defaulted at initialization time. If you do specify SGA_MAX_SIZE, and at the time the database is initialized the value is less than the sum of the memory allocated for all components, either explicitly in the parameter file or by default, then the database ignores the setting for SGA_MAX_SIZE. Using Automatic Shared Memory Management You enable the automatic shared memory management feature by setting the SGA_TARGET parameter to a non-zero value. This parameter in effect replaces the parameters that control the memory allocated for a specific set of individual components, which are now automatically and dynamically resized (tuned) as needed. Note: The STATISTICS_LEVEL initialization parameter must be set to TYPICAL (the default) or ALL for automatic shared memory management to function. 
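For example, a text initialization parameter file fragment that enables automatic shared memory management might look like the following sketch; the sizes are purely illustrative and are not from the original text:
# enable automatic shared memory management (values are illustrative)
sga_target = 800M
sga_max_size = 1000M
statistics_level = TYPICAL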
The SGA_TARGET initialization parameter reflects the total size of the SGA. Table 2–3 lists the SGA components for which SGA_TARGET includes memory (the automatically sized SGA components) and the initialization parameters corresponding to those components.
Table 2–3 Automatically Sized SGA Components and Corresponding Parameters
SGA Component: Initialization Parameter
Fixed SGA and other internal allocations needed by the Oracle Database instance: N/A
The shared pool: SHARED_POOL_SIZE
The large pool: LARGE_POOL_SIZE
The Java pool: JAVA_POOL_SIZE
The buffer cache: DB_CACHE_SIZE
The Streams pool: STREAMS_POOL_SIZE
The parameters listed in Table 2–4, if they are set, take their memory from SGA_TARGET, leaving what is available for the components listed in Table 2–3.
Table 2–4 Manually Sized SGA Components that Use SGA_TARGET Space
SGA Component: Initialization Parameter
The log buffer: LOG_BUFFER
The keep and recycle buffer caches: DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE
Nonstandard block size buffer caches: DB_nK_CACHE_SIZE
In addition to setting SGA_TARGET to a non-zero value, you must set the value of all automatically sized SGA components to zero to enable full automatic tuning of these components. Alternatively, you can set one or more of the automatically sized SGA components to a non-zero value, which is then used as the minimum setting for that component during SGA tuning. This is discussed in detail later in this section.
An easier way to enable automatic shared memory management is to use Oracle Enterprise Manager (EM). When you enable automatic shared memory management and set the Total SGA Size, EM automatically generates the ALTER SYSTEM statements to set SGA_TARGET to the specified size and to set all automatically sized SGA components to zero.
Note: If you use SQL*Plus to set SGA_TARGET, you must then set the automatically sized SGA components to zero or to a minimum value.
The V$SGA_TARGET_ADVICE view provides information that helps you decide on a value for SGA_TARGET. For more information, see Oracle Database Performance Tuning Guide.
Enabling Automatic Shared Memory Management
To enable automatic shared memory management:
1. If you are migrating from a manual management scheme, execute the following query on the instance running in manual mode to get a value for SGA_TARGET:
SELECT (
  (SELECT SUM(value) FROM V$SGA) -
  (SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY)
) "SGA_TARGET" FROM DUAL;
2. Set the value of SGA_TARGET, either by editing the text initialization parameter file and restarting the database, or by issuing the following statement:
ALTER SYSTEM SET SGA_TARGET=value [SCOPE={SPFILE|MEMORY|BOTH}]
where value is the value computed in step 1 or is some value between the sum of all SGA component sizes and SGA_MAX_SIZE. For more information on the SCOPE clause, see "Using ALTER SYSTEM to Change Initialization Parameter Values" on page 2-38.
3. Do one of the following:
■ For more complete automatic tuning, set the values of the automatically sized SGA components listed in Table 2–3 to zero. Do this by editing the text initialization parameter file, or by issuing ALTER SYSTEM statements similar to the one in step 2.
■ To control the minimum size of one or more automatically sized SGA components, set those component sizes to the desired value. (See the next section for details.) Set the values of the other automatically sized SGA components to zero. Do this by editing the text initialization parameter file, or by issuing ALTER SYSTEM statements similar to the one in step 2.
For example, suppose you currently have the following configuration of parameters on a manual mode instance with SGA_MAX_SIZE set to 1200M:
■ SHARED_POOL_SIZE = 200M
■ DB_CACHE_SIZE = 500M
■ LARGE_POOL_SIZE = 200M
Also assume that the result of the queries is as follows:
SELECT SUM(value) FROM V$SGA = 1200M
SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY = 208M
You can take advantage of automatic shared memory management by setting Total SGA Size to 992M in Oracle Enterprise Manager, or by issuing the following statements:
ALTER SYSTEM SET SGA_TARGET = 992M;
ALTER SYSTEM SET SHARED_POOL_SIZE = 0;
ALTER SYSTEM SET LARGE_POOL_SIZE = 0;
ALTER SYSTEM SET JAVA_POOL_SIZE = 0;
ALTER SYSTEM SET DB_CACHE_SIZE = 0;
ALTER SYSTEM SET STREAMS_POOL_SIZE = 0;
where 992M = 1200M minus 208M.
Setting Minimums for Automatically Sized SGA Components
You can exercise some control over the size of the automatically sized SGA components by specifying minimum values for the parameters corresponding to these components. Doing so can be useful if you know that an application cannot function properly without a minimum amount of memory in specific components. You specify the minimum amount of SGA space for a component by setting a value for its corresponding initialization parameter. Here is an example configuration:
■ SGA_TARGET = 256M
■ SHARED_POOL_SIZE = 32M
■ DB_CACHE_SIZE = 100M
In this example, the shared pool and the default buffer pool will not be sized smaller than the specified values (32M and 100M, respectively). The remaining 124M (256 minus 132) is available for use by all the manually and automatically sized components.
The actual distribution of values among the SGA components might look like this:
■ Actual shared pool size = 64M
■ Actual buffer cache size = 128M
■ Actual Java pool size = 60M
■ Actual large pool size = 4M
■ Actual Streams pool size = 0
The parameter values determine the minimum amount of SGA space allocated. The fixed views V$SGA_DYNAMIC_COMPONENTS and V$SGAINFO display the current actual size of each SGA component. You can also see the current actual values of the SGA components in the Oracle Enterprise Manager memory configuration page.
Manually limiting the minimum size of one or more automatically sized components reduces the total amount of memory available for dynamic adjustment. This reduction in turn limits the ability of the system to adapt to workload changes. Therefore, this practice is not recommended except in exceptional cases. The default automatic management behavior maximizes both system performance and the use of available resources.
Automatic Tuning and the Shared Pool
When the automatic shared memory management feature is enabled, the internal tuning algorithm tries to determine an optimal size for the shared pool based on the workload. It usually converges on this value by increasing in small increments over time.
However, the internal tuning algorithm typically does not attempt to shrink the shared pool, because the presence of open cursors, pinned PL/SQL packages, and other SQL execution state in the shared pool make it impossible to find granules that can be freed. Therefore, the tuning algorithm only tries to increase the shared pool in conservative increments, starting from a conservative size and stabilizing the shared pool at a size that produces the optimal performance benefit. Dynamic Modification of SGA Parameters You can modify the value of SGA_TARGET and the parameters controlling individual components dynamically using the ALTER SYSTEM statement, as described in "Using ALTER SYSTEM to Change Initialization Parameter Values" on page 2-38. Dynamic Modification of SGA_TARGET The SGA_TARGET parameter can be increased up to the value specified for the SGA_MAX_SIZE parameter, and it can also be reduced. If you reduce the value of SGA_TARGET, the system identifies one or more automatically tuned components for which to release memory. You can reduce SGA_TARGET until one or more automatically tuned components reach their minimum size. Oracle Database determines the minimum allowable value for SGA_TARGET taking into account several factors, including values set for the automatically sized components, manually sized components that use SGA_TARGET space, and number of CPUs. The change in the amount of physical memory consumed when SGA_TARGET is modified depends on the operating system. On some UNIX platforms that do not support dynamic shared memory, the physical memory in use by the SGA is equal to the value of the SGA_MAX_SIZE parameter. On such platforms, there is no real benefit in setting SGA_TARGET to a value smaller than SGA_MAX_SIZE. Therefore, setting SGA_MAX_SIZE on those platforms is not recommended. On other platforms, such as Solaris and Windows, the physical memory consumed by the SGA is equal to the value of SGA_TARGET. 2-28 Oracle Database Administrator’s Guide Understanding Initialization Parameters When SGA_TARGET is resized, the only components affected are the automatically tuned components for which you have not set a minimum value in their corresponding initialization parameter. Any manually configured components remain unaffected. For example, suppose you have an environment with the following configuration: ■ SGA_MAX_SIZE = 1024M ■ SGA_TARGET = 512M ■ DB_8K_CACHE_SIZE = 128M In this example, the value of SGA_TARGET can be resized up to 1024M and can also be reduced until one or more of the automatically sized components reaches its minimum size. The exact value depends on environmental factors such as the number of CPUs on the system. However, the value of DB_8K_CACHE_SIZE remains fixed at all times at 128M When SGA_TARGET is reduced, if minimum values for any automatically tuned components are specified, those components are not reduced smaller than that minimum. Consider the following combination of parameters: ■ SGA_MAX_SIZE = 1024M ■ SGA_TARGET = 512M ■ DB_CACHE_SIZE = 96M ■ DB_8K_CACHE_SIZE = 128M As in the last example, if SGA_TARGET is reduced, the DB_8K_CACHE_SIZE parameter is permanently fixed at 128M. In addition, the primary buffer cache (determined by the DB_CACHE_SIZE parameter) is not reduced smaller than 96M. Thus the amount that SGA_TARGET can be reduced is restricted. When enabling automatic shared memory management, it is best to set SGA_TARGET to the desired non-zero value before starting the database. 
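To make this concrete, a sketch of a dynamic resize followed by a check of the resulting component sizes might look like this; the 640M value is illustrative only and is not from the original text:
ALTER SYSTEM SET SGA_TARGET = 640M;
SELECT component, current_size, min_size
FROM V$SGA_DYNAMIC_COMPONENTS;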
Dynamically modifying SGA_TARGET from zero to a non-zero value may not achieve the desired results because the shared pool may not be able to shrink. After startup, you can dynamically tune SGA_TARGET up or down as required. Note: Modifying Parameters for Automatically Managed Components When SGA_TARGET is not set, the automatic shared memory management feature is not enabled. Therefore the rules governing resize for all component parameters are the same as in earlier releases. However, when automatic shared memory management is enabled, the manually specified sizes of automatically sized components serve as a lower bound for the size of the components. You can modify this limit dynamically by changing the values of the corresponding parameters. If the specified lower limit for the size of a given SGA component is less than its current size, there is no immediate change in the size of that component. The new setting only limits the automatic tuning algorithm to that reduced minimum size in the future. For example, consider the following configuration: ■ SGA_TARGET = 512M ■ LARGE_POOL_SIZE = 256M ■ Current actual large pool size = 284M Creating an Oracle Database 2-29 Understanding Initialization Parameters In this example, if you increase the value of LARGE_POOL_SIZE to a value greater than the actual current size of the component, the system expands the component to accommodate the increased minimum size. For example, if you increase the value of LARGE_POOL_SIZE to 300M, then the system increases the large pool incrementally until it reaches 300M. This resizing occurs at the expense of one or more automatically tuned components. If you decrease the value of LARGE_POOL_SIZE to 200, there is no immediate change in the size of that component. The new setting only limits the reduction of the large pool size to 200 M in the future. Modifying Parameters for Manually Sized Components Parameters for manually sized components can be dynamically altered as well. However, rather than setting a minimum size, the value of the parameter specifies the precise size of the corresponding component. When you increase the size of a manually sized component, extra memory is taken away from one or more automatically sized components. When you decrease the size of a manually sized component, the memory that is released is given to the automatically sized components. For example, consider this configuration: ■ SGA_TARGET = 512M ■ DB_8K_CACHE_SIZE = 128M In this example, increasing DB_8K_CACHE_SIZE by 16 M to 144M means that the 16M is taken away from the automatically sized components. Likewise, reducing DB_8K_CACHE_SIZE by 16M to 112M means that the 16M is given to the automatically sized components. Using Manual Shared Memory Management If you decide not to use automatic shared memory management by not setting the SGA_TARGET parameter, you must manually configure each component of the SGA. This section provides guidelines on setting the parameters that control the size of each SGA components. Setting the Buffer Cache Initialization Parameters The buffer cache initialization parameters determine the size of the buffer cache component of the SGA. You use them to specify the sizes of caches for the various block sizes used by the database. These initialization parameters are all dynamic. If you intend to use multiple block sizes in your database, you must have the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE parameter set. 
Oracle Database assigns an appropriate default value to the DB_CACHE_SIZE parameter, but the DB_nK_CACHE_SIZE parameters default to 0, and no additional block size caches are configured. The size of a buffer cache affects performance. Larger cache sizes generally reduce the number of disk reads and writes. However, a large cache may take up too much memory and induce memory paging or swapping. DB_CACHE_SIZE Initialization Parameter The DB_CACHE_SIZE initialization parameter has replaced the DB_BLOCK_BUFFERS initialization parameter, which was used in earlier releases. The DB_CACHE_SIZE parameter specifies the size in bytes of the cache of standard block size buffers. Thus, to specify a value for DB_CACHE_SIZE, you would determine the number of buffers that you need and multiply that value by the block size specified in DB_BLOCK_SIZE. For backward compatibility, the DB_BLOCK_BUFFERS parameter still functions, but it remains a static parameter and cannot be combined with any of the dynamic sizing parameters. The DB_nK_CACHE_SIZE Initialization Parameters The sizes and numbers of nonstandard block size buffers are specified by the following initialization parameters: ■ DB_2K_CACHE_SIZE ■ DB_4K_CACHE_SIZE ■ DB_8K_CACHE_SIZE ■ DB_16K_CACHE_SIZE ■ DB_32K_CACHE_SIZE Each parameter specifies the size of the buffer cache for the corresponding block size. For example:
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=8M
DB_8K_CACHE_SIZE=4M
In this example, the parameters specify that the standard block size of the database is 4K. The size of the cache of standard block size buffers is 12M. Additionally, 2K and 8K caches will be configured with sizes of 8M and 4M respectively. Note: You cannot use a DB_nK_CACHE_SIZE parameter to size the cache for the standard block size. For example, if the value of DB_BLOCK_SIZE is 2K, it is invalid to set DB_2K_CACHE_SIZE. The size of the cache for the standard block size is always determined from the value of DB_CACHE_SIZE. Specifying the Shared Pool Size The SHARED_POOL_SIZE initialization parameter is a dynamic parameter that lets you specify or adjust the size of the shared pool component of the SGA. Oracle Database selects an appropriate default value. In releases before Oracle Database 10g Release 1, the amount of shared pool memory that was allocated was equal to the value of the SHARED_POOL_SIZE initialization parameter plus the amount of internal SGA overhead computed during instance startup. The internal SGA overhead refers to memory that is allocated by Oracle during startup, based on the values of several other initialization parameters. This memory is used to maintain state for different server components in the SGA. For example, if the SHARED_POOL_SIZE parameter is set to 64MB and the internal SGA overhead is computed to be 12MB, the real size of the shared pool in the SGA is 64+12=76MB, although the value of the SHARED_POOL_SIZE parameter is still displayed as 64MB. Starting with Oracle Database 10g Release 1, the size of internal SGA overhead is included in the user-specified value of SHARED_POOL_SIZE. In other words, if you are not using the automatic shared memory management feature, then the amount of shared pool memory that is allocated at startup is exactly equal to the value of the SHARED_POOL_SIZE initialization parameter.
In manual SGA mode, this parameter must be set so that it includes the internal SGA overhead in addition to the desired value of shared pool size. In the previous example, if the SHARED_POOL_SIZE parameter is set to 64MB at startup, then the available shared pool after startup is Creating an Oracle Database 2-31 Understanding Initialization Parameters 64-12=52MB, assuming the value of internal SGA overhead remains unchanged. In order to maintain an effective value of 64MB for shared pool memory after startup, you must set the SHARED_POOL_SIZE parameter to 64+12=76MB. The Oracle Database 10g migration utilities recommend a new value for this parameter based on the value of internal SGA overhead in the pre-upgrade environment and based on the old value of this parameter. In Oracle Database 10g, the exact value of internal SGA overhead, also known as startup overhead in the shared pool, can be queried from the V$SGAINFO view. Also, in manual SGA mode, if the user-specified value of SHARED_POOL_SIZE is too small to accommodate even the requirements of internal SGA overhead, then Oracle generates an ORA-371 error during startup, along with a suggested value to use for the SHARED_POOL_SIZE parameter. When you use automatic shared memory management in Oracle Database 10g, the shared pool is automatically tuned, and an ORA-371 error would not be generated by Oracle. Specifying the Large Pool Size The LARGE_POOL_SIZE initialization parameter is a dynamic parameter that lets you specify or adjust the size of the large pool component of the SGA. The large pool is an optional component of the SGA. You must specifically set the LARGE_POOL_SIZE parameter if you want to create a large pool. Configuring the large pool is discussed in Oracle Database Performance Tuning Guide. Specifying the Java Pool Size The JAVA_POOL_SIZE initialization parameter is a dynamic parameter that lets you specify or adjust the size of the java pool component of the SGA. Oracle Database selects an appropriate default value. Configuration of the java pool is discussed in Oracle Database Java Developer's Guide. Specifying the Streams Pool Size The STREAMS_POOL_SIZE initialization parameter is a dynamic parameter that lets you specify or adjust the size of the Streams Pool component of the SGA. If STREAMS_POOL_SIZE is set to 0, then the Oracle Streams product transfers memory from the buffer cache to the Streams Pool when it is needed. For details, see the discussion of the Streams Pool in Oracle Streams Concepts and Administration. Viewing Information About the SGA The following views provide information about the SGA components and their dynamic resizing: View Description V$SGA Displays summary information about the system global area (SGA). V$SGAINFO Displays size information about the SGA, including the sizes of different SGA components, the granule size, and free memory. V$SGASTAT Displays detailed information about the SGA. V$SGA_DYNAMIC_COMPONENTS Displays information about the dynamic SGA components. This view summarizes information based on all completed SGA resize operations since instance startup. V$SGA_DYNAMIC_FREE_MEMORY Displays information about the amount of SGA memory available for future dynamic SGA resize operations. 2-32 Oracle Database Administrator’s Guide Understanding Initialization Parameters View Description V$SGA_RESIZE_OPS Displays information about the last 400 completed SGA resize operations. V$SGA_CURRENT_RESIZE_OPS Displays information about SGA resize operations that are currently in progress. 
A resize operation is an enlargement or reduction of a dynamic SGA component. V$SGA_TARGET_ADVICE Displays information that helps you tune SGA_TARGET. For more information, see Oracle Database Performance Tuning Guide. Specifying the Maximum Number of Processes The PROCESSES initialization parameter determines the maximum number of operating system processes that can be connected to Oracle Database concurrently. The value of this parameter must be a minimum of one for each background process plus one for each user process. The number of background processes will vary according to the database features that you are using. For example, if you are using Advanced Queuing or the file mapping feature, you will have additional background processes. If you are using Automatic Storage Management, then add three additional processes. If you plan on running 50 user processes, a good estimate would be to set the PROCESSES initialization parameter to 70. Specifying the Method of Undo Space Management Every Oracle Database must have a method of maintaining information that is used to undo changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. Collectively these records are called undo data. This section provides instructions for setting up an environment for automatic undo management using an undo tablespace. See Also: Chapter 10, "Managing the Undo Tablespace" UNDO_MANAGEMENT Initialization Parameter The UNDO_MANAGEMENT initialization parameter determines whether an instance starts in automatic undo management mode, which stores undo in an undo tablespace. By default, this parameter is set to MANUAL. Set this parameter to AUTO to enable automatic undo management mode. UNDO_TABLESPACE Initialization Parameter When an instance starts up in automatic undo management mode, it attempts to select an undo tablespace for storage of undo data. If the database was created in automatic undo management mode, then the default undo tablespace (either the system-created SYS_UNDOTS tablespace or the user-specified undo tablespace) is the undo tablespace used at instance startup. You can override this default for the instance by specifying a value for the UNDO_TABLESPACE initialization parameter. This parameter is especially useful for assigning a particular undo tablespace to an instance in an Oracle Real Application Clusters environment. If no undo tablespace has been specified during database creation or by the UNDO_TABLESPACE initialization parameter, then the first available undo tablespace in the database is chosen. If no undo tablespace is available, then the instance starts without an undo tablespace. You should avoid running in this mode. The COMPATIBLE Initialization Parameter and Irreversible Compatibility The COMPATIBLE initialization parameter enables or disables the use of features in the database that affect file format on disk. For example, if you create an Oracle Database 10g database, but specify COMPATIBLE = 9.2.0.2 in the initialization parameter file, then features that require 10.0 compatibility will generate an error if you try to use them. Such a database is said to be at the 9.2.0.2 compatibility level. You can advance the compatibility level of your database.
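For example, assuming the instance uses a server parameter file, you could record a higher compatibility setting with a statement such as the following (a sketch only; the value shown is illustrative, and because COMPATIBLE is a static parameter the change takes effect at the next instance startup):
ALTER SYSTEM SET COMPATIBLE = '10.2.0' SCOPE=SPFILE;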
If you do advance the compatibility of your database with the COMPATIBLE initialization parameter, there is no way to start the database using a lower compatibility level setting, except by doing a point-in-time recovery to a time before the compatibility was advanced. The default value for the COMPATIBLE parameter is the release number of the most recent major release. Note: For Oracle Database 10g Release 2 (10.2), the default value of the COMPATIBLE parameter is 10.2.0. The minimum value is 9.2.0. If you create an Oracle Database using the default value, you can immediately use all the new features in this release, and you can never downgrade the database. See Also: ■ ■ Oracle Database Upgrade Guide for a detailed discussion of database compatibility and the COMPATIBLE initialization parameter Oracle Database Backup and Recovery Advanced User's Guide for information about point-in-time recovery of your database Setting the License Parameter Oracle no longer offers licensing by the number of concurrent sessions. Therefore the LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING initialization parameters are no longer needed and have been deprecated. Note: If you use named user licensing, Oracle Database can help you enforce this form of licensing. You can set a limit on the number of users created in the database. Once this limit is reached, you cannot create more users. This mechanism assumes that each person accessing the database has a unique user name and that no people share a user name. Therefore, so that named user licensing can help you ensure compliance with your Oracle license agreement, do not allow multiple users to log in using the same user name. Note: To limit the number of users created in a database, set the LICENSE_MAX_USERS initialization parameter in the database initialization parameter file, as shown in the following example: LICENSE_MAX_USERS = 200 2-34 Oracle Database Administrator’s Guide Managing Initialization Parameters Using a Server Parameter File Troubleshooting Database Creation If database creation fails, you can look at the alert log to determine the reason for the failure and to determine corrective action. The alert log is discussed in "Monitoring the Operation of Your Database" on page 4-18. You should shut down the instance and delete any files created by the CREATE DATABASE statement before you attempt to create it again. After correcting the error that caused the failure of the database creation, try re-creating the database. Dropping a Database Dropping a database involves removing its datafiles, redo log files, control files, and initialization parameter files. The DROP DATABASE statement deletes all control files and all other database files listed in the control file. To use the DROP DATABASE statement successfully, all of the following conditions must apply: ■ The database must be mounted and closed. ■ The database must be mounted exclusively--not in shared mode. ■ The database must be mounted as RESTRICTED. An example of this statement is: DROP DATABASE; The DROP DATABASE statement has no effect on archived log files, nor does it have any effect on copies or backups of the database. It is best to use RMAN to delete such files. If the database is on raw disks, the actual raw disk special files are not deleted. If you used the Database Configuration Assistant to create your database, you can use that tool to delete (drop) your database and remove the files. 
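If you drop the database with SQL statements rather than with the Database Configuration Assistant, the following is a minimal sketch of the sequence for a single-instance database (the exact STARTUP clauses you need can vary with your environment):
CONNECT / AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP RESTRICT MOUNT
DROP DATABASE;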
Managing Initialization Parameters Using a Server Parameter File Initialization parameters for the Oracle Database have traditionally been stored in a text initialization parameter file. For better manageability, you can choose to maintain initialization parameters in a binary server parameter file that is persistent across database startup and shutdown. This section introduces the server parameter file, and explains how to manage initialization parameters using either method of storing the parameters. The following topics are contained in this section. ■ What Is a Server Parameter File? ■ Migrating to a Server Parameter File ■ Creating a Server Parameter File ■ The SPFILE Initialization Parameter ■ Using ALTER SYSTEM to Change Initialization Parameter Values ■ Exporting the Server Parameter File ■ Backing Up the Server Parameter File ■ Errors and Recovery for the Server Parameter File ■ Viewing Parameter Settings Creating an Oracle Database 2-35 Managing Initialization Parameters Using a Server Parameter File What Is a Server Parameter File? A server parameter file can be thought of as a repository for initialization parameters that is maintained on the machine running the Oracle Database server. It is, by design, a server-side initialization parameter file. Initialization parameters stored in a server parameter file are persistent, in that any changes made to the parameters while an instance is running can persist across instance shutdown and startup. This arrangement eliminates the need to manually update initialization parameters to make persistent any changes effected by ALTER SYSTEM statements. It also provides a basis for self-tuning by the Oracle Database server. A server parameter file is initially built from a text initialization parameter file using the CREATE SPFILE statement. (It can also be created directly by the Database Configuration Assistant.) The server parameter file is a binary file that cannot be edited using a text editor. Oracle Database provides other interfaces for viewing and modifying parameter settings in a server parameter file. Caution: Although you can open the binary server parameter file with a text editor and view its text, do not manually edit it. Doing so will corrupt the file. You will not be able to start your instance, and if the instance is running, it could fail. When you issue a STARTUP command with no PFILE clause, the Oracle instance searches an operating system–specific default location for a server parameter file from which to read initialization parameter settings. If no server parameter file is found, the instance searches for a text initialization parameter file. If a server parameter file exists but you want to override it with settings in a text initialization parameter file, you must specify the PFILE clause when issuing the STARTUP command. Instructions for starting an instance using a server parameter file are contained in "Starting Up a Database" on page 3-1. Migrating to a Server Parameter File If you are currently using a text initialization parameter file, use the following steps to migrate to a server parameter file. 1. If the initialization parameter file is located on a client machine, transfer the file (for example, FTP) from the client machine to the server machine. If you are migrating to a server parameter file in an Oracle Real Application Clusters environment, you must combine all of your instance-specific initialization parameter files into a single initialization parameter file. 
Instructions for doing this, and other actions unique to using a server parameter file for instances that are part of an Oracle Real Application Clusters installation, are discussed in: ■ Oracle Real Application Clusters Installation and Configuration Guide ■ Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide 2. Create a server parameter file using the CREATE SPFILE statement. This statement reads the initialization parameter file to create a server parameter file. The database does not have to be started to issue a CREATE SPFILE statement. 3. Start up the instance using the newly created server parameter file. Creating a Server Parameter File You use the CREATE SPFILE statement to create a server parameter file from a text initialization parameter file. You must have the SYSDBA or the SYSOPER system privilege to execute this statement. Note: When you use the Database Configuration Assistant to create a database, it can automatically create a server parameter file for you. The CREATE SPFILE statement can be executed before or after instance startup. However, if the instance has been started using a server parameter file, an error is raised if you attempt to re-create the same server parameter file that is currently being used by the instance. The following example creates a server parameter file from initialization parameter file /u01/oracle/dbs/init.ora. In this example no SPFILE name is specified, so the file is created with the platform-specific default name and location shown in Table 2–5 on page 2-37. CREATE SPFILE FROM PFILE='/u01/oracle/dbs/init.ora'; Another example, which follows, illustrates creating a server parameter file and supplying a name. CREATE SPFILE='/u01/oracle/dbs/test_spfile.ora' FROM PFILE='/u01/oracle/dbs/test_init.ora'; When you create a server parameter file from an initialization parameter file, comments specified on the same lines as a parameter setting in the initialization parameter file are maintained in the server parameter file. All other comments are ignored. The server parameter file is always created on the machine running the database server. If a server parameter file of the same name already exists on the server, it is overwritten with the new information. Oracle recommends that you allow the database server to default the name and location of the server parameter file. This will ease administration of your database. For example, the STARTUP command assumes this default location to read the parameter file. The table that follows shows the default name and location of the server parameter file. The table assumes that the SPFILE is a file. If it is a raw device, the default name could be a logical volume name or partition device name, and the default location could differ.
Table 2–5 Server Parameter File Default Name and Location on UNIX and Windows
UNIX and Linux: default name spfile$ORACLE_SID.ora; default location $ORACLE_HOME/dbs or the same location as the datafiles
Windows: default name spfile%ORACLE_SID%.ora; default location %ORACLE_HOME%\database
Upon startup, the instance first searches for the server parameter file named spfile$ORACLE_SID.ora, and if not found, searches for spfile.ora. Using spfile.ora enables all Real Application Clusters (RAC) instances to use the same server parameter file.
Note: If neither server parameter file is found, the instance searches for the text initialization parameter file init$ORACLE_SID.ora. If you create a server parameter file in a location other than the default location, you must create a text initialization parameter file that points to the server parameter file. For more information, see "Starting Up a Database" on page 3-1. The SPFILE Initialization Parameter The SPFILE initialization parameter contains the name of the current server parameter file. When the default server parameter file is used by the server (that is, you issue a STARTUP command and do not specify a PFILE), the value of SPFILE is internally set by the server. The SQL*Plus command SHOW PARAMETERS SPFILE (or any other method of querying the value of a parameter) displays the name of the server parameter file that is currently in use. Using ALTER SYSTEM to Change Initialization Parameter Values The ALTER SYSTEM statement lets you set, change, or restore to default the values of initialization parameters. If you are using a text initialization parameter file, the ALTER SYSTEM statement changes the value of a parameter only for the current instance, because there is no mechanism for automatically updating initialization parameters on disk. You must update them manually to be passed to a future instance. Using a server parameter file overcomes this limitation. Setting or Changing Initialization Parameter Values Use the SET clause of the ALTER SYSTEM statement to set or change initialization parameter values. The optional SCOPE clause specifies the scope of a change as described in the following table:
SCOPE = SPFILE The change is applied in the server parameter file only. The effect is as follows: ■ For dynamic parameters, the change is effective at the next startup and is persistent. ■ For static parameters, the behavior is the same as for dynamic parameters. This is the only SCOPE specification allowed for static parameters.
SCOPE = MEMORY The change is applied in memory only. The effect is as follows: ■ For dynamic parameters, the effect is immediate, but it is not persistent because the server parameter file is not updated. ■ For static parameters, this specification is not allowed.
SCOPE = BOTH The change is applied in both the server parameter file and memory. The effect is as follows: ■ For dynamic parameters, the effect is immediate and persistent. ■ For static parameters, this specification is not allowed.
It is an error to specify SCOPE=SPFILE or SCOPE=BOTH if the server is not using a server parameter file. The default is SCOPE=BOTH if a server parameter file was used to start up the instance, and MEMORY if a text initialization parameter file was used to start up the instance. For dynamic parameters, you can also specify the DEFERRED keyword. When specified, the change is effective only for future sessions. An optional COMMENT clause lets you associate a text string with the parameter update. When you specify SCOPE as SPFILE or BOTH, the comment is written to the server parameter file. The following statement changes the maximum number of job queue processes allowed for the instance. It includes a comment, and explicitly states that the change is to be made only in memory (that is, it is not persistent across instance shutdown and startup).
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=50 COMMENT='temporary change on Nov 29' SCOPE=MEMORY; The next example sets a complex initialization parameter that takes a list of attributes. Specifically, the parameter value being set is the LOG_ARCHIVE_DEST_n initialization parameter. This statement could change an existing setting for this parameter or create a new archive destination. ALTER SYSTEM SET LOG_ARCHIVE_DEST_4='LOCATION=/u02/oracle/rbdb1/',MANDATORY,'REOPEN=2' COMMENT='Add new destination on Nov 29' SCOPE=SPFILE; When a value consists of a list of parameters, you cannot edit individual attributes by the position or ordinal number. You must specify the complete list of values each time the parameter is updated, and the new list completely replaces the old list. Exporting the Server Parameter File You can use the CREATE PFILE statement to export a server parameter file to a text initialization parameter file. Doing so might be necessary for several reasons: ■ For diagnostic purposes, listing all of the parameter values currently used by an instance. This is analogous to the SQL*Plus SHOW PARAMETERS command or selecting from the V$PARAMETER or V$PARAMETER2 views. ■ To modify the server parameter file by first exporting it, editing the resulting text file, and then re-creating it using the CREATE SPFILE statement. The exported file can also be used to start up an instance using the PFILE clause. You must have the SYSDBA or the SYSOPER system privilege to execute the CREATE PFILE statement. The exported file is created on the database server machine. It contains any comments associated with the parameter in the same line as the parameter setting. The following example creates a text initialization parameter file from the server parameter file: CREATE PFILE FROM SPFILE; Because no names were specified for the files, the database creates an initialization parameter file with a platform-specific name, and it is created from the platform-specific default server parameter file. The following example creates a text initialization parameter file from a server parameter file, but in this example the names of the files are specified: CREATE PFILE='/u01/oracle/dbs/test_init.ora' FROM SPFILE='/u01/oracle/dbs/test_spfile.ora'; Backing Up the Server Parameter File You can create a backup of your server parameter file by exporting it, as described in "Exporting the Server Parameter File". If the backup and recovery strategy for your database is implemented using Recovery Manager (RMAN), then you can use RMAN to create a backup. The server parameter file is backed up automatically by RMAN when you back up your database, but RMAN also lets you specifically create a backup of the currently active server parameter file. See Also: Oracle Database Backup and Recovery Basics Errors and Recovery for the Server Parameter File If an error occurs while reading the server parameter file (during startup or an export operation), or while writing the server parameter file during its creation, the operation terminates with an error reported to the user. If an error occurs while reading or writing the server parameter file during a parameter update, the error is reported in the alert log and all subsequent parameter updates to the server parameter file are ignored. At this point, you can take one of the following actions: ■ Shut down the instance, recover the server parameter file, and then restart the instance. ■
Continue to run the database if you do not care that subsequent parameter updates will not be persistent. Viewing Parameter Settings You can view parameter settings in several ways, as shown in the following table. Method Description SHOW PARAMETERS This SQL*Plus command displays the values of parameters currently in use. CREATE PFILE This SQL statement creates a text initialization parameter file from the binary server parameter file. V$PARAMETER This view displays the values of parameters currently in effect. 2-40 Oracle Database Administrator’s Guide Defining Application Services for Oracle Database 10g Method Description V$PARAMETER2 This view displays the values of parameters currently in effect. It is easier to distinguish list parameter values in this view because each list parameter value appears as a row. V$SPPARAMETER This view displays the current contents of the server parameter file. The view returns FALSE values in the ISSPECIFIED column if a server parameter file is not being used by the instance. See Also: Oracle Database Reference for a complete description of views Defining Application Services for Oracle Database 10g This section describes Oracle Database 10g services and includes the following topics: ■ Deploying Services ■ Configuring Services ■ Using Services Services are logical abstractions for managing workloads in Oracle Database 10g. Services divide workloads into mutually disjoint groupings. Each service represents a workload with common attributes, service-level thresholds, and priorities. The grouping is based on attributes of work that might include the application function to be used, the priority of execution for the application function, the job class to be managed, or the data range used in the application function or job class. For example, the Oracle E-Business suite defines a service for each responsibility, such as general ledger, accounts receivable, order entry, and so on. In Oracle Database 10g, services are built into the Oracle Database providing single system image for workloads, prioritization for workloads, performance measures for real transactions, and alerts and actions when performance goals are violated. Services enable you to configure a workload, administer it, enable and disable it, and measure the workload as a single entity. You can do this using standard tools such as the Database Configuration Assistant (DBCA), Net Configuration Assistant (NetCA), and Enterprise Manager (EM). Enterprise Manager supports viewing and operating services as a whole, with drill down to the instance-level when needed. In Real Application Clusters (RAC), a service can span one or more instances and facilitate real workload balancing based on real transaction performance. This provides end-to-end unattended recovery, rolling changes by workload, and full location transparency. RAC also enables you to manage a number of service features with Enterprise Manager, the DBCA, and the Server Control utility (SRVCTL). Services also offer an extra dimension in performance tuning. Tuning by "service and SQL" can replace tuning by "session and SQL" in the majority of systems where all sessions are anonymous and shared. With services, workloads are visible and measurable. Resource consumption and waits are attributable by application. Additionally, resources assigned to services can be augmented when loads increase or decrease. This dynamic resource allocation enables a cost-effective solution for meeting demands as they occur. 
For example, services are measured automatically and the performance is compared to service-level thresholds. Performance violations are reported to Enterprise Manager, enabling the execution of automatic or scheduled solutions. Creating an Oracle Database 2-41 Defining Application Services for Oracle Database 10g See Also: Oracle Real Application Clusters Deployment and Performance Guide for more information about services in RAC Deploying Services Installations configure Oracle Database 10g services in the database giving each service a unique global name, associated performance goals, and associated importance. The services are tightly integrated with the Oracle Database and are maintained in the data dictionary. You can find service information in the following service-specific views: ■ DBA_SERVICES ■ ALL_SERVICES or V$SERVICES ■ V$ACTIVE_SERVICES ■ V$SERVICE_STATS ■ V$SERVICE_EVENTS ■ V$SERVICE_WAIT_CLASSES ■ V$SERV_MOD_ACT_STATS ■ V$SERVICE_METRICS ■ V$SERVICE_METRICS_HISTORY The following additional views also contain some information about services: ■ V$SESSION ■ V$ACTIVE_SESSION_HISTORY ■ DBA_RSRC_GROUP_MAPPINGS ■ DBA_SCHEDULER_JOB_CLASSES ■ DBA_THRESHOLDS See Also: Oracle Database Reference for detailed information about these views Several Oracle Database features support services. The Automatic Workload Repository (AWR) manages the performance of services. AWR records service performance, including execution times, wait classes, and resources consumed by service. AWR alerts warn when service response time thresholds are exceeded. The dynamic views report current service performance metrics with one hour of history. Each service has quality-of-service thresholds for response time and CPU consumption. In addition, the Database Resource Manager maps services to consumer groups. This enables you to automatically manage the priority of one service relative to others. You can use consumer groups to define relative priority in terms of either ratios or resource consumption. This is described in more detail, for example, in Oracle Real Application Clusters Deployment and Performance Guide. Configuring Services Services describe applications, application functions, and data ranges as either functional services or data-dependent services. Functional services are the most common mapping of workloads. Sessions using a particular function are grouped together. For Oracle*Applications, ERP, CRM, and iSupport functions create a 2-42 Oracle Database Administrator’s Guide Defining Application Services for Oracle Database 10g functional division of the work. For SAP, dialog and update functions create a functional division of the work. In contrast, data-dependent routing routes sessions to services based on data keys. The mapping of work requests to services occurs in the object relational mapping layer for application servers and TP monitors. For example, in RAC, these ranges can be completely dynamic and based on demand because the database is shared. You can also define preconnect application services in RAC databases. Preconnect services span instances to support a service in the event of a failure. The preconnect service supports TAF preconnect mode and is managed transparently when using RAC. In addition to application services, Oracle Database also supports two internal services: SYS$BACKGROUND is used by the background processes only and SYS$USERS is the default service for user sessions that are not associated with services. 
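For a quick check of which services are defined in the database and which are currently running, you can query two of the views listed earlier, for example (a simple sketch, assuming you have access to these views):
SELECT name FROM dba_services;
SELECT name FROM v$active_services;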
Use the DBMS_SERVICE package or set the SERVICE_NAMES parameter to create application services on a single-instance Oracle Database. You can later define the response time goal or importance of each service through EM, either individually or by using the Enterprise Manager feature "Copy Thresholds From a Baseline" on the Manage Metrics/Edit Threshold pages. You can also do this using PL/SQL. Using Services Using services requires no changes to your application code. Client-side work connects to a service. Server-side work specifies the service when creating the job class for the Job Scheduler and the database links for distributed databases. Work requests executing under a service inherit the performance thresholds for the service and are measured as part of the service. Client-Side Use Middle-tier applications and client-server applications use a service by specifying the service as part of the connection in TNS connect data. This connect data may be in the TNSnames file for thick Net drivers, in the URL specification for thin drivers, or may be maintained in the Oracle Internet Directory. For example, data sources for the Oracle Application Server 10g are set to route to a service. Using Easy Connect Naming, this connection needs only the host name and service name (for example, hr/hr@myDBhost/myservice). For Oracle E-Business Suite, the service is also maintained in the application database identifier and in the cookie for the ICX parameters. Server-Side Use Server-side work, such as the Oracle Scheduler, parallel execution, and Oracle Streams Advanced Queuing, set the service name as part of the workload definition. For the Oracle Scheduler, the service that the job class uses is defined when the job class is created. During execution, jobs are assigned to job classes, and job classes run within services. Using services with job classes ensures that the work executed by the job scheduler is identified for workload management and performance tuning. For parallel query and parallel DML, the query coordinator connects to a service just like any other client. The parallel query processes inherit the service for the duration of the execution. At the end of query execution, the parallel execution processes revert to the default service. Creating an Oracle Database 2-43 Considerations After Creating a Database Chapter 27, "Using the Scheduler" for more information about the Oracle Scheduler. See Also: Considerations After Creating a Database After you create a database, the instance is left running, and the database is open and available for normal database use. You may want to perform other actions, some of which are discussed in this section. Some Security Considerations Note Regarding Security Enhancements: In this release of Oracle Database and in subsequent releases, several enhancements are being made to ensure the security of default database user accounts. You can find a security checklist for this release in Oracle Database Security Guide. Oracle recommends that you read this checklist and configure your database accordingly. After the database is created, you can configure it to take advantage of Oracle Identity Management. For information on how to do this, please refer to Oracle Database Enterprise User Security Administrator's Guide. A newly created database has at least three user accounts that are important for administering your database: SYS, SYSTEM, and SYSMAN. 
Caution: To prevent unauthorized access and protect the integrity of your database, it is important that new passwords for user accounts SYS and SYSTEM be specified when the database is created. This is accomplished by specifying the following CREATE DATABASE clauses when manually creating you database, or by using DBCA to create the database: ■ USER SYS IDENTIFIED BY ■ USER SYSTEM IDENTIFIED BY Additional administrative accounts are provided by Oracle Database that should be used only by authorized users. To protect these accounts from being used by unauthorized users familiar with their Oracle-supplied passwords, these accounts are initially locked with their passwords expired. As the database administrator, you are responsible for the unlocking and resetting of these accounts. Table 2–6 lists the administrative accounts that are provided by Oracle Database. Not all accounts may be present on your system, depending upon the options that you selected for your database. Table 2–6 Administrative User Accounts Provided by Oracle Database Username Password Description See Also CTXSYS CTXSYS The Oracle Text account Oracle Text Reference DBSNMP DBSNMP The account used by the Management Agent component of Oracle Enterprise Manager to monitor and manage the database Oracle Enterprise Manager Grid Control Installation and Basic Configuration 2-44 Oracle Database Administrator’s Guide Considerations After Creating a Database Table 2–6 (Cont.) Administrative User Accounts Provided by Oracle Database Username Password Description See Also LBACSYS LBACSYS The Oracle Label Security administrator account Oracle Label Security Administrator's Guide MDDATA MDDATA The schema used by Oracle Spatial for storing Geocoder and router data Oracle Spatial User's Guide and Reference MDSYS MDSYS The Oracle Spatial and Oracle interMedia Locator administrator account Oracle Spatial User's Guide and Reference DMSYS DMSYS The Oracle Data Mining account. Oracle Data Mining Administrator's Guide Oracle Data Mining Concepts OLAPSYS MANAGER The account used to create OLAP metadata structures. It owns the OLAP Catalog (CWMLite). Oracle OLAP Application Developer's Guide ORDPLUGINS ORDPLUGINS The Oracle interMedia user. Plug-ins supplied by Oracle and third party format plug-ins are installed in this schema. Oracle interMedia User's Guide ORDSYS ORDSYS The Oracle interMedia administrator account Oracle interMedia User's Guide OUTLN OUTLN Oracle Database The account that supports plan Performance Tuning stability. Plan stability enables Guide you to maintain the same execution plans for the same SQL statements. OUTLN acts as a role to centrally manage metadata associated with stored outlines. SI_INFORMTN_ SCHEMA SI_INFORMTN_ SCHEMA The account that stores the information views for the SQL/MM Still Image Standard Oracle interMedia User's Guide SYS CHANGE_ON_ INSTALL The account used to perform database administration tasks Oracle Database Administrator's Guide SYSMAN CHANGE_ON_ INSTALL The account used to perform Oracle Enterprise Manager database administration tasks. Note that SYS and SYSTEM can also perform these tasks. Oracle Enterprise Manager Grid Control Installation and Basic Configuration SYSTEM MANAGER Another account used to perform database administration tasks. 
Oracle Database Administrator's Guide See Also: ■ ■ ■ "Database Administrator Usernames" on page 1-9 for more information about the users SYS and SYSTEM Oracle Database Security Guide to learn how to add new users and change passwords Oracle Database SQL Reference for the syntax of the ALTER USER statement used for unlocking user accounts Enabling Transparent Data Encryption Transparent data encryption is a feature that enables encryption of database columns before storing them in the datafile. If users attempt to circumvent the database access control mechanisms by looking inside datafiles directly with operating system tools, transparent data encryption prevents such users from viewing sensitive information. Creating an Oracle Database 2-45 Viewing Information About the Database Users who have the CREATE TABLE privilege can choose one or more columns in a table to be encrypted. The data is encrypted in the data files and in the audit logs (if audit is turned on). Database users with appropriate privileges can view the data in unencrypted format. For information on enabling and disabling transparent data encryption, see Oracle Database Security Guide Creating a Secure External Password Store For large-scale deployments where applications use password credentials to connect to databases, it is possible to store such credentials in a client-side Oracle wallet. An Oracle wallet is a secure software container that is used to store authentication and signing credentials. Storing database password credentials in a client-side Oracle wallet eliminates the need to embed usernames and passwords in application code, batch jobs, or scripts. This reduces the risk of exposing passwords in the clear in scripts and application code, and simplifies maintenance because you need not change your code each time usernames and passwords change. In addition, not having to change application code also makes it easier to enforce password management policies for these user accounts. When you configure a client to use the external password store, applications can use the following syntax to connect to databases that use password authentication: CONNECT /@database_alias Note that you need not specify database login credentials in this CONNECT statement. Instead your system looks for database login credentials in the client wallet. See Also: Oracle Database Advanced Security Administrator's Guide for information about configuring your client to use a secure external password store and for information about managing credentials in it. Installing the Oracle Database Sample Schemas The Oracle Database distribution media includes various SQL files that let you experiment with the system, learn SQL, or create additional tables, views, or synonyms. Oracle Database includes sample schemas that help you to become familiar with Oracle Database functionality. All Oracle Database documentation and training materials are being converted to the Sample Schemas environment as those materials are updated. The Sample Schemas can be installed automatically by the Database Configuration Assistant, or you can install them manually. The schemas and installation instructions are described in detail in Oracle Database Sample Schemas. 
Viewing Information About the Database In addition to the views listed previously in "Viewing Parameter Settings", you can view information about your database content and structure using the following views: View Description DATABASE_PROPERTIES Displays permanent database properties GLOBAL_NAME Displays the global database name 2-46 Oracle Database Administrator’s Guide Viewing Information About the Database View Description V$DATABASE Contains database information from the control file Creating an Oracle Database 2-47 Viewing Information About the Database 2-48 Oracle Database Administrator’s Guide 3 Starting Up and Shutting Down This chapter describes the procedures for starting up and shutting down an Oracle Database instance and contains the following topics: ■ Starting Up a Database ■ Altering Database Availability ■ Shutting Down a Database ■ Quiescing a Database ■ Suspending and Resuming a Database See Also: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for additional information specific to an Oracle Real Application Clusters environment Starting Up a Database When you start up a database, you create an instance of that database and you determine the state of the database. Normally, you start up an instance by mounting and opening the database. Doing so makes the database available for any valid user to connect to and perform typical data access operations. Other options exist, and these are also discussed in this section. This section contains the following topics relating to starting up an instance of a database: ■ Options for Starting Up a Database ■ Understanding Initialization Parameter Files ■ Preparing to Start Up an Instance ■ Starting Up an Instance Options for Starting Up a Database You can start up a database instance with SQL*Plus, Recovery Manager, or Enterprise Manager. Starting Up a Database Using SQL*Plus You can start a SQL*Plus session, connect to Oracle Database with administrator privileges, and then issue the STARTUP command. Using SQL*Plus in this way is the only method described in detail in this book. Starting Up and Shutting Down 3-1 Starting Up a Database Starting Up a Database Using Recovery Manager You can also use Recovery Manager (RMAN) to execute STARTUP and SHUTDOWN commands. You may prefer to do this if your are within the RMAN environment and do not want to invoke SQL*Plus. See Also: Oracle Database Backup and Recovery Basics for information on starting up the database using RMAN Starting Up a Database Using Oracle Enterprise Manager You can use Oracle Enterprise Manager (EM) to administer your database, including starting it up and shutting it down. EM combines a GUI console, agents, common services, and tools to provide an integrated and comprehensive systems management platform for managing Oracle products. EM Database Control, which is the portion of EM that is dedicated to administering an Oracle database, enables you to perform the functions discussed in this book using a GUI interface, rather than command line operations. See Also: ■ ■ ■ Oracle Enterprise Manager Concepts Oracle Enterprise Manager Grid Control Installation and Basic Configuration Oracle Database 2 Day DBA The remainder of this section describes using SQL*Plus to start up a database instance. 
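In outline, the SQL*Plus method looks like the following (a sketch that assumes operating system authentication; the remaining topics in this section describe the details and variations):
SQLPLUS /NOLOG
CONNECT / AS SYSDBA
STARTUP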
Understanding Initialization Parameter Files To start an instance, the database must read instance configuration parameters (the initialization parameters) from either a server parameter file (SPFILE) or a text initialization parameter file. When you issue the SQL*Plus STARTUP command, the database attempts to read the initialization parameters from an SPFILE in a platform-specific default location. If it finds no SPFILE, it searches for a text initialization parameter file. For UNIX or Linux, the platform-specific default location (directory) for the SPFILE and text initialization parameter file is: Note: $ORACLE_HOME/dbs For Windows NT and Windows 2000 the location is: %ORACLE_HOME%\database In the platform-specific default location, Oracle Database locates your initialization parameter file by examining filenames in the following order: 1. spfile$ORACLE_SID.ora 2. spfile.ora 3. init$ORACLE_SID.ora The first two filenames represent SPFILEs and the third represents a text initialization parameter file. 3-2 Oracle Database Administrator’s Guide Starting Up a Database The spfile.ora file is included in this search path because in a Real Application Clusters environment one server parameter file is used to store the initialization parameter settings for all instances. There is no instance-specific location for storing a server parameter file. Note: For more information about the server parameter file for a Real Application Clusters environment, see Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide. If you (or the Database Configuration Assistant) created a server parameter file, but you want to override it with a text initialization parameter file, you can specify the PFILE clause of the STARTUP command to identify the initialization parameter file. STARTUP PFILE = /u01/oracle/dbs/init.ora Starting Up with a Non-Default Server Parameter File A non-default server parameter file (SPFILE) is an SPFILE that is in a location other than the default location. It is not usually necessary to start an instance with a non-default SPFILE. However, should such a need arise, you can use the PFILE clause to start an instance with a non-default server parameter file as follows: 1. Create a one-line text initialization parameter file that contains only the SPFILE parameter. The value of the parameter is the non-default server parameter file location. For example, create a text initialization parameter file /u01/oracle/dbs/spf_init.ora that contains only the following parameter: SPFILE = /u01/oracle/dbs/test_spfile.ora Note: You cannot use the IFILE initialization parameter within a text initialization parameter file to point to a server parameter file. In this context, you must use the SPFILE initialization parameter. 2. Start up the instance pointing to this initialization parameter file. STARTUP PFILE = /u01/oracle/dbs/spf_init.ora The SPFILE must reside on the machine running the database server. Therefore, the preceding method also provides a means for a client machine to start a database that uses an SPFILE. It also eliminates the need for a client machine to maintain a client-side initialization parameter file. When the client machine reads the initialization parameter file containing the SPFILE parameter, it passes the value to the server where the specified SPFILE is read. Initialization Files and Automatic Storage Management A database that uses Automatic Storage Management (ASM) usually has a non-default SPFILE. 
If you use the Database Configuration Assistant (DBCA) to configure a database to use ASM, DBCA creates an SPFILE for the database instance in an ASM disk group, and then creates a text initialization parameter file in the default location in the local file system to point to the SPFILE. Starting Up and Shutting Down 3-3 Starting Up a Database See Also: Chapter 2, "Creating an Oracle Database", for more information about initialization parameters, initialization parameter files, and server parameter files Preparing to Start Up an Instance You must perform some preliminary steps before attempting to start an instance of your database using SQL*Plus. 1. Ensure that environment variables are set so that you connect to the desired Oracle instance. For details, see "Selecting an Instance with Environment Variables" on page 1-7. 2. Start SQL*Plus without connecting to the database: SQLPLUS /NOLOG 3. Connect to Oracle Database as SYSDBA: CONNECT username/password AS SYSDBA Now you are connected to the database and ready to start up an instance of your database. See Also: SQL*Plus User's Guide and Reference for descriptions and syntax for the CONNECT, STARTUP, and SHUTDOWN commands. Starting Up an Instance You use the SQL*Plus STARTUP command to start up an Oracle Database instance. You can start an instance in various modes: ■ ■ ■ ■ Start the instance without mounting a database. This does not allow access to the database and usually would be done only for database creation or the re-creation of control files. Start the instance and mount the database, but leave it closed. This state allows for certain DBA activities, but does not allow general access to the database. Start the instance, and mount and open the database. This can be done in unrestricted mode, allowing access to all users, or in restricted mode, allowing access for database administrators only. Force the instance to start after a startup or shutdown problem, or start the instance and have complete media recovery begin immediately. You cannot start a database instance if you are connected to the database through a shared server process. Note: The following scenarios describe and illustrate the various states in which you can start up an instance. Some restrictions apply when combining clauses of the STARTUP command. 3-4 Oracle Database Administrator’s Guide Starting Up a Database It is possible to encounter problems starting up an instance if control files, database files, or redo log files are not available. If one or more of the files specified by the CONTROL_FILES initialization parameter does not exist or cannot be opened when you attempt to mount a database, Oracle Database returns a warning message and does not mount the database. If one or more of the datafiles or redo log files is not available or cannot be opened when attempting to open a database, the database returns a warning message and does not open the database. Note: SQL*Plus User's Guide and Reference for information about the restrictions that apply when combining clauses of the STARTUP command See Also: Starting an Instance, and Mounting and Opening a Database Normal database operation means that an instance is started and the database is mounted and open. This mode allows any valid user to connect to the database and perform data access operations. The following command starts an instance, reads the initialization parameters from the default location, and then mounts and opens the database. (You can optionally specify a PFILE clause.) 
STARTUP Starting an Instance Without Mounting a Database You can start an instance without mounting a database. Typically, you do so only during database creation. Use the STARTUP command with the NOMOUNT clause: STARTUP NOMOUNT Starting an Instance and Mounting a Database You can start an instance and mount a database without opening it, allowing you to perform specific maintenance operations. For example, the database must be mounted but not open during the following tasks: ■ ■ Enabling and disabling redo log archiving options. For more information, please refer to Chapter 7, "Managing Archived Redo Logs". Performing full database recovery. For more information, please refer to Oracle Database Backup and Recovery Basics The following command starts an instance and mounts the database, but leaves the database closed: STARTUP MOUNT Restricting Access to an Instance at Startup You can start an instance, and optionally mount and open a database, in restricted mode so that the instance is available only to administrative personnel (not general database users). Use this mode of instance startup when you need to accomplish one of the following tasks: ■ Perform an export or import of data ■ Perform a data load (with SQL*Loader) Starting Up and Shutting Down 3-5 Starting Up a Database ■ Temporarily prevent typical users from using data ■ Perform certain migration or upgrade operations Typically, all users with the CREATE SESSION system privilege can connect to an open database. Opening a database in restricted mode allows database access only to users with both the CREATE SESSION and RESTRICTED SESSION system privilege. Only database administrators should have the RESTRICTED SESSION system privilege. Further, when the instance is in restricted mode, a database administrator cannot access the instance remotely through an Oracle Net listener, but can only access the instance locally from the machine that the instance is running on. The following command starts an instance (and mounts and opens the database) in restricted mode: STARTUP RESTRICT You can use the RESTRICT clause in combination with the MOUNT, NOMOUNT, and OPEN clauses. Later, use the ALTER SYSTEM statement to disable the RESTRICTED SESSION feature: ALTER SYSTEM DISABLE RESTRICTED SESSION; If you open the database in nonrestricted mode and later find that you need to restrict access, you can use the ALTER SYSTEM statement to do so, as described in "Restricting Access to an Open Database" on page 3-8. Oracle Database SQL Reference for more information on the ALTER SYSTEM statement See Also: Forcing an Instance to Start In unusual circumstances, you might experience problems when attempting to start a database instance. You should not force a database to start unless you are faced with the following: ■ ■ You cannot shut down the current instance with the SHUTDOWN NORMAL, SHUTDOWN IMMEDIATE, or SHUTDOWN TRANSACTIONAL commands. You experience problems when starting an instance. If one of these situations arises, you can usually solve the problem by starting a new instance (and optionally mounting and opening the database) using the STARTUP command with the FORCE clause: STARTUP FORCE If an instance is running, STARTUP FORCE shuts it down with mode ABORT before restarting it. In this case, beginning with Oracle Database 10g Release 2, the alert log shows the message "Shutting down instance (abort)" followed by "Starting ORACLE instance (normal)." 
(Earlier versions of the database showed only "Starting ORACLE instance (force)" in the alert log.) See Also: "Shutting Down with the ABORT Clause" on page 3-10 to understand the side effects of aborting the current instance 3-6 Oracle Database Administrator’s Guide Altering Database Availability Starting an Instance, Mounting a Database, and Starting Complete Media Recovery If you know that media recovery is required, you can start an instance, mount a database to the instance, and have the recovery process automatically start by using the STARTUP command with the RECOVER clause: STARTUP OPEN RECOVER If you attempt to perform recovery when no recovery is required, Oracle Database issues an error message. Automatic Database Startup at Operating System Start Many sites use procedures to enable automatic startup of one or more Oracle Database instances and databases immediately following a system start. The procedures for performing this task are specific to each operating system. For information about automatic startup, see your operating system specific Oracle documentation. Starting Remote Instances If your local Oracle Database server is part of a distributed database, you might want to start a remote instance and database. Procedures for starting and stopping remote instances vary widely depending on communication protocol and operating system. Altering Database Availability You can alter the availability of a database. You may want to do this in order to restrict access for maintenance reasons or to make the database read only. The following sections explain how to alter the availability of a database: ■ Mounting a Database to an Instance ■ Opening a Closed Database ■ Opening a Database in Read-Only Mode ■ Restricting Access to an Open Database Mounting a Database to an Instance When you need to perform specific administrative operations, the database must be started and mounted to an instance, but closed. You can achieve this scenario by starting the instance and mounting the database. To mount a database to a previously started, but not opened instance, use the SQL statement ALTER DATABASE with the MOUNT clause as follows: ALTER DATABASE MOUNT; See Also: "Starting an Instance and Mounting a Database" on page 3-5 for a list of operations that require the database to be mounted and closed (and procedures to start an instance and mount a database in one step) Opening a Closed Database You can make a mounted but closed database available for general use by opening the database. To open a mounted database, use the ALTER DATABASE statement with the OPEN clause: ALTER DATABASE OPEN; Starting Up and Shutting Down 3-7 Altering Database Availability After executing this statement, any valid Oracle Database user with the CREATE SESSION system privilege can connect to the database. Opening a Database in Read-Only Mode Opening a database in read-only mode enables you to query an open database while eliminating any potential for online data content changes. While opening a database in read-only mode guarantees that datafile and redo log files are not written to, it does not restrict database recovery or operations that change the state of the database without generating redo. For example, you can take datafiles offline or bring them online since these operations do not affect data content. 
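If you want to verify the current open mode from SQL*Plus before relying on this behavior, one simple check (a minimal sketch) is to query the V$DATABASE view, whose OPEN_MODE column reports values such as MOUNTED, READ ONLY, and READ WRITE:
SELECT NAME, OPEN_MODE FROM V$DATABASE;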
If a query against a database in read-only mode uses temporary tablespace, for example to do disk sorts, then the issuer of the query must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the query will fail. This is explained in "Creating a Locally Managed Temporary Tablespace" on page 8-9. Ideally, you open a database in read-only mode when you alternate a standby database between read-only and recovery mode. Be aware that these are mutually exclusive modes. The following statement opens a database in read-only mode: ALTER DATABASE OPEN READ ONLY; You can also open a database in read/write mode as follows: ALTER DATABASE OPEN READ WRITE; However, read/write is the default mode. Note: You cannot use the RESETLOGS clause with a READ ONLY clause. Oracle Database SQL Reference for more information about the ALTER DATABASE statement See Also: Restricting Access to an Open Database To place an instance in restricted mode, where only users with administrative privileges can access it, use the SQL statement ALTER SYSTEM with the ENABLE RESTRICTED SESSION clause. After placing an instance in restricted mode, you should consider killing all current user sessions before performing any administrative tasks. To lift an instance from restricted mode, use ALTER SYSTEM with the DISABLE RESTRICTED SESSION clause. See Also: ■ ■ "Terminating Sessions" on page 4-16 for directions for killing user sessions "Restricting Access to an Instance at Startup" on page 3-5 to learn some reasons for placing an instance in restricted mode 3-8 Oracle Database Administrator’s Guide Shutting Down a Database Shutting Down a Database To initiate database shutdown, use the SQL*Plus SHUTDOWN command. Control is not returned to the session that initiates a database shutdown until shutdown is complete. Users who attempt connections while a shutdown is in progress receive a message like the following: ORA-01090: shutdown in progress - connection is not permitted You cannot shut down a database if you are connected to the database through a shared server process. Note: To shut down a database and instance, you must first connect as SYSOPER or SYSDBA. There are several modes for shutting down a database. These are discussed in the following sections: ■ Shutting Down with the NORMAL Clause ■ Shutting Down with the IMMEDIATE Clause ■ Shutting Down with the TRANSACTIONAL Clause ■ Shutting Down with the ABORT Clause Some shutdown modes wait for certain events to occur (such as transactions completing or users disconnecting) before actually bringing down the database. There is a one-hour timeout period for these events. This timeout behavior is discussed in this additional section: ■ Shutdown Timeout Shutting Down with the NORMAL Clause To shut down a database in normal situations, use the SHUTDOWN command with the NORMAL clause: SHUTDOWN NORMAL The NORMAL clause is optional, because this is the default shutdown method if no clause is provided. Normal database shutdown proceeds with the following conditions: ■ ■ No new connections are allowed after the statement is issued. Before the database is shut down, the database waits for all currently connected users to disconnect from the database. The next startup of the database will not require any instance recovery procedures. 
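As a minimal sketch, a planned maintenance script might combine this shutdown mode with a later restart as follows. This assumes operating system authentication permits CONNECT / AS SYSDBA and that the server parameter file or initialization parameter file is in its default location:
CONNECT / AS SYSDBA
SHUTDOWN NORMAL
REM perform offline maintenance tasks here, then restart the instance
STARTUP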
Shutting Down with the IMMEDIATE Clause Use immediate database shutdown only in the following situations: ■ To initiate an automated and unattended backup ■ When a power shutdown is going to occur soon ■ When the database or one of its applications is functioning irregularly and you cannot contact users to ask them to log off or they are unable to log off To shut down a database immediately, use the SHUTDOWN command with the IMMEDIATE clause: Starting Up and Shutting Down 3-9 Shutting Down a Database SHUTDOWN IMMEDIATE Immediate database shutdown proceeds with the following conditions: ■ ■ ■ No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued. Any uncommitted transactions are rolled back. (If long uncommitted transactions exist, this method of shutdown might not complete quickly, despite its name.) Oracle Database does not wait for users currently connected to the database to disconnect. The database implicitly rolls back active transactions and disconnects all connected users. The next startup of the database will not require any instance recovery procedures. Shutting Down with the TRANSACTIONAL Clause When you want to perform a planned shutdown of an instance while allowing active transactions to complete first, use the SHUTDOWN command with the TRANSACTIONAL clause: SHUTDOWN TRANSACTIONAL Transactional database shutdown proceeds with the following conditions: ■ ■ ■ No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued. After all transactions have completed, any client still connected to the instance is disconnected. At this point, the instance shuts down just as it would when a SHUTDOWN IMMEDIATE statement is submitted. The next startup of the database will not require any instance recovery procedures. A transactional shutdown prevents clients from losing work, and at the same time, does not require all users to log off. Shutting Down with the ABORT Clause You can shut down a database instantaneously by aborting the database instance. If possible, perform this type of shutdown only in the following situations: The database or one of its applications is functioning irregularly and none of the other types of shutdown works. ■ ■ You need to shut down the database instantaneously (for example, if you know a power shutdown is going to occur in one minute). You experience problems when starting a database instance. When you must do a database shutdown by aborting transactions and user connections, issue the SHUTDOWN command with the ABORT clause: SHUTDOWN ABORT An aborted database shutdown proceeds with the following conditions: ■ No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued. 3-10 Oracle Database Administrator’s Guide Quiescing a Database ■ ■ ■ Current client SQL statements being processed by Oracle Database are immediately terminated. Uncommitted transactions are not rolled back. Oracle Database does not wait for users currently connected to the database to disconnect. The database implicitly disconnects all connected users. The next startup of the database will require instance recovery procedures. Shutdown Timeout Shutdown modes that wait for users to disconnect or for transactions to complete have a limit on the amount of time that they wait. 
If all events blocking the shutdown do not occur within one hour, the shutdown command cancels with the following message:
ORA-01013: user requested cancel of current operation

Quiescing a Database
Occasionally you might want to put a database in a state that allows only DBA transactions, queries, fetches, or PL/SQL statements. Such a state is referred to as a quiesced state, in the sense that no ongoing non-DBA transactions, queries, fetches, or PL/SQL statements are running in the system.
Note: In this discussion of quiesce database, a DBA is defined as user SYS or SYSTEM. Other users, including those with the DBA role, are not allowed to issue the ALTER SYSTEM QUIESCE DATABASE statement or proceed after the database is quiesced.
The quiesced state lets administrators perform actions that cannot safely be done otherwise. These actions include:
■ Actions that fail if concurrent user transactions access the same object--for example, changing the schema of a database table or adding a column to an existing table where a no-wait lock is required.
■ Actions whose undesirable intermediate effect can be seen by concurrent user transactions--for example, a multistep procedure for reorganizing a table when the table is first exported, then dropped, and finally imported. A concurrent user who attempts to access the table after it was dropped, but before import, would not have an accurate view of the situation.
Without the ability to quiesce the database, you would need to shut down the database and reopen it in restricted mode. This is a serious restriction, especially for systems requiring 24 x 7 availability. Quiescing a database is a much smaller restriction, because it eliminates the disruption to users and the downtime associated with shutting down and restarting the database.
When the database is in the quiesced state, it is through the facilities of the Database Resource Manager that non-DBA sessions are prevented from becoming active. Therefore, while this statement is in effect, any attempt to change the current resource plan will be queued until after the system is unquiesced. See Chapter 24, "Using the Database Resource Manager" for more information about the Database Resource Manager.

Placing a Database into a Quiesced State
To place a database into a quiesced state, issue the following statement:
ALTER SYSTEM QUIESCE RESTRICTED;
Non-DBA active sessions will continue until they become inactive. An active session is one that is currently inside of a transaction, a query, a fetch, or a PL/SQL statement; or a session that is currently holding any shared resources (for example, enqueues). No inactive sessions are allowed to become active. For example, if a user issues a SQL query in an attempt to force an inactive session to become active, the query will appear to be hung. When the database is later unquiesced, the session is resumed, and the blocked action is processed.
Once all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE RESTRICTED statement completes, and the database is in a quiesced state. In an Oracle Real Application Clusters environment, this statement affects all instances, not just the one that issues the statement.
The ALTER SYSTEM QUIESCE RESTRICTED statement may wait a long time for active sessions to become inactive. You can determine the sessions that are blocking the quiesce operation by querying the V$BLOCKING_QUIESCE view. This view returns only a single column: SID (Session ID).
You can join it with V$SESSION to get more information about the session, as shown in the following example:
select bl.sid, user, osuser, type, program
from v$blocking_quiesce bl, v$session se
where bl.sid = se.sid;
See Oracle Database Reference for details on these views.
If you interrupt the request to quiesce the database, or if your session terminates abnormally before all active sessions are quiesced, then Oracle Database automatically reverses any partial effects of the statement.
For queries that are carried out by successive multiple Oracle Call Interface (OCI) fetches, the ALTER SYSTEM QUIESCE RESTRICTED statement does not wait for all fetches to finish. It only waits for the current fetch to finish.
For both dedicated and shared server connections, all non-DBA logins after this statement is issued are queued by the Database Resource Manager, and are not allowed to proceed. To the user, it appears as if the login is hung. The login will resume when the database is unquiesced.
The database remains in the quiesced state even if the session that issued the statement exits. A DBA must log in to the database to issue the statement that specifically unquiesces the database.
Note: You cannot perform a cold backup when the database is in the quiesced state, because Oracle Database background processes may still perform updates for internal purposes even while the database is quiesced. In addition, the file headers of online datafiles continue to appear to be accessible. They do not look the same as if a clean shutdown had been performed. However, you can still take online backups while the database is in a quiesced state.

Restoring the System to Normal Operation
The following statement restores the database to normal operation:
ALTER SYSTEM UNQUIESCE;
All non-DBA activity is allowed to proceed. In an Oracle Real Application Clusters environment, this statement is not required to be issued from the same session, or even the same instance, as that which quiesced the database. If the session issuing the ALTER SYSTEM UNQUIESCE statement terminates abnormally, then the Oracle Database server ensures that the unquiesce operation completes.

Viewing the Quiesce State of an Instance
You can query the ACTIVE_STATE column of the V$INSTANCE view to see the current state of an instance. The column has one of these values:
■ NORMAL: Normal unquiesced state.
■ QUIESCING: Being quiesced, but some non-DBA sessions are still active.
■ QUIESCED: Quiesced; no non-DBA sessions are active or allowed.

Suspending and Resuming a Database
The ALTER SYSTEM SUSPEND statement halts all input and output (I/O) to datafiles (file header and file data) and control files. The suspended state lets you back up a database without I/O interference. When the database is suspended, all preexisting I/O operations are allowed to complete and any new database accesses are placed in a queued state.
The suspend command is not specific to an instance. In an Oracle Real Application Clusters environment, when you issue the suspend command on one system, internal locking mechanisms propagate the halt request across instances, thereby quiescing all active instances in a given cluster. However, if someone starts a new instance while another instance is being suspended, the new instance will not be suspended.
Use the ALTER SYSTEM RESUME statement to resume normal database operations. The SUSPEND and RESUME commands can be issued from different instances.
For example, if instances 1, 2, and 3 are running, and you issue an ALTER SYSTEM SUSPEND statement from instance 1, then you can issue a RESUME statement from instance 1, 2, or 3 with the same effect.
The suspend/resume feature is useful in systems that allow you to mirror a disk or file and then split the mirror, providing an alternative backup and restore solution. If you use a system that is unable to split a mirrored disk from an existing database while writes are occurring, then you can use the suspend/resume feature to facilitate the split.
The suspend/resume feature is not a suitable substitute for normal shutdown operations, because copies of a suspended database can contain uncommitted updates.
Caution: Do not use the ALTER SYSTEM SUSPEND statement as a substitute for placing a tablespace in hot backup mode. Precede any database suspend operation by an ALTER TABLESPACE BEGIN BACKUP statement.
The following statements illustrate ALTER SYSTEM SUSPEND/RESUME usage. The V$INSTANCE view is queried to confirm database status.
SQL> ALTER SYSTEM SUSPEND;
System altered

SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
SUSPENDED

SQL> ALTER SYSTEM RESUME;
System altered

SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
ACTIVE

See Also: Oracle Database Backup and Recovery Advanced User's Guide for details about backing up a database using the database suspend/resume feature

4 Managing Oracle Database Processes
This chapter describes how to manage and monitor the processes of an Oracle Database instance and contains the following topics:
■ About Dedicated and Shared Server Processes
■ Configuring Oracle Database for Shared Server
■ About Oracle Database Background Processes
■ Managing Processes for Parallel SQL Execution
■ Managing Processes for External Procedures
■ Terminating Sessions
■ Monitoring the Operation of Your Database

About Dedicated and Shared Server Processes
Oracle Database creates server processes to handle the requests of user processes connected to an instance. A server process can be either of the following:
■ A dedicated server process, which services only one user process
■ A shared server process, which can service multiple user processes
Your database is always enabled to allow dedicated server processes, but you must specifically configure and enable shared server by setting one or more initialization parameters.

Dedicated Server Processes
Figure 4–1, "Oracle Database Dedicated Server Processes" illustrates how dedicated server processes work. In this diagram two user processes are connected to the database through dedicated server processes. In general, it is better to be connected through a dispatcher and use a shared server process. This is illustrated in Figure 4–2, "Oracle Database Shared Server Processes". A shared server process can be more efficient because it keeps the number of processes required for the running instance low.
In the following situations, however, users and administrators should explicitly connect to an instance using a dedicated server process:
■ To submit a batch job (for example, when a job can allow little or no idle time for the server process)
■ To use Recovery Manager (RMAN) to back up, restore, or recover a database
To request a dedicated server connection when Oracle Database is configured for shared server, users must connect using a net service name that is configured to use a dedicated server. Specifically, the net service name value should include the SERVER=DEDICATED clause in the connect descriptor.
See Also: Oracle Database Net Services Administrator's Guide for more information about requesting a dedicated server connection
Figure 4–1 Oracle Database Dedicated Server Processes
[Figure 4–1 is a diagram showing user processes and application code on client workstations connecting through the program interface to dedicated server processes, Oracle server code, and the system global area on the database server.]

Shared Server Processes
Consider an order entry system with dedicated server processes. A customer phones the order desk and places an order, and the clerk taking the call enters the order into the database. For most of the transaction, the clerk is on the telephone talking to the customer. A server process is not needed during this time, so the server process dedicated to the clerk's user process remains idle. The system is slower for other clerks entering orders, because the idle server process is holding system resources.
Shared server architecture eliminates the need for a dedicated server process for each connection (see Figure 4–2).
Figure 4–2 Oracle Database Shared Server Processes
[Figure 4–2 is a diagram showing client user processes connecting to dispatcher processes on the database server; dispatchers place requests on a request queue in the system global area, shared server processes running Oracle server code pick up the requests, and results are returned through response queues.]
In a shared server configuration, client user processes connect to a dispatcher. The dispatcher can support multiple client connections concurrently. Each client connection is bound to a virtual circuit, which is a piece of shared memory used by the dispatcher for client database connection requests and replies. The dispatcher places a virtual circuit on a common queue when a request arrives.
An idle shared server process picks up the virtual circuit from the common queue, services the request, and relinquishes the virtual circuit before attempting to retrieve another virtual circuit from the common queue. This approach enables a small pool of server processes to serve a large number of clients. A significant advantage of shared server architecture over the dedicated server model is the reduction of system resources, enabling the support of an increased number of users.
For even better resource management, shared server can be configured for connection pooling. Connection pooling lets a dispatcher support more users by enabling the database server to time-out protocol connections and to use those connections to service an active session. Further, shared server can be configured for session multiplexing, which combines multiple sessions for transmission over a single network connection in order to conserve the operating system's resources.
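To see which type of server process existing sessions are actually using, you can query the SERVER column of the V$SESSION view (a minimal sketch). For sessions that connect through a dispatcher, this column typically shows SHARED while a request is being serviced and NONE while the session is idle; dedicated connections show DEDICATED:
SELECT SERVER, COUNT(*)
FROM V$SESSION
GROUP BY SERVER;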
Managing Oracle Database Processes 4-3 Configuring Oracle Database for Shared Server Shared server architecture requires Oracle Net Services. User processes targeting the shared server must connect through Oracle Net Services, even if they are on the same machine as the Oracle Database instance. See Also: Oracle Database Net Services Administrator's Guide for more detailed information about shared server, including features such as connection pooling and session multiplexing Configuring Oracle Database for Shared Server Shared memory resources are preconfigured to allow the enabling of shared server at run time. You need not configure it by specifying parameters in your initialization parameter file, but you can do so if that better suits your environment. You can start dispatchers and shared server processes (shared servers) dynamically using the ALTER SYSTEM statement. This section discusses how to enable shared server and how to set or alter shared server initialization parameters. It contains the following topics: ■ Initialization Parameters for Shared Server ■ Enabling Shared Server ■ Configuring Dispatchers ■ Monitoring Shared Server Oracle Database SQL Reference for further information about the ALTER SYSTEM statement See Also: Initialization Parameters for Shared Server The following initialization parameters control shared server operation: ■ ■ ■ ■ ■ ■ SHARED_SERVERS: Specifies the initial number of shared servers to start and the minimum number of shared servers to keep. This is the only required parameter for using shared servers. MAX_SHARED_SERVERS: Specifies the maximum number of shared servers that can run simultaneously. SHARED_SERVER_SESSIONS: Specifies the total number of shared server user sessions that can run simultaneously. Setting this parameter enables you to reserve user sessions for dedicated servers. DISPATCHERS: Configures dispatcher processes in the shared server architecture. MAX_DISPATCHERS: Specifies the maximum number of dispatcher processes that can run simultaneously. This parameter can be ignored for now. It will only be useful in a future release when the number of dispatchers is auto-tuned according to the number of concurrent connections. CIRCUITS: Specifies the total number of virtual circuits that are available for inbound and outbound network sessions. See Also: Oracle Database Reference for more information about these initialization parameters 4-4 Oracle Database Administrator’s Guide Configuring Oracle Database for Shared Server Enabling Shared Server Shared server is enabled by setting the SHARED_SERVERS initialization parameter to a value greater than 0. The other shared server initialization parameters need not be set. Because shared server requires at least one dispatcher in order to work, a dispatcher is brought up even if no dispatcher has been configured. Dispatchers are discussed in "Configuring Dispatchers" on page 4-7. Shared server can be started dynamically by setting the SHARED_SERVERS parameter to a nonzero value with the ALTER SYSTEM statement, or SHARED_SERVERS can be included at database startup in the initialization parameter file. If SHARED_SERVERS is not included in the initialization parameter file, or is included but is set to 0, then shared server is not enabled at database startup. Note: For backward compatibility, if SHARED_SERVERS is not included in the initialization parameter file at database startup, but DISPATCHERS is included and it specifies at least one dispatcher, shared server is enabled. 
In this case, the default for SHARED_SERVERS is 1. However, if neither SHARED_SERVERS nor DISPATCHERS is included in the initialization file, you cannot start shared server after the instance is brought up by just altering the DISPATCHERS parameter. You must specifically alter SHARED_SERVERS to a nonzero value to start shared server. Determining a Value for SHARED_SERVERS The SHARED_SERVERS initialization parameter specifies the minimum number of shared servers that you want created when the instance is started. After instance startup, Oracle Database can dynamically adjust the number of shared servers based on how busy existing shared servers are and the length of the request queue. In typical systems, the number of shared servers stabilizes at a ratio of one shared server for every ten connections. For OLTP applications, when the rate of requests is low, or when the ratio of server usage to request is low, the connections-to-servers ratio could be higher. In contrast, in applications where the rate of requests is high or the server usage-to-request ratio is high, the connections-to-server ratio could be lower. The PMON (process monitor) background process cannot terminate shared servers below the value specified by SHARED_SERVERS. Therefore, you can use this parameter to stabilize the load and minimize strain on the system by preventing PMON from terminating and then restarting shared servers because of coincidental fluctuations in load. If you know the average load on your system, you can set SHARED_SERVERS to an optimal value. The following example shows how you can use this parameter: Assume a database is being used by a telemarketing center staffed by 1000 agents. On average, each agent spends 90% of the time talking to customers and only 10% of the time looking up and updating records. To keep the shared servers from being terminated as agents talk to customers and then spawned again as agents access the database, a DBA specifies that the optimal number of shared servers is 100. However, not all work shifts are staffed at the same level. On the night shift, only 200 agents are needed. Since SHARED_SERVERS is a dynamic parameter, a DBA reduces the number of shared servers to 20 at night, thus allowing resources to be freed up for other tasks such as batch jobs. Managing Oracle Database Processes 4-5 Configuring Oracle Database for Shared Server Decreasing the Number of Shared Server Processes You can decrease the minimum number of shared servers that must be kept active by dynamically setting the SHARED_SERVERS parameter to a lower value. Thereafter, until the number of shared servers is decreased to the value of the SHARED_SERVERS parameter, any shared servers that become inactive are marked by PMON for termination. The following statement reduces the number of shared servers: ALTER SYSTEM SET SHARED_SERVERS = 5; Setting SHARED_SERVERS to 0 disables shared server. For more information, please refer to "Disabling Shared Servers" on page 4-12. Limiting the Number of Shared Server Processes The MAX_SHARED_SERVERS parameter specifies the maximum number of shared servers that can be automatically created by PMON. It has no default value. 
If no value is specified, then PMON starts as many shared servers as is required by the load, subject to these limitations:
■ The process limit (set by the PROCESSES initialization parameter)
■ A minimum number of free process slots (at least one-eighth of the total process slots, or two slots if PROCESSES is set to less than 24)
■ System resources
Note: On Windows NT, take care when setting MAX_SHARED_SERVERS to a high value, because each server is a thread in a common process.
The value of SHARED_SERVERS overrides the value of MAX_SHARED_SERVERS. Therefore, you can force PMON to start more shared servers than the MAX_SHARED_SERVERS value by setting SHARED_SERVERS to a value higher than MAX_SHARED_SERVERS. You can subsequently place a new upper limit on the number of shared servers by dynamically altering MAX_SHARED_SERVERS to a value higher than SHARED_SERVERS.
The primary reason to limit the number of shared servers is to reserve resources, such as memory and CPU time, for other processes. For example, consider the case of the telemarketing center discussed previously: The DBA wants to reserve two thirds of the resources for batch jobs at night. He sets MAX_SHARED_SERVERS to less than one third of the maximum number of processes (PROCESSES). By doing so, the DBA ensures that even if all agents happen to access the database at the same time, batch jobs can connect to dedicated servers without having to wait for the shared servers to be brought down after processing agents' requests.
Another reason to limit the number of shared servers is to prevent the concurrent run of too many server processes from slowing down the system due to heavy swapping, although PROCESSES can serve as the upper bound for this rather than MAX_SHARED_SERVERS.
Still other reasons to limit the number of shared servers are testing, debugging, performance analysis, and tuning. For example, to see how many shared servers are needed to efficiently support a certain user community, you can vary MAX_SHARED_SERVERS from a very small number upward until no delay in response time is noticed by the users.

Limiting the Number of Shared Server Sessions
The SHARED_SERVER_SESSIONS initialization parameter specifies the maximum number of concurrent shared server user sessions. Setting this parameter, which is a dynamic parameter, lets you reserve database sessions for dedicated servers. This in turn ensures that administrative tasks that require dedicated servers, such as backing up or recovering the database, are not preempted by shared server sessions.
This parameter has no default value. If it is not specified, the system can create shared server sessions as needed, limited by the SESSIONS initialization parameter.

Protecting Shared Memory
The CIRCUITS parameter sets a maximum limit on the number of virtual circuits that can be created in shared memory. This parameter has no default. If it is not specified, then the system can create circuits as needed, limited by the DISPATCHERS initialization parameter and system resources.

Configuring Dispatchers
The DISPATCHERS initialization parameter configures dispatcher processes in the shared server architecture. At least one dispatcher process is required for shared server to work. If you do not specify a dispatcher, but you enable shared server by setting SHARED_SERVERS to a nonzero value, then by default Oracle Database creates one dispatcher for the TCP protocol.
The equivalent DISPATCHERS explicit setting of the initialization parameter for this configuration is:
dispatchers="(PROTOCOL=tcp)"
You can configure more dispatchers, using the DISPATCHERS initialization parameter, if either of the following conditions applies:
■ You need to configure a protocol other than TCP/IP. You configure a protocol address with one of the following attributes of the DISPATCHERS parameter: ADDRESS, DESCRIPTION, PROTOCOL.
■ You want to configure one or more of the optional dispatcher attributes: DISPATCHERS, CONNECTIONS, SESSIONS, TICKS, LISTENER, MULTIPLEX, POOL, SERVICE.
Note: Database Configuration Assistant helps you configure this parameter.

DISPATCHERS Initialization Parameter Attributes
This section provides brief descriptions of the attributes that can be specified with the DISPATCHERS initialization parameter.
A protocol address is required and is specified using one or more of the following attributes:
■ ADDRESS: Specify the network protocol address of the endpoint on which the dispatchers listen.
■ DESCRIPTION: Specify the network description of the endpoint on which the dispatchers listen, including the network protocol address. The syntax is as follows: (DESCRIPTION=(ADDRESS=...))
■ PROTOCOL: Specify the network protocol for which the dispatcher generates a listening endpoint. For example: (PROTOCOL=tcp)
See the Oracle Database Net Services Reference for further information about protocol address syntax.
The following attribute specifies how many dispatchers this configuration should have. It is optional and defaults to 1.
■ DISPATCHERS: Specify the initial number of dispatchers to start.
The following attributes tell the instance about the network attributes of each dispatcher of this configuration. They are all optional.
■ CONNECTIONS: Specify the maximum number of network connections to allow for each dispatcher.
■ SESSIONS: Specify the maximum number of network sessions to allow for each dispatcher.
■ TICKS: Specify the duration of a TICK in seconds. A TICK is a unit of time in terms of which the connection pool timeout can be specified. Used for connection pooling.
■ LISTENER: Specify an alias name for the listeners with which the PMON process registers dispatcher information. Set the alias to a name that is resolved through a naming method.
■ MULTIPLEX: Used to enable the Oracle Connection Manager session multiplexing feature.
■ POOL: Used to enable connection pooling.
■ SERVICE: Specify the service names the dispatchers register with the listeners.
You can specify either an entire attribute name or a substring consisting of at least the first three characters. For example, you can specify SESSIONS=3, SES=3, SESS=3, or SESSI=3, and so forth.
See Also: Oracle Database Reference for more detailed descriptions of the attributes of the DISPATCHERS initialization parameter

Determining the Number of Dispatchers
Once you know the number of possible connections for each process for the operating system, calculate the initial number of dispatchers to create during instance startup, for each network protocol, using the following formula:
Number of dispatchers = CEIL ( max. concurrent sessions / connections for each dispatcher )
CEIL returns the result rounded up to the next whole integer.
For example, assume a system that can support 970 connections for each process, and that has:
■ A maximum of 4000 sessions concurrently connected through TCP/IP and
■ A maximum of 2,500 sessions concurrently connected through TCP/IP with SSL
The DISPATCHERS attribute for TCP/IP should be set to a minimum of five dispatchers (4000 / 970), and for TCP/IP with SSL three dispatchers (2500 / 970):
DISPATCHERS='(PROT=tcp)(DISP=5)', '(PROT=tcps)(DISP=3)'
Depending on performance, you may need to adjust the number of dispatchers.

Setting the Initial Number of Dispatchers
You can specify multiple dispatcher configurations by setting DISPATCHERS to a comma separated list of strings, or by specifying multiple DISPATCHERS parameters in the initialization file. If you specify DISPATCHERS multiple times, the lines must be adjacent to each other in the initialization parameter file. Internally, Oracle Database assigns an INDEX value (beginning with zero) to each DISPATCHERS parameter. You can later refer to that DISPATCHERS parameter in an ALTER SYSTEM statement by its index number.
Some examples of setting the DISPATCHERS initialization parameter follow.
Example: Typical
This is a typical example of setting the DISPATCHERS initialization parameter.
DISPATCHERS="(PROTOCOL=TCP)(DISPATCHERS=2)"
Example: Forcing the IP Address Used for Dispatchers
The following hypothetical example will create two dispatchers that will listen on the specified IP address. The address must be a valid IP address for the host that the instance is on. (The host may be configured with multiple IP addresses.)
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(HOST=144.25.16.201))(DISPATCHERS=2)"
Example: Forcing the Port Used by Dispatchers
To force the dispatchers to use a specific port as the listening endpoint, add the PORT attribute as follows:
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(PORT=5000))"
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(PORT=5001))"

Altering the Number of Dispatchers
You can control the number of dispatcher processes in the instance. Unlike the number of shared servers, the number of dispatchers does not change automatically. You change the number of dispatchers explicitly with the ALTER SYSTEM statement. In this release of Oracle Database, you can increase the number of dispatchers to more than the limit specified by the MAX_DISPATCHERS parameter. It is planned that MAX_DISPATCHERS will be taken into consideration in a future release.
Monitor the following views to determine the load on the dispatcher processes:
■ V$QUEUE
■ V$DISPATCHER
■ V$DISPATCHER_RATE
See Also: Oracle Database Performance Tuning Guide for information about monitoring these views to determine dispatcher load and performance
If these views indicate that the load on the dispatcher processes is consistently high, then performance may be improved by starting additional dispatcher processes to route user requests. In contrast, if the load on dispatchers is consistently low, reducing the number of dispatchers may improve performance.
To dynamically alter the number of dispatchers when the instance is running, use the ALTER SYSTEM statement to modify the DISPATCHERS attribute setting for an existing dispatcher configuration. You can also add new dispatcher configurations to start dispatchers with different network attributes.
When you reduce the number of dispatchers for a particular dispatcher configuration, the dispatchers are not immediately removed.
Rather, as users disconnect, Oracle Database terminates dispatchers down to the limit you specify in DISPATCHERS.
For example, suppose the instance was started with this DISPATCHERS setting in the initialization parameter file:
DISPATCHERS='(PROT=tcp)(DISP=2)', '(PROT=tcps)(DISP=2)'
To increase the number of dispatchers for the TCP/IP protocol from 2 to 3, and decrease the number of dispatchers for the TCP/IP with SSL protocol from 2 to 1, you can issue the following statement:
ALTER SYSTEM SET DISPATCHERS = '(INDEX=0)(DISP=3)', '(INDEX=1)(DISP=1)';
or
ALTER SYSTEM SET DISPATCHERS = '(PROT=tcp)(DISP=3)', '(PROT=tcps)(DISP=1)';
Note: You need not specify (DISP=1). It is optional because 1 is the default value for the DISPATCHERS parameter.
If fewer than three dispatcher processes currently exist for TCP/IP, the database creates new ones. If more than one dispatcher process currently exists for TCP/IP with SSL, then the database terminates the extra ones as the connected users disconnect.
Suppose that instead of changing the number of dispatcher processes for the TCP/IP protocol, you want to add another TCP/IP dispatcher that supports connection pooling. You can do so by entering the following statement:
Shutting Down Specific Dispatcher Processes With the ALTER SYSTEM statement, you leave it up to the database to determine which dispatchers to shut down to reduce the number of dispatchers. Alternatively, it is possible to shut down specific dispatcher processes. To identify the name of the specific dispatcher process to shut down, use the V$DISPATCHER dynamic performance view. SELECT NAME, NETWORK FROM V$DISPATCHER; Each dispatcher is uniquely identified by a name of the form Dnnn. To shut down dispatcher D002, issue the following statement: ALTER SYSTEM SHUTDOWN IMMEDIATE 'D002'; The IMMEDIATE keyword stops the dispatcher from accepting new connections and the database immediately terminates all existing connections through that dispatcher. After all sessions are cleaned up, the dispatcher process shuts down. If IMMEDIATE Managing Oracle Database Processes 4-11 Configuring Oracle Database for Shared Server were not specified, the dispatcher would wait until all of its users disconnected and all of its connections terminated before shutting down. Disabling Shared Servers You disable shared server by setting SHARED_SERVERS to 0. No new client can connect in shared mode. However, when you set SHARED_SERVERS to 0, Oracle Database retains some shared servers until all shared server connections are closed. The number of shared servers retained is either the number specified by the preceding setting of SHARED_SERVERS or the value of the MAX_SHARED_SERVERS parameter, whichever is smaller. If both SHARED_SERVERS and MAX_SHARED_SERVERS are set to 0, then all shared servers will terminate and requests from remaining shared server clients will be queued until the value of SHARED_SERVERS or MAX_SHARED_SERVERS is raised again. To terminate dispatchers once all shared server clients disconnect, enter this statement: ALTER SYSTEM SET DISPATCHERS = ''; Monitoring Shared Server The following views are useful for obtaining information about your shared server configuration and for monitoring performance. View Description V$DISPATCHER Provides information on the dispatcher processes, including name, network address, status, various usage statistics, and index number. V$DISPATCHER_CONFIG Provides configuration information about the dispatchers. V$DISPATCHER_RATE Provides rate statistics for the dispatcher processes. V$QUEUE Contains information on the shared server message queues. V$SHARED_SERVER Contains information on the shared servers. V$CIRCUIT Contains information about virtual circuits, which are user connections to the database through dispatchers and servers. V$SHARED_SERVER_MONITOR Contains information for tuning shared server. V$SGA Contains size information about various system global area (SGA) groups. May be useful when tuning shared server. V$SGASTAT Contains detailed statistical information about the SGA, useful for tuning. V$SHARED_POOL_RESERVED Lists statistics to help tune the reserved pool and space within the shared pool. See Also: ■ ■ Oracle Database Reference for detailed descriptions of these views Oracle Database Performance Tuning Guide for specific information about monitoring and tuning shared server 4-12 Oracle Database Administrator’s Guide About Oracle Database Background Processes About Oracle Database Background Processes To maximize performance and accommodate many users, a multiprocess Oracle Database system uses background processes. Background processes consolidate functions that would otherwise be handled by multiple database programs running for each user process. 
Background processes asynchronously perform I/O and monitor other Oracle Database processes to provide increased parallelism for better performance and reliability. Table 4–1 describes the basic Oracle Database background processes, many of which are discussed in more detail elsewhere in this book. The use of additional database server features or options can cause more background processes to be present. For example, when you use Advanced Queuing, the queue monitor (QMNn) background process is present. When you specify the FILE_MAPPING initialization parameter for mapping datafiles to physical devices on a storage subsystem, then the FMON process is present. Table 4–1 Oracle Database Background Processes Process Name Description Database writer (DBWn) The database writer writes modified blocks from the database buffer cache to the datafiles. Oracle Database allows a maximum of 20 database writer processes (DBW0-DBW9 and DBWa-DBWj). The DB_WRITER_PROCESSES initialization parameter specifies the number of DBWn processes. The database selects an appropriate default setting for this initialization parameter or adjusts a user-specified setting based on the number of CPUs and the number of processor groups. For more information about setting the DB_WRITER_PROCESSES initialization parameter, see the Oracle Database Performance Tuning Guide. Log writer (LGWR) The log writer process writes redo log entries to disk. Redo log entries are generated in the redo log buffer of the system global area (SGA). LGWR writes the redo log entries sequentially into a redo log file. If the database has a multiplexed redo log, then LGWR writes the redo log entries to a group of redo log files. See Chapter 6, "Managing the Redo Log" for information about the log writer process. Checkpoint (CKPT) At specific times, all modified database buffers in the system global area are written to the datafiles by DBWn. This event is called a checkpoint. The checkpoint process is responsible for signalling DBWn at checkpoints and updating all the datafiles and control files of the database to indicate the most recent checkpoint. System monitor (SMON) The system monitor performs recovery when a failed instance starts up again. In a Real Application Clusters database, the SMON process of one instance can perform instance recovery for other instances that have failed. SMON also cleans up temporary segments that are no longer in use and recovers dead transactions skipped during system failure and instance recovery because of file-read or offline errors. These transactions are eventually recovered by SMON when the tablespace or file is brought back online. Process monitor (PMON) The process monitor performs process recovery when a user process fails. PMON is responsible for cleaning up the cache and freeing resources that the process was using. PMON also checks on the dispatcher processes (described later in this table) and server processes and restarts them if they have failed. Archiver (ARCn) One or more archiver processes copy the redo log files to archival storage when they are full or a log switch occurs. Archiver processes are the subject of Chapter 7, "Managing Archived Redo Logs". Managing Oracle Database Processes 4-13 Managing Processes for Parallel SQL Execution Table 4–1 (Cont.) Oracle Database Background Processes Process Name Description Recoverer (RECO) The recoverer process is used to resolve distributed transactions that are pending because of a network or system failure in a distributed database. 
At timed intervals, the local RECO attempts to connect to remote databases and automatically complete the commit or rollback of the local portion of any pending distributed transactions. For information about this process and how to start it, see Chapter 33, "Managing Distributed Transactions". Dispatcher (Dnnn) Dispatchers are optional background processes, present only when the shared server configuration is used. Shared server was discussed previously in "Configuring Oracle Database for Shared Server" on page 4-4. Global Cache Service (LMS) In a Real Application Clusters environment, this process manages resources and provides inter-instance resource control. Oracle Database Concepts for more information about Oracle Database background processes See Also: Managing Processes for Parallel SQL Execution Note: The parallel execution feature described in this section is available with the Oracle Database Enterprise Edition. This section describes how to manage parallel processing of SQL statements. In this configuration Oracle Database can divide the work of processing an SQL statement among multiple parallel processes. The execution of many SQL statements can be parallelized. The degree of parallelism is the number of parallel execution servers that can be associated with a single operation. The degree of parallelism is determined by any of the following: ■ ■ A PARALLEL clause in a statement For objects referred to in a query, the PARALLEL clause that was used when the object was created or altered ■ A parallel hint inserted into the statement ■ A default determined by the database An example of using parallel SQL execution is contained in "Parallelizing Table Creation" on page 15-8. The following topics are contained in this section: ■ About Parallel Execution Servers ■ Altering Parallel Execution for a Session See Also: ■ ■ Oracle Database Concepts for a description of parallel execution Oracle Database Performance Tuning Guide for information about using parallel hints 4-14 Oracle Database Administrator’s Guide Managing Processes for Parallel SQL Execution About Parallel Execution Servers When an instance starts up, Oracle Database creates a pool of parallel execution servers which are available for any parallel operation. A process called the parallel execution coordinator dispatches the execution of a pool of parallel execution servers and coordinates the sending of results from all of these parallel execution servers back to the user. The parallel execution servers are enabled by default, because by default the value for PARALLEL_MAX_SERVERS initialization parameter is set >0. The processes are available for use by the various Oracle Database features that are capable of exploiting parallelism. Related initialization parameters are tuned by the database for the majority of users, but you can alter them as needed to suit your environment. For ease of tuning, some parameters can be altered dynamically. Parallelism can be used by a number of features, including transaction recovery, replication, and SQL execution. In the case of parallel SQL execution, the topic discussed in this book, parallel server processes remain associated with a statement throughout its execution phase. When the statement is completely processed, these processes become available to process other statements. 
See Also: Oracle Database Data Warehousing Guide for more information about using and tuning parallel execution, including parallel SQL execution Altering Parallel Execution for a Session You control parallel SQL execution for a session using the ALTER SESSION statement. Disabling Parallel SQL Execution You disable parallel SQL execution with an ALTER SESSION DISABLE PARALLEL DML|DDL|QUERY statement. All subsequent DML (INSERT, UPDATE, DELETE), DDL (CREATE, ALTER), or query (SELECT) operations are executed serially after such a statement is issued. They will be executed serially regardless of any PARALLEL clause associated with the statement or parallel attribute associated with the table or indexes involved. The following statement disables parallel DDL operations: ALTER SESSION DISABLE PARALLEL DDL; Enabling Parallel SQL Execution You enable parallel SQL execution with an ALTER SESSION ENABLE PARALLEL DML|DDL|QUERY statement. Subsequently, when a PARALLEL clause or parallel hint is associated with a statement, those DML, DDL, or query statements will execute in parallel. By default, parallel execution is enabled for DDL and query statements. A DML statement can be parallelized only if you specifically issue an ALTER SESSION statement to enable parallel DML: ALTER SESSION ENABLE PARALLEL DML; Forcing Parallel SQL Execution You can force parallel execution of all subsequent DML, DDL, or query statements for which parallelization is possible with the ALTER SESSION FORCE PARALLEL DML|DDL|QUERY statement. Additionally you can force a specific degree of parallelism to be in effect, overriding any PARALLEL clause associated with subsequent statements. If you do not specify a degree of parallelism in this statement, Managing Oracle Database Processes 4-15 Managing Processes for External Procedures the default degree of parallelism is used. However, a degree of parallelism specified in a statement through a hint will override the degree being forced. The following statement forces parallel execution of subsequent statements and sets the overriding degree of parallelism to 5: ALTER SESSION FORCE PARALLEL DDL PARALLEL 5; Managing Processes for External Procedures External procedures are procedures written in one language that are called from another program in a different language. An example is a PL/SQL program calling one or more C routines that are required to perform special-purpose processing. These callable routines are stored in a dynamic link library (DLL), or a libunit in the case of a Java class method, and are registered with the base language. Oracle Database provides a special-purpose interface, the call specification (call spec), that enables users to call external procedures from other languages. To call an external procedure, an application alerts a network listener process, which in turn starts an external procedure agent. The default name of the agent is extproc, and this agent must reside on the same computer as the database server. Using the network connection established by the listener, the application passes to the external procedure agent the name of the DLL or libunit, the name of the external procedure, and any relevant parameters. The external procedure agent then loads, DLL or libunit, runs the external procedure, and passes back to the application any values returned by the external procedure. To control access to DLLs, the database administrator grants execute privileges on the appropriate DLLs to application developers. 
The application developers write the external procedures and grant execute privilege on specific external procedures to other users. The external library (DLL file) must be statically linked. In other words, it must not reference any external symbols from other external libraries (DLL files). Oracle Database does not resolve such symbols, so they can cause your external procedure to fail. Note: The environment for calling external procedures, consisting of tnsnames.ora and listener.ora entries, is configured by default during the installation of your database. You may need to perform additional network configuration steps for a higher level of security. These steps are documented in the Oracle Database Net Services Administrator's Guide. See Also: Oracle Database Application Developer's Guide Fundamentals for information about external procedures Terminating Sessions Sometimes it is necessary to terminate current user sessions. For example, you might want to perform an administrative operation and need to terminate all non-administrative sessions. This section describes the various aspects of terminating sessions, and contains the following topics: ■ Identifying Which Session to Terminate ■ Terminating an Active Session 4-16 Oracle Database Administrator’s Guide Terminating Sessions ■ Terminating an Inactive Session When a session is terminated, any active transactions of the session are rolled back, and resources held by the session (such as locks and memory areas) are immediately released and available to other sessions. You terminate a current session using the SQL statement ALTER SYSTEM KILL SESSION. The following statement terminates the session whose system identifier is 7 and serial number is 15: ALTER SYSTEM KILL SESSION '7,15'; Identifying Which Session to Terminate To identify which session to terminate, specify the session index number and serial number. To identify the system identifier (SID) and serial number of a session, query the V$SESSION dynamic performance view. For example, the following query identifies all sessions for the user jward: SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE USERNAME = 'JWARD'; SID ----7 12 SERIAL# --------15 63 STATUS -------ACTIVE INACTIVE A session is ACTIVE when it is making a SQL call to Oracle Database. A session is INACTIVE if it is not making a SQL call to the database. See Also: Oracle Database Reference for a description of the status values for a session Terminating an Active Session If a user session is processing a transaction (ACTIVE status) when you terminate the session, the transaction is rolled back and the user immediately receives the following message: ORA-00028: your session has been killed If, after receiving the ORA-00028 message, a user submits additional statements before reconnecting to the database, Oracle Database returns the following message: ORA-01012: not logged on An active session cannot be interrupted when it is performing network I/O or rolling back a transaction. Such a session cannot be terminated until the operation completes. In this case, the session holds all resources until it is terminated. Additionally, the session that issues the ALTER SYSTEM statement to terminate a session waits up to 60 seconds for the session to be terminated. If the operation that cannot be interrupted continues past one minute, the issuer of the ALTER SYSTEM statement receives a message indicating that the session has been marked to be terminated. 
A session marked to be terminated is indicated in V$SESSION with a status of KILLED and a server that is something other than PSEUDO. Managing Oracle Database Processes 4-17 Monitoring the Operation of Your Database Terminating an Inactive Session If the session is not making a SQL call to Oracle Database (is INACTIVE) when it is terminated, the ORA-00028 message is not returned immediately. The message is not returned until the user subsequently attempts to use the terminated session. When an inactive session has been terminated, the STATUS of the session in the V$SESSION view is KILLED. The row for the terminated session is removed from V$SESSION after the user attempts to use the session again and receives the ORA-00028 message. In the following example, an inactive session is terminated. First, V$SESSION is queried to identify the SID and SERIAL# of the session, and then the session is terminated. SELECT SID,SERIAL#,STATUS,SERVER FROM V$SESSION WHERE USERNAME = 'JWARD'; SID SERIAL# ----- -------7 15 12 63 2 rows selected. STATUS --------INACTIVE INACTIVE SERVER --------DEDICATED DEDICATED ALTER SYSTEM KILL SESSION '7,15'; Statement processed. SELECT SID, SERIAL#, STATUS, SERVER FROM V$SESSION WHERE USERNAME = 'JWARD'; SID SERIAL# ----- -------7 15 12 63 2 rows selected. STATUS --------KILLED INACTIVE SERVER --------PSEUDO DEDICATED Monitoring the Operation of Your Database It is important that you monitor the operation of your database on a regular basis. Doing so not only informs you about errors that have not yet come to your attention but also gives you a better understanding of the normal operation of your database. Being familiar with normal behavior in turn helps you recognize when something is wrong. This section describes some of the options available to you for monitoring the operation of your database. Server-Generated Alerts A server-generated alert is a notification from the Oracle Database server of an impending problem. The notification may contain suggestions for correcting the problem. Notifications are also provided when the problem condition has been cleared. Alerts are automatically generated when a problem occurs or when data does not match expected values for metrics, such as the following: ■ Physical Reads Per Second 4-18 Oracle Database Administrator’s Guide Monitoring the Operation of Your Database ■ User Commits Per Second ■ SQL Service Response Time Server-generated alerts can be based on threshold levels or can issue simply because an event has occurred. Threshold-based alerts can be triggered at both threshold warning and critical levels. The value of these levels can be customer-defined or internal values, and some alerts have default threshold levels which you can change if appropriate. For example, by default a server-generated alert is generated for tablespace space usage when the percentage of space usage exceeds either the 85% warning or 97% critical threshold level. Examples of alerts not based on threshold levels are: ■ Snapshot Too Old ■ Resumable Session Suspended ■ Recovery Area Space Usage An alert message is sent to the predefined persistent queue ALERT_QUE owned by the user SYS. Oracle Enterprise Manager reads this queue and provides notifications about outstanding server alerts, and sometimes suggests actions for correcting the problem. The alerts are displayed on the Enterprise Manager console and can be configured to send email or pager notifications to selected administrators. 
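If you are not using Enterprise Manager, you can also list the alerts that are currently outstanding directly from the data dictionary (the DBA_OUTSTANDING_ALERTS view is described later in this section). For example, showing only a few of the available columns:

SELECT object_type, object_name, reason, suggested_action
  FROM DBA_OUTSTANDING_ALERTS;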
If an alert cannot be written to the alert queue, a message about the alert is written to the Oracle Database alert log. Background processes periodically flush the data to the Automatic Workload Repository to capture a history of metric values. The alert history table and ALERT_QUE are purged automatically by the system at regular intervals. The most convenient way to set and view threshold values is to use Enterprise Manager. See Oracle Database 2 Day DBA for instructions. See Also: Oracle Enterprise Manager Concepts for information about alerts available with Oracle Enterprise Manager Using APIs to Administer Server-Generated Alerts You can view and change threshold settings for the server alert metrics using the SET_THRESHOLD and GET_THRESHOLD procedures of the DBMS_SERVER_ALERTS PL/SQL package. The DBMS_AQ and DBMS_AQADM packages provide procedures for accessing and reading alert messages in the alert queue. See Also: Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_SERVER_ALERTS, DBMS_AQ, and DBMS_AQADM packages Setting Threshold Levels The following example shows how to set thresholds with the SET_THRESHOLD procedure for CPU time for each user call for an instance: DBMS_SERVER_ALERT.SET_THRESHOLD( DBMS_SERVER_ALERT.CPU_TIME_PER_CALL, DBMS_SERVER_ALERT.OPERATOR_GE, '8000', DBMS_SERVER_ALERT.OPERATOR_GE, '10000', 1, 2, 'inst1', DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE, 'main.regress.rdbms.dev.us.oracle.com'); In this example, a warning alert is issued when CPU time exceeds 8000 microseconds for each user call and a critical alert is issued when CPU time exceeds 10,000 microseconds for each user call. The arguments include: Managing Oracle Database Processes 4-19 Monitoring the Operation of Your Database ■ ■ ■ ■ ■ CPU_TIME_PER_CALL specifies the metric identifier. For a list of support metrics, see Oracle Database PL/SQL Packages and Types Reference. The observation period is set to 1 minute. This period specifies the number of minutes that the condition must deviate from the threshold value before the alert is issued. The number of consecutive occurrences is set to 2. This number specifies how many times the metric value must violate the threshold values before the alert is generated. The name of the instance is set to inst1. The constant DBMS_ALERT.OBJECT_TYPE_SERVICE specifies the object type on which the threshold is set. In this example, the service name is main.regress.rdbms.dev.us.oracle.com. Retrieving Threshold Information To retrieve threshold values, use the GET_THRESHOLD procedure. For example: DECLARE warning_operator BINARY_INTEGER; warning_value VARCHAR2(60); critical_operator BINARY_INTEGER; critical_value VARCHAR2(60); observation_period BINARY_INTEGER; consecutive_occurrences BINARY_INTEGER; BEGIN DBMS_SERVER_ALERT.GET_THRESHOLD( DBMS_SERVER_ALERT.CPU_TIME_PER_CALL, warning_operator, warning_value, critical_operator, critical_value, observation_period, consecutive_occurrences, 'inst1', DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE, 'main.regress.rdbms.dev.us.oracle.com'); DBMS_OUTPUT.PUT_LINE('Warning operator: ' || warning_operator); DBMS_OUTPUT.PUT_LINE('Warning value: ' || warning_value); DBMS_OUTPUT.PUT_LINE('Critical operator: ' || critical_operator); DBMS_OUTPUT.PUT_LINE('Critical value: ' || critical_value); DBMS_OUTPUT.PUT_LINE('Observation_period: ' || observation_period); DBMS_OUTPUT.PUT_LINE('Consecutive occurrences:' || consecutive_occurrences); END; / You can also check specific threshold settings with the DBA_THRESHOLDS view. 
For example: SELECT metrics_name, warning_value, critical_value, consecutive_occurrences FROM DBA_THRESHOLDS WHERE metrics_name LIKE '%CPU Time%'; Additional APIs to Manage Server-Generated Alerts If you use your own tool rather than Enterprise Manager to display alerts, you must subscribe to the ALERT_QUE, read the ALERT_QUE, and display an alert notification after setting the threshold levels for an alert. To create an agent and subscribe the agent to the ALERT_QUE, use the CREATE_AQ_AGENT and ADD_SUBSCRIBER procedures of the DBMS_AQADM package. Next you must associate a database user with the subscribing agent, because only a user associated with the subscribing agent can access queued messages in the secure ALERT_QUE. You must also assign the enqueue privilege to the user. Use the ENABLE_DB_ACCESS and GRANT_QUEUE_PRIVILEGE procedures of the DBMS_AQADM package. 4-20 Oracle Database Administrator’s Guide Monitoring the Operation of Your Database Optionally, you can register with the DBMS_AQ.REGISTER procedure to receive an asynchronous notification when an alert is enqueued to ALERT_QUE. The notification can be in the form of email, HTTP post, or PL/SQL procedure. To read an alert message, you can use the DBMS_AQ.DEQUEUE procedure or OCIAQDeq call. After the message has been dequeued, use the DBMS_SERVER_ALERT.EXPAND_MESSAGE procedure to expand the text of the message. Viewing Alert Data The following dictionary views provide information about server alerts: ■ DBA_THRESHOLDS lists the threshold settings defined for the instance. ■ DBA_OUTSTANDING_ALERTS describes the outstanding alerts in the database. ■ DBA_ALERT_HISTORY lists a history of alerts that have been cleared. ■ V$ALERT_TYPES provides information such as group and type for each alert. ■ ■ V$METRICNAME contains the names, identifiers, and other information about the system metrics. V$METRIC and V$METRIC_HISTORY views contain system-level metric values in memory. Oracle Database Reference for information on static data dictionary views and dynamic performance views See Also: Monitoring the Database Using Trace Files and the Alert Log Each server and background process can write to an associated trace file. When an internal error is detected by a process, it dumps information about the error to its trace file. Some of the information written to a trace file is intended for the database administrator, and other information is for Oracle Support Services. Trace file information is also used to tune applications and instances. The alert log is a special trace file. The alert log of a database is a chronological log of messages and errors, and includes the following items: ■ ■ ■ ■ ■ All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors (ORA-60) that occur Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP, SHUTDOWN, and ARCHIVELOG statements Messages and errors relating to the functions of shared server and dispatcher processes Errors occurring during the automatic refresh of a materialized view The values of all initialization parameters that had nondefault values at the time the database and instance start Oracle Database uses the alert log to record these operations as an alternative to displaying the information on an operator's console (although some systems also display information on the console). If an operation is successful, a "completed" message is written in the alert log, along with a timestamp. 
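Because the alert log is written to the directory specified by the BACKGROUND_DUMP_DEST initialization parameter (discussed below), you can confirm its location from a running instance. For example:

SELECT VALUE FROM V$PARAMETER WHERE NAME = 'background_dump_dest';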
Initialization parameters controlling the location and size of trace files are: ■ BACKGROUND_DUMP_DEST ■ USER_DUMP_DEST Managing Oracle Database Processes 4-21 Monitoring the Operation of Your Database ■ MAX_DUMP_FILE_SIZE These parameters are discussed in the sections that follow. See Also: Oracle Database Reference for information about initialization parameters that control the writing to trace files Using the Trace Files Check the alert log and other trace files of an instance periodically to learn whether the background processes have encountered errors. For example, when the log writer process (LGWR) cannot write to a member of a log group, an error message indicating the nature of the problem is written to the LGWR trace file and the database alert log. Such an error message means that a media or I/O problem has occurred and should be corrected immediately. Oracle Database also writes values of initialization parameters to the alert log, in addition to other important statistics. Specifying the Location of Trace Files All trace files for background processes and the alert log are written to the directory specified by the initialization parameter BACKGROUND_DUMP_DEST. All trace files for server processes are written to the directory specified by the initialization parameter USER_DUMP_DEST. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file (such as LGWR and RECO). See Also: Your operating system specific Oracle documentation for information about the names of trace files Controlling the Size of Trace Files You can control the maximum size of all trace files (excluding the alert log) using the initialization parameter MAX_DUMP_FILE_SIZE, which limits the file to the specified number of operating system blocks. To control the size of an alert log, you must manually delete the file when you no longer need it. Otherwise the database continues to append to the file. You can safely delete the alert log while the instance is running, although you should consider making an archived copy of it first. This archived copy could prove valuable if you should have a future problem that requires investigating the history of an instance. Controlling When Oracle Database Writes to Trace Files Background processes always write to a trace file when appropriate. In the case of the ARCn background process, it is possible, through an initialization parameter, to control the amount and type of trace information that is produced. This behavior is described in "Controlling Trace Output Generated by the Archivelog Process" on page 7-13. Other background processes do not have this flexibility. Trace files are written on behalf of server processes whenever internal errors occur. Additionally, setting the initialization parameter SQL_TRACE = TRUE causes the SQL trace facility to generate performance statistics for the processing of all SQL statements for an instance and write them to the USER_DUMP_DEST directory. Optionally, you can request that trace files be generated for server processes. Regardless of the current value of the SQL_TRACE initialization parameter, each session can enable or disable trace logging on behalf of the associated server process 4-22 Oracle Database Administrator’s Guide Monitoring the Operation of Your Database by using the SQL statement ALTER SESSION SET SQL_TRACE. 
This example enables the SQL trace facility for a specific session: ALTER SESSION SET SQL_TRACE TRUE; Caution: The SQL trace facility for server processes can cause significant system overhead resulting in severe performance impact, so you should enable this feature only when collecting statistics. Use the DBMS_SESSION or the DBMS_MONITOR package if you want to control SQL tracing for a session. Reading the Trace File for Shared Server Sessions If shared server is enabled, each session using a dispatcher is routed to a shared server process, and trace information is written to the server trace file only if the session has enabled tracing (or if an error is encountered). Therefore, to track tracing for a specific session that connects using a dispatcher, you might have to explore several shared server trace files. To help you, Oracle provides a command line utility program, trcsess, which consolidates all trace information pertaining to a user session in one place and orders the information by time. See Also: Oracle Database Performance Tuning Guide for information about using the SQL trace facility and using TKPROF and trcsess to interpret the generated trace files Monitoring Locks Locks are mechanisms that prevent destructive interaction between transactions accessing the same resource. The resources can be either user objects, such as tables and rows, or system objects not visible to users, such as shared data structures in memory and data dictionary rows. Oracle Database automatically obtains and manages necessary locks when executing SQL statements, so you need not be concerned with such details. However, the database also lets you lock data manually. A deadlock can occur when two or more users are waiting for data locked by each other. Deadlocks prevent some transactions from continuing to work. Oracle Database automatically detects deadlock situations and resolves them by rolling back one of the statements involved in the deadlock, thereby releasing one set of the conflicting row locks. Oracle Database is designed to avoid deadlocks, and they are not common. Most often they occur when transactions explicitly override the default locking of the database. Deadlocks can affect the performance of your database, so Oracle provides some scripts and views that enable you to monitor locks. The utllockt.sql script displays, in a tree fashion, the sessions in the system that are waiting for locks and the locks that they are waiting for. The location of this script file is operating system dependent. A second script, catblock.sql, creates the lock views that utllockt.sql needs, so you must run it before running utllockt.sql. 
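For example, from SQL*Plus the two scripts might be run as follows; the rdbms/admin path under the Oracle home is typical, but, as noted above, the actual location of these scripts is operating system dependent:

SQL> @?/rdbms/admin/catblock.sql
SQL> @?/rdbms/admin/utllockt.sql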
The following views can help you to monitor locks: Managing Oracle Database Processes 4-23 Monitoring the Operation of Your Database View Description V$LOCK Lists the locks currently held by Oracle Database and outstanding requests for a lock or latch DBA_BLOCKERS Displays a session if it is holding a lock on an object for which another session is waiting DBA_WAITERS Displays a session if it is waiting for a locked object DBA_DDL_LOCKS Lists all DDL locks held in the database and all outstanding requests for a DDL lock DBA_DML_LOCKS Lists all DML locks held in the database and all outstanding requests for a DML lock DBA_LOCK Lists all locks or latches held in the database and all outstanding requests for a lock or latch DBA_LOCK_INTERNAL Displays a row for each lock or latch that is being held, and one row for each outstanding request for a lock or latch See Also: ■ ■ Oracle Database Reference contains detailed descriptions of these views Oracle Database Concepts contains more information about locks. Monitoring Wait Events Wait events are statistics that are incremented by a server process to indicate that it had to wait for an event to complete before being able to continue processing. A session could wait for a variety of reasons, including waiting for more input, waiting for the operating system to complete a service such as a disk write, or it could wait for a lock or latch. When a session is waiting for resources, it is not doing any useful work. A large number of waits is a source of concern. Wait event data reveals various symptoms of problems that might be affecting performance, such as latch contention, buffer contention, and I/O contention. Oracle provides several views that display wait event statistics. A discussion of these views and their role in instance tuning is contained in Oracle Database Performance Tuning Guide. Process and Session Views This section lists some of the data dictionary views that you can use to monitor an Oracle Database instance. These views are general in their scope. Other views, more specific to a process, are discussed in the section of this book where the process is described. View Description V$PROCESS Contains information about the currently active processes V$LOCKED_OBJECT Lists all locks acquired by every transaction on the system V$SESSION Lists session information for each current session V$SESS_IO Contains I/O statistics for each user session 4-24 Oracle Database Administrator’s Guide Monitoring the Operation of Your Database View Description V$SESSION_LONGOPS Displays the status of various operations that run for longer than 6 seconds (in absolute time). These operations currently include many backup and recovery functions, statistics gathering, and query execution. More operations are added for every Oracle Database release. V$SESSION_WAIT Lists the resources or events for which active sessions are waiting V$SYSSTAT Contains session statistics V$RESOURCE_LIMIT Provides information about current and maximum global resource utilization for some system resources V$SQLAREA Contains statistics about shared SQL area and contains one row for each SQL string. 
Also provides statistics about SQL statements that are in memory, parsed, and ready for execution V$LATCH Contains statistics for nonparent latches and summary statistics for parent latches Oracle Database Reference contains detailed descriptions of these views See Also: Managing Oracle Database Processes 4-25 Monitoring the Operation of Your Database 4-26 Oracle Database Administrator’s Guide Part II Oracle Database Structure and Storage This part describes database structure in terms of its storage components and how to create and manage those components. It contains the following chapters: ■ Chapter 5, "Managing Control Files" ■ Chapter 6, "Managing the Redo Log" ■ Chapter 7, "Managing Archived Redo Logs" ■ Chapter 8, "Managing Tablespaces" ■ Chapter 9, "Managing Datafiles and Tempfiles" ■ Chapter 10, "Managing the Undo Tablespace" 5 Managing Control Files This chapter explains how to create and maintain the control files for your database and contains the following topics: ■ What Is a Control File? ■ Guidelines for Control Files ■ Creating Control Files ■ Troubleshooting After Creating Control Files ■ Backing Up Control Files ■ Recovering a Control File Using a Current Copy ■ Dropping Control Files ■ Displaying Control File Information See Also: Part III, "Automated File and Storage Management" for information about creating control files that are both created and managed by the Oracle Database server What Is a Control File? Every Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes: ■ The database name ■ Names and locations of associated datafiles and redo log files ■ The timestamp of the database creation ■ The current log sequence number ■ Checkpoint information The control file must be available for writing by the Oracle Database server whenever the database is open. Without the control file, the database cannot be mounted and recovery is difficult. The control file of an Oracle Database is created at the same time as the database. By default, at least one copy of the control file is created during database creation. On some operating systems the default is to create multiple copies. You should create two or more copies of the control file during database creation. You can also create control files later, if you lose control files or want to change particular settings in the control files. Managing Control Files 5-1 Guidelines for Control Files Guidelines for Control Files This section describes guidelines you can use to manage the control files for a database, and contains the following topics: ■ Provide Filenames for the Control Files ■ Multiplex Control Files on Different Disks ■ Back Up Control Files ■ Manage the Size of Control Files Provide Filenames for the Control Files You specify control file names using the CONTROL_FILES initialization parameter in the database initialization parameter file (see "Creating Initial Control Files" on page 5-3). The instance recognizes and opens all the listed file during startup, and the instance writes to and maintains all listed control files during database operation. If you do not specify files for CONTROL_FILES before database creation: ■ ■ ■ If you are not using Oracle-managed files, then the database creates a control file and uses a default filename. The default name is operating system specific. 
If you are using Oracle-managed files, then the initialization parameters you set to enable that feature determine the name and location of the control files, as described in Chapter 11, "Using Oracle-Managed Files". If you are using Automatic Storage Management, you can place incomplete ASM filenames in the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters. ASM then automatically creates control files in the appropriate places. See "About ASM Filenames" on page 12-34 and "Creating a Database in ASM" on page 12-42 for more information. Multiplex Control Files on Different Disks Every Oracle Database should have at least two control files, each stored on a different physical disk. If a control file is damaged due to a disk failure, the associated instance must be shut down. Once the disk drive is repaired, the damaged control file can be restored using the intact copy of the control file from the other disk and the instance can be restarted. In this case, no media recovery is required. The behavior of multiplexed control files is this: ■ ■ ■ The database writes to all filenames listed for the initialization parameter CONTROL_FILES in the database initialization parameter file. The database reads only the first file listed in the CONTROL_FILES parameter during database operation. If any of the control files become unavailable during database operation, the instance becomes inoperable and should be aborted. Oracle strongly recommends that your database has a minimum of two control files and that they are located on separate physical disks. Note: One way to multiplex control files is to store a control file copy on every disk drive that stores members of redo log groups, if the redo log is multiplexed. By storing 5-2 Oracle Database Administrator’s Guide Creating Control Files control files in these locations, you minimize the risk that all control files and all groups of the redo log will be lost in a single disk failure. Back Up Control Files It is very important that you back up your control files. This is true initially, and every time you change the physical structure of your database. Such structural changes include: ■ Adding, dropping, or renaming datafiles ■ Adding or dropping a tablespace, or altering the read/write state of the tablespace ■ Adding or dropping redo log files or groups The methods for backing up control files are discussed in "Backing Up Control Files" on page 5-7. Manage the Size of Control Files The main determinants of the size of a control file are the values set for the MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES parameters in the CREATE DATABASE statement that created the associated database. Increasing the values of these parameters increases the size of a control file of the associated database. See Also: ■ ■ Your operating system specific Oracle documentation contains more information about the maximum control file size. Oracle Database SQL Reference for a description of the CREATE DATABASE statement Creating Control Files This section describes ways to create control files, and contains the following topics: ■ Creating Initial Control Files ■ Creating Additional Copies, Renaming, and Relocating Control Files ■ Creating New Control Files Creating Initial Control Files The initial control files of an Oracle Database are created when you issue the CREATE DATABASE statement. 
The names of the control files are specified by the CONTROL_FILES parameter in the initialization parameter file used during database creation. The filenames specified in CONTROL_FILES should be fully specified and are operating system specific. The following is an example of a CONTROL_FILES initialization parameter: CONTROL_FILES = (/u01/oracle/prod/control01.ctl, /u02/oracle/prod/control02.ctl, /u03/oracle/prod/control03.ctl) If files with the specified names currently exist at the time of database creation, you must specify the CONTROLFILE REUSE clause in the CREATE DATABASE statement, or else an error occurs. Also, if the size of the old control file differs from the SIZE parameter of the new one, you cannot use the REUSE clause. Managing Control Files 5-3 Creating Control Files The size of the control file changes between some releases of Oracle Database, as well as when the number of files specified in the control file changes. Configuration parameters such as MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and MAXINSTANCES affect control file size. You can subsequently change the value of the CONTROL_FILES initialization parameter to add more control files or to change the names or locations of existing control files. See Also: Your operating system specific Oracle documentation contains more information about specifying control files. Creating Additional Copies, Renaming, and Relocating Control Files You can create an additional control file copy for multiplexing by copying an existing control file to a new location and adding the file name to the list of control files. Similarly, you rename an existing control file by copying the file to its new name or location, and changing the file name in the control file list. In both cases, to guarantee that control files do not change during the procedure, shut down the database before copying the control file. To add a multiplexed copy of the current control file or to rename a control file: 1. Shut down the database. 2. Copy an existing control file to a new location, using operating system commands. 3. Edit the CONTROL_FILES parameter in the database initialization parameter file to add the new control file name, or to change the existing control filename. 4. Restart the database. Creating New Control Files This section discusses when and how to create new control files. When to Create New Control Files It is necessary for you to create new control files in the following situations: ■ ■ All control files for the database have been permanently damaged and you do not have a control file backup. You want to change the database name. For example, you would change a database name if it conflicted with another database name in a distributed environment. You can change the database name and DBID (internal database identifier) using the DBNEWID utility. See Oracle Database Utilities for information about using this utility. Note: ■ The compatibility level is set to a value that is earlier than 10.2.0, and you must make a change to an area of database configuration that relates to any of the following parameters from the CREATE DATABASE or CREATE CONTROLFILE commands: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES. If compatibility is 10.2.0 or later, you do not have to create new control files when you make such a change; the control files automatically expand, if necessary, to accommodate the new configuration information. 
5-4 Oracle Database Administrator’s Guide Creating Control Files For example, assume that when you created the database or recreated the control files, you set MAXLOGFILES to 3. Suppose that now you want to add a fourth redo log file group to the database with the ALTER DATABASE command. If compatibility is set to 10.2.0 or later, you can do so and the controlfiles automatically expand to accommodate the new logfile information. However, with compatibility set earlier than 10.2.0, your ALTER DATABASE command would generate an error, and you would have to first create new control files. For information on compatibility level, see "The COMPATIBLE Initialization Parameter and Irreversible Compatibility" on page 2-34. The CREATE CONTROLFILE Statement You can create a new control file for a database using the CREATE CONTROLFILE statement. The following statement creates a new control file for the prod database (a database that formerly used a different database name): CREATE CONTROLFILE SET DATABASE prod LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log', '/u01/oracle/prod/redo01_02.log'), GROUP 2 ('/u01/oracle/prod/redo02_01.log', '/u01/oracle/prod/redo02_02.log'), GROUP 3 ('/u01/oracle/prod/redo03_01.log', '/u01/oracle/prod/redo03_02.log') RESETLOGS DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M, '/u01/oracle/prod/rbs01.dbs' SIZE 5M, '/u01/oracle/prod/users01.dbs' SIZE 5M, '/u01/oracle/prod/temp01.dbs' SIZE 5M MAXLOGFILES 50 MAXLOGMEMBERS 3 MAXLOGHISTORY 400 MAXDATAFILES 200 MAXINSTANCES 6 ARCHIVELOG; Cautions: ■ ■ The CREATE CONTROLFILE statement can potentially damage specified datafiles and redo log files. Omitting a filename can cause loss of the data in that file, or loss of access to the entire database. Use caution when issuing this statement and be sure to follow the instructions in "Steps for Creating New Control Files". If the database had forced logging enabled before creating the new control file, and you want it to continue to be enabled, then you must specify the FORCE LOGGING clause in the CREATE CONTROLFILE statement. See "Specifying FORCE LOGGING Mode" on page 2-18. Oracle Database SQL Reference describes the complete syntax of the CREATE CONTROLFILE statement See Also: Steps for Creating New Control Files Complete the following steps to create a new control file. Managing Control Files 5-5 Creating Control Files 1. Make a list of all datafiles and redo log files of the database. If you follow recommendations for control file backups as discussed in "Backing Up Control Files" on page 5-7, you will already have a list of datafiles and redo log files that reflect the current structure of the database. However, if you have no such list, executing the following statements will produce one. SELECT MEMBER FROM V$LOGFILE; SELECT NAME FROM V$DATAFILE; SELECT VALUE FROM V$PARAMETER WHERE NAME = 'control_files'; If you have no such lists and your control file has been damaged so that the database cannot be opened, try to locate all of the datafiles and redo log files that constitute the database. Any files not specified in step 5 are not recoverable once a new control file has been created. Moreover, if you omit any of the files that make up the SYSTEM tablespace, you might not be able to recover the database. 2. Shut down the database. If the database is open, shut down the database normally if possible. Use the IMMEDIATE or ABORT clauses only as a last resort. 3. Back up all datafiles and redo log files of the database. 4. 
Start up a new instance, but do not mount or open the database:

STARTUP NOMOUNT

5. Create a new control file for the database using the CREATE CONTROLFILE statement. When creating a new control file, specify the RESETLOGS clause if you have lost any redo log groups in addition to control files. In this case, you will need to recover from the loss of the redo logs (step 8). You must specify the RESETLOGS clause if you have renamed the database. Otherwise, select the NORESETLOGS clause.

6. Store a backup of the new control file on an offline storage device. See "Backing Up Control Files" on page 5-7 for instructions for creating a backup.

7. Edit the CONTROL_FILES initialization parameter for the database to indicate all of the control files now part of your database as created in step 5 (not including the backup control file). If you are renaming the database, edit the DB_NAME parameter in your instance parameter file to specify the new name.

8. Recover the database if necessary. If you are not recovering the database, skip to step 9. If you are creating the control file as part of recovery, recover the database. If the new control file was created using the NORESETLOGS clause (step 5), you can recover the database with complete, closed database recovery. If the new control file was created using the RESETLOGS clause, you must specify USING BACKUP CONTROLFILE. If you have lost online or archived redo logs or datafiles, use the procedures for recovering those files.

See Also: Oracle Database Backup and Recovery Basics and Oracle Database Backup and Recovery Advanced User's Guide for information about recovering your database and methods of recovering a lost control file

9. Open the database using one of the following methods:

■ If you did not perform recovery, or you performed complete, closed database recovery in step 8, open the database normally.

ALTER DATABASE OPEN;

■ If you specified RESETLOGS when creating the control file, use the ALTER DATABASE statement, indicating RESETLOGS.

ALTER DATABASE OPEN RESETLOGS;

The database is now open and available for use.

Troubleshooting After Creating Control Files
After issuing the CREATE CONTROLFILE statement, you may encounter some errors. This section describes the most common control file errors:

■ Checking for Missing or Extra Files
■ Handling Errors During CREATE CONTROLFILE

Checking for Missing or Extra Files
After creating a new control file and using it to open the database, check the alert log to see if the database has detected inconsistencies between the data dictionary and the control file, such as a datafile that the data dictionary includes but the control file does not list. If a datafile exists in the data dictionary but not in the new control file, the database creates a placeholder entry in the control file under the name MISSINGnnnn, where nnnn is the file number in decimal. MISSINGnnnn is flagged in the control file as being offline and requiring media recovery. If the actual datafile corresponding to MISSINGnnnn is read-only or offline normal, then you can make the datafile accessible by renaming MISSINGnnnn to the name of the actual datafile. If MISSINGnnnn corresponds to a datafile that was not read-only or offline normal, then you cannot use the rename operation to make the datafile accessible, because the datafile requires media recovery that is precluded by the results of RESETLOGS. In this case, you must drop the tablespace containing the datafile.
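In the read-only or offline normal case described above, the rename might look like the following sketch; both names are hypothetical, and the exact placeholder name recorded for the missing file should be taken from V$DATAFILE rather than assumed:

ALTER DATABASE RENAME FILE 'MISSING00004' TO '/u01/oracle/prod/users01.dbf';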
Conversely, if a datafile listed in the control file is not present in the data dictionary, then the database removes references to it from the new control file. In both cases, the database includes an explanatory message in the alert log to let you know what was found. Handling Errors During CREATE CONTROLFILE If Oracle Database sends you an error (usually error ORA-01173, ORA-01176, ORA-01177, ORA-01215, or ORA-01216) when you attempt to mount and open the database after creating a new control file, the most likely cause is that you omitted a file from the CREATE CONTROLFILE statement or included one that should not have been listed. In this case, you should restore the files you backed up in step 3 on page 5-6 and repeat the procedure from step 4, using the correct filenames. Backing Up Control Files Use the ALTER DATABASE BACKUP CONTROLFILE statement to back up your control files. You have two options: Managing Control Files 5-7 Recovering a Control File Using a Current Copy 1. Back up the control file to a binary file (duplicate of existing control file) using the following statement: ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp'; 2. Produce SQL statements that can later be used to re-create your control file: ALTER DATABASE BACKUP CONTROLFILE TO TRACE; This command writes a SQL script to the database trace file where it can be captured and edited to reproduce the control file. See Also: Oracle Database Backup and Recovery Basics for more information on backing up your control files Recovering a Control File Using a Current Copy This section presents ways that you can recover your control file from a current backup or from a multiplexed copy. Recovering from Control File Corruption Using a Control File Copy This procedure assumes that one of the control files specified in the CONTROL_FILES parameter is corrupted, that the control file directory is still accessible, and that you have a multiplexed copy of the control file. 1. With the instance shut down, use an operating system command to overwrite the bad control file with a good copy: % cp /u03/oracle/prod/control03.ctl 2. /u02/oracle/prod/control02.ctl Start SQL*Plus and open the database: SQL> STARTUP Recovering from Permanent Media Failure Using a Control File Copy This procedure assumes that one of the control files specified in the CONTROL_FILES parameter is inaccessible due to a permanent media failure and that you have a multiplexed copy of the control file. 1. With the instance shut down, use an operating system command to copy the current copy of the control file to a new, accessible location: % cp /u01/oracle/prod/control01.ctl 2. /u04/oracle/prod/control03.ctl Edit the CONTROL_FILES parameter in the initialization parameter file to replace the bad location with the new location: CONTROL_FILES = (/u01/oracle/prod/control01.ctl, /u02/oracle/prod/control02.ctl, /u04/oracle/prod/control03.ctl) 3. Start SQL*Plus and open the database: SQL> STARTUP If you have multiplexed control files, you can get the database started up quickly by editing the CONTROL_FILES initialization parameter. Remove the bad control file from CONTROL_FILES setting and you can restart the database immediately. Then you can perform the reconstruction of the bad control file and at some later time shut down 5-8 Oracle Database Administrator’s Guide Displaying Control File Information and restart the database after editing the CONTROL_FILES initialization parameter to include the recovered control file. 
Dropping Control Files You want to drop control files from the database, for example, if the location of a control file is no longer appropriate. Remember that the database should have at least two control files at all times. 1. Shut down the database. 2. Edit the CONTROL_FILES parameter in the database initialization parameter file to delete the old control file name. 3. Restart the database. This operation does not physically delete the unwanted control file from the disk. Use operating system commands to delete the unnecessary file after you have dropped the control file from the database. Note: Displaying Control File Information The following views display information about control files: View Description V$DATABASE Displays database information from the control file V$CONTROLFILE Lists the names of control files V$CONTROLFILE_RECORD_SECTION Displays information about control file record sections V$PARAMETER Displays the names of control files as specified in the CONTROL_FILES initialization parameter This example lists the names of the control files. SQL> SELECT NAME FROM V$CONTROLFILE; NAME ------------------------------------/u01/oracle/prod/control01.ctl /u02/oracle/prod/control02.ctl /u03/oracle/prod/control03.ctl Managing Control Files 5-9 Displaying Control File Information 5-10 Oracle Database Administrator’s Guide 6 Managing the Redo Log This chapter explains how to manage the online redo log. The current redo log is always online, unlike archived copies of a redo log. Therefore, the online redo log is usually referred to as simply the redo log. This chapter contains the following topics: ■ What Is the Redo Log? ■ Planning the Redo Log ■ Creating Redo Log Groups and Members ■ Relocating and Renaming Redo Log Members ■ Dropping Redo Log Groups and Members ■ Forcing Log Switches ■ Verifying Blocks in Redo Log Files ■ Clearing a Redo Log File ■ Viewing Redo Log Information See Also: Part III, "Automated File and Storage Management" for information about redo log files that are both created and managed by the Oracle Database server What Is the Redo Log? The most crucial structure for recovery operations is the redo log, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure. Redo Threads When speaking in the context of multiple database instances, the redo log for each database instance is also referred to as a redo thread. In typical configurations, only one database instance accesses an Oracle Database, so only one thread is present. In an Oracle Real Application Clusters environment, however, two or more instances concurrently access a single database and each instance has its own thread of redo. A separate redo thread for each instance avoids contention for a single set of redo log files, thereby eliminating a potential performance bottleneck. This chapter describes how to configure and manage the redo log on a standard single-instance Oracle Database. The thread number can be assumed to be 1 in all Managing the Redo Log 6-1 What Is the Redo Log? discussions and examples of statements. For information about redo log groups in a Real Application Clusters environment, please refer to Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide. Redo Log Contents Redo log files are filled with redo records. 
A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the undo segment data block, and the transaction table of the undo segments. Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. Therefore, the redo log also protects rollback data. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks. Redo records are buffered in a circular fashion in the redo log buffer of the SGA (see "How Oracle Database Writes to the Redo Log" on page 6-2) and are written to one of the redo log files by the Log Writer (LGWR) database background process. Whenever a transaction is committed, LGWR writes the transaction redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed. Redo records can also be written to a redo log file before the corresponding transaction is committed. If the redo log buffer fills, or another transaction commits, LGWR flushes all of the redo log entries in the redo log buffer to a redo log file, even though some redo records may not be committed. If necessary, the database can roll back these changes. How Oracle Database Writes to the Redo Log The redo log of a database consists of two or more redo log files. The database requires a minimum of two files to guarantee that one is always available for writing while the other is being archived (if the database is in ARCHIVELOG mode). See "Managing Archived Redo Logs" on page 7-1 for more information. LGWR writes to redo log files in a circular fashion. When the current redo log file fills, LGWR begins writing to the next available redo log file. When the last available redo log file is filled, LGWR returns to the first redo log file and writes to it, starting the cycle again. Figure 6–1 illustrates the circular writing of the redo log file. The numbers next to each line indicate the sequence in which LGWR writes to each redo log file. Filled redo log files are available to LGWR for reuse depending on whether archiving is enabled. ■ ■ If archiving is disabled (the database is in NOARCHIVELOG mode), a filled redo log file is available after the changes recorded in it have been written to the datafiles. If archiving is enabled (the database is in ARCHIVELOG mode), a filled redo log file is available to LGWR after the changes recorded in it have been written to the datafiles and the file has been archived. 6-2 Oracle Database Administrator’s Guide What Is the Redo Log? Figure 6–1 Reuse of Redo Log Files by LGWR Online redo log file #1 1, 4, 7, ... Online redo log file #2 2, 5, 8, ... Online redo log file #3 3, 6, 9, ... LGWR Active (Current) and Inactive Redo Log Files Oracle Database uses only one redo log files at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file. 
Redo log files that are required for instance recovery are called active redo log files. Redo log files that are no longer required for instance recovery are called inactive redo log files. If you have enabled archiving (the database is in ARCHIVELOG mode), then the database cannot reuse or overwrite an active online log file until one of the archiver background processes (ARCn) has archived its contents. If archiving is disabled (the database is in NOARCHIVELOG mode), then when the last redo log file is full, LGWR continues by overwriting the first available active file. Log Switches and Log Sequence Numbers A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file. However, you can configure log switches to occur at regular intervals, regardless of whether the current redo log file is completely filled. You can also force log switches manually. Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs and LGWR begins writing to it. When the database archives redo log files, the archived log retains its log sequence number. A redo log file that is cycled back for use is given the next available log sequence number. Each online or archived redo log file is uniquely identified by its log sequence number. During crash, instance, or media recovery, the database properly applies redo log files in ascending order by using the log sequence number of the necessary archived and redo log files. Managing the Redo Log 6-3 Planning the Redo Log Planning the Redo Log This section provides guidelines you should consider when configuring a database instance redo log and contains the following topics: ■ Multiplexing Redo Log Files ■ Placing Redo Log Members on Different Disks ■ Setting the Size of Redo Log Members ■ Choosing the Number of Redo Log Files ■ Controlling Archive Lag Multiplexing Redo Log Files To protect against a failure involving the redo log itself, Oracle Database allows a multiplexed redo log, meaning that two or more identical copies of the redo log can be automatically maintained in separate locations. For the most benefit, these locations should be on separate disks. Even if all copies of the redo log are on the same disk, however, the redundancy can help protect against I/O errors, file corruption, and so on. When redo log files are multiplexed, LGWR concurrently writes the same redo log information to multiple identical redo log files, thereby eliminating a single point of redo log failure. Multiplexing is implemented by creating groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of the group. Each redo log group is defined by a number, such as group 1, group 2, and so on. Figure 6–2 Multiplexed Redo Log Files Disk A Disk B 1, 3, 5, ... A_LOG1 B_LOG1 Group 1 LGWR Group 2 A_LOG2 2, 4, 6, ... B_LOG2 Group 1 Group 2 In Figure 6–2, A_LOG1 and B_LOG1 are both members of Group 1, A_LOG2 and B_LOG2 are both members of Group 2, and so forth. Each member in a group must be exactly the same size. Each member of a log file group is concurrently active—that is, concurrently written to by LGWR—as indicated by the identical log sequence numbers assigned by LGWR. In Figure 6–2, first LGWR writes concurrently to both A_LOG1 and B_LOG1. 
Then it 6-4 Oracle Database Administrator’s Guide Planning the Redo Log writes concurrently to both A_LOG2 and B_LOG2, and so on. LGWR never writes concurrently to members of different groups (for example, to A_LOG1 and B_LOG2). Oracle recommends that you multiplex your redo log files. The loss of the log file data can be catastrophic if recovery is required. Note that when you multiplex the redo log, the database must increase the amount of I/O that it performs. Depending on your configuration, this may impact overall database performance. Note: Responding to Redo Log Failure Whenever LGWR cannot write to a member of a group, the database marks that member as INVALID and writes an error message to the LGWR trace file and to the database alert log to indicate the problem with the inaccessible files. The specific reaction of LGWR when a redo log member is unavailable depends on the reason for the lack of availability, as summarized in the table that follows. Condition LGWR Action LGWR can successfully write to at least one member in a group Writing proceeds as normal. LGWR writes to the available members of a group and ignores the unavailable members. LGWR cannot access the next group at a log switch because the group needs to be archived Database operation temporarily halts until the group becomes available or until the group is archived. All members of the next group are inaccessible to LGWR at a log switch because of media failure Oracle Database returns an error, and the database instance shuts down. In this case, you may need to perform media recovery on the database from the loss of a redo log file. If the database checkpoint has moved beyond the lost redo log, media recovery is not necessary, because the database has saved the data recorded in the redo log to the datafiles. You need only drop the inaccessible redo log group. If the database did not archive the bad log, use ALTER DATABASE CLEAR UNARCHIVED LOG to disable archiving before the log can be dropped. All members of a group suddenly become inaccessible to LGWR while it is writing to them Oracle Database returns an error and the database instance immediately shuts down. In this case, you may need to perform media recovery. If the media containing the log is not actually lost--for example, if the drive for the log was inadvertently turned off--media recovery may not be needed. In this case, you need only turn the drive back on and let the database perform automatic instance recovery. Legal and Illegal Configurations In most cases, a multiplexed redo log should be symmetrical: all groups of the redo log should have the same number of members. However, the database does not require that a multiplexed redo log be symmetrical. For example, one group can have only one member, and other groups can have two members. This configuration protects against disk failures that temporarily affect some redo log members but leave others intact. The only requirement for an instance redo log is that it have at least two groups. Figure 6–3 shows legal and illegal multiplexed redo log configurations. The second configuration is illegal because it has only one group. 
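As a concrete illustration of a multiplexed group with one member on each of two disks, a statement such as the following could be used; it is only a sketch, the group number, member filenames, and size are hypothetical, and creating groups is covered in "Creating Redo Log Groups and Members" later in this chapter:

ALTER DATABASE ADD LOGFILE GROUP 3
  ('/u01/oracle/prod/redo03_a.log',
   '/u02/oracle/prod/redo03_b.log') SIZE 50M;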
Managing the Redo Log 6-5 Planning the Redo Log Figure 6–3 Legal and Illegal Multiplexed Redo Log Configuration LEGAL Group 1 Group 2 Group 3 Disk A Disk B A_LOG1 B_LOG1 A_LOG2 B_LOG2 A_LOG3 B_LOG3 Disk A Disk B A_LOG1 B_LOG1 ILLEGAL Group 1 Group 2 Group 3 Group 1 Group 2 Group 3 Placing Redo Log Members on Different Disks When setting up a multiplexed redo log, place members of a group on different physical disks. If a single disk fails, then only one member of a group becomes unavailable to LGWR and other members remain accessible to LGWR, so the instance can continue to function. If you archive the redo log, spread redo log members across disks to eliminate contention between the LGWR and ARCn background processes. For example, if you have two groups of multiplexed redo log members (a duplexed redo log), place each member on a different disk and set your archiving destination to a fifth disk. Doing so will avoid contention between LGWR (writing to the members) and ARCn (reading the members). 6-6 Oracle Database Administrator’s Guide Planning the Redo Log Datafiles should also be placed on different disks from redo log files to reduce contention in writing data blocks and redo records. Setting the Size of Redo Log Members When setting the size of redo log files, consider whether you will be archiving the redo log. Redo log files should be sized so that a filled group can be archived to a single unit of offline storage media (such as a tape or disk), with the least amount of space on the medium left unused. For example, suppose only one filled redo log group can fit on a tape and 49% of the tape storage capacity remains unused. In this case, it is better to decrease the size of the redo log files slightly, so that two log groups could be archived on each tape. All members of the same multiplexed redo log group must be the same size. Members of different groups can have different sizes. However, there is no advantage in varying file size between groups. If checkpoints are not set to occur between log switches, make all groups the same size to guarantee that checkpoints occur at regular intervals. The minimum size permitted for a redo log file is 4 MB. See Also: Your operating system–specific Oracle documentation. The default size of redo log files is operating system dependent. Choosing the Number of Redo Log Files The best way to determine the appropriate number of redo log files for a database instance is to test different configurations. The optimum configuration has the fewest groups possible without hampering LGWR from writing redo log information. In some cases, a database instance may require only two groups. In other situations, a database instance may require additional groups to guarantee that a recycled group is always available to LGWR. During testing, the easiest way to determine whether the current redo log configuration is satisfactory is to examine the contents of the LGWR trace file and the database alert log. If messages indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups. Consider the parameters that can limit the number of redo log files before setting up or altering the configuration of an instance redo log. The following parameters limit the number of redo log files that you can add to a database: ■ ■ The MAXLOGFILES parameter used in the CREATE DATABASE statement determines the maximum number of groups of redo log files for each database. 
Group values can range from 1 to MAXLOGFILES. When the compatibility level is set earlier than 10.2.0, the only way to override this upper limit is to re-create the database or its control file. Therefore, it is important to consider this limit before creating a database. When compatibility is set to 10.2.0 or later, you can exceed the MAXLOGFILES limit, and the control files expand as needed. If MAXLOGFILES is not specified for the CREATE DATABASE statement, then the database uses an operating system specific default value. The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement determines the maximum number of members for each group. As with MAXLOGFILES, the only way to override this upper limit is to re-create the database or control file. Therefore, it is important to consider this limit before creating a database. If no MAXLOGMEMBERS parameter is specified for the CREATE DATABASE statement, then the database uses an operating system default value. Managing the Redo Log 6-7 Planning the Redo Log See Also: ■ ■ Oracle Database Backup and Recovery Advanced User's Guide to learn how checkpoints and the redo log impact instance recovery Your operating system specific Oracle documentation for the default and legal values of the MAXLOGFILES and MAXLOGMEMBERS parameters Controlling Archive Lag You can force all enabled redo log threads to switch their current logs at regular time intervals. In a primary/standby database configuration, changes are made available to the standby database by archiving redo logs at the primary site and then shipping them to the standby database. The changes that are being applied by the standby database can lag behind the changes that are occurring on the primary database, because the standby database must wait for the changes in the primary database redo log to be archived (into the archived redo log) and then shipped to it. To limit this lag, you can set the ARCHIVE_LAG_TARGET initialization parameter. Setting this parameter lets you specify in seconds how long that lag can be. Setting the ARCHIVE_LAG_TARGET Initialization Parameter When you set the ARCHIVE_LAG_TARGET initialization parameter, you cause the database to examine the current redo log of the instance periodically. If the following conditions are met, then the instance will switch the log: ■ ■ The current log was created prior to n seconds ago, and the estimated archival time for the current log is m seconds (proportional to the number of redo blocks used in the current log), where n + m exceeds the value of the ARCHIVE_LAG_TARGET initialization parameter. The current log contains redo records. In an Oracle Real Application Clusters environment, the instance also causes other threads to switch and archive their logs if they are falling behind. This can be particularly useful when one instance in the cluster is more idle than the other instances (as when you are running a 2-node primary/secondary configuration of Oracle Real Application Clusters). The ARCHIVE_LAG_TARGET initialization parameter specifies the target of how many seconds of redo the standby could lose in the event of a primary shutdown or failure if the Oracle Data Guard environment is not configured in a no-data-loss mode. It also provides an upper limit of how long (in seconds) the current log of the primary database can span. Because the estimated archival time is also considered, this is not the exact log switch time. The following initialization parameter setting sets the log switch interval to 30 minutes (a typical value). 
ARCHIVE_LAG_TARGET = 1800 A value of 0 disables this time-based log switching functionality. This is the default setting. You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set specifically to force logs to be switched and archived. 6-8 Oracle Database Administrator’s Guide Creating Redo Log Groups and Members ARCHIVE_LAG_TARGET is a dynamic parameter and can be set with the ALTER SYSTEM SET statement. The ARCHIVE_LAG_TARGET parameter must be set to the same value in all instances of an Oracle Real Application Clusters environment. Failing to do so results in unpredictable behavior. Caution: Factors Affecting the Setting of ARCHIVE_LAG_TARGET Consider the following factors when determining if you want to set the ARCHIVE_LAG_TARGET parameter and in determining the value for this parameter. ■ Overhead of switching (as well as archiving) logs ■ How frequently normal log switches occur as a result of log full conditions ■ How much redo loss is tolerated in the standby database Setting ARCHIVE_LAG_TARGET may not be very useful if natural log switches already occur more frequently than the interval specified. However, in the case of irregularities of redo generation speed, the interval does provide an upper limit for the time range each current log covers. If the ARCHIVE_LAG_TARGET initialization parameter is set to a very low value, there can be a negative impact on performance. This can force frequent log switches. Set the parameter to a reasonable value so as not to degrade the performance of the primary database. Creating Redo Log Groups and Members Plan the redo log of a database and create all required groups and members of redo log files during database creation. However, there are situations where you might want to create additional groups or members. For example, adding groups to a redo log can correct redo log group availability problems. To create new redo log groups and members, you must have the ALTER DATABASE system privilege. A database can have up to MAXLOGFILES groups. See Also: Oracle Database SQL Reference for a complete description of the ALTER DATABASE statement Creating Redo Log Groups To create a new group of redo log files, use the SQL statement ALTER DATABASE with the ADD LOGFILE clause. The following statement adds a new group of redo logs to the database: ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K; Use fully specify filenames of new log members to indicate where the operating system file should be created. Otherwise, the files will be created in either the default or current directory of the database server, depending upon your operating system. Note: You can also specify the number that identifies the group using the GROUP clause: Managing the Redo Log 6-9 Relocating and Renaming Redo Log Members ALTER DATABASE ADD LOGFILE GROUP 10 ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K; Using group numbers can make administering redo log groups easier. However, the group number must be between 1 and MAXLOGFILES. Do not skip redo log file group numbers (that is, do not number your groups 10, 20, 30, and so on), or you will consume unnecessary space in the control files of the database. Creating Redo Log Members In some cases, it might not be necessary to create a complete group of redo log files. 
A group could already exist, but not be complete because one or more members of the group were dropped (for example, because of a disk failure). In this case, you can add new members to an existing group. To create new redo log members for an existing group, use the SQL statement ALTER DATABASE with the ADD LOGFILE MEMBER clause. The following statement adds a new redo log member to redo log group number 2: ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2; Notice that filenames must be specified, but sizes need not be. The size of the new members is determined from the size of the existing members of the group. When using the ALTER DATABASE statement, you can alternatively identify the target group by specifying all of the other members of the group in the TO clause, as shown in the following example: ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo' TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo'); Fully specify the filenames of new log members to indicate where the operating system files should be created. Otherwise, the files will be created in either the default or current directory of the database server, depending upon your operating system. You may also note that the status of the new log member is shown as INVALID. This is normal and it will change to active (blank) when it is first used. Note: Relocating and Renaming Redo Log Members You can use operating system commands to relocate redo logs, then use the ALTER DATABASE statement to make their new names (locations) known to the database. This procedure is necessary, for example, if the disk currently used for some redo log files is going to be removed, or if datafiles and a number of redo log files are stored on the same disk and should be separated to reduce contention. To rename redo log members, you must have the ALTER DATABASE system privilege. Additionally, you might also need operating system privileges to copy files to the desired location and privileges to open and back up the database. Before relocating your redo logs, or making any other structural changes to the database, completely back up the database in case you experience problems while performing the operation. As a precaution, after renaming or relocating a set of redo log files, immediately back up the database control file. 6-10 Oracle Database Administrator’s Guide Relocating and Renaming Redo Log Members Use the following steps for relocating redo logs. The example used to illustrate these steps assumes: ■ ■ ■ The log files are located on two disks: diska and diskb. The redo log is duplexed: one group consists of the members /diska/logs/log1a.rdo and /diskb/logs/log1b.rdo, and the second group consists of the members /diska/logs/log2a.rdo and /diskb/logs/log2b.rdo. The redo log files located on diska must be relocated to diskc. The new filenames will reflect the new location: /diskc/logs/log1c.rdo and /diskc/logs/log2c.rdo. Steps for Renaming Redo Log Members 1. Shut down the database. SHUTDOWN 2. Copy the redo log files to the new location. Operating system files, such as redo log members, must be copied using the appropriate operating system commands. See your operating system specific documentation for more information about copying files. You can execute an operating system command to copy a file (or perform other operating system commands) without exiting SQL*Plus by using the HOST command. Some operating systems allow you to use a character in place of the word HOST. For example, you can use an exclamation point (!) 
in UNIX. Note: The following example uses operating system commands (UNIX) to move the redo log members to a new location: mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo 3. Startup the database, mount, but do not open it. CONNECT / as SYSDBA STARTUP MOUNT 4. Rename the redo log members. Use the ALTER DATABASE statement with the RENAME FILE clause to rename the database redo log files. ALTER DATABASE RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo' TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo'; 5. Open the database for normal operation. The redo log alterations take effect when the database is opened. ALTER DATABASE OPEN; Managing the Redo Log 6-11 Dropping Redo Log Groups and Members Dropping Redo Log Groups and Members In some cases, you may want to drop an entire group of redo log members. For example, you want to reduce the number of groups in an instance redo log. In a different case, you may want to drop one or more specific redo log members. For example, if a disk failure occurs, you may need to drop all the redo log files on the failed disk so that the database does not try to write to the inaccessible files. In other situations, particular redo log files become unnecessary. For example, a file might be stored in an inappropriate location. Dropping Log Groups To drop a redo log group, you must have the ALTER DATABASE system privilege. Before dropping a redo log group, consider the following restrictions and precautions: ■ ■ ■ An instance requires at least two groups of redo log files, regardless of the number of members in the groups. (A group comprises one or more members.) You can drop a redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur. Make sure a redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view. SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG; GROUP# --------1 2 3 4 ARC --YES NO YES YES STATUS ---------------ACTIVE CURRENT INACTIVE INACTIVE Drop a redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause. The following statement drops redo log group number 3: ALTER DATABASE DROP LOGFILE GROUP 3; When a redo log group is dropped from the database, and you are not using the Oracle-managed files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping a redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log files. When using Oracle-managed files, the cleanup of operating systems files is done automatically for you. Dropping Redo Log Members To drop a redo log member, you must have the ALTER DATABASE system privilege. Consider the following restrictions and precautions before dropping individual redo log members: ■ It is permissible to drop redo log files so that a multiplexed redo log becomes temporarily asymmetric. For example, if you use duplexed groups of redo log files, you can drop one member of one group, even though all other groups have two members each. However, you should rectify this situation immediately so that 6-12 Oracle Database Administrator’s Guide Verifying Blocks in Redo Log Files all groups have at least two members, and thereby eliminate the single point of failure possible for the redo log. 
■ ■ ■ An instance always requires at least two valid groups of redo log files, regardless of the number of members in the groups. (A group comprises one or more members.) If the member you want to drop is the last valid member of the group, you cannot drop the member until the other members become valid. To see a redo log file status, use the V$LOGFILE view. A redo log file becomes INVALID if the database cannot access it. It becomes STALE if the database suspects that it is not complete or correct. A stale log file becomes valid again the next time its group is made the active group. You can drop a redo log member only if it is not part of an active or current group. If you want to drop a member of an active group, first force a log switch to occur. Make sure the group to which a redo log member belongs is archived (if archiving is enabled) before dropping the member. To see whether this has happened, use the V$LOG view. To drop specific inactive redo log members, use the ALTER DATABASE statement with the DROP LOGFILE MEMBER clause. The following statement drops the redo log /oracle/dbs/log3c.rdo: ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo'; When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file. To drop a member of an active group, you must first force a log switch. Forcing Log Switches A log switch occurs when LGWR stops writing to one redo log group and starts writing to another. By default, a log switch occurs automatically when the current redo log file group fills. You can force a log switch to make the currently active group inactive and available for redo log maintenance operations. For example, you want to drop the currently active group, but are not able to do so until the group is inactive. You may also wish to force a log switch if the currently active group needs to be archived at a specific time before the members of the group are completely filled. This option is useful in configurations with large redo log files that take a long time to fill. To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER SYSTEM statement with the SWITCH LOGFILE clause. The following statement forces a log switch: ALTER SYSTEM SWITCH LOGFILE; Verifying Blocks in Redo Log Files You can configure the database to use checksums to verify blocks in the redo log files. If you set the initialization parameter DB_BLOCK_CHECKSUM to TRUE, the database computes a checksum for each database block when it is written to disk, including Managing the Redo Log 6-13 Clearing a Redo Log File each redo log block as it is being written to the current log. The checksum is stored the header of the block. Oracle Database uses the checksum to detect corruption in a redo log block. The database verifies the redo log block when the block is read from an archived log during recovery and when it writes the block to an archive log file. An error is raised and written to the alert log if corruption is detected. If corruption is detected in a redo log block while trying to archive it, the system attempts to read the block from another member in the group. If the block is corrupted in all members of the redo log group, then archiving cannot proceed. 
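If you want to confirm whether block checksumming is currently in effect before depending on it, you can display the parameter value from SQL*Plus. This is only an illustrative check; SHOW PARAMETER is a SQL*Plus command, and the query form assumes access to the V$PARAMETER view.

SHOW PARAMETER DB_BLOCK_CHECKSUM

SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'db_block_checksum';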
The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can be changed dynamically using the ALTER SYSTEM statement.

Note: There is a slight overhead and decrease in database performance with DB_BLOCK_CHECKSUM enabled. Monitor your database performance to decide if the benefit of using data block checksums to detect corruption outweighs the performance impact.

See Also: Oracle Database Reference for a description of the DB_BLOCK_CHECKSUM initialization parameter

Clearing a Redo Log File

A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation, the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database. The following statement clears the log files in redo log group number 3:

ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:
■ If there are only two log groups
■ The corrupt redo log file belongs to the current group

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived.

Note: If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes a message in the alert log describing the backups from which you cannot recover. If you clear an unarchived redo log file, you should make another backup of the database.

If you want to clear an unarchived redo log that is needed to bring an offline tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER DATABASE CLEAR LOGFILE statement. If you clear a redo log needed to bring an offline tablespace online, you will not be able to bring the tablespace online again. You will have to drop the tablespace or perform an incomplete recovery. Note that tablespaces taken offline with the NORMAL option do not require recovery.

Viewing Redo Log Information

The following views provide information on redo logs.

View            Description
V$LOG           Displays the redo log file information from the control file
V$LOGFILE       Identifies redo log groups and members and member status
V$LOG_HISTORY   Contains log history information

The following query returns the control file information about the redo log for a database.

SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
     1       1 10605 1048576       1 YES ACTIVE        11515628 16-APR-00
     2       1 10606 1048576       1 NO  CURRENT       11517595 16-APR-00
     3       1 10603 1048576       1 YES INACTIVE      11511666 16-APR-00
     4       1 10604 1048576       1 YES INACTIVE      11513647 16-APR-00

To see the names of all of the members of a group, use a query similar to the following:

SELECT * FROM V$LOGFILE;

GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         D:\ORANT\ORADATA\IDDB2\REDO04.LOG
     2         D:\ORANT\ORADATA\IDDB2\REDO03.LOG
     3         D:\ORANT\ORADATA\IDDB2\REDO02.LOG
     4         D:\ORANT\ORADATA\IDDB2\REDO01.LOG

If STATUS is blank for a member, then the file is in use.
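It is sometimes convenient to see each member together with the status of its group in a single result. The following query is an illustrative sketch that joins V$LOG and V$LOGFILE on the group number; the column aliases are arbitrary.

SELECT l.GROUP#, l.STATUS AS GROUP_STATUS, f.MEMBER, f.STATUS AS MEMBER_STATUS
FROM V$LOG l JOIN V$LOGFILE f ON l.GROUP# = f.GROUP#
ORDER BY l.GROUP#;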
See Also: Oracle Database Reference for detailed information about these views Managing the Redo Log 6-15 Viewing Redo Log Information 6-16 Oracle Database Administrator’s Guide 7 Managing Archived Redo Logs This chapter describes how to archive redo data. It contains the following topics: ■ What Is the Archived Redo Log? ■ Choosing Between NOARCHIVELOG and ARCHIVELOG Mode ■ Controlling Archiving ■ Specifying the Archive Destination ■ Specifying the Mode of Log Transmission ■ Managing Archive Destination Failure ■ Controlling Trace Output Generated by the Archivelog Process ■ Viewing Information About the Archived Redo Log See Also: ■ ■ Part III, "Automated File and Storage Management" for information about creating an archived redo log that is both created and managed by the Oracle Database server Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for information specific to archiving in the Oracle Real Application Clusters environment What Is the Archived Redo Log? Oracle Database lets you save filled groups of redo log files to one or more offline destinations, known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is running in ARCHIVELOG mode. You can choose automatic or manual archiving. An archived redo log file is a copy of one of the filled members of a redo log group. It includes the redo entries and the unique log sequence number of the identical member of the redo log group. For example, if you are multiplexing your redo log, and if group 1 contains identical member files a_log1 and b_log1, then the archiver process (ARCn) will archive one of these member files. Should a_log1 become corrupted, then ARCn can still archive the identical b_log1. The archived redo log contains a copy of every group created since you enabled archiving. When the database is running in ARCHIVELOG mode, the log writer process (LGWR) cannot reuse and hence overwrite a redo log group until it has been archived. The background process ARCn automates archiving operations when automatic archiving Managing Archived Redo Logs 7-1 Choosing Between NOARCHIVELOG and ARCHIVELOG Mode is enabled. The database starts multiple archiver processes as needed to ensure that the archiving of filled redo logs does not fall behind. You can use archived redo logs to: ■ Recover a database ■ Update a standby database ■ Get information about the history of a database using the LogMiner utility See Also: The following sources document the uses for archived redo logs: ■ ■ ■ Oracle Database Backup and Recovery Basics Oracle Data Guard Concepts and Administration discusses setting up and maintaining a standby database Oracle Database Utilities contains instructions for using the LogMiner PL/SQL package Choosing Between NOARCHIVELOG and ARCHIVELOG Mode This section describes the issues you must consider when choosing to run your database in NOARCHIVELOG or ARCHIVELOG mode, and contains these topics: ■ Running a Database in NOARCHIVELOG Mode ■ Running a Database in ARCHIVELOG Mode The choice of whether to enable the archiving of filled groups of redo log files depends on the availability and reliability requirements of the application running on the database. If you cannot afford to lose any data in your database in the event of a disk failure, use ARCHIVELOG mode. 
The archiving of filled redo log files can require you to perform extra administrative operations. Running a Database in NOARCHIVELOG Mode When you run your database in NOARCHIVELOG mode, you disable the archiving of the redo log. The database control file indicates that filled groups are not required to be archived. Therefore, when a filled group becomes inactive after a log switch, the group is available for reuse by LGWR. NOARCHIVELOG mode protects a database from instance failure but not from media failure. Only the most recent changes made to the database, which are stored in the online redo log groups, are available for instance recovery. If a media failure occurs while the database is in NOARCHIVELOG mode, you can only restore the database to the point of the most recent full database backup. You cannot recover transactions subsequent to that backup. In NOARCHIVELOG mode you cannot perform online tablespace backups, nor can you use online tablespace backups taken earlier while the database was in ARCHIVELOG mode. To restore a database operating in NOARCHIVELOG mode, you can use only whole database backups taken while the database is closed. Therefore, if you decide to operate a database in NOARCHIVELOG mode, take whole database backups at regular, frequent intervals. 7-2 Oracle Database Administrator’s Guide Choosing Between NOARCHIVELOG and ARCHIVELOG Mode Running a Database in ARCHIVELOG Mode When you run a database in ARCHIVELOG mode, you enable the archiving of the redo log. The database control file indicates that a group of filled redo log files cannot be reused by LGWR until the group is archived. A filled group becomes available for archiving immediately after a redo log switch occurs. The archiving of filled groups has these advantages: ■ ■ ■ A database backup, together with online and archived redo log files, guarantees that you can recover all committed transactions in the event of an operating system or disk failure. If you keep an archived log, you can use a backup taken while the database is open and in normal system use. You can keep a standby database current with its original database by continuously applying the original archived redo logs to the standby. You can configure an instance to archive filled redo log files automatically, or you can archive manually. For convenience and efficiency, automatic archiving is usually best. Figure 7–1 illustrates how the archiver process (ARC0 in this illustration) writes filled redo log files to the database archived redo log. If all databases in a distributed database operate in ARCHIVELOG mode, you can perform coordinated distributed database recovery. However, if any database in a distributed database is in NOARCHIVELOG mode, recovery of a global distributed database (to make all databases consistent) is limited by the last full backup of any database operating in NOARCHIVELOG mode. Figure 7–1 Redo Log File Use in ARCHIVELOG Mode 0001 0001 LGWR Log 0001 0001 0001 0002 0002 0001 0002 0002 0003 0003 ARC0 ARC0 ARC0 LGWR LGWR LGWR Log 0003 Log 0004 Log 0002 Archived Redo Log Files Online Redo Log Files TIME Managing Archived Redo Logs 7-3 Controlling Archiving Controlling Archiving This section describes how to set the archiving mode of the database and how to control the archiving process. 
The following topics are discussed: ■ Setting the Initial Database Archiving Mode ■ Changing the Database Archiving Mode ■ Performing Manual Archiving ■ Adjusting the Number of Archiver Processes See Also: your Oracle operating system specific documentation for additional information on controlling archiving modes Setting the Initial Database Archiving Mode You set the initial archiving mode as part of database creation in the CREATE DATABASE statement. Usually, you can use the default of NOARCHIVELOG mode at database creation because there is no need to archive the redo information generated by that process. After creating the database, decide whether to change the initial archiving mode. If you specify ARCHIVELOG mode, you must have initialization parameters set that specify the destinations for the archive log files (see "Specifying Archive Destinations" on page 7-6). Changing the Database Archiving Mode To change the archiving mode of the database, use the ALTER DATABASE statement with the ARCHIVELOG or NOARCHIVELOG clause. To change the archiving mode, you must be connected to the database with administrator privileges (AS SYSDBA). The following steps switch the database archiving mode from NOARCHIVELOG to ARCHIVELOG: 1. Shut down the database instance. SHUTDOWN An open database must first be closed and any associated instances shut down before you can switch the database archiving mode. You cannot change the mode from ARCHIVELOG to NOARCHIVELOG if any datafiles need media recovery. 2. Back up the database. Before making any major change to a database, always back up the database to protect against any problems. This will be your final backup of the database in NOARCHIVELOG mode and can be used if something goes wrong during the change to ARCHIVELOG mode. See Oracle Database Backup and Recovery Basics for information about taking database backups. 3. Edit the initialization parameter file to include the initialization parameters that specify the destinations for the archive log files (see "Specifying Archive Destinations" on page 7-6). 4. Start a new instance and mount, but do not open, the database. STARTUP MOUNT To enable or disable archiving, the database must be mounted but not open. 7-4 Oracle Database Administrator’s Guide Controlling Archiving 5. Change the database archiving mode. Then open the database for normal operations. ALTER DATABASE ARCHIVELOG; ALTER DATABASE OPEN; 6. Shut down the database. SHUTDOWN IMMEDIATE 7. Back up the database. Changing the database archiving mode updates the control file. After changing the database archiving mode, you must back up all of your database files and control file. Any previous backup is no longer usable because it was taken in NOARCHIVELOG mode. See Also: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information about switching the archiving mode when using Real Application Clusters Performing Manual Archiving To operate your database in manual archiving mode, follow the procedure shown in "Changing the Database Archiving Mode" on page 7-4. However, when you specify the new mode in step 5, use the following statement: ALTER DATABASE ARCHIVELOG MANUAL; When you operate your database in manual ARCHIVELOG mode, you must archive inactive groups of filled redo log files or your database operation can be temporarily suspended. To archive a filled redo log group manually, connect with administrator privileges. Ensure that the database is mounted but not open. 
Use the ALTER SYSTEM statement with the ARCHIVE LOG clause to manually archive filled redo log files. The following statement archives all unarchived log files: ALTER SYSTEM ARCHIVE LOG ALL; When you use manual archiving mode, you cannot specify any standby databases in the archiving destinations. Even when automatic archiving is enabled, you can use manual archiving for such actions as rearchiving an inactive group of filled redo log members to another location. In this case, it is possible for the instance to reuse the redo log group before you have finished manually archiving, and thereby overwrite the files. If this happens, the database writes an error message to the alert log. Adjusting the Number of Archiver Processes The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of ARCn processes that the database initially invokes. The default is two processes. There is usually no need specify this initialization parameter or to change its default value, because the database starts additional archiver processes (ARCn) as needed to ensure that the automatic processing of filled redo log files does not fall behind. However, to avoid any runtime overhead of invoking additional ARCn processes, you can set the LOG_ARCHIVE_MAX_PROCESSES initialization parameter to specify up to ten ARCn processes to be started at instance startup. The LOG_ARCHIVE_MAX_PROCESSES parameter is dynamic, and can be changed using Managing Archived Redo Logs 7-5 Specifying the Archive Destination the ALTER SYSTEM statement. The database must be mounted but not open. The following statement increases (or decreases) the number of ARCn processes currently running: ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=3; Specifying the Archive Destination Before you can archive redo logs, you must determine the destination to which you will archive and familiarize yourself with the various destination states. The dynamic performance (V$) views, listed in "Viewing Information About the Archived Redo Log" on page 7-14, provide all needed archive information. The following topics are contained in this section: ■ Specifying Archive Destinations ■ Understanding Archive Destination Status Specifying Archive Destinations You can choose whether to archive redo logs to a single destination or multiplex them. If you want to archive only to a single destination, you specify that destination in the LOG_ARCHIVE_DEST initialization parameter. If you want to multiplex the archived logs, you can choose whether to archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or to archive only to a primary and secondary destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST). The following table summarizes the multiplexing alternatives, which are further described in the sections that follow. Method Initialization Parameter Host Example 1 LOG_ARCHIVE_DEST_n Local or remote LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc' where: LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1' n is an integer from 1 to 10 2 LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST Local only LOG_ARCHIVE_DEST = '/disk1/arc' LOG_ARCHIVE_DUPLEX_DEST = '/disk2/arc' See Also: ■ ■ Oracle Database Reference for additional information about the initialization parameters used to control the archiving of redo logs Oracle Data Guard Concepts and Administration for information about using the LOG_ARCHIVE_DEST_n initialization parameter for specifying a standby destination. 
There are additional keywords that can be specified with this initialization parameter that are not discussed in this book. Method 1: Using the LOG_ARCHIVE_DEST_n Parameter Use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 10) to specify from one to ten different destinations for archival. Each numerically suffixed parameter uniquely identifies an individual destination. You specify the location for LOG_ARCHIVE_DEST_n using the keywords explained in the following table: 7-6 Oracle Database Administrator’s Guide Specifying the Archive Destination Keyword Indicates Example LOCATION A local file system location. LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc' SERVICE Remote archival through Oracle Net service name. LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1' If you use the LOCATION keyword, specify a valid path name for your operating system. If you specify SERVICE, the database translates the net service name through the tnsnames.ora file to a connect descriptor. The descriptor contains the information necessary for connecting to the remote database. The service name must have an associated database SID, so that the database correctly updates the log history of the control file for the standby database. Perform the following steps to set the destination for archived redo logs using the LOG_ARCHIVE_DEST_n initialization parameter: 1. Use SQL*Plus to shut down the database. SHUTDOWN 2. Set the LOG_ARCHIVE_DEST_n initialization parameter to specify from one to ten archiving locations. The LOCATION keyword specifies an operating system specific path name. For example, enter: LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive' LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive' LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive' If you are archiving to a standby database, use the SERVICE keyword to specify a valid net service name from the tnsnames.ora file. For example, enter: LOG_ARCHIVE_DEST_4 = 'SERVICE = standby1' 3. Optionally, set the LOG_ARCHIVE_FORMAT initialization parameter, using %t to include the thread number as part of the file name, %s to include the log sequence number, and %r to include the resetlogs ID (a timestamp value represented in ub4). Use capital letters (%T, %S, and %R) to pad the file name to the left with zeroes. If the COMPATIBLE initialization parameter is set to 10.0 or higher, the database requires the specification of resetlogs ID (%r) when you include the LOG_ARCHIVE_FORMAT parameter. The default for this parameter is operating system dependent. For example, this is the default format for UNIX: Note: LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf The incarnation of a database changes when you open it with the RESETLOGS option. Specifying %r causes the database to capture the resetlogs ID in the archive log file name, enabling you to more easily perform recovery from a backup of a previous database incarnation. See Oracle Database Backup and Recovery Advanced User's Guide for more information about this method of recovery. The following example shows a setting of LOG_ARCHIVE_FORMAT: Managing Archived Redo Logs 7-7 Specifying the Archive Destination LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc This setting will generate archived logs as follows for thread 1; log sequence numbers 100, 101, and 102; resetlogs ID 509210197. 
The identical resetlogs ID indicates that the files are all from the same database incarnation: /disk1/archive/arch_1_100_509210197.arc, /disk1/archive/arch_1_101_509210197.arc, /disk1/archive/arch_1_102_509210197.arc /disk2/archive/arch_1_100_509210197.arc, /disk2/archive/arch_1_101_509210197.arc, /disk2/archive/arch_1_102_509210197.arc /disk3/archive/arch_1_100_509210197.arc, /disk3/archive/arch_1_101_509210197.arc, /disk3/archive/arch_1_102_509210197.arc Method 2: Using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST To specify a maximum of two locations, use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST to specify an optional secondary archive destination. All locations must be local. Whenever the database archives a redo log, it archives it to every destination specified by either set of parameters. Perform the following steps the use method 2: 1. Use SQL*Plus to shut down the database. SHUTDOWN 2. Specify destinations for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameter (you can also specify LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM statement). For example, enter: LOG_ARCHIVE_DEST = '/disk1/archive' LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive' 3. Set the LOG_ARCHIVE_FORMAT initialization parameter as described in step 3 for method 1. Understanding Archive Destination Status Each archive destination has the following variable characteristics that determine its status: ■ ■ ■ Valid/Invalid: indicates whether the disk location or service name information is specified and valid Enabled/Disabled: indicates the availability state of the location and whether the database can use the destination Active/Inactive: indicates whether there was a problem accessing the destination Several combinations of these characteristics are possible. To obtain the current status and other information about each destination for an instance, query the V$ARCHIVE_DEST view. 7-8 Oracle Database Administrator’s Guide Specifying the Mode of Log Transmission The characteristics determining a locations status that appear in the view are shown in Table 7–1. Note that for a destination to be used, its characteristics must be valid, enabled, and active. Table 7–1 Destination Status Characteristics STATUS Valid Enabled Active Meaning VALID True True True The user has properly initialized the destination, which is available for archiving. INACTIVE False n/a n/a The user has not provided or has deleted the destination information. ERROR True True False An error occurred creating or writing to the destination file; refer to error data. FULL True True False Destination is full (no disk space). DEFERRED True False True The user manually and temporarily disabled the destination. DISABLED True False False The user manually and temporarily disabled the destination following an error; refer to error data. BAD PARAM n/a n/a n/a A parameter error occurred; refer to error data. The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter lets you control the availability state of the specified destination (n). ■ ENABLE indicates that the database can use the destination. ■ DEFER indicates that the location is temporarily disabled. ■ ALTERNATE indicates that the destination is an alternate. The availability state of the destination is DEFER, unless there is a failure of its parent destination, in which case its state becomes ENABLE. 
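For example, the following statements show one way to check destination status and temporarily defer a destination while its disk or network path is being repaired. The destination number 2 used here is only an illustration; substitute the number of your own destination.

SELECT DEST_ID, STATUS, DESTINATION FROM V$ARCHIVE_DEST;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

When the destination is available again, re-enable it:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;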
Specifying the Mode of Log Transmission The two modes of transmitting archived logs to their destination are normal archiving transmission and standby transmission mode. Normal transmission involves transmitting files to a local disk. Standby transmission involves transmitting files through a network to either a local or remote standby database. Normal Transmission Mode In normal transmission mode, the archiving destination is another disk drive of the database server. In this configuration archiving does not contend with other files required by the instance and can complete more quickly. Specify the destination with either the LOG_ARCHIVE_DEST_n or LOG_ARCHIVE_DEST parameters. It is good practice to move archived redo log files and corresponding database backups from the local disk to permanent inexpensive offline storage media such as tape. A primary value of archived logs is database recovery, so you want to ensure that these logs are safe should disaster strike your primary database. Managing Archived Redo Logs 7-9 Specifying the Mode of Log Transmission Standby Transmission Mode In standby transmission mode, the archiving destination is either a local or remote standby database. Caution: You can maintain a standby database on a local disk, but Oracle strongly encourages you to maximize disaster protection by maintaining your standby database at a remote site. If you are operating your standby database in managed recovery mode, you can keep your standby database synchronized with your source database by automatically applying transmitted archive logs. To transmit files successfully to a standby database, either ARCn or a server process must do the following: ■ ■ Recognize a remote location Transmit the archived logs in conjunction with a remote file server (RFS) process that resides on the remote server Each ARCn process has a corresponding RFS for each standby destination. For example, if three ARCn processes are archiving to two standby databases, then Oracle Database establishes six RFS connections. You transmit archived logs through a network to a remote location by using Oracle Net Services. Indicate a remote archival by specifying a Oracle Net service name as an attribute of the destination. Oracle Database then translates the service name, through the tnsnames.ora file, to a connect descriptor. The descriptor contains the information necessary for connecting to the remote database. The service name must have an associated database SID, so that the database correctly updates the log history of the control file for the standby database. The RFS process, which runs on the destination node, acts as a network server to the ARCn client. Essentially, ARCn pushes information to RFS, which transmits it to the standby database. The RFS process, which is required when archiving to a remote destination, is responsible for the following tasks: ■ ■ ■ ■ Consuming network I/O from the ARCn process Creating file names on the standby database by using the STANDBY_ARCHIVE_DEST parameter Populating the log files at the remote site Updating the standby database control file (which Recovery Manager can then use for recovery) Archived redo logs are integral to maintaining a standby database, which is an exact replica of a database. You can operate your database in standby archiving mode, which automatically updates a standby database with archived redo logs from the original database. 
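As a simple illustration of the standby side of this arrangement, the following standby initialization parameter settings give RFS a local directory in which to create the received archived logs and a format for naming them. The directory path is hypothetical, and this sketch omits the other Data Guard configuration that a real standby requires.

STANDBY_ARCHIVE_DEST = '/disk2/standby_arch'
LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc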
7-10 Oracle Database Administrator’s Guide Managing Archive Destination Failure See Also: ■ ■ Oracle Data Guard Concepts and Administration Oracle Database Net Services Administrator's Guide for information about connecting to a remote database using a service name Managing Archive Destination Failure Sometimes archive destinations can fail, causing problems when you operate in automatic archiving mode. Oracle Database provides procedures to help you minimize the problems associated with destination failure. These procedures are discussed in the sections that follow: ■ Specifying the Minimum Number of Successful Destinations ■ Rearchiving to a Failed Destination Specifying the Minimum Number of Successful Destinations The optional initialization parameter LOG_ARCHIVE_MIN_SUCCEED_DEST=n determines the minimum number of destinations to which the database must successfully archive a redo log group before it can reuse online log files. The default value is 1. Valid values for n are 1 to 2 if you are using duplexing, or 1 to 10 if you are multiplexing. Specifying Mandatory and Optional Destinations The LOG_ARCHIVE_DEST_n parameter lets you specify whether a destination is OPTIONAL (the default) or MANDATORY. The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter uses all MANDATORY destinations plus some number of non-standby OPTIONAL destinations to determine whether LGWR can overwrite the online log. The following rules apply: ■ ■ ■ ■ ■ ■ Omitting the MANDATORY attribute for a destination is the same as specifying OPTIONAL. You must have at least one local destination, which you can declare OPTIONAL or MANDATORY. When you specify a value for LOG_ARCHIVE_MIN_SUCCEED_DEST=n, Oracle Database will treat at least one local destination as MANDATORY, because the minimum value for LOG_ARCHIVE_MIN_SUCCEED_DEST is 1. If any MANDATORY destination fails, including a MANDATORY standby destination, Oracle Database ignores the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter. The LOG_ARCHIVE_MIN_SUCCEED_DEST value cannot be greater than the number of destinations, nor can it be greater than the number of MANDATORY destinations plus the number of OPTIONAL local destinations. If you DEFER a MANDATORY destination, and the database overwrites the online log without transferring the archived log to the standby site, then you must transfer the log to the standby manually. If you are duplexing the archived logs, you can establish which destinations are mandatory or optional by using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters. The following rules apply: ■ Any destination declared by LOG_ARCHIVE_DEST is mandatory. Managing Archived Redo Logs 7-11 Managing Archive Destination Failure Any destination declared by LOG_ARCHIVE_DUPLEX_DEST is optional if LOG_ARCHIVE_MIN_SUCCEED_DEST = 1 and mandatory if LOG_ARCHIVE_MIN_SUCCEED_DEST = 2. ■ Specifying the Number of Successful Destinations: Scenarios You can see the relationship between the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_MIN_SUCCEED_DEST parameters most easily through sample scenarios. Scenario for Archiving to Optional Local Destinations In this scenario, you archive to three local destinations, each of which you declare as OPTIONAL. Table 7–2 illustrates the possible values for LOG_ARCHIVE_MIN_SUCCEED_DEST=n in this case. Table 7–2 LOG_ARCHIVE_MIN_SUCCEED_DEST Values for Scenario 1 Value Meaning 1 The database can reuse log files only if at least one of the OPTIONAL destinations succeeds. 
2 The database can reuse log files only if at least two of the OPTIONAL destinations succeed. 3 The database can reuse log files only if all of the OPTIONAL destinations succeed. 4 or greater ERROR: The value is greater than the number of destinations. This scenario shows that even though you do not explicitly set any of your destinations to MANDATORY using the LOG_ARCHIVE_DEST_n parameter, the database must successfully archive to one or more of these locations when LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 1, 2, or 3. Scenario for Archiving to Both Mandatory and Optional Destinations Consider a case in which: ■ You specify two MANDATORY destinations. ■ You specify two OPTIONAL destinations. ■ No destination is a standby database. Table 7–3 shows the possible values for LOG_ARCHIVE_MIN_SUCCEED_DEST=n. Table 7–3 LOG_ARCHIVE_MIN_SUCCEED_DEST Values for Scenario 2 Value Meaning 1 The database ignores the value and uses the number of MANDATORY destinations (in this example, 2). 2 The database can reuse log files even if no OPTIONAL destination succeeds. 3 The database can reuse logs only if at least one OPTIONAL destination succeeds. 4 The database can reuse logs only if both OPTIONAL destinations succeed. 5 or greater ERROR: The value is greater than the number of destinations. This case shows that the database must archive to the destinations you specify as MANDATORY, regardless of whether you set LOG_ARCHIVE_MIN_SUCCEED_DEST to archive to a smaller number of destinations. 7-12 Oracle Database Administrator’s Guide Controlling Trace Output Generated by the Archivelog Process Rearchiving to a Failed Destination Use the REOPEN attribute of the LOG_ARCHIVE_DEST_n parameter to specify whether and when ARCn should attempt to rearchive to a failed destination following an error. REOPEN applies to all errors, not just OPEN errors. REOPEN=n sets the minimum number of seconds before ARCn should try to reopen a failed destination. The default value for n is 300 seconds. A value of 0 is the same as turning off the REOPEN attribute; ARCn will not attempt to archive after a failure. If you do not specify the REOPEN keyword, ARCn will never reopen a destination following an error. You cannot use REOPEN to specify the number of attempts ARCn should make to reconnect and transfer archived logs. The REOPEN attempt either succeeds or fails. When you specify REOPEN for an OPTIONAL destination, the database can overwrite online logs if there is an error. If you specify REOPEN for a MANDATORY destination, the database stalls the production database when it cannot successfully archive. In this situation, consider the following options: Archive manually to the failed destination. ■ Change the destination by deferring the destination, specifying the destination as optional, or changing the service. ■ Drop the destination. ■ When using the REOPEN keyword, note the following: ARCn reopens a destination only when starting an archive operation from the beginning of the log file, never during a current operation. ARCn always retries the log copy from the beginning. ■ If you specified REOPEN, either with a specified time the default, ARCn checks to see whether the time of the recorded error plus the REOPEN interval is less than the current time. If it is, ARCn retries the log copy. ■ The REOPEN clause successfully affects the ACTIVE=TRUE destination state. The VALID and ENABLED states are not changed. 
■ Controlling Trace Output Generated by the Archivelog Process Background processes always write to a trace file when appropriate. (See the discussion of this topic in "Monitoring the Database Using Trace Files and the Alert Log" on page 4-21.) In the case of the archivelog process, you can control the output that is generated to the trace file. You do this by setting the LOG_ARCHIVE_TRACE initialization parameter to specify a trace level. The following values can be specified: Trace Level Meaning 0 Disable archivelog tracing. This is the default. 1 Track archival of redo log file. 2 Track archival status for each archivelog destination. 4 Track archival operational phase. 8 Track archivelog destination activity. 16 Track detailed archivelog destination activity. 32 Track archivelog destination parameter modifications. Managing Archived Redo Logs 7-13 Viewing Information About the Archived Redo Log Trace Level Meaning 64 Track ARCn process state activity. 128 Track FAL (fetch archived log) server related activities. 256 Supported in a future release. 512 Tracks asynchronous LGWR activity. 1024 RFS physical client tracking. 2048 ARCn/RFS heartbeat tracking. 4096 Track real-time apply You can combine tracing levels by specifying a value equal to the sum of the individual levels that you would like to trace. For example, setting LOG_ARCHIVE_TRACE=12, will generate trace level 8 and 4 output. You can set different values for the primary and any standby database. The default value for the LOG_ARCHIVE_TRACE parameter is 0. At this level, the archivelog process generates appropriate alert and trace entries for error conditions. You can change the value of this parameter dynamically using the ALTER SYSTEM statement. The database must be mounted but not open. For example: ALTER SYSTEM SET LOG_ARCHIVE_TRACE=12; Changes initiated in this manner will take effect at the start of the next archiving operation. See Also: Oracle Data Guard Concepts and Administration for information about using this parameter with a standby database Viewing Information About the Archived Redo Log You can display information about the archived redo logs using the following sources: ■ Dynamic Performance Views ■ The ARCHIVE LOG LIST Command Dynamic Performance Views Several dynamic performance views contain useful information about archived redo logs, as summarized in the following table. Dynamic Performance View Description V$DATABASE Shows if the database is in ARCHIVELOG or NOARCHIVELOG mode and if MANUAL (archiving mode) has been specified. V$ARCHIVED_LOG Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information. V$ARCHIVE_DEST Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations. V$ARCHIVE_PROCESSES Displays information about the state of the various archive processes for an instance. 7-14 Oracle Database Administrator’s Guide Viewing Information About the Archived Redo Log Dynamic Performance View Description V$BACKUP_REDOLOG Contains information about any backups of archived logs. If you use a recovery catalog, the RC_BACKUP_REDOLOG contains similar information. V$LOG Displays all redo log groups for the database and indicates which need to be archived. V$LOG_HISTORY Contains log history information such as which logs have been archived and the SCN range for each archived log. 
For example, the following query displays which redo log group requires archiving:

SELECT GROUP#, ARCHIVED
  FROM SYS.V$LOG;

GROUP#     ARC
---------- ---
         1 YES
         2 NO

To see the current archiving mode, query the V$DATABASE view:

SELECT LOG_MODE FROM SYS.V$DATABASE;

LOG_MODE
------------
NOARCHIVELOG

See Also: Oracle Database Reference for detailed descriptions of dynamic performance views

The ARCHIVE LOG LIST Command

The SQL*Plus command ARCHIVE LOG LIST displays archiving information for the connected instance. For example:

SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\oracle\oradata\IDDB2\archive
Oldest online log sequence     11160
Next log sequence to archive   11163
Current log sequence           11163

This display tells you all the necessary information regarding the archived redo log settings for the current instance:
■ The database is currently operating in ARCHIVELOG mode.
■ Automatic archiving is enabled.
■ The archived redo log destination is D:\oracle\oradata\IDDB2\archive.
■ The oldest filled redo log group has a sequence number of 11160.
■ The next filled redo log group to archive has a sequence number of 11163.
■ The current redo log file has a sequence number of 11163.

See Also: SQL*Plus User's Guide and Reference for more information on the ARCHIVE LOG LIST command

8 Managing Tablespaces

This chapter describes the various aspects of tablespace management, and contains the following topics:
■ Guidelines for Managing Tablespaces
■ Creating Tablespaces
■ Specifying Nonstandard Block Sizes for Tablespaces
■ Controlling the Writing of Redo Records
■ Altering Tablespace Availability
■ Using Read-Only Tablespaces
■ Renaming Tablespaces
■ Dropping Tablespaces
■ Managing the SYSAUX Tablespace
■ Diagnosing and Repairing Locally Managed Tablespace Problems
■ Migrating the SYSTEM Tablespace to a Locally Managed Tablespace
■ Transporting Tablespaces Between Databases
■ Viewing Tablespace Information

See Also: Chapter 11, "Using Oracle-Managed Files" for information about creating datafiles and tempfiles that are both created and managed by the Oracle Database server

Guidelines for Managing Tablespaces

Before working with tablespaces of an Oracle Database, familiarize yourself with the guidelines provided in the following sections:
■ Using Multiple Tablespaces
■ Assigning Tablespace Quotas to Users

See Also: Oracle Database Concepts for a complete discussion of database structure, space management, tablespaces, and datafiles

Using Multiple Tablespaces

Using multiple tablespaces allows you more flexibility in performing database operations. When a database has multiple tablespaces, you can:
■ Separate user data from data dictionary data to reduce I/O contention.
■ Separate data of one application from the data of another to prevent multiple applications from being affected if a tablespace must be taken offline.
■ Store the datafiles of different tablespaces on different disk drives to reduce I/O contention.
■ Take individual tablespaces offline while others remain online, providing better overall availability.
■ Optimize tablespace use by reserving a tablespace for a particular type of database use, such as high update activity, read-only activity, or temporary segment storage.
■ Back up individual tablespaces.
Some operating systems set a limit on the number of files that can be open simultaneously. Such limits can affect the number of tablespaces that can be simultaneously online. To avoid exceeding your operating system limit, plan your tablespaces efficiently. Create only enough tablespaces to fulfill your needs, and create these tablespaces with as few files as possible. If you need to increase the size of a tablespace, add one or two large datafiles, or create datafiles with autoextension enabled, rather than creating many small datafiles.

Review your data in light of these factors and decide how many tablespaces you need for your database design.

Assigning Tablespace Quotas to Users

Grant to users who will be creating tables, clusters, materialized views, indexes, and other objects the privilege to create the object and a quota (space allowance or limit) in the tablespace intended to hold the object segment.

See Also: Oracle Database Security Guide for information about creating users and assigning tablespace quotas.

Creating Tablespaces

Before you can create a tablespace, you must create a database to contain it. The primary tablespace in any database is the SYSTEM tablespace, which contains information basic to the functioning of the database server, such as the data dictionary and the system rollback segment. The SYSTEM tablespace is the first tablespace created at database creation. It is managed as any other tablespace, but requires a higher level of privilege and is restricted in some ways. For example, you cannot rename or drop the SYSTEM tablespace or take it offline.

The SYSAUX tablespace, which acts as an auxiliary tablespace to the SYSTEM tablespace, is also always created when you create a database. It contains information about and the schemas used by various Oracle products and features, so that those products do not require their own tablespaces. As with the SYSTEM tablespace, management of the SYSAUX tablespace requires a higher level of security, and you cannot rename or drop it. The management of the SYSAUX tablespace is discussed separately in "Managing the SYSAUX Tablespace" on page 8-20.

The steps for creating tablespaces vary by operating system, but the first step is always to use your operating system to create a directory structure in which your datafiles will be allocated. On most operating systems, you specify the size and fully specified filenames of datafiles when you create a new tablespace or alter an existing tablespace by adding datafiles. Whether you are creating a new tablespace or modifying an existing one, the database automatically allocates and formats the datafiles as specified.

To create a new tablespace, use the SQL statement CREATE TABLESPACE or CREATE TEMPORARY TABLESPACE. You must have the CREATE TABLESPACE system privilege to create a tablespace. Later, you can use the ALTER TABLESPACE or ALTER DATABASE statements to alter the tablespace. You must have the ALTER TABLESPACE or ALTER DATABASE system privilege, respectively.

You can also use the CREATE UNDO TABLESPACE statement to create a special type of tablespace called an undo tablespace, which is specifically designed to contain undo records. These are records generated by the database that are used to roll back, or undo, changes to the database for recovery, read consistency, or as requested by a ROLLBACK statement. Creating and managing undo tablespaces is the subject of Chapter 10, "Managing the Undo Tablespace".
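As a minimal sketch of these statements (the user name, tablespace names, file path, and sizes below are hypothetical, added for illustration only):

-- Let user hr_app create tables and use up to 100M of space in tablespace data01
GRANT CREATE TABLE TO hr_app;
ALTER USER hr_app QUOTA 100M ON data01;

-- Create an undo tablespace with a single 200M datafile
CREATE UNDO TABLESPACE undotbs_02
  DATAFILE '/u02/oracle/data/undotbs02.dbf' SIZE 200M;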
The creation and maintenance of permanent and temporary tablespaces are discussed in the following sections:
■ Locally Managed Tablespaces
■ Bigfile Tablespaces
■ Temporary Tablespaces
■ Multiple Temporary Tablespaces: Using Tablespace Groups

See Also:
■ Chapter 2, "Creating an Oracle Database" and your Oracle Database installation documentation for your operating system for information about tablespaces that are created at database creation
■ Oracle Database SQL Reference for more information about the syntax and semantics of the CREATE TABLESPACE, CREATE TEMPORARY TABLESPACE, ALTER TABLESPACE, and ALTER DATABASE statements.
■ "Specifying Database Block Sizes" on page 2-23 for information about initialization parameters necessary to create tablespaces with nonstandard block sizes

Locally Managed Tablespaces

Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps, resulting in the following benefits:
■ Fast, concurrent space operations. Space allocations and deallocations modify locally managed resources (bitmaps stored in header files).
■ Enhanced performance
■ Readable standby databases are allowed, because locally managed temporary tablespaces do not generate any undo or redo.
■ Space allocation is simplified, because when the AUTOALLOCATE clause is specified, the database automatically selects the appropriate extent size.
■ User reliance on the data dictionary is reduced, because the necessary information is stored in file headers and bitmap blocks.
■ Coalescing free extents is unnecessary for locally managed tablespaces.

All tablespaces, including the SYSTEM tablespace, can be locally managed. The DBMS_SPACE_ADMIN package provides maintenance procedures for locally managed tablespaces.

See Also:
■ "Creating a Locally Managed SYSTEM Tablespace" on page 2-11, "Migrating the SYSTEM Tablespace to a Locally Managed Tablespace" on page 8-24, and "Diagnosing and Repairing Locally Managed Tablespace Problems" on page 8-22
■ "Bigfile Tablespaces" on page 8-6 for information about creating another type of locally managed tablespace that contains only a single datafile or tempfile.
■ Oracle Database PL/SQL Packages and Types Reference for information on the DBMS_SPACE_ADMIN package

Creating a Locally Managed Tablespace

Create a locally managed tablespace by specifying LOCAL in the EXTENT MANAGEMENT clause of the CREATE TABLESPACE statement. This is the default for new permanent tablespaces, but you must specify the EXTENT MANAGEMENT LOCAL clause if you want to specify either the AUTOALLOCATE clause or the UNIFORM clause. You can have the database manage extents for you automatically with the AUTOALLOCATE clause (the default), or you can specify that the tablespace is managed with uniform extents of a specific size (UNIFORM).

If you expect the tablespace to contain objects of varying sizes requiring many extents with different extent sizes, then AUTOALLOCATE is the best choice. AUTOALLOCATE is also a good choice if it is not important for you to have a lot of control over space allocation and deallocation, because it simplifies tablespace management. Some space may be wasted with this setting, but the benefit of having Oracle Database manage your space most likely outweighs this drawback.

If you want exact control over unused space, and you can predict exactly the space to be allocated for an object or objects and the number and size of extents, then UNIFORM is a good choice.
This setting ensures that you will never have unusable space in your tablespace.

When you do not explicitly specify the type of extent management, Oracle Database determines extent management as follows:
■ If the CREATE TABLESPACE statement omits the DEFAULT storage clause, then the database creates a locally managed autoallocated tablespace.
■ If the CREATE TABLESPACE statement includes a DEFAULT storage clause, then the database considers the following:
  – If you specified the MINIMUM EXTENT clause, the database evaluates whether the values of MINIMUM EXTENT, INITIAL, and NEXT are equal and the value of PCTINCREASE is 0. If so, the database creates a locally managed uniform tablespace with extent size = INITIAL. If the MINIMUM EXTENT, INITIAL, and NEXT parameters are not equal, or if PCTINCREASE is not 0, the database ignores any extent storage parameters you may specify and creates a locally managed, autoallocated tablespace.
  – If you did not specify the MINIMUM EXTENT clause, the database evaluates only whether the storage values of INITIAL and NEXT are equal and PCTINCREASE is 0. If so, the tablespace is locally managed and uniform. Otherwise, the tablespace is locally managed and autoallocated.

The following statement creates a locally managed tablespace named lmtbsb and specifies AUTOALLOCATE:

CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K.

The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then the default size is 1M.

The following example creates a tablespace with uniform 128K extents. (In a database with 2K blocks, each extent would be equivalent to 64 database blocks.) Each 128K extent is represented by a bit in the extent bitmap for this file.

CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

You cannot specify the DEFAULT storage clause, MINIMUM EXTENT, or TEMPORARY when you explicitly specify EXTENT MANAGEMENT LOCAL. If you want to create a temporary locally managed tablespace, use the CREATE TEMPORARY TABLESPACE statement.

Note: When you allocate a datafile for a locally managed tablespace, you should allow space for metadata used for space management (the extent bitmap or space header segment), which are part of user space. For example, if you specify the UNIFORM clause in the extent management clause but you omit the SIZE parameter, then the default extent size is 1MB. In that case, the size specified for the datafile must be larger (at least one block plus space for the bitmap) than 1MB.

Specifying Segment Space Management in Locally Managed Tablespaces

In a locally managed tablespace, there are two methods that Oracle Database can use to manage segment space: automatic and manual. Manual segment space management uses linked lists called "freelists" to manage free space in the segment, while automatic segment space management uses bitmaps. Automatic segment space management is the more efficient method, and is the default for all new permanent, locally managed tablespaces.

Automatic segment space management delivers better space utilization than manual segment space management.
It is also self-tuning, in that it scales with increasing number of users or instances. In an Oracle Real Application Clusters environment, automatic segment space management allows for a dynamic affinity of space to instances. In addition, for many standard workloads, application performance with automatic segment space management is better than the performance of a well-tuned application using manual segment space management.

Although automatic segment space management is the default for all new permanent, locally managed tablespaces, you can explicitly enable it with the SEGMENT SPACE MANAGEMENT AUTO clause.

The following statement creates tablespace lmtbsb with automatic segment space management:

CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The SEGMENT SPACE MANAGEMENT MANUAL clause disables automatic segment space management.

The segment space management that you specify at tablespace creation time applies to all segments subsequently created in the tablespace. You cannot change the segment space management mode of a tablespace.

Notes:
■ If you set extent management to LOCAL UNIFORM, then you must ensure that each extent contains at least 5 database blocks.
■ If you set extent management to LOCAL AUTOALLOCATE, and if the database block size is 16K or greater, then Oracle manages segment space by creating extents with a minimum size of 5 blocks rounded up to 64K.

Locally managed tablespaces using automatic segment space management can be created as single-file or bigfile tablespaces, as described in "Bigfile Tablespaces" on page 8-6.

Altering a Locally Managed Tablespace

You cannot alter a locally managed tablespace to a locally managed temporary tablespace, nor can you change its method of segment space management. Coalescing free extents is unnecessary for locally managed tablespaces. However, you can use the ALTER TABLESPACE statement on locally managed tablespaces for some operations, including the following:
■ Adding a datafile. For example:

  ALTER TABLESPACE lmtbsb
    ADD DATAFILE '/u02/oracle/data/lmtbsb02.dbf' SIZE 1M;

■ Altering tablespace availability (ONLINE/OFFLINE). See "Altering Tablespace Availability" on page 8-13.
■ Making a tablespace read-only or read/write. See "Using Read-Only Tablespaces" on page 8-15.
■ Renaming a datafile, or enabling or disabling the autoextension of the size of a datafile in the tablespace. See Chapter 9, "Managing Datafiles and Tempfiles".

Bigfile Tablespaces

A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks) datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles, but the files cannot be as large. The benefits of bigfile tablespaces are the following:
■ A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile tablespace with 32K blocks can contain a 128 terabyte datafile. The maximum number of datafiles in an Oracle Database is limited (usually to 64K files).
Therefore, bigfile tablespaces can significantly enhance the storage capacity of an Oracle Database.
■ Bigfile tablespaces can reduce the number of datafiles needed for a database. An additional benefit is that the DB_FILES initialization parameter and MAXDATAFILES parameter of the CREATE DATABASE and CREATE CONTROLFILE statements can be adjusted to reduce the amount of SGA space required for datafile information and the size of the control file.
■ Bigfile tablespaces simplify database management by providing datafile transparency. SQL syntax for the ALTER TABLESPACE statement lets you perform operations on tablespaces, rather than the underlying individual datafiles.

Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space management, with three exceptions: locally managed undo tablespaces, temporary tablespaces, and the SYSTEM tablespace.

Notes:
■ Bigfile tablespaces are intended to be used with Automatic Storage Management (ASM) or other logical volume managers that support striping or RAID, and dynamically extensible logical volumes.
■ Avoid creating bigfile tablespaces on a system that does not support striping because of negative implications for parallel query execution and RMAN backup parallelization.
■ Using bigfile tablespaces on platforms that do not support large file sizes is not recommended and can limit tablespace capacity. Refer to your operating system specific documentation for information about maximum supported file sizes.

Creating a Bigfile Tablespace

To create a bigfile tablespace, specify the BIGFILE keyword of the CREATE TABLESPACE statement (CREATE BIGFILE TABLESPACE ...). Oracle Database automatically creates a locally managed tablespace with automatic segment space management. You can, but need not, specify EXTENT MANAGEMENT LOCAL and SEGMENT SPACE MANAGEMENT AUTO in this statement. However, the database returns an error if you specify EXTENT MANAGEMENT DICTIONARY or SEGMENT SPACE MANAGEMENT MANUAL. The remaining syntax of the statement is the same as for the CREATE TABLESPACE statement, but you can only specify one datafile. For example:

CREATE BIGFILE TABLESPACE bigtbs
  DATAFILE '/u02/oracle/data/bigtbs01.dbf' SIZE 50G ...

You can specify SIZE in kilobytes (K), megabytes (M), gigabytes (G), or terabytes (T).

If the default tablespace type was set to BIGFILE at database creation, you need not specify the keyword BIGFILE in the CREATE TABLESPACE statement. A bigfile tablespace is created by default.

If the default tablespace type was set to BIGFILE at database creation, but you want to create a traditional (smallfile) tablespace, then specify a CREATE SMALLFILE TABLESPACE statement to override the default tablespace type for the tablespace that you are creating.

See Also: "Supporting Bigfile Tablespaces During Database Creation" on page 2-16

Altering a Bigfile Tablespace

Two clauses of the ALTER TABLESPACE statement support datafile transparency when you are using bigfile tablespaces:
■ RESIZE: The RESIZE clause lets you resize the single datafile in a bigfile tablespace to an absolute size, without referring to the datafile. For example:

  ALTER TABLESPACE bigtbs RESIZE 80G;

■ AUTOEXTEND (used outside of the ADD DATAFILE clause): With a bigfile tablespace, you can use the AUTOEXTEND clause outside of the ADD DATAFILE clause. For example:

  ALTER TABLESPACE bigtbs AUTOEXTEND ON NEXT 20G;

An error is raised if you specify an ADD DATAFILE clause for a bigfile tablespace.
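As a brief sketch of how the default tablespace type can be inspected and changed after database creation (this example is an assumption added here, using the standard DATABASE_PROPERTIES view and the SET DEFAULT ... TABLESPACE clause of ALTER DATABASE):

-- Check whether new tablespaces default to BIGFILE or SMALLFILE
SELECT PROPERTY_VALUE
  FROM DATABASE_PROPERTIES
 WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE';

-- Make BIGFILE the default type for subsequently created tablespaces
ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE;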
Identifying a Bigfile Tablespace

The following views contain a BIGFILE column that identifies a tablespace as a bigfile tablespace:
■ DBA_TABLESPACES
■ USER_TABLESPACES
■ V$TABLESPACE

You can also identify a bigfile tablespace by the relative file number of its single datafile. That number is 1024 on most platforms, but 4096 on OS/390.

Temporary Tablespaces

A temporary tablespace contains transient data that persists only for the duration of the session. Temporary tablespaces can improve the concurrency of multiple sort operations, reduce their overhead, and avoid Oracle Database space management operations. A temporary tablespace can be assigned to users with the CREATE USER or ALTER USER statement and can be shared by multiple users.

Within a temporary tablespace, all sort operations for a given instance and tablespace share a single sort segment. Sort segments exist for every instance that performs sort operations within a given tablespace. The sort segment is created by the first statement that uses a temporary tablespace for sorting, after startup, and is released only at shutdown. An extent cannot be shared by multiple transactions.

You can view the allocation and deallocation of space in a temporary tablespace sort segment using the V$SORT_SEGMENT view. The V$TEMPSEG_USAGE view identifies the current sort users in those segments.

Note: You cannot explicitly create objects in a temporary tablespace. The exception to the preceding statement is a temporary table. When you create a temporary table, its rows are stored in your default temporary tablespace. See "Creating a Temporary Table" on page 15-6 for more information.

See Also:
■ Oracle Database Security Guide for information about creating users and assigning temporary tablespaces
■ Oracle Database Reference for more information about the V$SORT_SEGMENT and V$TEMPSEG_USAGE views
■ Oracle Database Performance Tuning Guide for a discussion on tuning sorts

Creating a Locally Managed Temporary Tablespace

Because space management is much simpler and more efficient in locally managed tablespaces, they are ideally suited for temporary tablespaces. Locally managed temporary tablespaces use tempfiles, which do not modify data outside of the temporary tablespace or generate any redo for temporary tablespace data. Because of this, they enable you to perform on-disk sorting operations in a read-only or standby database.

You also use different views for viewing information about tempfiles than you would for datafiles. The V$TEMPFILE and DBA_TEMP_FILES views are analogous to the V$DATAFILE and DBA_DATA_FILES views.

To create a locally managed temporary tablespace, you use the CREATE TEMPORARY TABLESPACE statement, which requires that you have the CREATE TABLESPACE system privilege.

The following statement creates a temporary tablespace in which each extent is 16M. Each 16M extent (which is the equivalent of 8192 blocks when the standard block size is 2K) is represented by a bit in the bitmap for the file.

CREATE TEMPORARY TABLESPACE lmtemp TEMPFILE '/u02/oracle/data/lmtemp01.dbf'
  SIZE 20M REUSE
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

The extent management clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The Oracle Database default for SIZE is 1M. But if you want to specify another value for SIZE, you can do so as shown in the preceding statement.
The AUTOALLOCATE clause is not allowed for temporary tablespaces.

Note: On some operating systems, the database does not allocate space for the tempfile until the tempfile blocks are actually accessed. This delay in space allocation results in faster creation and resizing of tempfiles, but it requires that sufficient disk space is available when the tempfiles are later used. Please refer to your operating system documentation to determine whether the database allocates tempfile space in this way on your system.

Creating a Bigfile Temporary Tablespace

Just as for regular tablespaces, you can create single-file (bigfile) temporary tablespaces. Use the CREATE BIGFILE TEMPORARY TABLESPACE statement to create a single-tempfile tablespace.

See the sections "Creating a Bigfile Tablespace" on page 8-7 and "Altering a Bigfile Tablespace" on page 8-8 for information about bigfile tablespaces, but consider that you are creating temporary tablespaces that use tempfiles instead of datafiles.

Altering a Locally Managed Temporary Tablespace

Note: You cannot use the ALTER TABLESPACE statement, with the TEMPORARY keyword, to change a locally managed permanent tablespace into a locally managed temporary tablespace. You must use the CREATE TEMPORARY TABLESPACE statement to create a locally managed temporary tablespace.

Except for adding a tempfile, taking a tempfile offline, or bringing a tempfile online, as illustrated in the following examples, you cannot use the ALTER TABLESPACE statement for a locally managed temporary tablespace.

ALTER TABLESPACE lmtemp
  ADD TEMPFILE '/u02/oracle/data/lmtemp02.dbf' SIZE 18M REUSE;

ALTER TABLESPACE lmtemp TEMPFILE OFFLINE;

ALTER TABLESPACE lmtemp TEMPFILE ONLINE;

Note: You cannot take a temporary tablespace offline. Instead, you take its tempfile offline. The view V$TEMPFILE displays online status for a tempfile.

However, the ALTER DATABASE statement can be used to alter tempfiles. The following statements take offline and bring online tempfiles. They behave identically to the last two ALTER TABLESPACE statements in the previous example.

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' OFFLINE;

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' ONLINE;

The following statement resizes a temporary file:

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

The following statement drops a temporary file and deletes the operating system file:

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;

The tablespace to which this tempfile belonged remains. A message is written to the alert log for the datafile that was deleted. If an operating system error prevents the deletion of the file, the statement still succeeds, but a message describing the error is written to the alert log.

It is also possible to use the ALTER DATABASE statement to enable or disable the automatic extension of an existing tempfile, and to rename (RENAME FILE) a tempfile. See Oracle Database SQL Reference for the required syntax.

Note: To rename a tempfile, you take the tempfile offline, use operating system commands to rename or relocate the tempfile, and then use the ALTER DATABASE RENAME FILE command to update the database controlfiles.

Multiple Temporary Tablespaces: Using Tablespace Groups

A tablespace group enables a user to consume temporary space from multiple tablespaces.
A tablespace group has the following characteristics:
■ It contains at least one tablespace. There is no explicit limit on the maximum number of tablespaces that are contained in a group.
■ It shares the namespace of tablespaces, so its name cannot be the same as any tablespace.
■ You can specify a tablespace group name wherever a tablespace name would appear when you assign a default temporary tablespace for the database or a temporary tablespace for a user.

You do not explicitly create a tablespace group. Rather, it is created implicitly when you assign the first temporary tablespace to the group. The group is deleted when the last temporary tablespace it contains is removed from it.

Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces.

The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces.

See Also: Oracle Database Security Guide for more information about assigning a temporary tablespace or tablespace group to a user

Creating a Tablespace Group

You create a tablespace group implicitly when you include the TABLESPACE GROUP clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement and the specified tablespace group does not currently exist.

For example, if neither group1 nor group2 exists, then the following statements create those groups, each of which has only the specified tablespace as a member:

CREATE TEMPORARY TABLESPACE lmtemp2 TEMPFILE '/u02/oracle/data/lmtemp201.dbf'
  SIZE 50M
  TABLESPACE GROUP group1;

ALTER TABLESPACE lmtemp TABLESPACE GROUP group2;

Changing Members of a Tablespace Group

You can add a tablespace to an existing tablespace group by specifying the existing group name in the TABLESPACE GROUP clause of the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement.

The following statement adds a tablespace to an existing group. It creates and adds tablespace lmtemp3 to group1, so that group1 contains tablespaces lmtemp2 and lmtemp3.

CREATE TEMPORARY TABLESPACE lmtemp3 TEMPFILE '/u02/oracle/data/lmtemp301.dbf'
  SIZE 25M
  TABLESPACE GROUP group1;

The following statement also adds a tablespace to an existing group, but in this case because tablespace lmtemp2 already belongs to group1, it is in effect moved from group1 to group2:

ALTER TABLESPACE lmtemp2 TABLESPACE GROUP group2;

Now group2 contains both lmtemp and lmtemp2, while group1 consists of only lmtemp3.

You can remove a tablespace from a group as shown in the following statement:

ALTER TABLESPACE lmtemp3 TABLESPACE GROUP '';

Tablespace lmtemp3 no longer belongs to any group. Further, since there are no longer any members of group1, this results in the implicit deletion of group1.

Assigning a Tablespace Group as the Default Temporary Tablespace

Use the ALTER DATABASE...DEFAULT TEMPORARY TABLESPACE statement to assign a tablespace group as the default temporary tablespace for the database. For example:

ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;

Any user who has not explicitly been assigned a temporary tablespace will now use tablespaces lmtemp and lmtemp2.

If a tablespace group is specified as the default temporary tablespace, you cannot drop any of its member tablespaces.
You must first remove the tablespace from the tablespace group. Likewise, you cannot drop a single temporary tablespace as long as it is the default temporary tablespace.

Specifying Nonstandard Block Sizes for Tablespaces

You can create tablespaces with block sizes different from the standard database block size, which is specified by the DB_BLOCK_SIZE initialization parameter. This feature lets you transport tablespaces with unlike block sizes between databases.

Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE initialization parameter. Further, the integer you specify in the BLOCKSIZE clause must correspond with the setting of one DB_nK_CACHE_SIZE parameter setting. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE initialization parameter, is allowed.

The following statement creates tablespace lmtbsb, but specifies a block size that differs from the standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):

CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
  BLOCKSIZE 8K;

See Also:
■ "Specifying Database Block Sizes" on page 2-23
■ "Setting the Buffer Cache Initialization Parameters" on page 2-30 for information about the DB_CACHE_SIZE and DB_nK_CACHE_SIZE parameter settings
■ "Transporting Tablespaces Between Databases" on page 8-25

Controlling the Writing of Redo Records

For some database operations, you can control whether the database generates redo records. Without redo, no media recovery is possible. However, suppressing redo generation can improve performance, and may be appropriate for easily recoverable operations. An example of such an operation is a CREATE TABLE...AS SELECT statement, which can be repeated in case of database or instance failure.

Specify the NOLOGGING clause in the CREATE TABLESPACE statement if you wish to suppress redo when these operations are performed for objects within the tablespace. If you do not include this clause, or if you specify LOGGING instead, then the database generates redo when changes are made to objects in the tablespace. Redo is never generated for temporary segments or in temporary tablespaces, regardless of the logging attribute.

The logging attribute specified at the tablespace level is the default attribute for objects created within the tablespace. You can override this default logging attribute by specifying LOGGING or NOLOGGING at the schema object level--for example, in a CREATE TABLE statement.

If you have a standby database, NOLOGGING mode causes problems with the availability and accuracy of the standby database. To overcome this problem, you can specify FORCE LOGGING mode. When you include the FORCE LOGGING clause in the CREATE TABLESPACE statement, you force the generation of redo records for all operations that make changes to objects in a tablespace. This overrides any specification made at the object level.

If you transport a tablespace that is in FORCE LOGGING mode to another database, the new tablespace will not maintain the FORCE LOGGING mode.
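As a brief sketch of these clauses (the tablespace name and file path are hypothetical, added for illustration only):

-- Objects created in this tablespace default to NOLOGGING
CREATE TABLESPACE nolog_data DATAFILE '/u03/oracle/data/nolog01.dbf' SIZE 50M
  NOLOGGING;

-- Later, force redo generation for all changes to objects in the tablespace
ALTER TABLESPACE nolog_data FORCE LOGGING;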
See Also:
■ Oracle Database SQL Reference for information about operations that can be done in NOLOGGING mode
■ "Specifying FORCE LOGGING Mode" on page 2-18 for more information about FORCE LOGGING mode and for information about the effects of the FORCE LOGGING clause used with the CREATE DATABASE statement

Altering Tablespace Availability

You can take an online tablespace offline so that it is temporarily unavailable for general use. The rest of the database remains open and available for users to access data. Conversely, you can bring an offline tablespace online to make the schema objects within the tablespace available to database users. The database must be open to alter the availability of a tablespace.

To alter the availability of a tablespace, use the ALTER TABLESPACE statement. You must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.

See Also: "Altering Datafile Availability" on page 9-6 for information about altering the availability of individual datafiles within a tablespace

Taking Tablespaces Offline

You may want to take a tablespace offline for any of the following reasons:
■ To make a portion of the database unavailable while allowing normal access to the remainder of the database
■ To perform an offline tablespace backup (even though a tablespace can be backed up while online and in use)
■ To make an application and its group of tables temporarily unavailable while updating or maintaining the application
■ To rename or relocate tablespace datafiles
  See "Renaming and Relocating Datafiles" on page 9-8 for details.

When a tablespace is taken offline, the database takes all the associated files offline.

You cannot take the following tablespaces offline:
■ SYSTEM
■ The undo tablespace
■ Temporary tablespaces

Before taking a tablespace offline, consider altering the tablespace allocation of any users who have been assigned the tablespace as a default tablespace. Doing so is advisable because those users will not be able to access objects in the tablespace while it is offline.

You can specify any of the following parameters as part of the ALTER TABLESPACE...OFFLINE statement:

Clause      Description
NORMAL      A tablespace can be taken offline normally if no error conditions exist for any of the datafiles of the tablespace. No datafile in the tablespace can be currently offline as the result of a write error. When you specify OFFLINE NORMAL, the database takes a checkpoint for all datafiles of the tablespace as it takes them offline. NORMAL is the default.
TEMPORARY   A tablespace can be taken offline temporarily, even if there are error conditions for one or more files of the tablespace. When you specify OFFLINE TEMPORARY, the database takes offline the datafiles that are not already offline, checkpointing them as it does so. If no files are offline, but you use the temporary clause, media recovery is not required to bring the tablespace back online. However, if one or more files of the tablespace are offline because of write errors, and you take the tablespace offline temporarily, the tablespace requires recovery before you can bring it back online.
IMMEDIATE   A tablespace can be taken offline immediately, without the database taking a checkpoint on any of the datafiles. When you specify OFFLINE IMMEDIATE, media recovery for the tablespace is required before the tablespace can be brought online.
Caution: You cannot take a tablespace offline immediately if the database is running in NOARCHIVELOG mode.

If you must take a tablespace offline, use the NORMAL clause (the default) if possible. This setting guarantees that the tablespace will not require recovery to come back online, even if after incomplete recovery you reset the redo log sequence using an ALTER DATABASE OPEN RESETLOGS statement.

Specify TEMPORARY only when you cannot take the tablespace offline normally. In this case, only the files taken offline because of errors need to be recovered before the tablespace can be brought online. Specify IMMEDIATE only after trying both the normal and temporary settings.

The following example takes the users tablespace offline normally:

ALTER TABLESPACE users OFFLINE NORMAL;

Bringing Tablespaces Online

You can bring any tablespace in an Oracle Database online whenever the database is open. A tablespace is normally online so that the data contained within it is available to database users.

If a tablespace to be brought online was not taken offline "cleanly" (that is, using the NORMAL clause of the ALTER TABLESPACE OFFLINE statement), you must first perform media recovery on the tablespace before bringing it online. Otherwise, the database returns an error and the tablespace remains offline.

See Also: Depending upon your archiving strategy, refer to one of the following books for information about performing media recovery:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

The following statement brings the users tablespace online:

ALTER TABLESPACE users ONLINE;

Using Read-Only Tablespaces

Making a tablespace read-only prevents write operations on the datafiles in the tablespace. The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large, static portions of a database. Read-only tablespaces also provide a way of protecting historical data so that users cannot modify it.

Note: Making a tablespace read-only prevents updates on all tables in the tablespace, regardless of a user's update privilege level.

Making a tablespace read-only cannot in itself be used to satisfy archiving or data publishing requirements, because the tablespace can only be brought online in the database in which it was created. However, you can meet such requirements by using the transportable tablespace feature, as described in "Transporting Tablespaces Between Databases" on page 8-25.

You can drop items, such as tables or indexes, from a read-only tablespace, but you cannot create or alter objects in a read-only tablespace. You can execute statements that update the file description in the data dictionary, such as ALTER TABLE...ADD or ALTER TABLE...MODIFY, but you will not be able to utilize the new description until the tablespace is made read/write.

Read-only tablespaces can be transported to other databases. And, since read-only tablespaces can never be updated, they can reside on CD-ROM or WORM (Write Once-Read Many) devices.

The following topics are discussed in this section:
■ Making a Tablespace Read-Only
■ Making a Read-Only Tablespace Writable
■ Creating a Read-Only Tablespace on a WORM Device
■ Delaying the Opening of Datafiles in Read-Only Tablespaces

See Also: "Transporting Tablespaces Between Databases" on page 8-25

Making a Tablespace Read-Only

All tablespaces are initially created as read/write.
Use the READ ONLY clause in the ALTER TABLESPACE statement to change a tablespace to read-only. You must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.

Before you can make a tablespace read-only, the following conditions must be met.
■ The tablespace must be online. This is necessary to ensure that there is no undo information that needs to be applied to the tablespace.
■ The tablespace cannot be the active undo tablespace or SYSTEM tablespace.
■ The tablespace must not currently be involved in an online backup, because the end of a backup updates the header file of all datafiles in the tablespace.

For better performance while accessing data in a read-only tablespace, you can issue a query that accesses all of the blocks of the tables in the tablespace just before making it read-only. A simple query, such as SELECT COUNT (*), executed against each table ensures that the data blocks in the tablespace can be subsequently accessed most efficiently. This eliminates the need for the database to check the status of the transactions that most recently modified the blocks.

The following statement makes the flights tablespace read-only:

ALTER TABLESPACE flights READ ONLY;

You can issue the ALTER TABLESPACE...READ ONLY statement while the database is processing transactions. After the statement is issued, the tablespace is put into a transitional read-only state. No transactions are allowed to make further changes (using DML statements) to the tablespace. If a transaction attempts further changes, it is terminated and rolled back. However, transactions that already made changes and that attempt no further changes are allowed to commit or roll back.

When there are transactions waiting to commit, the ALTER TABLESPACE...READ ONLY statement does not return immediately. It waits for all transactions started before you issued the ALTER TABLESPACE statement to either commit or roll back.

Note: This transitional read-only state only occurs if the value of the initialization parameter COMPATIBLE is 8.1.0 or greater. If this parameter is set to a value less than 8.1.0, the ALTER TABLESPACE...READ ONLY statement fails if any active transactions exist.

If you find it is taking a long time for the ALTER TABLESPACE statement to complete, you can identify the transactions that are preventing the read-only state from taking effect. You can then notify the owners of those transactions and decide whether to terminate the transactions, if necessary.

The following query identifies the transaction entry for the ALTER TABLESPACE...READ ONLY statement so that you can note its session address (saddr):

SELECT SQL_TEXT, SADDR
  FROM V$SQLAREA, V$SESSION
 WHERE V$SQLAREA.ADDRESS = V$SESSION.SQL_ADDRESS
   AND SQL_TEXT LIKE 'alter tablespace%';

SQL_TEXT                                 SADDR
---------------------------------------- --------
alter tablespace tbs1 read only          80034AF0

The start SCN of each active transaction is stored in the V$TRANSACTION view. Displaying this view sorted by ascending start SCN lists the transactions in execution order. From the preceding example, you already know the session address of the transaction entry for the read-only statement, and you can now locate it in the V$TRANSACTION view. All transactions with smaller start SCN, which indicates an earlier execution, can potentially hold up the quiesce and subsequent read-only state of the tablespace.
SELECT SES_ADDR, START_SCNB
  FROM V$TRANSACTION
 ORDER BY START_SCNB;

SES_ADDR START_SCNB
-------- ----------
800352A0       3621  --> waiting on this txn
80035A50       3623  --> waiting on this txn
80034AF0       3628  --> this is the ALTER TABLESPACE statement
80037910       3629  --> don't care about this txn

After making the tablespace read-only, it is advisable to back it up immediately. As long as the tablespace remains read-only, no further backups of the tablespace are necessary, because no changes can be made to it.

See Also: Depending upon your backup and recovery strategy, refer to one of the following books for information about backing up and recovering a database with read-only datafiles:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

Making a Read-Only Tablespace Writable

Use the READ WRITE keywords in the ALTER TABLESPACE statement to change a tablespace to allow write operations. You must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.

A prerequisite to making the tablespace read/write is that all of the datafiles in the tablespace, as well as the tablespace itself, must be online. Use the DATAFILE...ONLINE clause of the ALTER DATABASE statement to bring a datafile online. The V$DATAFILE view lists the current status of datafiles.

The following statement makes the flights tablespace writable:

ALTER TABLESPACE flights READ WRITE;

Making a read-only tablespace writable updates the control file entry for the datafiles, so that you can use the read-only version of the datafiles as a starting point for recovery.

Creating a Read-Only Tablespace on a WORM Device

Follow these steps to create a read-only tablespace on a CD-ROM or WORM (Write Once-Read Many) device.
1. Create a writable tablespace on another device. Create the objects that belong in the tablespace and insert your data.
2. Alter the tablespace to make it read-only.
3. Copy the datafiles of the tablespace onto the WORM device. Use operating system commands to copy the files.
4. Take the tablespace offline.
5. Rename the datafiles to coincide with the names of the datafiles you copied onto your WORM device. Use ALTER TABLESPACE with the RENAME DATAFILE clause. Renaming the datafiles changes their names in the control file.
6. Bring the tablespace back online.

Delaying the Opening of Datafiles in Read-Only Tablespaces

When substantial portions of a very large database are stored in read-only tablespaces that are located on slow-access devices or hierarchical storage, you should consider setting the READ_ONLY_OPEN_DELAYED initialization parameter to TRUE. This speeds certain operations, primarily opening the database, by causing datafiles in read-only tablespaces to be accessed for the first time only when an attempt is made to read data stored within them.

Setting READ_ONLY_OPEN_DELAYED=TRUE has the following side-effects:
■ A missing or bad read-only file is not detected at open time. It is only discovered when there is an attempt to access it.
■ ALTER SYSTEM CHECK DATAFILES does not check read-only files.
■ ALTER TABLESPACE...ONLINE and ALTER DATABASE DATAFILE...ONLINE do not check read-only files. They are checked only upon the first access.
■ V$RECOVER_FILE, V$BACKUP, and V$DATAFILE_HEADER do not access read-only files. Read-only files are indicated in the results list with the error "DELAYED OPEN", with zeroes for the values of other columns.
■ V$DATAFILE does not access read-only files.
Read-only files have a size of "0" listed.
■ V$RECOVER_LOG does not access read-only files. Logs they could need for recovery are not added to the list.
■ ALTER DATABASE NOARCHIVELOG does not access read-only files. It proceeds even if there is a read-only file that requires recovery.

Notes:
■ RECOVER DATABASE and ALTER DATABASE OPEN RESETLOGS continue to access all read-only datafiles regardless of the parameter value. If you want to avoid accessing read-only files for these operations, those files should be taken offline.
■ If a backup control file is used, the read-only status of some files may be inaccurate. This can cause some of these operations to return unexpected results. Care should be taken in this situation.

Renaming Tablespaces

Using the RENAME TO clause of the ALTER TABLESPACE statement, you can rename a permanent or temporary tablespace. For example, the following statement renames the users tablespace:

ALTER TABLESPACE users RENAME TO usersts;

When you rename a tablespace, the database updates all references to the tablespace name in the data dictionary, control file, and (online) datafile headers. The database does not change the tablespace ID, so if this tablespace were, for example, the default tablespace for a user, then the renamed tablespace would show as the default tablespace for the user in the DBA_USERS view.

The following affect the operation of this statement:
■ The COMPATIBLE parameter must be set to 10.0 or higher.
■ If the tablespace being renamed is the SYSTEM tablespace or the SYSAUX tablespace, then it will not be renamed and an error is raised.
■ If any datafile in the tablespace is offline, or if the tablespace is offline, then the tablespace is not renamed and an error is raised.
■ If the tablespace is read only, then datafile headers are not updated. This should not be regarded as corruption; instead, it causes a message to be written to the alert log indicating that datafile headers have not been renamed. The data dictionary and control file are updated.
■ If the tablespace is the default temporary tablespace, then the corresponding entry in the database properties table is updated and the DATABASE_PROPERTIES view shows the new name.
■ If the tablespace is an undo tablespace and if the following conditions are met, then the tablespace name is changed to the new tablespace name in the server parameter file (SPFILE).
  – The server parameter file was used to start up the database.
  – The tablespace name is specified as the UNDO_TABLESPACE for any instance.
  If a traditional initialization parameter file (PFILE) is being used, then a message is written to the alert log stating that the initialization parameter file must be manually changed.

Dropping Tablespaces

You can drop a tablespace and its contents (the segments contained in the tablespace) from the database if the tablespace and its contents are no longer required. You must have the DROP TABLESPACE system privilege to drop a tablespace.

Caution: Once a tablespace has been dropped, the data in the tablespace is not recoverable. Therefore, make sure that all data contained in a tablespace to be dropped will not be required in the future. Also, immediately before and after dropping a tablespace from a database, back up the database completely.
This is strongly recommended so that you can recover the database if you mistakenly drop a tablespace, or if the database experiences a problem in the future after the tablespace has been dropped.

When you drop a tablespace, the file pointers in the control file of the associated database are removed. You can optionally direct Oracle Database to delete the operating system files (datafiles) that constituted the dropped tablespace. If you do not direct the database to delete the datafiles at the same time that it deletes the tablespace, you must later use the appropriate commands of your operating system to delete them.

You cannot drop a tablespace that contains any active segments. For example, if a table in the tablespace is currently being used or the tablespace contains undo data needed to roll back uncommitted transactions, you cannot drop the tablespace. The tablespace can be online or offline, but it is best to take the tablespace offline before dropping it.

To drop a tablespace, use the DROP TABLESPACE statement. The following statement drops the users tablespace, including the segments in the tablespace:

DROP TABLESPACE users INCLUDING CONTENTS;

If the tablespace is empty (does not contain any tables, views, or other structures), you do not need to specify the INCLUDING CONTENTS clause. Use the CASCADE CONSTRAINTS clause to drop all referential integrity constraints from tables outside the tablespace that refer to primary and unique keys of tables inside the tablespace.

To delete the datafiles associated with a tablespace at the same time that the tablespace is dropped, use the INCLUDING CONTENTS AND DATAFILES clause. The following statement drops the users tablespace and its associated datafiles:

DROP TABLESPACE users INCLUDING CONTENTS AND DATAFILES;

A message is written to the alert log for each datafile that is deleted. If an operating system error prevents the deletion of a file, the DROP TABLESPACE statement still succeeds, but a message describing the error is written to the alert log.

See Also: "Dropping Datafiles" on page 9-11

Managing the SYSAUX Tablespace

The SYSAUX tablespace was installed as an auxiliary tablespace to the SYSTEM tablespace when you created your database. Some database components that formerly created and used separate tablespaces now occupy the SYSAUX tablespace.

If the SYSAUX tablespace becomes unavailable, core database functionality will remain operational. The database features that use the SYSAUX tablespace could fail, or function with limited capability.

Monitoring Occupants of the SYSAUX Tablespace

The list of registered occupants of the SYSAUX tablespace is discussed in "Creating the SYSAUX Tablespace" on page 2-12. These components can use the SYSAUX tablespace, and their installation provides the means of establishing their occupancy of the SYSAUX tablespace.

You can monitor the occupants of the SYSAUX tablespace using the V$SYSAUX_OCCUPANTS view. This view lists the following information about the occupants of the SYSAUX tablespace:
■ Name of the occupant
■ Occupant description
■ Schema name
■ Move procedure
■ Current space usage

View information is maintained by the occupants.

See Also: Oracle Database Reference for a detailed description of the V$SYSAUX_OCCUPANTS view

Moving Occupants Out Of or Into the SYSAUX Tablespace

You will have an option at component install time to specify that you do not want the component to reside in SYSAUX.
Also, if you later decide that the component should be relocated to a designated tablespace, you can use the move procedure for that component, as specified in the V$SYSAUX_OCCUPANTS view, to perform the move.

For example, assume that you install Oracle Ultra Search into the default tablespace, which is SYSAUX. Later you discover that Ultra Search is using up too much space. To alleviate this space pressure on SYSAUX, you can call a PL/SQL move procedure specified in the V$SYSAUX_OCCUPANTS view to relocate Ultra Search to another tablespace.

The move procedure also lets you move a component from another tablespace into the SYSAUX tablespace.

Controlling the Size of the SYSAUX Tablespace

The SYSAUX tablespace is occupied by a number of database components (see Table 2–2), and its total size is governed by the space consumed by those components. The space consumed by the components, in turn, depends on which features or functionality are being used and on the nature of the database workload.

The largest portion of the SYSAUX tablespace is occupied by the Automatic Workload Repository (AWR). The space consumed by the AWR is determined by several factors, including the number of active sessions in the system at any given time, the snapshot interval, and the historical data retention period. A typical system with an average of 30 concurrent active sessions may require approximately 200 to 300 MB of space for its AWR data. You can control the size of the AWR by changing the snapshot interval and historical data retention period. For more information on managing the AWR snapshot interval and retention period, please refer to Oracle Database Performance Tuning Guide.

Another major occupant of the SYSAUX tablespace is the embedded Enterprise Manager (EM) repository. This repository is used by Oracle Enterprise Manager Database Control to store its metadata. The size of this repository depends on database activity and on configuration-related information stored in the repository.

Other database components in the SYSAUX tablespace will grow in size only if their associated features (for example, Oracle UltraSearch, Oracle Text, Oracle Streams) are in use. If the features are not used, then these components do not have any significant effect on the size of the SYSAUX tablespace.

Diagnosing and Repairing Locally Managed Tablespace Problems

Oracle Database includes the DBMS_SPACE_ADMIN package, which is a collection of aids for diagnosing and repairing problems in locally managed tablespaces.

DBMS_SPACE_ADMIN Package Procedures

The following table lists the DBMS_SPACE_ADMIN package procedures. See Oracle Database PL/SQL Packages and Types Reference for details on each procedure.

Procedure                 Description
ASSM_SEGMENT_VERIFY       Verifies the integrity of segments created in tablespaces that have automatic segment space management enabled. Outputs a dump file named sid_ora_process_id.trc to the location specified in the USER_DUMP_DEST initialization parameter. Use SEGMENT_VERIFY for tablespaces with manual segment space management.
ASSM_TABLESPACE_VERIFY    Verifies the integrity of tablespaces that have automatic segment space management enabled. Outputs a dump file named sid_ora_process_id.trc to the location specified in the USER_DUMP_DEST initialization parameter. Use TABLESPACE_VERIFY for tablespaces with manual segment space management.
SEGMENT_CORRUPT Marks the segment corrupt or valid so that appropriate error recovery can be done SEGMENT_DROP_CORRUPT Drops a segment currently marked corrupt (without reclaiming space) SEGMENT_DUMP Dumps the segment header and bitmap blocks of a specific segment to a dump file named sid_ora_process_id.trc in the location specified in the USER_DUMP_DEST initialization parameter. Provides an option to select a slightly abbreviated dump, which includes segment header and includes bitmap block summaries, without percent-free states of each block. SEGMENT_VERIFY Verifies the consistency of the extent map of the segment TABLESPACE_FIX_BITMAPS Marks the appropriate DBA range (extent) as free or used in bitmap TABLESPACE_FIX_SEGMENT_STATES Fixes the state of the segments in a tablespace in which migration was stopped TABLESPACE_MIGRATE_FROM_LOCAL Migrates a locally managed tablespace to dictionary-managed tablespace TABLESPACE_MIGRATE_TO_LOCAL Migrates a dictionary-managed tablespace to a locally managed tablespace TABLESPACE_REBUILD_BITMAPS Rebuilds the appropriate bitmaps TABLESPACE_REBUILD_QUOTAS Rebuilds quotas for a specific tablespace TABLESPACE_RELOCATE_BITMAPS Relocates the bitmaps to the specified destination TABLESPACE_VERIFY Verifies that the bitmaps and extent maps for the segments in the tablespace are synchronized 8-22 Oracle Database Administrator’s Guide Diagnosing and Repairing Locally Managed Tablespace Problems The following scenarios describe typical situations in which you can use the DBMS_SPACE_ADMIN package to diagnose and resolve problems. Some of these procedures can result in lost and unrecoverable data if not used properly. You should work with Oracle Support Services if you have doubts about these procedures. Note: See Also: Oracle Database PL/SQL Packages and Types Reference for details about the DBMS_SPACE_ADMIN package Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap) The TABLESPACE_VERIFY procedure discovers that a segment has allocated blocks that are marked free in the bitmap, but no overlap between segments is reported. In this scenario, perform the following tasks: 1. Call the SEGMENT_DUMP procedure to dump the ranges that the administrator allocated to the segment. 2. For each range, call the TABLESPACE_FIX_BITMAPS procedure with the TABLESPACE_EXTENT_MAKE_USED option to mark the space as used. 3. Call TABLESPACE_REBUILD_QUOTAS to rebuild quotas. Scenario 2: Dropping a Corrupted Segment You cannot drop a segment because the bitmap has segment blocks marked "free". The system has automatically marked the segment corrupted. In this scenario, perform the following tasks: 1. Call the SEGMENT_VERIFY procedure with the SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps are reported, then proceed with steps 2 through 5. 2. Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the segment. 3. For each range, call TABLESPACE_FIX_BITMAPS with the TABLESPACE_EXTENT_MAKE_FREE option to mark the space as free. 4. Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry. 5. Call TABLESPACE_REBUILD_QUOTAS to rebuild quotas. Scenario 3: Fixing Bitmap Where Overlap is Reported The TABLESPACE_VERIFY procedure reports some overlapping. Some of the real data must be sacrificed based on previous internal errors. After choosing the object to be sacrificed, in this case say, table t1, perform the following tasks: 1. Make a list of all objects that t1 overlaps. 2. Drop table t1. If necessary, follow up by calling the SEGMENT_DROP_CORRUPT procedure. 
Managing Tablespaces 8-23 Migrating the SYSTEM Tablespace to a Locally Managed Tablespace 3. Call the SEGMENT_VERIFY procedure on all objects that t1 overlapped. If necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark appropriate bitmap blocks as used. 4. Rerun the TABLESPACE_VERIFY procedure to verify that the problem is resolved. Scenario 4: Correcting Media Corruption of Bitmap Blocks A set of bitmap blocks has media corruption. In this scenario, perform the following tasks: 1. Call the TABLESPACE_REBUILD_BITMAPS procedure, either on all bitmap blocks, or on a single block if only one is corrupt. 2. Call the TABLESPACE_REBUILD_QUOTAS procedure to rebuild quotas. 3. Call the TABLESPACE_VERIFY procedure to verify that the bitmaps are consistent. Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace Use the TABLESPACE_MIGRATE_TO_LOCAL procedure to migrate a dictionary-managed tablespace to a locally managed tablespace. This operation is done online, but space management operations are blocked until the migration has been completed. This means that you can read or modify data while the migration is in progress, but if you are loading a large amount of data that requires the allocation of additional extents, then the operation may be blocked. Assume that the database block size is 2K and the existing extent sizes in tablespace tbs_1 are 10, 50, and 10,000 blocks (used, used, and free). The MINIMUM EXTENT value is 20K (10 blocks). Allow the system to choose the bitmap allocation unit. The value of 10 blocks is chosen, because it is the highest common denominator and does not exceed MINIMUM EXTENT. The statement to convert tbs_1 to a locally managed tablespace is as follows: EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('tbs_1'); If you choose to specify an allocation unit size, it must be a factor of the unit size calculated by the system. Migrating the SYSTEM Tablespace to a Locally Managed Tablespace Use the DBMS_SPACE_ADMIN package to migrate the SYSTEM tablespace from dictionary-managed to locally managed. The following statement performs the migration: SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM'); Before performing the migration the following conditions must be met: ■ The database has a default temporary tablespace that is not SYSTEM. ■ There are no rollback segments in the dictionary-managed tablespace. ■ ■ There is at least one online rollback segment in a locally managed tablespace, or if using automatic undo management, an undo tablespace is online. All tablespaces other than the tablespace containing the undo space (that is, the tablespace containing the rollback segment or the undo tablespace) are in read-only mode. 8-24 Oracle Database Administrator’s Guide Transporting Tablespaces Between Databases ■ The system is in restricted mode. ■ There is a cold backup of the database. All of these conditions, except for the cold backup, are enforced by the TABLESPACE_MIGRATE_TO_LOCAL procedure. After the SYSTEM tablespace is migrated to locally managed, any dictionary-managed tablespaces in the database cannot be made read/write. If you want to be able to use the dictionary-managed tablespaces in read/write mode, then Oracle recommends that you first migrate these tablespaces to locally managed before migrating the SYSTEM tablespace. 
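The following sequence is an illustration only; the tablespace names are hypothetical, and it assumes automatic undo management with an online undo tablespace:

-- Make each tablespace other than SYSTEM and the undo tablespace read-only
ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE example READ ONLY;

-- Put the instance in restricted mode for the duration of the migration
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Migrate the SYSTEM tablespace to local extent management
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');

-- Re-enable normal access and return tablespaces to read/write where appropriate
ALTER SYSTEM DISABLE RESTRICTED SESSION;
ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;

Take the cold backup before you begin; as noted above, it is the one condition that the procedure does not enforce.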
Note: Transporting Tablespaces Between Databases This section describes how to transport tablespaces between databases, and contains the following topics: ■ Introduction to Transportable Tablespaces ■ About Transporting Tablespaces Across Platforms ■ Limitations on Transportable Tablespace Use ■ Compatibility Considerations for Transportable Tablespaces ■ Transporting Tablespaces Between Databases: A Procedure and Example ■ Using Transportable Tablespaces: Scenarios Note: You must be using the Enterprise Edition of Oracle8i or later to generate a transportable tablespace set. However, you can use any edition of Oracle8i or later to import a transportable tablespace set into an Oracle Database on the same platform. To import a transportable tablespace set into an Oracle Database on a different platform, both databases must have compatibility set to at least 10.0. Please refer to "Compatibility Considerations for Transportable Tablespaces" on page 8-28 for a discussion of database compatibility for transporting tablespaces across release levels. Introduction to Transportable Tablespaces You can use the Transportable Tablespaces feature to copy a set of tablespaces from one Oracle Database to another. This method for transporting tablespaces requires that you place the tablespaces to be transported in read-only mode until you complete the transporting process. If this is undesirable, you can use the Transportable Tablespaces from Backup feature, described in Oracle Database Backup and Recovery Advanced User's Guide. Note: The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database standard block size. Managing Tablespaces 8-25 Transporting Tablespaces Between Databases Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database. The remainder of this chapter assumes that Data Pump is the import/export utility used. However, the transportable tablespaces feature supports both Data Pump and the original import and export utilities, IMP and EXP, with one caveat: you must use IMP and EXP for tablespaces containing XMLTypes. Refer to Oracle Database Utilities for more information on these utilities and to Oracle XML DB Developer's Guide for more information on XMLTypes. Note: The transportable tablespace feature is useful in a number of scenarios, including: ■ Exporting and importing partitions in data warehousing tables ■ Publishing structured data on CDs ■ Copying multiple read-only versions of a tablespace on multiple databases ■ Archiving historical data ■ Performing tablespace point-in-time-recovery (TSPITR) These scenarios are discussed in "Using Transportable Tablespaces: Scenarios" on page 8-37. There are two ways to transport a tablespace: ■ ■ Manually, following the steps described in this section. This involves issuing commands to SQL*Plus, RMAN, IMP/EXP and Data Pump. Using the Transport Tablespaces Wizard in Enterprise Manager To run the Transport Tablespaces Wizard: 1. Log in to Enterprise Manager with a user that has the EXP_FULL_DATABASE role. 2. Click the Maintenance link to go to the Maintenance tab. 3. 
Under the heading Move Database Files, click Transport Tablespaces.

See Also: Oracle Database Data Warehousing Guide for information about using transportable tablespaces in a data warehousing environment

About Transporting Tablespaces Across Platforms

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This functionality can be used to:
■ Allow a database to be migrated from one platform to another
■ Provide an easier and more efficient means for content providers to publish structured data and distribute it to customers running Oracle Database on different platforms
■ Simplify the distribution of data from a data warehouse environment to data marts, which are often running on smaller platforms
■ Enable the sharing of read-only tablespaces between Oracle Database installations on different operating systems or platforms, assuming that your storage system is accessible from those platforms and the platforms all have the same endianness, as described in the sections that follow

Many, but not all, platforms are supported for cross-platform tablespace transport. You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine each platform's endian format (byte ordering). The following query displays the platforms that support cross-platform tablespace transport:

SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little

16 rows selected.

If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

Before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or later, you can accomplish this by making the datafile read/write at least once.

Limitations on Transportable Tablespace Use

Be aware of the following limitations as you plan to transport tablespaces:
■ The source and target database must use the same character set and national character set.
■ You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
■ Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
■
Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes, but you must use the IMP and EXP utilities, not Data Pump. When using EXP, ensure that the CONSTRAINTS and TRIGGERS parameters are set to Y (the default). The following query returns a list of tablespaces that contain XMLTypes: select distinct p.tablespace_name from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t where t.table_name=x.table_name and t.tablespace_name=p.tablespace_name and x.owner=u.username See Oracle XML DB Developer's Guide for information on XMLTypes. Transporting tablespaces with XMLTypes has the following limitations: ■ ■ ■ ■ ■ The target database must have XML DB installed. Schemas referenced by XMLType tables cannot be the XML DB standard schemas. Schemas referenced by XMLType tables cannot have cyclic dependencies. Any row level security on XMLType tables is lost upon import. This is because the access control lists (ACLs) that implement the row level security cannot be imported, as the target database may not have the same set of users as the source database. If the schema for a transported XMLType table is not present in the target database, it is imported and registered. If the schema already exists in the target database, an error is returned unless the ignore=y option is set. Additional limitations include the following: Transportable tablespaces do not support 8.0-compatible advanced queues with multiple recipients. Advanced Queues SYSTEM Tablespace Objects You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences. Opaque Types Types whose interpretation is application-specific and opaque to the database (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform. Types and objects that use these opaque types, either directly or indirectly, are also subject to this limitation. Floating-Point Numbers BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump but not the original export utility, EXP. Compatibility Considerations for Transportable Tablespaces When you create a transportable tablespace set, Oracle Database computes the lowest compatibility level at which the target database must run. This is referred to as the compatibility level of the transportable set. Beginning with Oracle Database 10g, a tablespace can always be transported to a database with the same or higher 8-28 Oracle Database Administrator’s Guide Transporting Tablespaces Between Databases compatibility setting, whether the target database is on the same or a different platform. The database signals an error if the compatibility level of the transportable set is higher than the compatibility level of the target database. The following table shows the minimum compatibility requirements of the source and target tablespace in various scenarios. The source and target database need not have the same compatibility setting. 
Table 8–1 Minimum Compatibility Requirements Minimum Compatibility Setting Transport Scenario Source Database Target Database Databases on the same platform 8.0 8.0 Tablespace with different database block size than the target database 9.0 9.0 Databases on different platforms 10.0 10.0 Transporting Tablespaces Between Databases: A Procedure and Example The following steps summarize the process of transporting a tablespace. Details for each step are provided in the subsequent example. 1. For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view. Ignore this step if you are transporting your tablespace set to the same platform. 2. Pick a self-contained set of tablespaces. 3. Generate a transportable tablespace set. A transportable tablespace set (or transportable set) consists of datafiles for the set of tablespaces being transported and an export file containing structural information (metadata) for the set of tablespaces. You use Data Pump or EXP to perform the export. Note: If any of the tablespaces contain XMLTypes, you must use EXP. If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 4. Note: This method of generating a transportable tablespace requires that you temporarily make the tablespace read-only. If this is undesirable, you can use the alternate method known as transportable tablespace from backup. See Oracle Database Backup and Recovery Advanced User's Guide for details. 4. Transport the tablespace set. Copy the datafiles and the export file to a place that is accessible to the target database. If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to Managing Tablespaces 8-29 Transporting Tablespaces Between Databases the endianness of the target platform, you should perform a target-side conversion now. 5. Import the tablespace set. Invoke the Data Pump utility or IMP to import the metadata for the set of tablespaces into the target database. Note: If any of the tablespaces contain XMLTypes, you must use IMP. Example The steps for transporting a tablespace are illustrated more fully in the example that follows, where it is assumed the following datafiles and tablespaces exist: Tablespace Datafile sales_1 /u01/oracle/oradata/salesdb/sales_101.dbf sales_2 /u01/oracle/oradata/salesdb/sales_201.dbf Step 1: Determine if Platforms are Supported and Endianness This step is only necessary if you are transporting the tablespace set to a platform different from the source platform. If you are transporting the tablespace set to a platform different from the source platform, then determine if cross-platform tablespace transport is supported for both the source and target platforms, and determine the endianness of each platform. If both platforms have the same endianness, no conversion is necessary. Otherwise you must do a conversion of the tablespace set either at the source or target database. If you are transporting sales_1 and sales_2 to a different platform, you can execute the following query on each platform. If the query returns a row, the platform supports cross-platform tablespace transport. 
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME; The following is the query result from the source platform: PLATFORM_NAME ENDIAN_FORMAT ------------------------- -------------Solaris[tm] OE (32-bit) Big The following is the result from the target platform: PLATFORM_NAME ENDIAN_FORMAT ------------------------- -------------Microsoft Windows NT Little You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set. Step 2: Pick a Self-Contained Set of Tablespaces There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context "self-contained" means that there are no references from 8-30 Oracle Database Administrator’s Guide Transporting Tablespaces Between Databases inside the set of tablespaces pointing outside of the tablespaces. Some examples of self contained tablespace violations are: ■ An index inside the set of tablespaces is for a table outside of the set of tablespaces. It is not a violation if a corresponding index for a table is outside of the set of tablespaces. Note: ■ A partitioned table is partially contained in the set of tablespaces. The tablespace set you want to copy must contain either all partitions of a partitioned table, or none of the partitions of a partitioned table. If you want to transport a subset of a partition table, you must exchange the partitions into tables. ■ A referential integrity constraint points to a table across a set boundary. When transporting a set of tablespaces, you can choose to include referential integrity constraints. However, doing so can affect whether or not a set of tablespaces is self-contained. If you decide not to transport constraints, then the constraints are not considered as pointers. ■ ■ A table inside the set of tablespaces contains a LOB column that points to LOBs outside the set of tablespaces. An XML DB schema (*.xsd) that was registered by user A imports a global schema that was registered by user B, and the following is true: the default tablespace for user A is tablespace A, the default tablespace for user B is tablespace B, and only tablespace A is included in the set of tablespaces. To determine whether a set of tablespaces is self-contained, you can invoke the TRANSPORT_SET_CHECK procedure in the Oracle supplied package DBMS_TTS. You must have been granted the EXECUTE_CATALOG_ROLE role (initially signed to SYS) to execute this procedure. When you invoke the DBMS_TTS package, you specify the list of tablespaces in the transportable set to be checked for self containment. You can optionally specify if constraints must be included. For strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE. The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set. For example, it is a violation to perform TSPITR on a tablespace containing a table t but not its index i because the index and data will be inconsistent after the transport. A full containment check ensures that there are no dependencies going outside or coming into the transportable set. 
See the example for TSPITR in the Oracle Database Backup and Recovery Advanced User's Guide. The default for transportable tablespaces is to check for self containment rather than full containment. Note: The following statement can be used to determine whether tablespaces sales_1 and sales_2 are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE). EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE); Managing Tablespaces 8-31 Transporting Tablespaces Between Databases After invoking this PL/SQL package, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty. The following example illustrates a case where there are two violations: a foreign key constraint, dept_fk, across the tablespace set boundary, and a partitioned table, jim.sales, that is partially contained in the tablespace set. SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS; VIOLATIONS --------------------------------------------------------------------------Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table JIM.DEPT in tablespace OTHER Partitioned table JIM.SALES is partially contained in the transportable set These violations must be resolved before sales_1 and sales_2 are transportable. As noted in the next step, one choice for bypassing the integrity constraint violation is to not export the integrity constraints. See Also: ■ ■ Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_TTS package Oracle Database Backup and Recovery Advanced User's Guide for information specific to using the DBMS_TTS package for TSPITR Step 3: Generate a Transportable Tablespace Set Any privileged user can perform this step. However, you must have been assigned the EXP_FULL_DATABASE role to perform a transportable tablespace export operation. Note: This method of generating a transportable tablespace requires that you temporarily make the tablespace read-only. If this is undesirable, you can use the alternate method known as transportable tablespace from backup. See Oracle Database Backup and Recovery Advanced User's Guide for details. After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions: 1. Make all tablespaces in the set you are copying read-only. SQL> ALTER TABLESPACE sales_1 READ ONLY; Tablespace altered. SQL> ALTER TABLESPACE sales_2 READ ONLY; Tablespace altered. 2. Invoke the Data Pump export utility on the host system and specify which tablespaces are in the transportable set. If any of the tablespaces have XMLTypes, you must use EXP instead of Data Pump. Ensure that the CONSTRAINTS and TRIGGERS parameters are set to Y (the default). Note: 8-32 Oracle Database Administrator’s Guide Transporting Tablespaces Between Databases SQL> HOST $ EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = sales_1,sales_2 You must always specify TRANSPORT_TABLESPACES, which determines the mode of the export operation. In this example: ■ ■ ■ The DUMPFILE parameter specifies the name of the structural information export file to be created, expdat.dmp. The DIRECTORY parameter specifies the default directory object that points to the operating system or Automatic Storage Management location of the dump file. 
You must create the DIRECTORY object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to PUBLIC. See Oracle Database SQL Reference for information on the CREATE DIRECTORY command. Triggers and indexes are included in the export operation by default. If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter, as shown in the following example: EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES=sales_1,sales_2 TRANSPORT_FULL_CHECK=Y In this example, the Data Pump export utility verifies that there are no dependencies between the objects inside the transportable set and objects outside the transportable set. If the tablespace set being transported is not self-contained, then the export fails and indicates that the transportable set is not self-contained. You must then return to Step 1 to resolve all violations. Notes: The Data Pump utility is used to export only data dictionary structural information (metadata) for the tablespaces. No actual data is unloaded, so this operation goes relatively quickly even for large tablespace sets. 3. When finished, exit back to SQL*Plus: $ EXIT Oracle Database Utilities for information about using the Data Pump utility See Also: If sales_1 and sales_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the sales_1 and sales_2 tablespaces: 4. From SQL*Plus, return to the host system: SQL> HOST 5. The RMAN CONVERT command is used to do the conversion. Start RMAN and connect to the target database: $ RMAN TARGET / Recovery Manager: Release 10.1.0.0.0 Managing Tablespaces 8-33 Transporting Tablespaces Between Databases Copyright (c) 1995, 2003, Oracle Corporation. All rights reserved. connected to target database: salesdb (DBID=3295731590) 6. Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system. RMAN> CONVERT TABLESPACE sales_1,sales_2 2> TO PLATFORM 'Microsoft Windows NT' 3> FORMAT '/temp/%U'; Starting backup at 08-APR-03 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: sid=11 devtype=DISK channel ORA_DISK_1: starting datafile conversion input datafile fno=00005 name=/u01/oracle/oradata/salesdb/sales_101.dbf converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5 channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15 channel ORA_DISK_1: starting datafile conversion input datafile fno=00004 name=/u01/oracle/oradata/salesdb/sales_101.dbf converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45 Finished backup at 08-APR-03 See Also: Oracle Database Backup and Recovery Reference for a description of the RMAN CONVERT command 7. Exit Recovery Manager: RMAN> exit Recovery Manager complete. Step 4: Transport the Tablespace Set Transport both the datafiles and the export file of the tablespaces to a place that is accessible to the target database. 
If both the source and destination are files systems, you can use: ■ Any facility for copying flat files (for example, an operating system copy utility or ftp) ■ The DBMS_FILE_TRANSFER package ■ RMAN ■ Any facility for publishing on CDs If either the source or destination is an Automatic Storage Management (ASM) disk group, you can use: ■ ftp to or from the /sys/asm virtual folder in the XML DB repository See "Accessing Automatic Storage Management Files with the XML DB Virtual Folder" on page 12-46 for more information. ■ The DBMS_FILE_TRANSFER package ■ RMAN 8-34 Oracle Database Administrator’s Guide Transporting Tablespaces Between Databases Caution: Exercise caution when using the UNIX dd utility to copy raw-device files between databases. The dd utility can be used to copy an entire source raw-device file, or it can be invoked with options that instruct it to copy only a specific range of blocks from the source raw-device file. It is difficult to ascertain actual datafile size for a raw-device file because of hidden control information that is stored as part of the datafile. Thus, it is advisable when using the dd utility to specify copying the entire source raw-device file contents. If you are transporting the tablespace set to a platform with endianness that is different from the source platform, and you have not yet converted the tablespace set, you must do so now. This example assumes that you have completed the following steps before the transport: 1. Set the source tablespaces to be transported to be read-only. 2. Use the export utility to create an export file (in our example, expdat.dmp). Datafiles that are to be converted on the target platform can be moved to a temporary location on the target platform. However, all datafiles, whether already converted or not, must be moved to a designated location on the target database. Now use RMAN to convert the necessary transported datafiles to the endian format of the destination host format and deposit the results in /orahome/dbs, as shown in this hypothetical example: RMAN> CONVERT DATAFILE 2> '/hq/finance/work/tru/tbs_31.f', 3> '/hq/finance/work/tru/tbs_32.f', 4> '/hq/finance/work/tru/tbs_41.f' 5> TO PLATFORM="Solaris[tm] OE (32-bit)" 6> FROM PLATFORM="HP TRu64 UNIX" 7> DB_FILE_NAME_CONVERT= 8> "/hq/finance/work/tru/", "/hq/finance/dbs/tru" 9> PARALLELISM=5; You identify the datafiles by filename, not by tablespace name. Until the tablespace metadata is imported, the local instance has no way of knowing the desired tablespace names. The source and destination platforms are optional. RMAN determines the source platform by examining the datafile, and the target platform defaults to the platform of the host running the conversion. See Also: "Copying Files Using the Database Server" on page 9-12 for information about using the DBMS_FILE_TRANSFER package to copy the files that are being transported and their metadata Managing Tablespaces 8-35 Transporting Tablespaces Between Databases Step 5: Import the Tablespace Set If you are transporting a tablespace of a different block size than the standard block size of the database receiving the tablespace set, then you must first have a DB_nK_CACHE_SIZE initialization parameter entry in the receiving database parameter file. Note: For example, if you are transporting a tablespace with an 8K block size into a database with a 4K standard block size, then you must include a DB_8K_CACHE_SIZE initialization parameter entry in the parameter file. 
If it is not already included in the parameter file, this parameter can be set using the ALTER SYSTEM SET statement. See Oracle Database Reference for information about specifying values for the DB_nK_CACHE_SIZE initialization parameter. Any privileged user can perform this step. To import a tablespace set, perform the following tasks: 1. Import the tablespace metadata using the Data Pump Import utility, impdp: If any of the tablespaces contain XMLTypes, you must use IMP instead of Data Pump. Note: IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES= /salesdb/sales_101.dbf, /salesdb/sales_201.dbf REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams) In this example we specify the following: ■ ■ ■ ■ The DUMPFILE parameter specifies the exported file containing the metadata for the tablespaces to be imported. The DIRECTORY parameter specifies the directory object that identifies the location of the dump file. The TRANSPORT_DATAFILES parameter identifies all of the datafiles containing the tablespaces to be imported. The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by dcranney in the source database will be owned by smith in the target database after the tablespace set is imported. Similarly, objects owned by jfee in the source database will be owned by williams in the target database. In this case, the target database is not required to have users dcranney and jfee, but must have users smith and williams. After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no error has occurred. 8-36 Oracle Database Administrator’s Guide Transporting Tablespaces Between Databases When dealing with a large number of datafiles, specifying the list of datafile names in the statement line can be a laborious process. It can even exceed the statement line limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows: IMPDP system/password PARFILE='par.f' where the parameter file, par.f contains the following: DIRECTORY=dpump_dir DUMPFILE=expdat.dmp TRANSPORT_DATAFILES="'/db/sales_jan','/db/sales_feb'" REMAP_SCHEMA=dcranney:smith REMAP_SCHEMA=jfee:williams Oracle Database Utilities for information about using the import utility See Also: 2. If required, put the tablespaces into read/write mode as follows: ALTER TABLESPACE sales_1 READ WRITE; ALTER TABLESPACE sales_2 READ WRITE; Using Transportable Tablespaces: Scenarios The following sections describe some uses for transportable tablespaces: ■ Transporting and Attaching Partitions for Data Warehousing ■ Publishing Structured Data on CDs ■ Mounting the Same Tablespace Read-Only on Multiple Databases ■ Archiving Historical Data Using Transportable Tablespaces ■ Using Transportable Tablespaces to Perform TSPITR Transporting and Attaching Partitions for Data Warehousing Typical enterprise data warehouses contain one or more large fact tables. These fact tables can be partitioned by date, making the enterprise data warehouse a historical database. You can build indexes to speed up star queries. 
Oracle recommends that you build local indexes for such historically partitioned tables to avoid rebuilding global indexes every time you drop the oldest partition from the historical database. Suppose every month you would like to load one month of data into the data warehouse. There is a large fact table in the data warehouse called sales, which has the following columns: CREATE TABLE sales (invoice_no NUMBER, sale_year INT NOT NULL, sale_month INT NOT NULL, sale_day INT NOT NULL) PARTITION BY RANGE (sale_year, sale_month, (partition jan98 VALUES LESS THAN (1998, partition feb98 VALUES LESS THAN (1998, partition mar98 VALUES LESS THAN (1998, partition apr98 VALUES LESS THAN (1998, partition may98 VALUES LESS THAN (1998, partition jun98 VALUES LESS THAN (1998, sale_day) 2, 1), 3, 1), 4, 1), 5, 1), 6, 1), 7, 1)); You create a local non-prefixed index: Managing Tablespaces 8-37 Transporting Tablespaces Between Databases CREATE INDEX sales_index ON sales(invoice_no) LOCAL; Initially, all partitions are empty, and are in the same default tablespace. Each month, you want to create one partition and attach it to the partitioned sales table. Suppose it is July 1998, and you would like to load the July sales data into the partitioned table. In a staging database, you create a new tablespace, ts_jul. You also create a table, jul_sales, in that tablespace with exactly the same column types as the sales table. You can create the table jul_sales using the CREATE TABLE ... AS SELECT statement. After creating and populating jul_sales, you can also create an index, jul_sale_index, for the table, indexing the same column as the local index in the sales table. After building the index, transport the tablespace ts_jul to the data warehouse. In the data warehouse, add a partition to the sales table for the July sales data. This also creates another partition for the local non-prefixed index: ALTER TABLE sales ADD PARTITION jul98 VALUES LESS THAN (1998, 8, 1); Attach the transported table jul_sales to the table sales by exchanging it with the new partition: ALTER TABLE sales EXCHANGE PARTITION jul98 WITH TABLE jul_sales INCLUDING INDEXES WITHOUT VALIDATION; This statement places the July sales data into the new partition jul98, attaching the new data to the partitioned table. This statement also converts the index jul_sale_index into a partition of the local index for the sales table. This statement should return immediately, because it only operates on the structural information and it simply switches database pointers. If you know that the data in the new partition does not overlap with data in previous partitions, you are advised to specify the WITHOUT VALIDATION clause. Otherwise, the statement goes through all the new data in the new partition in an attempt to validate the range of that partition. If all partitions of the sales table came from the same staging database (the staging database is never destroyed), the exchange statement always succeeds. In general, however, if data in a partitioned table comes from different databases, it is possible that the exchange operation may fail. 
For example, if the jan98 partition of sales did not come from the same staging database, the preceding exchange operation can fail, returning the following error: ORA-19728: data object number conflict between table JUL_SALES and partition JAN98 in table SALES To resolve this conflict, move the offending partition by issuing the following statement: ALTER TABLE sales MOVE PARTITION jan98; Then retry the exchange operation. After the exchange succeeds, you can safely drop jul_sales and jul_sale_index (both are now empty). Thus you have successfully loaded the July sales data into your data warehouse. Publishing Structured Data on CDs Transportable tablespaces provide a way to publish structured data on CDs. A data provider can load a tablespace with data to be published, generate the transportable set, and copy the transportable set to a CD. This CD can then be distributed. 8-38 Oracle Database Administrator’s Guide Transporting Tablespaces Between Databases When customers receive this CD, they can add the CD contents to an existing database without having to copy the datafiles from the CD to disk storage. For example, suppose on a Windows NT machine D: drive is the CD drive. You can import a transportable set with datafile catalog.f and export file expdat.dmp as follows: IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='D:\catalog.f' You can remove the CD while the database is still up. Subsequent queries to the tablespace return an error indicating that the database cannot open the datafiles on the CD. However, operations to other parts of the database are not affected. Placing the CD back into the drive makes the tablespace readable again. Removing the CD is the same as removing the datafiles of a read-only tablespace. If you shut down and restart the database, the database indicates that it cannot find the removed datafile and does not open the database (unless you set the initialization parameter READ_ONLY_OPEN_DELAYED to TRUE). When READ_ONLY_OPEN_DELAYED is set to TRUE, the database reads the file only when someone queries the transported tablespace. Thus, when transporting a tablespace from a CD, you should always set the READ_ONLY_OPEN_DELAYED initialization parameter to TRUE, unless the CD is permanently attached to the database. Mounting the Same Tablespace Read-Only on Multiple Databases You can use transportable tablespaces to mount a tablespace read-only on multiple databases. In this way, separate databases can share the same data on disk instead of duplicating data on separate disks. The tablespace datafiles must be accessible by all databases. To avoid database corruption, the tablespace must remain read-only in all the databases mounting the tablespace. The following are two scenarios for mounting the same tablespace read-only on multiple databases: ■ The tablespace originates in a database that is separate from the databases that will share the tablespace. You generate a transportable set in the source database, put the transportable set onto a disk that is accessible to all databases, and then import the metadata into each database on which you want to mount the tablespace. ■ The tablespace already belongs to one of the databases that will share the tablespace. It is assumed that the datafiles are already on a shared disk. 
In the database where the tablespace already exists, you make the tablespace read-only, generate the transportable set, and then import the tablespace into the other databases, leaving the datafiles in the same location on the shared disk. You can make a disk accessible by multiple computers in several ways. You can use either a cluster file system or raw disk. You can also use network file system (NFS), but be aware that if a user queries the shared tablespace while NFS is down, the database will hang until the NFS operation times out. Later, you can drop the read-only tablespace in some of the databases. Doing so does not modify the datafiles for the tablespace. Thus, the drop operation does not corrupt the tablespace. Do not make the tablespace read/write unless only one database is mounting the tablespace. Managing Tablespaces 8-39 Viewing Tablespace Information Archiving Historical Data Using Transportable Tablespaces Since a transportable tablespace set is a self-contained set of files that can be imported into any Oracle Database, you can archive old/historical data in an enterprise data warehouse using the transportable tablespace procedures described in this chapter. See Also: Oracle Database Data Warehousing Guide for more details Using Transportable Tablespaces to Perform TSPITR You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR). See Also: Oracle Database Backup and Recovery Advanced User's Guide for information about how to perform TSPITR using transportable tablespaces Moving Databases Across Platforms Using Transportable Tablespaces You can use the transportable tablespace feature to migrate a database to a different platform by creating a new database on the destination platform and performing a transport of all the user tablespaces. See Oracle Database Backup and Recovery Advanced User's Guide for more information. You cannot transport the SYSTEM tablespace. Therefore, objects such as sequences, PL/SQL packages, and other objects that depend on the SYSTEM tablespace are not transported. You must either create these objects manually on the destination database, or use Data Pump to transport the objects that are not moved by transportable tablespace. Viewing Tablespace Information The following data dictionary and dynamic performance views provide useful information about the tablespaces of a database. View Description V$TABLESPACE Name and number of all tablespaces from the control file. DBA_TABLESPACES, USER_TABLESPACES Descriptions of all (or user accessible) tablespaces. DBA_TABLESPACE_GROUPS Displays the tablespace groups and the tablespaces that belong to them. DBA_SEGMENTS, USER_SEGMENTS Information about segments within all (or user accessible) tablespaces. DBA_EXTENTS, USER_EXTENTS Information about data extents within all (or user accessible) tablespaces. DBA_FREE_SPACE, USER_FREE_SPACE Information about free extents within all (or user accessible) tablespaces. V$DATAFILE Information about all datafiles, including tablespace number of owning tablespace. V$TEMPFILE Information about all tempfiles, including tablespace number of owning tablespace. DBA_DATA_FILES Shows files (datafiles) belonging to tablespaces. DBA_TEMP_FILES Shows files (tempfiles) belonging to temporary tablespaces. 8-40 Oracle Database Administrator’s Guide Viewing Tablespace Information View Description V$TEMP_EXTENT_MAP Information for all extents in all locally managed temporary tablespaces. 
V$TEMP_EXTENT_POOL   For locally managed temporary tablespaces: the state of temporary space cached and used by each instance.
V$TEMP_SPACE_HEADER  Shows space used and free for each tempfile.
DBA_USERS            Default and temporary tablespaces for all users.
DBA_TS_QUOTAS        Lists tablespace quotas for all users.
V$SORT_SEGMENT       Information about every sort segment in a given instance. The view is only updated when the tablespace is of the TEMPORARY type.
V$TEMPSEG_USAGE      Describes temporary (sort) segment usage by user for temporary or permanent tablespaces.

The following are just a few examples of using some of these views.

See Also: Oracle Database Reference for complete descriptions of these views

Example 1: Listing Tablespaces and Default Storage Parameters

To list the names and default storage parameters of all tablespaces in a database, use the following query on the DBA_TABLESPACES view:

SELECT TABLESPACE_NAME "TABLESPACE",
       INITIAL_EXTENT "INITIAL_EXT",
       NEXT_EXTENT "NEXT_EXT",
       MIN_EXTENTS "MIN_EXT",
       MAX_EXTENTS "MAX_EXT",
       PCT_INCREASE
  FROM DBA_TABLESPACES;

TABLESPACE INITIAL_EXT NEXT_EXT MIN_EXT MAX_EXT PCT_INCREASE
---------- ----------- -------- ------- ------- ------------
RBS            1048576  1048576       2      40            0
SYSTEM          106496   106496       1      99            1
TEMP            106496   106496       1      99            0
TESTTBS          57344    16384       2      10            1
USERS            57344    57344       1      99            1

Example 2: Listing the Datafiles and Associated Tablespaces of a Database

To list the names, sizes, and associated tablespaces of a database, enter the following query on the DBA_DATA_FILES view:

SELECT FILE_NAME, BLOCKS, TABLESPACE_NAME FROM DBA_DATA_FILES;

FILE_NAME                            BLOCKS TABLESPACE_NAME
------------------------------------ ------ ---------------
/U02/ORACLE/IDDB3/DBF/RBS01.DBF        1536 RBS
/U02/ORACLE/IDDB3/DBF/SYSTEM01.DBF     6586 SYSTEM
/U02/ORACLE/IDDB3/DBF/TEMP01.DBF       6400 TEMP
/U02/ORACLE/IDDB3/DBF/TESTTBS01.DBF    6400 TESTTBS
/U02/ORACLE/IDDB3/DBF/USERS01.DBF       384 USERS

Example 3: Displaying Statistics for Free Space (Extents) of Each Tablespace

To produce statistics about free extents and coalescing activity for each tablespace in the database, enter the following query:

SELECT TABLESPACE_NAME "TABLESPACE", FILE_ID,
       COUNT(*)    "PIECES",
       MAX(blocks) "MAXIMUM",
       MIN(blocks) "MINIMUM",
       AVG(blocks) "AVERAGE",
       SUM(blocks) "TOTAL"
  FROM DBA_FREE_SPACE
 GROUP BY TABLESPACE_NAME, FILE_ID;

TABLESPACE FILE_ID PIECES MAXIMUM MINIMUM AVERAGE TOTAL
---------- ------- ------ ------- ------- ------- -----
RBS              2      1     955     955     955   955
SYSTEM           1      1     119     119     119   119
TEMP             4      1    6399    6399    6399  6399
TESTTBS          5      5    6364       3    1278  6390
USERS            3      1     363     363     363   363

PIECES shows the number of free space extents in the tablespace file, MAXIMUM and MINIMUM show the largest and smallest contiguous area of space in database blocks, AVERAGE shows the average size in blocks of a free space extent, and TOTAL shows the amount of free space in each tablespace file in blocks. This query is useful when you are going to create a new object or you know that a segment is about to extend, and you want to make sure that there is enough space in the containing tablespace.
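A related check, shown here only as a sketch built on the same dictionary views, summarizes allocated and free space in megabytes for each permanent tablespace:

SELECT df.TABLESPACE_NAME,
       ROUND(SUM(df.BYTES)/1024/1024)             "ALLOCATED_MB",
       ROUND(NVL(MAX(fs.FREE_BYTES),0)/1024/1024) "FREE_MB"
  FROM DBA_DATA_FILES df,
       (SELECT TABLESPACE_NAME, SUM(BYTES) FREE_BYTES
          FROM DBA_FREE_SPACE
         GROUP BY TABLESPACE_NAME) fs
 WHERE df.TABLESPACE_NAME = fs.TABLESPACE_NAME (+)
 GROUP BY df.TABLESPACE_NAME;

Temporary tablespaces do not appear in DBA_DATA_FILES or DBA_FREE_SPACE; use DBA_TEMP_FILES and V$TEMP_SPACE_HEADER for those.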
8-42 Oracle Database Administrator’s Guide 9 Managing Datafiles and Tempfiles This chapter describes the various aspects of datafile and tempfile management, and contains the following topics: ■ Guidelines for Managing Datafiles ■ Creating Datafiles and Adding Datafiles to a Tablespace ■ Changing Datafile Size ■ Altering Datafile Availability ■ Renaming and Relocating Datafiles ■ Dropping Datafiles ■ Verifying Data Blocks in Datafiles ■ Copying Files Using the Database Server ■ Mapping Files to Physical Devices ■ Viewing Datafile Information See Also: Part III, "Automated File and Storage Management" for information about creating datafiles and tempfiles that are both created and managed by the Oracle Database server Guidelines for Managing Datafiles Datafiles are physical files of the operating system that store the data of all logical structures in the database. They must be explicitly created for each tablespace. Tempfiles are a special class of datafiles that are associated only with temporary tablespaces. Information in this chapter applies to both datafiles and tempfiles except where differences are noted. Tempfiles are further described in "Creating a Locally Managed Temporary Tablespace" on page 8-9 Note: Oracle Database assigns each datafile two associated file numbers, an absolute file number and a relative file number, that are used to uniquely identify it. These numbers are described in the following table: Managing Datafiles and Tempfiles 9-1 Guidelines for Managing Datafiles Type of File Number Description Absolute Uniquely identifies a datafile in the database. This file number can be used in many SQL statements that reference datafiles in place of using the file name. The absolute file number can be found in the FILE# column of the V$DATAFILE or V$TEMPFILE view, or in the FILE_ID column of the DBA_DATA_FILES or DBA_TEMP_FILES view. Relative Uniquely identifies a datafile within a tablespace. For small and medium size databases, relative file numbers usually have the same value as the absolute file number. However, when the number of datafiles in a database exceeds a threshold (typically 1023), the relative file number differs from the absolute file number. In a bigfile tablespace, the relative file number is always 1024 (4096 on OS/390 platform). This section describes aspects of managing datafiles, and contains the following topics: ■ Determine the Number of Datafiles ■ Determine the Size of Datafiles ■ Place Datafiles Appropriately ■ Store Datafiles Separate from Redo Log Files Determine the Number of Datafiles At least one datafile is required for the SYSTEM and SYSAUX tablespaces of a database. Your database should contain several other tablespaces with their associated datafiles or tempfiles. The number of datafiles that you anticipate creating for your database can affect the settings of initialization parameters and the specification of CREATE DATABASE statement clauses. Be aware that your operating system might impose limits on the number of datafiles contained in your Oracle Database. Also consider that the number of datafiles, and how and where they are allocated can affect the performance of your database. One means of controlling the number of datafiles in your database and simplifying their management is to use bigfile tablespaces. Bigfile tablespaces comprise a single, very large datafile and are especially useful in ultra large databases and where a logical volume manager is used for managing operating system files. 
Bigfile tablespaces are discussed in "Bigfile Tablespaces" on page 8-6. Note: Consider the following guidelines when determining the number of datafiles for your database. Determine a Value for the DB_FILES Initialization Parameter When starting an Oracle Database instance, the DB_FILES initialization parameter indicates the amount of SGA space to reserve for datafile information and thus, the maximum number of datafiles that can be created for the instance. This limit applies for the life of the instance. You can change the value of DB_FILES (by changing the initialization parameter setting), but the new value does not take effect until you shut down and restart the instance. 9-2 Oracle Database Administrator’s Guide Guidelines for Managing Datafiles When determining a value for DB_FILES, take the following into consideration: ■ ■ If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit without first shutting down the database. If the value of DB_FILES is too high, memory is unnecessarily consumed. Consider Possible Limitations When Adding Datafiles to a Tablespace You can add datafiles to traditional smallfile tablespaces, subject to the following limitations: ■ ■ ■ ■ ■ Operating systems often impose a limit on the number of files a process can open simultaneously. More datafiles cannot be created when the operating system limit of open files is reached. Operating systems impose limits on the number and size of datafiles. The database imposes a maximum limit on the number of datafiles for any Oracle Database opened by any instance. This limit is operating system specific. You cannot exceed the number of datafiles specified by the DB_FILES initialization parameter. When you issue CREATE DATABASE or CREATE CONTROLFILE statements, the MAXDATAFILES parameter specifies an initial size of the datafile portion of the control file. However, if you attempt to add a new file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the control file will expand automatically so that the datafiles section can accommodate more files. Consider the Performance Impact The number of datafiles contained in a tablespace, and ultimately the database, can have an impact upon performance. Oracle Database allows more datafiles in the database than the operating system defined limit. The database DBWn processes can open all online datafiles. Oracle Database is capable of treating open file descriptors as a cache, automatically closing files when the number of open file descriptors reaches the operating system-defined limit. This can have a negative performance impact. When possible, adjust the operating system limit on open file descriptors so that it is larger than the number of online datafiles in the database. See Also: ■ ■ Your operating system specific Oracle documentation for more information on operating system limits Oracle Database SQL Reference for more information about the MAXDATAFILES parameter of the CREATE DATABASE or CREATE CONTROLFILE statement Determine the Size of Datafiles When creating a tablespace, you should estimate the potential size of database objects and create sufficient datafiles. Later, if needed, you can create additional datafiles and add them to a tablespace to increase the total amount of disk space allocated to it, and consequently the database. Preferably, place datafiles on multiple devices to ensure that data is spread evenly across all devices. 
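To compare the number of datafiles currently in the database against the DB_FILES limit discussed earlier in this section, a quick check such as the following can be run (the views and parameter name are standard; the values returned are installation-specific):

SELECT VALUE "DB_FILES" FROM V$PARAMETER WHERE NAME = 'db_files';
SELECT COUNT(*) "CURRENT_DATAFILES" FROM V$DATAFILE;

If the count is approaching the limit, either increase DB_FILES (the change takes effect only after an instance restart) or reduce the number of files, for example by using bigfile tablespaces.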
Managing Datafiles and Tempfiles 9-3 Creating Datafiles and Adding Datafiles to a Tablespace Place Datafiles Appropriately Tablespace location is determined by the physical location of the datafiles that constitute that tablespace. Use the hardware resources of your computer appropriately. For example, if several disk drives are available to store the database, consider placing potentially contending datafiles on separate disks.This way, when users query information, both disk drives can work simultaneously, retrieving data at the same time. See Also: Oracle Database Performance Tuning Guide for information about I/O and the placement of datafiles Store Datafiles Separate from Redo Log Files Datafiles should not be stored on the same disk drive that stores the database redo log files. If the datafiles and redo log files are stored on the same disk drive and that disk drive fails, the files cannot be used in your database recovery procedures. If you multiplex your redo log files, then the likelihood of losing all of your redo log files is low, so you can store datafiles on the same drive as some redo log files. Creating Datafiles and Adding Datafiles to a Tablespace You can create datafiles and associate them with a tablespace using any of the statements listed in the following table. In all cases, you can either specify the file specifications for the datafiles being created, or you can use the Oracle-managed files feature to create files that are created and managed by the database server. The table includes a brief description of the statement, as used to create datafiles, and references the section of this book where use of the statement is specifically described: SQL Statement Description Additional Information CREATE TABLESPACE Creates a tablespace and the datafiles that comprise it "Creating Tablespaces" on page 8-2 CREATE TEMPORARY TABLESPACE Creates a locally-managed temporary tablespace and the tempfiles (tempfiles are a special kind of datafile) that comprise it "Creating a Locally Managed Temporary Tablespace" on page 8-9 ALTER TABLESPACE ... ADD DATAFILE Creates and adds a datafile to a tablespace "Altering a Locally Managed Temporary Tablespace" on page 8-10 ALTER TABLESPACE ... ADD TEMPFILE Creates and adds a tempfile to a temporary tablespace "Creating a Locally Managed Temporary Tablespace" on page 8-9 CREATE DATABASE Creates a database and associated datafiles "Manually Creating an Oracle Database" on page 2-2 ALTER DATABASE ... CREATE DATAFILE Creates a new empty datafile in place of an old one--useful to re-create a datafile that was lost with no backup. See Oracle Database Backup and Recovery Advanced User's Guide. If you add new datafiles to a tablespace and do not fully specify the filenames, the database creates the datafiles in the default database directory or the current directory, depending upon your operating system. Oracle recommends you always specify a fully qualified name for a datafile. Unless you want to reuse existing files, make sure 9-4 Oracle Database Administrator’s Guide Changing Datafile Size the new filenames do not conflict with other files. Old files that have been previously dropped will be overwritten. If a statement that creates a datafile fails, the database removes any created operating system files. However, because of the large number of potential errors that can occur with file systems and storage subsystems, there can be situations where you must manually remove the files using operating system commands. 
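For example, the following statement is a minimal sketch of adding a datafile with a fully qualified filename to an existing tablespace; the tablespace name, path, and size shown are illustrative only:

ALTER TABLESPACE users
    ADD DATAFILE '/u02/oracle/rbdb1/users04.dbf' SIZE 100M;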
Changing Datafile Size This section describes the various ways to alter the size of a datafile, and contains the following topics: ■ Enabling and Disabling Automatic Extension for a Datafile ■ Manually Resizing a Datafile Enabling and Disabling Automatic Extension for a Datafile You can create datafiles or alter existing datafiles so that they automatically increase in size when more space is needed in the database. The file size increases in specified increments up to a specified maximum. Setting your datafiles to extend automatically provides these advantages: ■ ■ Reduces the need for immediate intervention when a tablespace runs out of space Ensures applications will not halt or be suspended because of failures to allocate extents To determine whether a datafile is auto-extensible, query the DBA_DATA_FILES view and examine the AUTOEXTENSIBLE column. You can specify automatic file extension by specifying an AUTOEXTEND ON clause when you create datafiles using the following SQL statements: ■ CREATE DATABASE ■ ALTER DATABASE ■ CREATE TABLESPACE ■ ALTER TABLESPACE You can enable or disable automatic file extension for existing datafiles, or manually resize a datafile, using the ALTER DATABASE statement. For a bigfile tablespace, you are able to perform these operations using the ALTER TABLESPACE statement. The following example enables automatic extension for a datafile added to the users tablespace: ALTER TABLESPACE users ADD DATAFILE '/u02/oracle/rbdb1/users03.dbf' SIZE 10M AUTOEXTEND ON NEXT 512K MAXSIZE 250M; The value of NEXT is the minimum size of the increments added to the file when it extends. The value of MAXSIZE is the maximum size to which the file can automatically extend. The next example disables the automatic extension for the datafile. ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' Managing Datafiles and Tempfiles 9-5 Altering Datafile Availability AUTOEXTEND OFF; Oracle Database SQL Reference for more information about the SQL statements for creating or altering datafiles See Also: Manually Resizing a Datafile You can manually increase or decrease the size of a datafile using the ALTER DATABASE statement. This enables you to add more space to your database without adding more datafiles. This is beneficial if you are concerned about reaching the maximum number of datafiles allowed in your database. For a bigfile tablespace you can use the ALTER TABLESPACE statement to resize a datafile. You are not allowed to add a datafile to a bigfile tablespace. Manually reducing the sizes of datafiles enables you to reclaim unused space in the database. This is useful for correcting errors in estimates of space requirements. In the next example, assume that the datafile /u02/oracle/rbdb1/stuff01.dbf has extended up to 250M. However, because its tablespace now stores smaller objects, the datafile can be reduced in size. The following statement decreases the size of datafile /u02/oracle/rbdb1/stuff01.dbf: ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' RESIZE 100M; It is not always possible to decrease the size of a file to a specific value. It could be that the file contains data beyond the specified decreased size, in which case the database will return an error. Note: Altering Datafile Availability You can alter the availability of individual datafiles or tempfiles by taking them offline or bringing them online. Offline datafiles are unavailable to the database and cannot be accessed until they are brought back online. 
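For example, a quick way to see which datafiles are currently unavailable is a query along the following lines (a sketch against the V$DATAFILE view; the status values in the predicate are the usual OFFLINE and RECOVER states):

SELECT FILE#, NAME, STATUS
  FROM V$DATAFILE
 WHERE STATUS IN ('OFFLINE', 'RECOVER');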
Reasons for altering datafile availability include the following: ■ ■ ■ ■ You want to perform an offline backup of a datafile. You want to rename or relocate a datafile. You must first take it offline or take the tablespace offline. The database has problems writing to a datafile and automatically takes the datafile offline. Later, after resolving the problem, you can bring the datafile back online manually. A datafile becomes missing or corrupted. You must take it offline before you can open the database. The datafiles of a read-only tablespace can be taken offline or brought online, but bringing a file online does not affect the read-only status of the tablespace. You cannot write to the datafile until the tablespace is returned to the read/write state. 9-6 Oracle Database Administrator’s Guide Altering Datafile Availability You can make all datafiles of a tablespace temporarily unavailable by taking the tablespace itself offline. You must leave these files in the tablespace to bring the tablespace back online, although you can relocate or rename them following procedures similar to those shown in "Renaming and Relocating Datafiles" on page 9-8. Note: For more information, see "Taking Tablespaces Offline" on page 8-13. To take a datafile offline or bring it online, you must have the ALTER DATABASE system privilege. To take all datafiles or tempfiles offline using the ALTER TABLESPACE statement, you must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege. In an Oracle Real Application Clusters environment, the database must be open in exclusive mode. This section describes ways to alter datafile availability, and contains the following topics: ■ Bringing Datafiles Online or Taking Offline in ARCHIVELOG Mode ■ Taking Datafiles Offline in NOARCHIVELOG Mode ■ Altering the Availability of All Datafiles or Tempfiles in a Tablespace Bringing Datafiles Online or Taking Offline in ARCHIVELOG Mode To bring an individual datafile online, issue the ALTER DATABASE statement and include the DATAFILE clause.The following statement brings the specified datafile online: ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' ONLINE; To take the same file offline, issue the following statement: ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' OFFLINE; Note: To use this form of the ALTER DATABASE statement, the database must be in ARCHIVELOG mode. This requirement prevents you from accidentally losing the datafile, since taking the datafile offline while in NOARCHIVELOG mode is likely to result in losing the file. Taking Datafiles Offline in NOARCHIVELOG Mode To take a datafile offline when the database is in NOARCHIVELOG mode, use the ALTER DATABASE statement with both the DATAFILE and OFFLINE FOR DROP clauses. ■ ■ The OFFLINE keyword causes the database to mark the datafile OFFLINE, whether or not it is corrupted, so that you can open the database. The FOR DROP keywords mark the datafile for subsequent dropping. Such a datafile can no longer be brought back online. Managing Datafiles and Tempfiles 9-7 Renaming and Relocating Datafiles Note: This operation does not actually drop the datafile. It remains in the data dictionary, and you must drop it yourself using one of the following methods: ■ An ALTER TABLESPACE ... DROP DATAFILE statement. After an OFFLINE FOR DROP, this method works for dictionary managed tablespaces only. ■ ■ A DROP TABLESPACE ... INCLUDING CONTENTS AND DATAFILES statement If the preceding methods fail, an operating system command to delete the datafile. 
This is the least desirable method, as it leaves references to the datafile in the data dictionary and control files. The following statement takes the specified datafile offline and marks it to be dropped: ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' OFFLINE FOR DROP; Altering the Availability of All Datafiles or Tempfiles in a Tablespace Clauses of the ALTER TABLESPACE statement allow you to change the online or offline status of all of the datafiles or tempfiles within a tablespace. Specifically, the statements that affect online/offline status are: ■ ALTER TABLESPACE ... DATAFILE {ONLINE|OFFLINE} ■ ALTER TABLESPACE ... TEMPFILE {ONLINE|OFFLINE} You are required only to enter the tablespace name, not the individual datafiles or tempfiles. All of the datafiles or tempfiles are affected, but the online/offline status of the tablespace itself is not changed. In most cases the preceding ALTER TABLESPACE statements can be issued whenever the database is mounted, even if it is not open. However, the database must not be open if the tablespace is the SYSTEM tablespace, an undo tablespace, or the default temporary tablespace. The ALTER DATABASE DATAFILE and ALTER DATABASE TEMPFILE statements also have ONLINE/OFFLINE clauses, however in those statements you must enter all of the filenames for the tablespace. The syntax is different from the ALTER TABLESPACE...ONLINE|OFFLINE statement that alters tablespace availability, because that is a different operation. The ALTER TABLESPACE statement takes datafiles offline as well as the tablespace, but it cannot be used to alter the status of a temporary tablespace or its tempfile(s). Renaming and Relocating Datafiles You can rename datafiles to either change their names or relocate them. Some possible procedures for doing this are described in the following sections: ■ Procedures for Renaming and Relocating Datafiles in a Single Tablespace ■ Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces When you rename and relocate datafiles with these procedures, only the pointers to the datafiles, as recorded in the database control file, are changed. The procedures do not physically rename any operating system files, nor do they copy files at the 9-8 Oracle Database Administrator’s Guide Renaming and Relocating Datafiles operating system level. Renaming and relocating datafiles involves several steps. Read the steps and examples carefully before performing these procedures. Procedures for Renaming and Relocating Datafiles in a Single Tablespace The section suggests some procedures for renaming and relocating datafiles that can be used for a single tablespace. You must have ALTER TABLESPACE system privileges. See Also: "Taking Tablespaces Offline" on page 8-13 for more information about taking tablespaces offline in preparation for renaming or relocating datafiles Procedure for Renaming Datafiles in a Single Tablespace To rename datafiles in a single tablespace, complete the following steps: 1. Take the tablespace that contains the datafiles offline. The database must be open. For example: ALTER TABLESPACE users OFFLINE NORMAL; 2. Rename the datafiles using the operating system. 3. Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the filenames within the database. 
For example, the following statement renames the datafiles /u02/oracle/rbdb1/user1.dbf and /u02/oracle/rbdb1/user2.dbf to /u02/oracle/rbdb1/users01.dbf and /u02/oracle/rbdb1/users02.dbf, respectively:

ALTER TABLESPACE users
    RENAME DATAFILE '/u02/oracle/rbdb1/user1.dbf',
                    '/u02/oracle/rbdb1/user2.dbf'
                 TO '/u02/oracle/rbdb1/users01.dbf',
                    '/u02/oracle/rbdb1/users02.dbf';

Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile name exactly as it appears in the DBA_DATA_FILES view of the data dictionary.

4. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

Procedure for Relocating Datafiles in a Single Tablespace Here is a sample procedure for relocating a datafile. Assume the following conditions: ■ An open database has a tablespace named users that is made up of datafiles all located on the same disk. ■ The datafiles of the users tablespace are to be relocated to different and separate disk drives. ■ You are currently connected with administrator privileges to the open database. ■ You have a current backup of the database. Complete the following steps:

1. If you do not know the specific file names or sizes, you can obtain this information by issuing the following query of the data dictionary view DBA_DATA_FILES:

SQL> SELECT FILE_NAME, BYTES FROM DBA_DATA_FILES
  2> WHERE TABLESPACE_NAME = 'USERS';

FILE_NAME                                      BYTES
------------------------------------------ ----------
/u02/oracle/rbdb1/users01.dbf               102400000
/u02/oracle/rbdb1/users02.dbf               102400000

2. Take the tablespace containing the datafiles offline:

ALTER TABLESPACE users OFFLINE NORMAL;

3. Copy the datafiles to their new locations and rename them using the operating system. You can copy the files using the DBMS_FILE_TRANSFER package discussed in "Copying Files Using the Database Server" on page 9-12.

Note: You can temporarily exit SQL*Plus to execute an operating system command to copy a file by using the SQL*Plus HOST command.

4. Rename the datafiles within the database. The datafile pointers for the files that make up the users tablespace, recorded in the control file of the associated database, must now be changed from the old names to the new names. Use the ALTER TABLESPACE...RENAME DATAFILE statement.

ALTER TABLESPACE users
    RENAME DATAFILE '/u02/oracle/rbdb1/users01.dbf',
                    '/u02/oracle/rbdb1/users02.dbf'
                 TO '/u03/oracle/rbdb1/users01.dbf',
                    '/u04/oracle/rbdb1/users02.dbf';

5. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces You can rename and relocate datafiles in one or more tablespaces using the ALTER DATABASE RENAME FILE statement. This method is the only choice if you want to rename or relocate datafiles of several tablespaces in one operation. You must have the ALTER DATABASE system privilege.

Note: To rename or relocate datafiles of the SYSTEM tablespace, the default temporary tablespace, or the active undo tablespace, you must use this ALTER DATABASE method because you cannot take these tablespaces offline.

To rename datafiles in multiple tablespaces, follow these steps.

1. Ensure that the database is mounted but closed.

Note: Optionally, the database does not have to be closed, but the datafiles (or tempfiles) must be offline.

2. Copy the datafiles to be renamed to their new locations and new names, using the operating system. You can copy the files using the DBMS_FILE_TRANSFER package discussed in "Copying Files Using the Database Server" on page 9-12.

3. Use ALTER DATABASE to rename the file pointers in the database control file. For example, the following statement renames the datafiles /u02/oracle/rbdb1/sort01.dbf and /u02/oracle/rbdb1/user3.dbf to /u02/oracle/rbdb1/temp01.dbf and /u02/oracle/rbdb1/users03.dbf, respectively:

ALTER DATABASE
    RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
                '/u02/oracle/rbdb1/user3.dbf'
             TO '/u02/oracle/rbdb1/temp01.dbf',
                '/u02/oracle/rbdb1/users03.dbf';

Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile names exactly as they appear in the DBA_DATA_FILES view.

4. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

Dropping Datafiles You use the DROP DATAFILE and DROP TEMPFILE clauses of the ALTER TABLESPACE command to drop a single datafile or tempfile. The datafile must be empty. (A datafile is considered to be empty when no extents remain allocated from it.) When you drop a datafile or tempfile, references to the datafile or tempfile are removed from the data dictionary and control files, and the physical file is deleted from the file system or Automatic Storage Management (ASM) disk group.

The following example drops the datafile identified by the alias example_df3.f in the ASM disk group DGROUP1. The datafile belongs to the example tablespace.

ALTER TABLESPACE example DROP DATAFILE '+DGROUP1/example_df3.f';

The next example drops the tempfile lmtemp02.dbf, which belongs to the lmtemp tablespace.

ALTER TABLESPACE lmtemp DROP TEMPFILE '/u02/oracle/data/lmtemp02.dbf';

This is equivalent to the following statement:

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;

See Oracle Database SQL Reference for ALTER TABLESPACE syntax details.

Restrictions for Dropping Datafiles The following are restrictions for dropping datafiles and tempfiles: ■ The database must be open. ■ If a datafile is not empty, it cannot be dropped. If you must remove a datafile that is not empty and that cannot be made empty by dropping schema objects, you must drop the tablespace that contains the datafile. ■ You cannot drop the first or only datafile in a tablespace. This means that DROP DATAFILE cannot be used with a bigfile tablespace. ■ You cannot drop datafiles in a read-only tablespace. ■ You cannot drop datafiles in the SYSTEM tablespace. ■ If a datafile in a locally managed tablespace is offline, it cannot be dropped. See Also: "Dropping Tablespaces" on page 8-20

Verifying Data Blocks in Datafiles If you want to configure the database to use checksums to verify data blocks, set the initialization parameter DB_BLOCK_CHECKSUM to TRUE. This causes the DBWn process and the direct loader to calculate a checksum for each block and to store the checksum in the block header when writing the block to disk. The checksum is verified when the block is read, but only if DB_BLOCK_CHECKSUM is TRUE and the last write of the block stored a checksum.
If corruption is detected, the database returns message ORA-01578 and writes information about the corruption to the alert log. The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can be changed dynamically using the ALTER SYSTEM statement. Regardless of the setting of this parameter, checksums are always used to verify data blocks in the SYSTEM tablespace. See Also: Oracle Database Reference for more information about the DB_BLOCK_CHECKSUM initialization parameter Copying Files Using the Database Server You do not necessarily have to use the operating system to copy a file within a database, or transfer a file between databases as you would do when using the transportable tablespace feature. You can use the DBMS_FILE_TRANSFER package, or you can use Streams propagation. Using Streams is not discussed in this book, but an example of using the DBMS_FILE_TRANSFER package is shown in "Copying a File on a Local File System" on page 9-13. The DBMS_FILE_TRANSFER package can use a local file system or an Automatic Storage Management (ASM) disk group as the source or destination for a file transfer. Only Oracle database files (datafiles, tempfiles, controlfiles, and so on) can be involved in transfers to and from ASM. Caution: Do not use the DBMS_FILE_TRANSFER package to copy or transfer a file that is being modified by a database because doing so may result in an inconsistent file. On UNIX systems, the owner of a file created by the DBMS_FILE_TRANSFER package is the owner of the shadow process running the instance. Normally, this owner is ORACLE. A file created using DBMS_FILE_TRANSFER is always writable and readable by all processes in the database, but non privileged users who need to read or write such a file directly may need access from a system administrator. 9-12 Oracle Database Administrator’s Guide Copying Files Using the Database Server This section contains the following topics: ■ Copying a File on a Local File System ■ Third-Party File Transfer ■ File Transfer and the DBMS_SCHEDULER Package ■ Advanced File Transfer Mechanisms See Also: ■ Oracle Streams Concepts and Administration ■ "Transporting Tablespaces Between Databases" on page 8-25 ■ Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_FILE_TRANSFER package. Copying a File on a Local File System This section includes an example that uses the COPY_FILE procedure in the DBMS_FILE_TRANSFER package to copy a file on a local file system. The following example copies a binary file named db1.dat from the /usr/admin/source directory to the /usr/admin/destination directory as db1_copy.dat on a local file system: 1. In SQL*Plus, connect as an administrative user who can grant privileges and create directory objects using SQL. 2. Use the SQL command CREATE DIRECTORY to create a directory object for the directory from which you want to copy the file. A directory object is similar to an alias for the directory. For example, to create a directory object called SOURCE_DIR for the /usr/admin/source directory on your computer system, execute the following statement: CREATE DIRECTORY SOURCE_DIR AS '/usr/admin/source'; 3. Use the SQL command CREATE DIRECTORY to create a directory object for the directory into which you want to copy the binary file. For example, to create a directory object called DEST_DIR for the /usr/admin/destination directory on your computer system, execute the following statement: CREATE DIRECTORY DEST_DIR AS '/usr/admin/destination'; 4. 
Grant the required privileges to the user who will run the COPY_FILE procedure. In this example, the strmadmin user runs the procedure.

GRANT EXECUTE ON DBMS_FILE_TRANSFER TO strmadmin;
GRANT READ ON DIRECTORY source_dir TO strmadmin;
GRANT WRITE ON DIRECTORY dest_dir TO strmadmin;

5. Connect as strmadmin user:

CONNECT strmadmin/strmadminpw

6. Run the COPY_FILE procedure to copy the file:

BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SOURCE_DIR',
    source_file_name             => 'db1.dat',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'db1_copy.dat');
END;
/

Caution: Do not use the DBMS_FILE_TRANSFER package to copy or transfer a file that is being modified by a database because doing so may result in an inconsistent file.

Third-Party File Transfer Although the procedures in the DBMS_FILE_TRANSFER package typically are invoked as local procedure calls, they can also be invoked as remote procedure calls. A remote procedure call lets you copy a file within a database even when you are connected to a different database. For example, you can make a copy of a file on database DB, even if you are connected to another database, by executing the following remote procedure call:

DBMS_FILE_TRANSFER.COPY_FILE@DB(...)

Using remote procedure calls enables you to copy a file between two databases, even if you are not connected to either database. For example, you can connect to database A and then transfer a file from database B to database C. In this example, database A is the third party because it is neither the source of nor the destination for the transferred file. A third-party file transfer can both push and pull a file. Continuing with the previous example, you can perform a third-party file transfer if you have a database link from A to either B or C, and that database has a database link to the other database. Database A does not need a database link to both B and C. For example, if you have a database link from A to B, and another database link from B to C, then you can run the following procedure at A to transfer a file from B to C:

DBMS_FILE_TRANSFER.PUT_FILE@B(...)

This configuration pushes the file. Alternatively, if you have a database link from A to C, and another database link from C to B, then you can run the following procedure at database A to transfer a file from B to C:

DBMS_FILE_TRANSFER.GET_FILE@C(...)

This configuration pulls the file.

File Transfer and the DBMS_SCHEDULER Package You can use the DBMS_SCHEDULER package to transfer files automatically within a single database and between databases. Third-party file transfers are also supported by the DBMS_SCHEDULER package. You can monitor a long-running file transfer done by the Scheduler using the V$SESSION_LONGOPS dynamic performance view at the databases reading or writing the file. Any database links used by a Scheduler job must be fixed user database links.

You can use a restartable Scheduler job to improve the reliability of file transfers automatically, especially if there are intermittent failures. If a file transfer fails before the destination file is closed, then you can restart the file transfer from the beginning once the database has removed any partially written destination file. Hence you should consider using a restartable Scheduler job to transfer a file if the rest of the job is restartable.
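As an illustration only, a restartable Scheduler job that wraps a DBMS_FILE_TRANSFER call might be created as in the following sketch. The job name, directory objects, filenames, and source database link are hypothetical placeholders:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'transfer_db1_job',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN
                     DBMS_FILE_TRANSFER.GET_FILE(
                       source_directory_object      => ''SOURCE_DIR'',
                       source_file_name             => ''db1.dat'',
                       source_database              => ''dbs1'',
                       destination_directory_object => ''DEST_DIR'',
                       destination_file_name        => ''db1.dat'');
                   END;',
    enabled    => FALSE,
    comments   => 'Pulls db1.dat from the remote database dbs1');
  DBMS_SCHEDULER.SET_ATTRIBUTE('transfer_db1_job', 'restartable', TRUE);
  DBMS_SCHEDULER.ENABLE('transfer_db1_job');
END;
/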
See Chapter 27, "Using the Scheduler" for more information on Scheduler jobs. If a single restartable job transfers several files, then you should consider restart scenarios in which some of the files have been transferred already and some have not been transferred yet. Note: Advanced File Transfer Mechanisms You can create more sophisticated file transfer mechanisms using both the DBMS_FILE_TRANSFER package and the DBMS_SCHEDULER package. For example, when several databases have a copy of the file you want to transfer, you can consider factors such as source availability, source load, and communication bandwidth to the destination database when deciding which source database to contact first and which source databases to try if failures occur. In this case, the information about these factors must be available to you, and you must create the mechanism that considers these factors. As another example, when early completion time is more important than load, you can submit a number of Scheduler jobs to transfer files in parallel. As a final example, knowing something about file layout on the source and destination databases enables you to minimize disk contention by performing or scheduling simultaneous transfers only if they use different I/O devices. Mapping Files to Physical Devices In an environment where datafiles are simply file system files or are created directly on a raw device, it is relatively straight forward to see the association between a tablespace and the underlying device. Oracle Database provides views, such as DBA_TABLESPACES, DBA_DATA_FILES, and V$DATAFILE, that provide a mapping of files onto devices. These mappings, along with device statistics can be used to evaluate I/O performance. However, with the introduction of host based Logical Volume Managers (LVM), and sophisticated storage subsystems that provide RAID (Redundant Array of Inexpensive Disks) features, it is not easy to determine file to device mapping. This poses a problem because it becomes difficult to determine your "hottest" files when they are hidden behind a "black box". This section presents the Oracle Database approach to resolving this problem. The following topics are contained in this section: ■ Overview of Oracle Database File Mapping Interface ■ How the Oracle Database File Mapping Interface Works ■ Using the Oracle Database File Mapping Interface ■ File Mapping Examples Managing Datafiles and Tempfiles 9-15 Mapping Files to Physical Devices This section presents an overview of the Oracle Database file mapping interface and explains how to use the DBMS_STORAGE_MAP package and dynamic performance views to expose the mapping of files onto physical devices. You can more easily access this functionality through the Oracle Enterprise Manager (EM). It provides an easy to use graphical interface for mapping files to physical devices. Note: Overview of Oracle Database File Mapping Interface To acquire an understanding of I/O performance, one must have detailed knowledge of the storage hierarchy in which files reside. Oracle Database provides a mechanism to show a complete mapping of a file to intermediate layers of logical volumes to actual physical devices. This is accomplished though a set of dynamic performance views (V$ views). Using these views, you can locate the exact disk on which any block of a file resides. To build these views, storage vendors must provide mapping libraries that are responsible for mapping their particular I/O stack elements. 
The database communicates with these libraries through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. Oracle provides a PL/SQL package, DBMS_STORAGE_MAP, that you use to invoke mapping operations that populate the mapping views. The file mapping interface is not available on Windows platforms. Note: How the Oracle Database File Mapping Interface Works This section describes the components of the Oracle Database file mapping interface and how the interface works. It contains the following topics: ■ Components of File Mapping ■ Mapping Structures ■ Example of Mapping Structures ■ Configuration ID Components of File Mapping The following figure shows the components of the file mapping mechanism. 9-16 Oracle Database Administrator’s Guide Mapping Files to Physical Devices Figure 9–1 Components of File Mapping mapping lib0 Oracle Instance SGA FMON FMPUTL mapping lib1 External Process . . . mapping libn The following sections briefly describes these components and how they work together to populate the mapping views: ■ FMON ■ External Process (FMPUTL) ■ Mapping Libraries FMON FMON is a background process started by the database whenever the FILE_MAPPING initialization parameter is set to TRUE. FMON is responsible for: ■ Building mapping information, which is stored in the SGA. This information is composed of the following structures: – Files – File system extents – Elements – Subelements These structures are explained in "Mapping Structures" on page 9-18. ■ ■ ■ Refreshing mapping information when a change occurs because of: – Changes to datafiles (size) – Addition or deletion of datafiles – Changes to the storage configuration (not frequent) Saving mapping information in the data dictionary to maintain a view of the information that is persistent across startup and shutdown operations Restoring mapping information into the SGA at instance startup. This avoids the need for a potentially expensive complete rebuild of the mapping information on every instance startup. You help control this mapping using procedures that are invoked with the DBMS_STORAGE_MAP package. External Process (FMPUTL) FMON spawns an external non-Oracle Database process called FMPUTL, that communicates directly with the vendor supplied mapping libraries. This process obtains the mapping information through all levels of the I/O stack, assuming that mapping libraries exist for all levels. On some platforms the external process requires that the SETUID bit is set to ON because root privileges are needed to map through all levels of the I/O mapping stack. Managing Datafiles and Tempfiles 9-17 Mapping Files to Physical Devices The external process is responsible for discovering the mapping libraries and dynamically loading them into its address space. Mapping Libraries Oracle Database uses mapping libraries to discover mapping information for the elements that are owned by a particular mapping library. Through these mapping libraries information about individual I/O stack elements is communicated. This information is used to populate dynamic performance views that can be queried by users. Mapping libraries need to exist for all levels of the stack for the mapping to be complete, and different libraries may own their own parts of the I/O mapping stack. 
For example, a VERITAS VxVM library would own the stack elements related to the VERITAS Volume Manager, and an EMC library would own all EMC storage specific layers of the I/O mapping stack. Mapping libraries are vendor supplied. However, Oracle currently supplies a mapping library for EMC storage. The mapping libraries available to a database server are identified in a special file named filemap.ora. Mapping Structures The mapping structures and the Oracle Database representation of these structures are described in this section. You will need to understand this information in order to interpret the information in the mapping views. The following are the primary structures that compose the mapping information: ■ Files A file mapping structure provides a set of attributes for a file, including file size, number of file system extents that the file is composed of, and the file type. ■ File system extents A file system extent mapping structure describes a contiguous chunk of blocks residing on one element. This includes the device offset, the extent size, the file offset, the type (data or parity), and the name of the element where the extent resides. File system extents are not the same as Oracle Database extents. File system extents are physical contiguous blocks of data written to a device as managed by the file system. Oracle Database extents are logical structures managed by the database, such as tablespace extents. Note: ■ Elements An element mapping structure is the abstract mapping structure that describes a storage component within the I/O stack. Elements may be mirrors, stripes, partitions, RAID5, concatenated elements, and disks. These structures are the mapping building blocks. ■ Subelements A subelement mapping structure describes the link between an element and the next elements in the I/O mapping stack. This structure contains the subelement number, size, the element name where the subelement exists, and the element offset. All of these mapping structures are illustrated in the following example. 9-18 Oracle Database Administrator’s Guide Mapping Files to Physical Devices Example of Mapping Structures Consider an Oracle Database which is composed of two data files X and Y. Both files X and Y reside on a file system mounted on volume A. File X is composed of two extents while file Y is composed of only one extent. The two extents of File X and the one extent of File Y both map to Element A. Element A is striped to Elements B and C. Element A maps to Elements B and C by way of Subelements B0 and C1, respectively. Element B is a partition of Element D (a physical disk), and is mapped to Element D by way of subelement D0. Element C is mirrored over Elements E and F (both physical disks), and is mirrored to those physical disks by way of Subelements E0 and F1, respectively. All of the mapping structures are illustrated in Figure 9–2. Figure 9–2 Illustration of Mapping Structures File X File Extent 1 File Extent 2 File Extent 1 File Y Element A Sub B0 Sub C1 Element B Sub D0 Element D Element C Sub E0 Element E Sub F1 Element F Note that the mapping structures represented are sufficient to describe the entire mapping information for the Oracle Database instance and consequently to map every logical block within the file into a (element name, element offset) tuple (or more in case of mirroring) at each level within the I/O stack. Configuration ID The configuration ID captures the version information associated with elements or files. 
The vendor library provides the configuration ID and updates it whenever a change occurs. Without a configuration ID, there is no way for the database to tell whether the mapping has changed. There are two kinds of configuration IDs: Managing Datafiles and Tempfiles 9-19 Mapping Files to Physical Devices ■ Persistent These configuration IDs are persistent across instance shutdown ■ Non-persistent The configuration IDs are not persistent across instance shutdown. The database is only capable of refreshing the mapping information while the instance is up. Using the Oracle Database File Mapping Interface This section discusses how to use the Oracle Database file mapping interface. It contains the following topics: ■ Enabling File Mapping ■ Using the DBMS_STORAGE_MAP Package ■ Obtaining Information from the File Mapping Views Enabling File Mapping The following steps enable the file mapping feature: 1. Ensure that a valid filemap.ora file exists in the /opt/ORCLfmap/prot1_32/etc directory for 32-bit platforms, or in the /opt/ORCLfmap/prot1_64/etc directory for 64-bit platforms. Caution: While the format and content of the filemap.ora file is discussed here, it is for informational reasons only. The filemap.ora file is created by the database when your system is installed. Until such time that vendors supply there own libraries, there will be only one entry in the filemap.ora file, and that is the Oracle-supplied EMC library. This file should be modified manually by uncommenting this entry only if an EMC Symmetrix array is available. The filemap.ora file is the configuration file that describes all of the available mapping libraries. FMON requires that a filemap.ora file exists and that it points to a valid path to mapping libraries. Otherwise, it will not start successfully. The following row needs to be included in filemap.ora for each library: lib=vendor_name:mapping_library_path where: ■ vendor_name should be Oracle for the EMC Symmetric library ■ mapping_library_path is the full path of the mapping library Note that the ordering of the libraries in this file is extremely important. The libraries are queried based on their order in the configuration file. The file mapping service can be even started even if no mapping libraries are available. The filemap.ora file still needs to be present even though it is empty. In this case, the mapping service is constrained in the sense that new mapping information cannot be discovered. Only restore and drop operations are allowed in such a configuration. 2. Set the FILE_MAPPING initialization parameter to TRUE. 9-20 Oracle Database Administrator’s Guide Mapping Files to Physical Devices The instance does not have to be shut down to set this parameter. You can set it using the following ALTER SYSTEM statement: ALTER SYSTEM SET FILE_MAPPING=TRUE; 3. Invoke the appropriate DBMS_STORAGE_MAP mapping procedure. You have two options: ■ ■ In a cold startup scenario, the Oracle Database is just started and no mapping operation has been invoked yet. You execute the DBMS_STORAGE_MAP.MAP_ALL procedure to build the mapping information for the entire I/O subsystem associated with the database. In a warm start scenario where the mapping information is already built, you have the option to invoke the DBMS_STORAGE_MAP.MAP_SAVE procedure to save the mapping information in the data dictionary. (Note that this procedure is invoked in DBMS_STORAGE_MAP.MAP_ALL() by default.) This forces all of the mapping information in the SGA to be flushed to disk. 
Once you restart the database, use DBMS_STORAGE_MAP.RESTORE() to restore the mapping information into the SGA. If needed, DBMS_STORAGE_MAP.MAP_ALL() can be called to refresh the mapping information. Using the DBMS_STORAGE_MAP Package The DBMS_STORAGE_MAP package enables you to control the mapping operations. The various procedures available to you are described in the following table. Procedure Use to: MAP_OBJECT Build the mapping information for the database object identified by object name, owner, and type MAP_ELEMENT Build mapping information for the specified element MAP_FILE Build mapping information for the specified filename MAP_ALL Build entire mapping information for all types of database files (excluding archive logs) DROP_ELEMENT Drop the mapping information for a specified element DROP_FILE Drop the file mapping information for the specified filename DROP_ALL Drop all mapping information in the SGA for this instance SAVE Save into the data dictionary the required information needed to regenerate the entire mapping RESTORE Load the entire mapping information from the data dictionary into the shared memory of the instance LOCK_MAP Lock the mapping information in the SGA for this instance UNLOCK_MAP Unlock the mapping information in the SGA for this instance See Also: ■ ■ Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_STORAGE_MAP package "File Mapping Examples" on page 9-23 for an example of using the DBMS_STORAGE_MAP package Managing Datafiles and Tempfiles 9-21 Mapping Files to Physical Devices Obtaining Information from the File Mapping Views Mapping information generated by DBMS_STORAGE_MAP package is captured in dynamic performance views. Brief descriptions of these views are presented here. View Description V$MAP_LIBRARY Contains a list of all mapping libraries that have been dynamically loaded by the external process V$MAP_FILE Contains a list of all file mapping structures in the shared memory of the instance V$MAP_FILE_EXTENT Contains a list of all file system extent mapping structures in the shared memory of the instance V$MAP_ELEMENT Contains a list of all element mapping structures in the SGA of the instance V$MAP_EXT_ELEMENT Contains supplementary information for all element mapping V$MAP_SUBELEMENT Contains a list of all subelement mapping structures in the shared memory of the instance V$MAP_COMP_LIST Contains supplementary information for all element mapping structures. V$MAP_FILE_IO_STACK The hierarchical arrangement of storage containers for the file displayed as a series of rows. Each row represents a level in the hierarchy. Oracle Database Reference for a complete description of the dynamic performance views See Also: However, the information generated by the DBMS_STORAGE_MAP.MAP_OBJECT procedure is captured in a global temporary table named MAP_OBJECT. This table displays the hierarchical arrangement of storage containers for objects. Each row in the table represents a level in the hierarchy. A description of the MAP_OBJECT table follows. 
Column Datatype OBJECT_NAME VARCHAR2(2000) Name of the object OBJECT_OWNER VARCHAR2(2000) Owner of the object OBJECT_TYPE VARCHAR2(2000) Object type FILE_MAP_IDX NUMBER File index (corresponds to FILE_MAP_IDX in V$MAP_FILE) DEPTH NUMBER Element depth within the I/O stack ELEM_IDX NUMBER Index corresponding to element CU_SIZE NUMBER Contiguous set of logical blocks of the file, in HKB (half KB) units, that is resident contiguously on the element STRIDE NUMBER Number of HKB between contiguous units (CU) in the file that are contiguous on this element. Used in RAID5 and striped files. NUM_CU NUMBER Number of contiguous units that are adjacent to each other on this element that are separated by STRIDE HKB in the file. In RAID5, the number of contiguous units also include the parity stripes. 9-22 Oracle Database Administrator’s Guide Description Mapping Files to Physical Devices Column Datatype Description ELEM_OFFSET NUMBER Element offset in HKB units FILE_OFFSET NUMBER Offset in HKB units from the start of the file to the first byte of the contiguous units DATA_TYPE VARCHAR2(2000) Datatype (DATA, PARITY, or DATA AND PARITY) PARITY_POS NUMBER Position of the parity. Only for RAID5. This field is needed to distinguish the parity from the data part. Parity period. Only for RAID5. PARITY_PERIOD NUMBER File Mapping Examples The following examples illustrates some of the powerful capabilities of the Oracle Database file mapping feature. This includes: ■ The ability to map all the database files that span a particular device ■ The ability to map a particular file into its corresponding devices ■ The ability to map a particular database object, including its block distribution at all levels within the I/O stack Consider an Oracle Database instance which is composed of two datafiles: ■ t_db1.f ■ t_db2.f These files are created on a Solaris UFS file system mounted on a VERITAS VxVM host based striped volume, /dev/vx/dsk/ipfdg/ipf-vol1, that consists of the following host devices as externalized from an EMC Symmetrix array: ■ /dev/vx/rdmp/c2t1d0s2 ■ /dev/vx/rdmp/c2t1d1s2 Note that the following examples require the execution of a MAP_ALL() operation. 
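For instance, with the FILE_MAPPING initialization parameter set to TRUE, the mapping information for these examples could be built with a call such as the following; the numeric argument, an upper bound on the number of file system extents mapped per file, is shown only as an illustration:

EXECUTE DBMS_STORAGE_MAP.MAP_ALL(10000);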
Example 1: Map All Database Files that Span a Device The following query returns all Oracle Database files associated with the /dev/vx/rdmp/c2t1d1s2 host device: SELECT UNIQUE me.ELEM_NAME, mf.FILE_NAME FROM V$MAP_FILE_IO_STACK fs, V$MAP_FILE mf, V$MAP_ELEMENT me WHERE mf.FILE_MAP_IDX = fs.FILE_MAP_IDX AND me.ELEM_IDX = fs.ELEM_IDX AND me.ELEM_NAME = '/dev/vx/rdmp/c2t1d1s2'; The query results are: ELEM_NAME -----------------------/dev/vx/rdmp/c2t1d1s2 /dev/vx/rdmp/c2t1d1s2 FILE_NAME -------------------------------/oracle/dbs/t_db1.f /oracle/dbs/t_db2.f Example 2: Map a File into Its Corresponding Devices The following query displays a topological graph of the /oracle/dbs/t_db1.f datafile: WITH fv AS Managing Datafiles and Tempfiles 9-23 Mapping Files to Physical Devices (SELECT FILE_MAP_IDX, FILE_NAME FROM V$MAP_FILE WHERE FILE_NAME = '/oracle/dbs/t_db1.f') SELECT fv.FILE_NAME, LPAD(' ', 4 * (LEVEL - 1)) || el.ELEM_NAME ELEM_NAME FROM V$MAP_SUBELEMENT sb, V$MAP_ELEMENT el, fv, (SELECT UNIQUE ELEM_IDX FROM V$MAP_FILE_IO_STACK io, fv WHERE io.FILE_MAP_IDX = fv.FILE_MAP_IDX) fs WHERE el.ELEM_IDX = sb.CHILD_IDX AND fs.ELEM_IDX = el.ELEM_IDX START WITH sb.PARENT_IDX IN (SELECT DISTINCT ELEM_IDX FROM V$MAP_FILE_EXTENT fe, fv WHERE fv.FILE_MAP_IDX = fe.FILE_MAP_IDX) CONNECT BY PRIOR sb.CHILD_IDX = sb.PARENT_IDX; The resulting topological graph is: FILE_NAME ----------------------/oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f ELEM_NAME ------------------------------------------------_sym_plex_/dev/vx/rdsk/ipfdg/ipf-vol1_-1_-1 _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_0_0 /dev/vx/rdmp/c2t1d0s2 _sym_symdev_000183600407_00C _sym_hyper_000183600407_00C_0 _sym_hyper_000183600407_00C_1 _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_1_0 /dev/vx/rdmp/c2t1d1s2 _sym_symdev_000183600407_00D _sym_hyper_000183600407_00D_0 _sym_hyper_000183600407_00D_1 Example 3: Map a Database Object This example displays the block distribution at all levels within the I/O stack for the scott.bonus table. A MAP_OBJECT() operation must first be executed as follows: EXECUTE DBMS_STORAGE_MAP.MAP_OBJECT('BONUS','SCOTT','TABLE'); The query is as follows: SELECT io.OBJECT_NAME o_name, io.OBJECT_OWNER o_owner, io.OBJECT_TYPE o_type, mf.FILE_NAME, me.ELEM_NAME, io.DEPTH, (SUM(io.CU_SIZE * (io.NUM_CU - DECODE(io.PARITY_PERIOD, 0, 0, TRUNC(io.NUM_CU / io.PARITY_PERIOD)))) / 2) o_size FROM MAP_OBJECT io, V$MAP_ELEMENT me, V$MAP_FILE mf WHERE io.OBJECT_NAME = 'BONUS' AND io.OBJECT_OWNER = 'SCOTT' AND io.OBJECT_TYPE = 'TABLE' AND me.ELEM_IDX = io.ELEM_IDX AND mf.FILE_MAP_IDX = io.FILE_MAP_IDX GROUP BY io.ELEM_IDX, io.FILE_MAP_IDX, me.ELEM_NAME, mf.FILE_NAME, io.DEPTH, io.OBJECT_NAME, io.OBJECT_OWNER, io.OBJECT_TYPE ORDER BY io.DEPTH; The following is the result of the query. Note that the o_size column is expressed in KB. 
O_NAME -----BONUS BONUS O_OWNER ------SCOTT SCOTT O_TYPE -----TABLE TABLE FILE_NAME ------------------/oracle/dbs/t_db1.f /oracle/dbs/t_db1.f 9-24 Oracle Database Administrator’s Guide ELEM_NAME ----------------------------/dev/vx/dsk/ipfdg/ipf-vol1 _sym_plex_/dev/vx/rdsk/ipf DEPTH -----0 1 O_SIZE -----20 20 Viewing Datafile Information BONUS SCOTT TABLE /oracle/dbs/t_db1.f BONUS SCOTT TABLE /oracle/dbs/t_db1.f BONUS BONUS BONUS BONUS BONUS BONUS BONUS BONUS SCOTT SCOTT SCOTT SCOTT SCOTT SCOTT SCOTT SCOTT TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f /oracle/dbs/t_db1.f pdg/if-vol1_-1_-1 _sym_subdisk_/dev/vx/rdsk/ ipfdg/ipf-vol1_0_1_0 _sym_subdisk_/dev/vx/rdsk/ipf dg/ipf-vol1_0_2_0 /dev/vx/rdmp/c2t1d1s2 /dev/vx/rdmp/c2t1d2s2 _sym_symdev_000183600407_00D _sym_symdev_000183600407_00E _sym_hyper_000183600407_00D_0 _sym_hyper_000183600407_00D_1 _sym_hyper_000183600407_00E_0 _sym_hyper_000183600407_00E_1 2 12 2 8 3 3 4 4 5 5 6 6 12 8 12 8 12 12 8 8 Viewing Datafile Information The following data dictionary views provide useful information about the datafiles of a database: View Description DBA_DATA_FILES Provides descriptive information about each datafile, including the tablespace to which it belongs and the file ID. The file ID can be used to join with other views for detail information. DBA_EXTENTS DBA view describes the extents comprising all segments in the database. Contains the file ID of the datafile containing the extent. USER view describes extents of the segments belonging to objects owned by the current user. USER_EXTENTS DBA_FREE_SPACE USER_FREE_SPACE DBA view lists the free extents in all tablespaces. Includes the file ID of the datafile containing the extent. USER view lists the free extents in the tablespaces accessible to the current user. V$DATAFILE Contains datafile information from the control file V$DATAFILE_HEADER Contains information from datafile headers This example illustrates the use of one of these views, V$DATAFILE. SELECT NAME, FILE#, STATUS, CHECKPOINT_CHANGE# "CHECKPOINT" FROM V$DATAFILE; NAME -------------------------------/u01/oracle/rbdb1/system01.dbf /u02/oracle/rbdb1/temp01.dbf /u02/oracle/rbdb1/users03.dbf FILE# ----1 2 3 STATUS ------SYSTEM ONLINE OFFLINE CHECKPOINT ---------3839 3782 3782 FILE# lists the file number of each datafile; the first datafile in the SYSTEM tablespace created with the database is always file 1. STATUS lists other information about a datafile. If a datafile is part of the SYSTEM tablespace, its status is SYSTEM (unless it requires recovery). If a datafile in a non-SYSTEM tablespace is online, its status is ONLINE. If a datafile in a non-SYSTEM tablespace is offline, its status can be either OFFLINE or RECOVER. CHECKPOINT lists the final SCN (system change number) written for the most recent checkpoint of a datafile. Managing Datafiles and Tempfiles 9-25 Viewing Datafile Information See Also: Oracle Database Reference for complete descriptions of these views 9-26 Oracle Database Administrator’s Guide 10 Managing the Undo Tablespace This chapter describes how to manage the undo tablespace, which stores information used to roll back changes to the Oracle Database. It contains the following topics: ■ What Is Undo? 
■ Introduction to Automatic Undo Management ■ Setting the Undo Retention Period ■ Sizing the Undo Tablespace ■ Managing Undo Tablespaces ■ Migrating to Automatic Undo Management ■ Viewing Information About Undo See Also: Part III, "Automated File and Storage Management" for information about creating an undo tablespace whose datafiles are both created and managed by the Oracle Database server. What Is Undo? Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo. Undo records are used to: ■ Roll back transactions when a ROLLBACK statement is issued ■ Recover the database ■ Provide read consistency ■ Analyze data as of an earlier point in time by using Oracle Flashback Query ■ Recover from logical corruptions using Oracle Flashback features When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it. Managing the Undo Tablespace 10-1 Introduction to Automatic Undo Management Introduction to Automatic Undo Management This section introduces the concepts of Automatic Undo Management and discusses the following topics: ■ Overview of Automatic Undo Management ■ Undo Retention Overview of Automatic Undo Management Oracle provides a fully automated mechanism, referred to as automatic undo management, for managing undo information and space. In this management mode, you create an undo tablespace, and the server automatically manages undo segments and space among the various active sessions. You set the UNDO_MANAGEMENT initialization parameter to AUTO to enable automatic undo management. A default undo tablespace is then created at database creation. An undo tablespace can also be created explicitly. The methods of creating an undo tablespace are explained in "Creating an Undo Tablespace" on page 10-7. When the instance starts, the database automatically selects the first available undo tablespace. If no undo tablespace is available, then the instance starts without an undo tablespace and stores undo records in the SYSTEM tablespace. This is not recommended in normal circumstances, and an alert message is written to the alert log file to warn that the system is running without an undo tablespace. If the database contains multiple undo tablespaces, you can optionally specify at startup that you want to use a specific undo tablespace. This is done by setting the UNDO_TABLESPACE initialization parameter, as shown in this example: UNDO_TABLESPACE = undotbs_01 In this case, if you have not already created the undo tablespace (in this example, undotbs_01), the STARTUP command fails. The UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an instance in an Oracle Real Application Clusters environment. The following is a summary of the initialization parameters for automatic undo management: Initialization Parameter Description UNDO_MANAGEMENT If AUTO, use automatic undo management. The default is MANUAL. 
UNDO_TABLESPACE An optional dynamic parameter specifying the name of an undo tablespace. This parameter should be used only when the database has multiple undo tablespaces and you want to direct the database instance to use a particular undo tablespace. When automatic undo management is enabled, if the initialization parameter file contains parameters relating to manual undo management, they are ignored. See Also: Oracle Database Reference for complete descriptions of initialization parameters used in automatic undo management 10-2 Oracle Database Administrator’s Guide Introduction to Automatic Undo Management Undo Retention After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible. When automatic undo management is enabled, there is always a current undo retention period, which is the minimum amount of time that Oracle Database attempts to retain old undo information before overwriting it. Old (committed) undo information that is older than the current undo retention period is said to be expired. Old undo information with an age that is less than the current undo retention period is said to be unexpired. Oracle Database automatically tunes the undo retention period based on undo tablespace size and system activity. You can specify a minimum undo retention period (in seconds) by setting the UNDO_RETENTION initialization parameter. The database makes its best effort to honor the specified minimum undo retention period, provided that the undo tablespace has space available for new transactions. When available space for new transactions becomes short, the database begins to overwrite expired undo. If the undo tablespace has no space for new transactions after all expired undo is overwritten, the database may begin overwriting unexpired undo information. If any of this overwritten undo information is required for consistent read in a current long-running query, the query could fail with the snapshot too old error message. The following points explain the exact impact of the UNDO_RETENTION parameter on undo retention: ■ ■ The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The database may overwrite unexpired undo information when tablespace space becomes low. For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor the minimum retention period specified by UNDO_RETENTION. When space is low, instead of overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is specified for an auto-extending undo tablespace, when the maximum size is reached, the database may begin to overwrite unexpired undo information. Retention Guarantee To guarantee the success of long-running queries or Oracle Flashback operations, you can enable retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is guaranteed; the database never overwrites unexpired undo data even if it means that transactions fail due to lack of space in the undo tablespace. 
If retention guarantee is not enabled, the database can overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This option is disabled by default. WARNING: Enabling retention guarantee can cause multiple DML operations to fail. Use with caution. You enable retention guarantee by specifying the RETENTION GUARANTEE clause for the undo tablespace when you create it with either the CREATE DATABASE or CREATE UNDO TABLESPACE statement. Or, you can later specify this clause in an ALTER Managing the Undo Tablespace 10-3 Introduction to Automatic Undo Management TABLESPACE statement. You disable retention guarantee with the RETENTION NOGUARANTEE clause. You can use the DBA_TABLESPACES view to determine the retention guarantee setting for the undo tablespace. A column named RETENTION contains a value of GUARANTEE, NOGUARANTEE, or NOT APPLY (used for tablespaces other than the undo tablespace). Automatic Tuning of Undo Retention Oracle Database automatically tunes the undo retention period based on how the undo tablespace is configured. ■ ■ If the undo tablespace is fixed size, the database tunes the retention period for the best possible undo retention for that tablespace size and the current system load. This tuned retention period can be significantly greater than the specified minimum retention period. If the undo tablespace is configured with the AUTOEXTEND option, the database tunes the undo retention period to be somewhat longer than the longest-running query on the system at that time. Again, this tuned retention period can be greater than the specified minimum retention period. Automatic tuning of undo retention is not supported for LOBs. This is because undo information for LOBs is stored in the segment itself and not in the undo tablespace. For LOBs, the database attempts to honor the minimum undo retention period specified by UNDO_RETENTION. However, if space becomes low, unexpired LOB undo information may be overwritten. Note: You can determine the current retention period by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT view. This view contains one row for each 10-minute statistics collection interval over the last 4 days. (Beyond 4 days, the data is available in the DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given in seconds. select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time, to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention from v$undostat order by end_time; BEGIN_TIME --------------04-FEB-05 00:01 ... 07-FEB-05 23:21 07-FEB-05 23:31 07-FEB-05 23:41 07-FEB-05 23:51 END_TIME TUNED_UNDORETENTION --------------- ------------------04-FEB-05 00:11 12100 07-FEB-05 07-FEB-05 07-FEB-05 07-FEB-05 23:31 23:41 23:51 23:52 86700 86700 86700 86700 576 rows selected. See Oracle Database Reference for more information about V$UNDOSTAT. Undo Retention Tuning and Alert Thresholds For a fixed size undo tablespace, the database calculates the maximum undo retention period based on database statistics and on the size of the undo tablespace. For optimal undo management, rather than tuning based on 100% of the tablespace size, the database tunes the undo retention period based on 85% of the tablespace size, or on the warning alert threshold 10-4 Oracle Database Administrator’s Guide Sizing the Undo Tablespace percentage for space used, whichever is lower. (The warning alert threshold defaults to 85%, but can be changed.) 
Therefore, if you set the warning alert threshold of the undo tablespace below 85%, this may reduce the tuned length of the undo retention period. For more information on tablespace alert thresholds, see "Managing Tablespace Alerts" on page 14-1. Setting the Undo Retention Period You set the undo retention period by setting the UNDO_RETENTION initialization parameter. This parameter specifies the desired minimum undo retention period in seconds. As described in "Undo Retention" on page 10-3, the current undo retention period may be automatically tuned to be greater than UNDO_RETENTION, or, unless retention guarantee is enabled, less than UNDO_RETENTION if space is low. To set the undo retention period: ■ Do one of the following: – Set UNDO_RETENTION in the initialization parameter file. UNDO_RETENTION = 1800 – Change UNDO_RETENTION at any time using the ALTER SYSTEM statement: ALTER SYSTEM SET UNDO_RETENTION = 2400; The effect of an UNDO_RETENTION parameter change is immediate, but it can only be honored if the current undo tablespace has enough space. Sizing the Undo Tablespace You can size the undo tablespace appropriately either by using automatic extension of the undo tablespace or by using the Undo Advisor for a fixed sized tablespace. Using Auto-Extensible Tablespaces Oracle Database supports automatic extension of the undo tablespace to facilitate capacity planning of the undo tablespace in the production environment. When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension of the undo tablespace so that it automatically increases in size when more space is needed. You do so by including the AUTOEXTEND keyword when you create the undo tablespace. Sizing Fixed-Size Undo Tablespaces If you have decided on a fixed-size undo tablespace, the Undo Advisor can help you estimate needed capacity. You can access the Undo Advisor through Enterprise Manager or through the DBMS_ADVISOR PL/SQL package. Enterprise Manager is the preferred method of accessing the advisor. For more information on using the Undo Advisor through Enterprise Manager, please refer to Oracle Database 2 Day DBA. The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository (AWR). It is therefore important that the AWR have adequate workload statistics available so that the Undo Advisor can make accurate recommendations. For newly created databases, adequate statistics may not be available immediately. In such cases, an auto-extensible undo tablespace can be used. Managing the Undo Tablespace 10-5 Managing Undo Tablespaces An adjustment to the collection interval and retention period for AWR statistics can affect the precision and the type of recommendations that the advisor produces. See "Automatic Workload Repository" on page 1-21 for more information. To use the Undo Advisor, you first estimate these two values: ■ The length of your expected longest running query After the database has been up for a while, you can view the Longest Running Query field on the Undo Management page of Enterprise Manager. ■ The longest interval that you will require for flashback operations For example, if you expect to run Flashback Queries for up to 48 hours in the past, your flashback requirement is 48 hours. You then take the maximum of these two undo retention values and use that value to look up the required undo tablespace size on the Undo Advisor graph. 
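If you want a rough starting estimate outside of the advisor, one common rule of thumb is to multiply the undo retention you require (in seconds) by the peak rate of undo generation (undo blocks per second, from V$UNDOSTAT) and by the database block size. The following query is only a sketch of that calculation; the 3600-second retention target is an example value, not a recommendation.

SELECT 3600 * MAX(undoblks / ((end_time - begin_time) * 86400))
            * (SELECT TO_NUMBER(value) FROM v$parameter WHERE name = 'db_block_size')
            / 1024 / 1024 AS est_undo_mb        -- estimated undo tablespace size in MB
FROM v$undostat;

Because V$UNDOSTAT covers only the last 4 days of activity, run the estimate during a period that reflects your peak workload, and treat the result as a lower bound rather than a precise size.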
The Undo Advisor PL/SQL Interface

You can activate the Undo Advisor by creating an undo advisor task through the advisor framework. The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'. The analysis is based on Automatic Workload Repository snapshots, which you must specify by setting the parameters START_SNAPSHOT and END_SNAPSHOT. In the following example, START_SNAPSHOT is "1" and END_SNAPSHOT is "2".

DECLARE
   tid    NUMBER;
   tname  VARCHAR2(30);
   oid    NUMBER;
BEGIN
   DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
   DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
   DBMS_ADVISOR.execute_task(tname);
END;
/

After you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor in Enterprise Manager. This information is also available in the DBA_ADVISOR_* data dictionary views.

See Also:
■ Oracle Database 2 Day DBA for more information on using advisors, and "Using the Segment Advisor" on page 14-16 for an example of creating an advisor task for a different advisor
■ Oracle Database Reference for information about the DBA_ADVISOR_* data dictionary views

Managing Undo Tablespaces

This section describes the various steps involved in undo tablespace management and contains the following sections:
■ Creating an Undo Tablespace
■ Altering an Undo Tablespace
■ Dropping an Undo Tablespace
■ Switching Undo Tablespaces
■ Establishing User Quotas for Undo Space

Creating an Undo Tablespace

There are two methods of creating an undo tablespace. The first method creates the undo tablespace when the CREATE DATABASE statement is issued. This occurs when you are creating a new database and the instance is started in automatic undo management mode (UNDO_MANAGEMENT = AUTO). The second method is used with an existing database. It uses the CREATE UNDO TABLESPACE statement.

You cannot create database objects in an undo tablespace. It is reserved for system-managed undo data.

Oracle Database enables you to create a single-file undo tablespace. Single-file, or bigfile, tablespaces are discussed in "Bigfile Tablespaces" on page 8-6.

Using CREATE DATABASE to Create an Undo Tablespace

You can create a specific undo tablespace using the UNDO TABLESPACE clause of the CREATE DATABASE statement. The following statement illustrates using the UNDO TABLESPACE clause in a CREATE DATABASE statement. The undo tablespace is named undotbs_01 and one datafile, /u01/oracle/rbdb1/undo0101.dbf, is allocated for it.

CREATE DATABASE rbdb1
   CONTROLFILE REUSE
   .
   .
   .
   UNDO TABLESPACE undotbs_01 DATAFILE '/u01/oracle/rbdb1/undo0101.dbf';

If the undo tablespace cannot be created successfully during CREATE DATABASE, the entire CREATE DATABASE operation fails. You must clean up the database files, correct the error, and retry the CREATE DATABASE operation.

The CREATE DATABASE statement also lets you create a single-file undo tablespace at database creation. This is discussed in "Supporting Bigfile Tablespaces During Database Creation" on page 2-16.
Oracle Database SQL Reference for the syntax for using the CREATE DATABASE statement to create an undo tablespace See Also: Using the CREATE UNDO TABLESPACE Statement The CREATE UNDO TABLESPACE statement is the same as the CREATE TABLESPACE statement, but the UNDO keyword is specified. The database determines most of the attributes of the undo tablespace, but you can specify the DATAFILE clause. This example creates the undotbs_02 undo tablespace with the AUTOEXTEND option: CREATE UNDO TABLESPACE undotbs_02 DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE AUTOEXTEND ON; Managing the Undo Tablespace 10-7 Managing Undo Tablespaces You can create more than one undo tablespace, but only one of them can be active at any one time. Oracle Database SQL Reference for the syntax for using the CREATE UNDO TABLESPACE statement to create an undo tablespace See Also: Altering an Undo Tablespace Undo tablespaces are altered using the ALTER TABLESPACE statement. However, since most aspects of undo tablespaces are system managed, you need only be concerned with the following actions: ■ Adding a datafile ■ Renaming a datafile ■ Bringing a datafile online or taking it offline ■ Beginning or ending an open backup on a datafile ■ Enabling and disabling undo retention guarantee These are also the only attributes you are permitted to alter. If an undo tablespace runs out of space, or you want to prevent it from doing so, you can add more files to it or resize existing datafiles. The following example adds another datafile to undo tablespace undotbs_01: ALTER TABLESPACE undotbs_01 ADD DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED; You can use the ALTER DATABASE...DATAFILE statement to resize or extend a datafile. See Also: ■ "Changing Datafile Size" on page 9-5 ■ Oracle Database SQL Reference for ALTER TABLESPACE syntax Dropping an Undo Tablespace Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace undotbs_01: DROP TABLESPACE undotbs_01; An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an undo tablespace even if it contains unexpired undo information (within retention period), you must be careful not to drop an undo tablespace if undo information is needed by some existing queries. DROP TABLESPACE for undo tablespaces behaves like DROP TABLESPACE...INCLUDING CONTENTS. All contents of the undo tablespace are removed. See Also: Oracle Database SQL Reference for DROP TABLESPACE syntax 10-8 Oracle Database Administrator’s Guide Managing Undo Tablespaces Switching Undo Tablespaces You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace. The following statement switches to a new undo tablespace: ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02; Assuming undotbs_01 is the current undo tablespace, after this command successfully executes, the instance uses undotbs_02 in place of undotbs_01 as its undo tablespace. 
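After switching, one simple way to confirm which undo tablespace the instance is now using is to query V$PARAMETER (or use the SQL*Plus SHOW PARAMETER command). For example, a quick check:

SELECT value AS current_undo_tablespace
FROM   v$parameter
WHERE  name = 'undo_tablespace';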
If any of the following conditions exist for the tablespace being switched to, an error is reported and no switching occurs: ■ The tablespace does not exist ■ The tablespace is not an undo tablespace ■ The tablespace is already being used by another instance (in a RAC environment only) The database is online while the switch operation is performed, and user transactions can be executed while this command is being executed. When the switch operation completes successfully, all transactions started after the switch operation began are assigned to transaction tables in the new undo tablespace. The switch operation does not wait for transactions in the old undo tablespace to commit. If there are any pending transactions in the old undo tablespace, the old undo tablespace enters into a PENDING OFFLINE mode (status). In this mode, existing transactions can continue to execute, but undo records for new user transactions cannot be stored in this undo tablespace. An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch operation completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor can it be dropped. Eventually, after all active transactions have committed, the undo tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo tablespace is available for other instances (in an Oracle Real Application Cluster environment). If the parameter value for UNDO TABLESPACE is set to '' (two single quotes), then the current undo tablespace is switched out and the next available undo tablespace is switched in. Use this statement with care because there may be no undo tablespace available. The following example unassigns the current undo tablespace: ALTER SYSTEM SET UNDO_TABLESPACE = ''; Establishing User Quotas for Undo Space The Oracle Database Resource Manager can be used to establish user quotas for undo space. The Database Resource Manager directive UNDO_POOL allows DBAs to limit the amount of undo space consumed by a group of users (resource consumer group). You can specify an undo pool for each consumer group. An undo pool controls the amount of total undo that can be generated by a consumer group. When the total undo generated by a consumer group exceeds its undo limit, the current UPDATE transaction generating the undo is terminated. No other members of the consumer group can perform further updates until undo space is freed from the pool. Managing the Undo Tablespace 10-9 Migrating to Automatic Undo Management When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo space. See Also: Chapter 24, "Using the Database Resource Manager" Migrating to Automatic Undo Management If you are currently using rollback segments to manage undo space, Oracle strongly recommends that you migrate your database to automatic undo management. Oracle Database provides a function that provides information on how to size your new undo tablespace based on the configuration and usage of the rollback segments in your system. DBA privileges are required to execute this function: DECLARE utbsiz_in_MB NUMBER; BEGIN utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION; end; / The function returns the sizing information directly. Viewing Information About Undo This section lists views that are useful for viewing information about undo space in the automatic undo management mode and provides some examples. 
In addition to views listed here, you can obtain information from the views available for viewing tablespace and datafile information. Please refer to "Viewing Datafile Information" on page 9-25 for information on getting information about those views. Oracle Database also provides proactive help in managing tablespace disk space use by alerting you when tablespaces run low on available space. Please refer to "Managing Tablespace Alerts" on page 14-1 for information on how to set alert thresholds for the undo tablespace. In addition to the proactive undo space alerts, Oracle Database also provides alerts if your system has long-running queries that cause SNAPSHOT TOO OLD errors. To prevent excessive alerts, the long query alert is issued at most once every 24 hours. When the alert is generated, you can check the Undo Advisor Page of Enterprise Manager to get more information about the undo tablespace. The following dynamic performance views are useful for obtaining space information about the undo tablespace: View Description V$UNDOSTAT Contains statistics for monitoring and tuning undo space. Use this view to help estimate the amount of undo space required for the current workload. The database also uses this information to help tune undo usage in the system. This view is meaningful only in automatic undo management mode. V$ROLLSTAT For automatic undo management mode, information reflects behavior of the undo segments in the undo tablespace V$TRANSACTION Contains undo segment information DBA_UNDO_EXTENTS Shows the status and size of each extent in the undo tablespace. 10-10 Oracle Database Administrator’s Guide Viewing Information About Undo View Description DBA_HIST_UNDOSTAT Contains statistical snapshots of V$UNDOSTAT information. Please refer to Oracle Database 2 Day DBA for more information. See Also: Oracle Database Reference for complete descriptions of the views used in automatic undo management mode The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space in the current instance. Statistics are available for undo space consumption, transaction concurrency, the tuning of undo retention, and the length and SQL ID of long-running queries in the instance. Each row in the view contains statistics collected in the instance for a ten-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 576 rows, spanning a 4 day cycle. The following example shows the results of a query on the V$UNDOSTAT view. SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME, TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME, UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON" FROM v$UNDOSTAT WHERE rownum <= 144; BEGIN_TIME ------------------10/28/2004 14:25:12 10/28/2004 14:15:12 10/28/2004 14:05:12 10/28/2004 13:55:12 ... 10/27/2004 14:45:12 10/27/2004 14:35:12 END_TIME UNDOTSN UNDOBLKS TXNCOUNT MAXCON ------------------- ---------- ---------- ---------- ---------10/28/2004 14:32:17 8 74 12071108 3 10/28/2004 14:25:12 8 49 12070698 2 10/28/2004 14:15:12 8 125 12070220 1 10/28/2004 14:05:12 8 99 12066511 3 10/27/2004 14:55:12 10/27/2004 14:45:12 8 8 15 154 11831676 11831165 1 2 144 rows selected. 
The preceding example shows how undo space is consumed in the system for the previous 24 hours from the time 14:35:12 on 10/27/2004. Managing the Undo Tablespace 10-11 Viewing Information About Undo 10-12 Oracle Database Administrator’s Guide Part III Automated File and Storage Management Part III describes how to use the Oracle-managed files and Automatic Storage Management features to simplify management of your database files. You use the Oracle-managed files feature to specify file system directories in which Oracle automatically creates and manages files for you at the database object level. For example, you need only specify that you want to create a tablespace, you do not need to specify the DATAFILE clause. This feature works well with a logical volume manager (LVM). Automatic Storage Management provides a logical volume manager that is integrated into the Oracle database and eliminates the need for you to purchase a third party product. Oracle creates Oracle-managed files within "disk groups" that you specify and provides redundancy and striping. This part contains the following chapters: ■ Chapter 11, "Using Oracle-Managed Files" ■ Chapter 12, "Using Automatic Storage Management" 11 Using Oracle-Managed Files This chapter discusses the use of the Oracle-managed files and contains the following topics: ■ What Are Oracle-Managed Files? ■ Enabling the Creation and Use of Oracle-Managed Files ■ Creating Oracle-Managed Files ■ Behavior of Oracle-Managed Files ■ Scenarios for Using Oracle-Managed Files What Are Oracle-Managed Files? Using Oracle-managed files simplifies the administration of an Oracle Database. Oracle-managed files eliminate the need for you, the DBA, to directly manage the operating system files comprising an Oracle Database. You specify operations in terms of database objects rather than filenames. The database internally uses standard file system interfaces to create and delete files as needed for the following database structures: ■ Tablespaces ■ Redo log files ■ Control files ■ Archived logs ■ Block change tracking files ■ Flashback logs ■ RMAN backups Through initialization parameters, you specify the file system directory to be used for a particular type of file. The database then ensures that a unique file, an Oracle-managed file, is created and deleted when no longer needed. This feature does not affect the creation or naming of administrative files such as trace files, audit files, alert logs, and core files. Using Oracle-Managed Files 11-1 What Are Oracle-Managed Files? See Also: Chapter 12, "Using Automatic Storage Management" for information about the Oracle Database integrated storage management system that extends the power of Oracle-managed files. With Oracle-managed files, files are created and managed automatically for you, but with Automatic Storage Management you get the additional benefits of features such as file redundancy and striping, without the need to purchase a third-party logical volume manager. Who Can Use Oracle-Managed Files? Oracle-managed files are most useful for the following types of databases: ■ ■ Databases that are supported by the following: – A logical volume manager that supports striping/RAID and dynamically extensible logical volumes – A file system that provides large, extensible files Low end or test databases The Oracle-managed files feature is not intended to ease administration of systems that use raw disks. This feature provides better integration with operating system functionality for disk space allocation. 
Since there is no operating system support for allocation of raw disks (it is done manually), this feature cannot help. On the other hand, because Oracle-managed files require that you use the operating system file system (unlike raw disks), you lose control over how files are laid out on the disks and thus, you lose some I/O tuning ability. What Is a Logical Volume Manager? A logical volume manager (LVM) is a software package available with most operating systems. Sometimes it is called a logical disk manager (LDM). It allows pieces of multiple physical disks to be combined into a single contiguous address space that appears as one disk to higher layers of software. An LVM can make the logical volume have better capacity, performance, reliability, and availability characteristics than any of the underlying physical disks. It uses techniques such as mirroring, striping, concatenation, and RAID 5 to implement these characteristics. Some LVMs allow the characteristics of a logical volume to be changed after it is created, even while it is in use. The volume may be resized or mirrored, or it may be relocated to different physical disks. What Is a File System? A file system is a data structure built inside a contiguous disk address space. A file manager (FM) is a software package that manipulates file systems, but it is sometimes called the file system. All operating systems have file managers. The primary task of a file manager is to allocate and deallocate disk space into files within a file system. A file system allows the disk space to be allocated to a large number of files. Each file is made to appear as a contiguous address space to applications such as Oracle Database. The files may not actually be contiguous within the disk space of the file system. Files can be created, read, written, resized, and deleted. Each file has a name associated with it that is used to refer to the file. A file system is commonly built on top of a logical volume constructed by an LVM. Thus all the files in a particular file system have the same performance, reliability, and availability characteristics inherited from the underlying logical volume. A file system 11-2 Oracle Database Administrator’s Guide Enabling the Creation and Use of Oracle-Managed Files is a single pool of storage that is shared by all the files in the file system. If a file system is out of space, then none of the files in that file system can grow. Space available in one file system does not affect space in another file system. However some LVM/FM combinations allow space to be added or removed from a file system. An operating system can support multiple file systems. Multiple file systems are constructed to give different storage characteristics to different files as well as to divide the available disk space into pools that do not affect each other. Benefits of Using Oracle-Managed Files Consider the following benefits of using Oracle-managed files: ■ They make the administration of the database easier. There is no need to invent filenames and define specific storage requirements. A consistent set of rules is used to name all relevant files. The file system defines the characteristics of the storage and the pool where it is allocated. ■ They reduce corruption caused by administrators specifying the wrong file. Each Oracle-managed file and filename is unique. Using the same file in two different databases is a common mistake that can cause very large down times and loss of committed transactions. 
Using two different names that refer to the same file is another mistake that causes major corruptions. ■ They reduce wasted disk space consumed by obsolete files. Oracle Database automatically removes old Oracle-managed files when they are no longer needed. Much disk space is wasted in large systems simply because no one is sure if a particular file is still required. This also simplifies the administrative task of removing files that are no longer required on disk and prevents the mistake of deleting the wrong file. ■ They simplify creation of test and development databases. You can minimize the time spent making decisions regarding file structure and naming, and you have fewer file management tasks. You can focus better on meeting the actual requirements of your test or development database. ■ Oracle-managed files make development of portable third-party tools easier. Oracle-managed files eliminate the need to put operating system specific file names in SQL scripts. Oracle-Managed Files and Existing Functionality Using Oracle-managed files does not eliminate any existing functionality. Existing databases are able to operate as they always have. New files can be created as managed files while old ones are administered in the old way. Thus, a database can have a mixture of Oracle-managed and unmanaged files. Enabling the Creation and Use of Oracle-Managed Files The following initialization parameters allow the database server to use the Oracle-managed files feature: Using Oracle-Managed Files 11-3 Enabling the Creation and Use of Oracle-Managed Files Initialization Parameter Description DB_CREATE_FILE_DEST Defines the location of the default file system directory where the database creates datafiles or tempfiles when no file specification is given in the creation operation. Also used as the default file system directory for redo log and control files if DB_CREATE_ONLINE_LOG_DEST_n is not specified. DB_CREATE_ONLINE_LOG_DEST_n Defines the location of the default file system directory for redo log files and control file creation when no file specification is given in the creation operation. You can use this initialization parameter multiple times, where n specifies a multiplexed copy of the redo log or control file. You can specify up to five multiplexed copies. DB_RECOVERY_FILE_DEST Defines the location of the default file system directory where the database creates RMAN backups when no format option is used, archived logs when no other local destination is configured, and flashback logs. Also used as the default file system directory for redo log and control files if DB_CREATE_ONLINE_LOG_DEST_n is not specified. The file system directory specified by either of these parameters must already exist: the database does not create it. The directory must also have permissions to allow the database to create the files in it. The default location is used whenever a location is not explicitly specified for the operation creating the file. The database creates the filename, and a file thus created is an Oracle-managed file. Both of these initialization parameters are dynamic, and can be set using the ALTER SYSTEM or ALTER SESSION statement. 
See Also: ■ ■ Oracle Database Reference for additional information about initialization parameters "How Oracle-Managed Files Are Named" on page 11-6 Setting the DB_CREATE_FILE_DEST Initialization Parameter Include the DB_CREATE_FILE_DEST initialization parameter in your initialization parameter file to identify the default location for the database server to create: ■ Datafiles ■ Tempfiles ■ Redo log files ■ Control files ■ Block change tracking files You specify the name of a file system directory that becomes the default location for the creation of the operating system files for these entities. The following example sets /u01/oradata as the default directory to use when creating Oracle-managed files: DB_CREATE_FILE_DEST = '/u01/oradata' 11-4 Oracle Database Administrator’s Guide Creating Oracle-Managed Files Setting the DB_RECOVERY_FILE_DEST Parameter Include the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters in your initialization parameter file to identify the default location in which Oracle Database should create: ■ Redo log files ■ Control files ■ RMAN backups (datafile copies, control file copies, backup pieces, control file autobackups) ■ Archived logs ■ Flashback logs You specify the name of file system directory that becomes the default location for creation of the operating system files for these entities. For example: DB_RECOVERY_FILE_DEST = '/u01/oradata' DB_RECOVERY_FILE_DEST_SIZE = 20G Setting the DB_CREATE_ONLINE_LOG_DEST_n Initialization Parameter Include the DB_CREATE_ONLINE_LOG_DEST_n initialization parameter in your initialization parameter file to identify the default location for the database server to create: ■ Redo log files ■ Control files You specify the name of a file system directory that becomes the default location for the creation of the operating system files for these entities. You can specify up to five multiplexed locations. For the creation of redo log files and control files only, this parameter overrides any default location specified in the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters. If you do not specify a DB_CREATE_FILE_DEST parameter, but you do specify the DB_CREATE_ONLINE_LOG_DEST_n parameter, then only redo log files and control files can be created as Oracle-managed files. It is recommended that you specify at least two parameters. For example: DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata' DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata' This allows multiplexing, which provides greater fault-tolerance for the redo log and control file if one of the destinations fails. Creating Oracle-Managed Files If you have met any of the following conditions, then Oracle Database creates Oracle-managed files for you, as appropriate, when no file specification is given in the creation operation: ■ You have included any of the DB_CREATE_FILE_DEST, DB_REDOVERY_FILE_DEST, or DB_CREATE_ONLINE_LOG_DEST_n initialization parameters in your initialization parameter file. Using Oracle-Managed Files 11-5 Creating Oracle-Managed Files ■ ■ You have issued the ALTER SYSTEM statement to dynamically set any of DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST, or DB_CREATE_ONLINE_LOG_DEST_n initialization parameters You have issued the ALTER SESSION statement to dynamically set any of the DB_CREATE_FILE_DEST, DB_RECOVERY_FILE_DEST, or DB_CREATE_ONLINE_LOG_DEST_n initialization parameters. 
If a statement that creates an Oracle-managed file encounters an error or does not complete due to some failure, then any Oracle-managed files created by the statement are automatically deleted as part of the recovery of the error or failure. However, because of the large number of potential errors that can occur with file systems and storage subsystems, there can be situations where you must manually remove the files using operating system commands.

When an Oracle-managed file is created, its filename is written to the alert log. This information can be used to find the file if it is necessary to manually remove the file.

The following topics are discussed in this section:
■ How Oracle-Managed Files Are Named
■ Creating Oracle-Managed Files at Database Creation
■ Creating Datafiles for Tablespaces Using Oracle-Managed Files
■ Creating Tempfiles for Temporary Tablespaces Using Oracle-Managed Files
■ Creating Control Files Using Oracle-Managed Files
■ Creating Redo Log Files Using Oracle-Managed Files
■ Creating Archived Logs Using Oracle-Managed Files

How Oracle-Managed Files Are Named

The filenames of Oracle-managed files comply with the Optimal Flexible Architecture (OFA) standard for file naming. The assigned names are intended to meet the following requirements:
■ Database files are easily distinguishable from all other files.
■ Files of one database type are easily distinguishable from other database types.
■ Files are clearly associated with important attributes specific to the file type. For example, a datafile name may include the tablespace name to allow for easy association of datafile to tablespace, or an archived log name may include the thread, sequence, and creation date.

No two Oracle-managed files are given the same name. The name that is used for creation of an Oracle-managed file is constructed from three sources:
■ The default creation location
■ A file name template that is chosen based on the type of the file. The template also depends on the operating system platform and whether or not Automatic Storage Management is used.
■ A unique string created by the Oracle Database server or the operating system. This ensures that file creation does not damage an existing file and that the file cannot be mistaken for some other file.

As a specific example, filenames for Oracle-managed files have the following format on a Solaris file system:

<destination_prefix>/o1_mf_%t_%u_.dbf

where:
■ <destination_prefix> is <destination_location>/<db_unique_name>/<file_type> (for datafiles, the file type directory is datafile), where:
– <destination_location> is the location specified in DB_CREATE_FILE_DEST
– <db_unique_name> is the globally unique name (DB_UNIQUE_NAME initialization parameter) of the target database. If there is no DB_UNIQUE_NAME parameter, then the DB_NAME initialization parameter value is used.
■ %t is the tablespace name.
■ %u is an eight-character string that guarantees uniqueness.

For example, assume the following parameter settings:

DB_CREATE_FILE_DEST = /u01/oradata
DB_UNIQUE_NAME = PAYROLL

Then an example datafile name would be:

/u01/oradata/PAYROLL/datafile/o1_mf_tbs1_2ixh90q_.dbf

Names for other file types are similar. Names on other platforms are also similar, subject to the constraints of the naming rules of the platform. The examples on the following pages use Oracle-managed file names as they might appear with a Solaris file system as an OMF destination.

Caution: Do not rename an Oracle-managed file. The database identifies an Oracle-managed file based on its name.
If you rename the file, the database is no longer able to recognize it as an Oracle-managed file and will not manage the file accordingly. Creating Oracle-Managed Files at Database Creation The behavior of the CREATE DATABASE statement for creating database structures when using Oracle-managed files is discussed in this section. Oracle Database SQL Reference for a description of the CREATE DATABASE statement See Also: Specifying Control Files at Database Creation At database creation, the control file is created in the files specified by the CONTROL_FILES initialization parameter. If the CONTROL_FILES parameter is not set and at least one of the initialization parameters required for the creation of Oracle-managed files is set, then an Oracle-managed control file is created in the default control file destinations. In order of precedence, the default destination is defined as follows: ■ One or more control files as specified in the DB_CREATE_ONLINE_LOG_DEST_n initialization parameter. The file in the first directory is the primary control file. When DB_CREATE_ONLINE_LOG_DEST_n is specified, the database does not Using Oracle-Managed Files 11-7 Creating Oracle-Managed Files create a control file in DB_CREATE_FILE_DEST or in DB_RECOVERY_FILE_DEST (the flash recovery area). ■ ■ ■ If no value is specified for DB_CREATE_ONLINE_LOG_DEST_n, but values are set for both the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST, then the database creates one control file in each location. The location specified in DB_CREATE_FILE_DEST is the primary control file. If a value is specified only for DB_CREATE_FILE_DEST, then the database creates one control file in that location. If a value is specified only for DB_RECOVERY_FILE_DEST, then the database creates one control file in that location. If the CONTROL_FILES parameter is not set and none of these initialization parameters are set, then the Oracle Database default behavior is operating system dependent. At least one copy of a control file is created in an operating system dependent default location. Any copies of control files created in this fashion are not Oracle-managed files, and you must add a CONTROL_FILES initialization parameter to any initialization parameter file. If the database creates an Oracle-managed control file, and if there is a server parameter file, then the database creates a CONTROL_FILES initialization parameter entry in the server parameter file. If there is no server parameter file, then you must manually include a CONTROL_FILES initialization parameter entry in the text initialization parameter file. See Also: Chapter 5, "Managing Control Files" Specifying Redo Log Files at Database Creation The LOGFILE clause is not required in the CREATE DATABASE statement, and omitting it provides a simple means of creating Oracle-managed redo log files. If the LOGFILE clause is omitted, then redo log files are created in the default redo log file destinations. In order of precedence, the default destination is defined as follows: ■ ■ ■ ■ If either the DB_CREATE_ONLINE_LOG_DEST_n is set, then the database creates a log file member in each directory specified, up to the value of the MAXLOGMEMBERS initialization parameter. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is not set, but both the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters are set, then the database creates one Oracle-managed log file member in each of those locations. The log file in the DB_CREATE_FILE_DEST destination is the first member. 
If only the DB_CREATE_FILE_DEST initialization parameter is specified, then the database creates a log file member in that location. If only the DB_RECOVERY_FILE_DEST initialization parameter is specified, then the database creates a log file member in that location. The default size of an Oracle-managed redo log file is 100 MB. Optionally, you can create Oracle-managed redo log files, and override default attributes, by including the LOGFILE clause but omitting a filename. Redo log files are created the same way, except for the following: If no filename is provided in the LOGFILE clause of CREATE DATABASE, and none of the initialization parameters required for creating Oracle-managed files are provided, then the CREATE DATABASE statement fails. 11-8 Oracle Database Administrator’s Guide Creating Oracle-Managed Files See Also: Chapter 6, "Managing the Redo Log" Specifying the SYSTEM and SYSAUX Tablespace Datafiles at Database Creation The DATAFILE or SYSAUX DATAFILE clause is not required in the CREATE DATABASE statement, and omitting it provides a simple means of creating Oracle-managed datafiles for the SYSTEM and SYSAUX tablespaces. If the DATAFILE clause is omitted, then one of the following actions occurs: ■ ■ If DB_CREATE_FILE_DEST is set, then one Oracle-managed datafile for the SYSTEM tablespace and another for the SYSAUX tablespace are created in the DB_CREATE_FILE_DEST directory. If DB_CREATE_FILE_DEST is not set, then the database creates one SYSTEM, and one SYSAUX, tablespace datafile whose name and size are operating system dependent. Any SYSTEM or SYSAUX tablespace datafile created in this manner is not an Oracle-managed file. The default size for an Oracle-managed datafile is 100 MB and the file is autoextensible. When autoextension is required, the database extends the datafile by its existing size or 100 MB, whichever is smaller. You can also explicitly specify the autoextensible unit using the NEXT parameter of the STORAGE clause when you specify the datafile (in a CREATE or ALTER TABLESPACE operation). Optionally, you can create an Oracle-managed datafile for the SYSTEM or SYSAUX tablespace and override default attributes. This is done by including the DATAFILE clause, omitting a filename, but specifying overriding attributes. When a filename is not supplied and the DB_CREATE_FILE_DEST parameter is set, an Oracle-managed datafile for the SYSTEM or SYSAUX tablespace is created in the DB_CREATE_FILE_DEST directory with the specified attributes being overridden. However, if a filename is not supplied and the DB_CREATE_FILE_DEST parameter is not set, then the CREATE DATABASE statement fails. When overriding the default attributes of an Oracle-managed file, if a SIZE value is specified but no AUTOEXTEND clause is specified, then the datafile is not autoextensible. Specifying the Undo Tablespace Datafile at Database Creation The DATAFILE subclause of the UNDO TABLESPACE clause is optional and a filename is not required in the file specification. If a filename is not supplied and the DB_CREATE_FILE_DEST parameter is set, then an Oracle-managed datafile is created in the DB_CREATE_FILE_DEST directory. If DB_CREATE_FILE_DEST is not set, then the statement fails with a syntax error. The UNDO TABLESPACE clause itself is optional in the CREATE DATABASE statement. 
If it is not supplied, and automatic undo management mode is enabled, then a default undo tablespace named SYS_UNDOTBS is created and a 10 MB datafile that is autoextensible is allocated as follows: ■ ■ If DB_CREATE_FILE_DEST is set, then an Oracle-managed datafile is created in the indicated directory. If DB_CREATE_FILE_DEST is not set, then the datafile location is operating system specific. See Also: Chapter 10, "Managing the Undo Tablespace" Using Oracle-Managed Files 11-9 Creating Oracle-Managed Files Specifying the Default Temporary Tablespace Tempfile at Database Creation The TEMPFILE subclause is optional for the DEFAULT TEMPORARY TABLESPACE clause and a filename is not required in the file specification. If a filename is not supplied and the DB_CREATE_FILE_DEST parameter set, then an Oracle-managed tempfile is created in the DB_CREATE_FILE_DEST directory. If DB_CREATE_FILE_DEST is not set, then the CREATE DATABASE statement fails with a syntax error. The DEFAULT TEMPORARY TABLESPACE clause itself is optional. If it is not specified, then no default temporary tablespace is created. The default size for an Oracle-managed tempfile is 100 MB and the file is autoextensible with an unlimited maximum size. CREATE DATABASE Statement Using Oracle-Managed Files: Examples This section contains examples of the CREATE DATABASE statement when using the Oracle-managed files feature. CREATE DATABASE: Example 1 This example creates a database with the following Oracle-managed files: ■ ■ ■ ■ ■ A SYSTEM tablespace datafile in directory /u01/oradata that is 100 MB and autoextensible up to an unlimited size. A SYSAUX tablespace datafile in directory /u01/oradata that is 100 MB and autoextensible up to an unlimited size. The tablespace is locally managed with automatic segment-space management. Two online log groups with two members of 100 MB each, one each in /u02/oradata and /u03/oradata. If automatic undo management mode is enabled, then an undo tablespace datafile in directory /u01/oradata that is 10 MB and autoextensible up to an unlimited size. An undo tablespace named SYS_UNDOTBS is created. If no CONTROL_FILES initialization parameter is specified, then two control files, one each in /u02/oradata and /u03/oradata. The control file in /u02/oradata is the primary control file. The following parameter settings relating to Oracle-managed files, are included in the initialization parameter file: DB_CREATE_FILE_DEST = '/u01/oradata' DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata' DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata' The following statement is issued at the SQL prompt: SQL> CREATE DATABASE sample; CREATE DATABASE: Example 2 This example creates a database with the following Oracle-managed files: ■ ■ A 100 MB SYSTEM tablespace datafile in directory /u01/oradata that is autoextensible up to an unlimited size. A SYSAUX tablespace datafile in directory /u01/oradata that is 100 MB and autoextensible up to an unlimited size. The tablespace is locally managed with automatic segment-space management. 11-10 Oracle Database Administrator’s Guide Creating Oracle-Managed Files ■ ■ ■ Two redo log files of 100 MB each in directory /u01/oradata. They are not multiplexed. An undo tablespace datafile in directory /u01/oradata that is 10 MB and autoextensible up to an unlimited size. An undo tablespace named SYS_UNDOTBS is created. A control file in /u01/oradata. In this example, it is assumed that: ■ ■ ■ No DB_CREATE_ONLINE_LOG_DEST_n initialization parameters are specified in the initialization parameter file. 
No CONTROL_FILES initialization parameter was specified in the initialization parameter file. Automatic undo management mode is enabled. The following statements are issued at the SQL prompt: SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata'; SQL> CREATE DATABASE sample2; This database configuration is not recommended for a production database. The example illustrates how a very low-end database or simple test database can easily be created. To better protect this database from failures, at least one more control file should be created and the redo log should be multiplexed. In this example, the file size for the Oracle-managed files for the default temporary tablespace and undo tablespace are specified. A database with the following Oracle-managed files is created: CREATE DATABASE: Example 3 ■ ■ ■ ■ ■ ■ A 400 MB SYSTEM tablespace datafile in directory /u01/oradata. Because SIZE is specified, the file in not autoextensible. A 200 MB SYSAUX tablespace datafile in directory /u01/oradata. Because SIZE is specified, the file in not autoextensible. The tablespace is locally managed with automatic segment-space management. Two redo log groups with two members of 100 MB each, one each in directories /u02/oradata and /u03/oradata. For the default temporary tablespace dflt_ts, a 10 MB tempfile in directory /u01/oradata. Because SIZE is specified, the file in not autoextensible. For the undo tablespace undo_ts, a 10 MB datafile in directory /u01/oradata. Because SIZE is specified, the file in not autoextensible. If no CONTROL_FILES initialization parameter was specified, then two control files, one each in directories /u02/oradata and /u03/oradata. The control file in /u02/oradata is the primary control file. The following parameter settings are included in the initialization parameter file: DB_CREATE_FILE_DEST = '/u01/oradata' DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata' DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata' The following statement is issued at the SQL prompt: SQL> 2> 3> 4> CREATE DATABASE sample3 DATAFILE SIZE 400M SYSAUX DATAFILE SIZE 200M DEFAULT TEMPORARY TABLESPACE dflt_ts TEMPFILE SIZE 10M UNDO TABLESPACE undo_ts DATAFILE SIZE 10M; Using Oracle-Managed Files 11-11 Creating Oracle-Managed Files Creating Datafiles for Tablespaces Using Oracle-Managed Files The following statements that can create datafiles are relevant to the discussion in this section: ■ CREATE TABLESPACE ■ CREATE UNDO TABLESPACE ■ ALTER TABLESPACE ... ADD DATAFILE When creating a tablespace, either a regular tablespace or an undo tablespace, the DATAFILE clause is optional. When you include the DATAFILE clause the filename is optional. If the DATAFILE clause or filename is not provided, then the following rules apply: ■ ■ If the DB_CREATE_FILE_DEST initialization parameter is specified, then an Oracle-managed datafile is created in the location specified by the parameter. If the DB_CREATE_FILE_DEST initialization parameter is not specified, then the statement creating the datafile fails. When you add a datafile to a tablespace with the ALTER TABLESPACE...ADD DATAFILE statement the filename is optional. If the filename is not specified, then the same rules apply as discussed in the previous paragraph. By default, an Oracle-managed datafile for a regular tablespace is 100 MB and is autoextensible with an unlimited maximum size. However, if in your DATAFILE clause you override these defaults by specifying a SIZE value (and no AUTOEXTEND clause), then the datafile is not autoextensible. 
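To check whether the datafiles of a tablespace were created with these default Oracle-managed attributes, you can query DBA_DATA_FILES. The following query is only a sketch; the tablespace name TBS_1 is an illustrative value.

SELECT file_name,
       bytes/1024/1024    AS size_mb,
       autoextensible,
       maxbytes/1024/1024 AS max_mb
FROM   dba_data_files
WHERE  tablespace_name = 'TBS_1';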
See Also: ■ ■ ■ "Specifying the SYSTEM and SYSAUX Tablespace Datafiles at Database Creation" on page 11-9 "Specifying the Undo Tablespace Datafile at Database Creation" on page 11-9 Chapter 8, "Managing Tablespaces" CREATE TABLESPACE: Examples The following are some examples of creating tablespaces with Oracle-managed files. Oracle Database SQL Reference for a description of the CREATE TABLESPACE statement See Also: The following example sets the default location for datafile creations to /u01/oradata and then creates a tablespace tbs_1 with a datafile in that location. The datafile is 100 MB and is autoextensible with an unlimited maximum size. CREATE TABLESPACE: Example 1 SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata'; SQL> CREATE TABLESPACE tbs_1; This example creates a tablespace named tbs_2 with a datafile in the directory /u01/oradata. The datafile initial size is 400 MB, and because the SIZE clause is specified, the datafile is not autoextensible. CREATE TABLESPACE: Example 2 The following parameter setting is included in the initialization parameter file: DB_CREATE_FILE_DEST = '/u01/oradata' 11-12 Oracle Database Administrator’s Guide Creating Oracle-Managed Files The following statement is issued at the SQL prompt: SQL> CREATE TABLESPACE tbs_2 DATAFILE SIZE 400M; CREATE TABLESPACE: Example 3 This example creates a tablespace named tbs_3 with an autoextensible datafile in the directory /u01/oradata with a maximum size of 800 MB and an initial size of 100 MB: The following parameter setting is included in the initialization parameter file: DB_CREATE_FILE_DEST = '/u01/oradata' The following statement is issued at the SQL prompt: SQL> CREATE TABLESPACE tbs_3 DATAFILE AUTOEXTEND ON MAXSIZE 800M; The following example sets the default location for datafile creations to /u01/oradata and then creates a tablespace named tbs_4 in that directory with two datafiles. Both datafiles have an initial size of 200 MB, and because a SIZE value is specified, they are not autoextensible CREATE TABLESPACE: Example 4 SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata'; SQL> CREATE TABLESPACE tbs_4 DATAFILE SIZE 200M SIZE 200M; CREATE UNDO TABLESPACE: Example The following example creates an undo tablespace named undotbs_1 with a datafile in the directory /u01/oradata. The datafile for the undo tablespace is 100 MB and is autoextensible with an unlimited maximum size. The following parameter setting is included in the initialization parameter file: DB_CREATE_FILE_DEST = '/u01/oradata' The following statement is issued at the SQL prompt: SQL> CREATE UNDO TABLESPACE undotbs_1; Oracle Database SQL Reference for a description of the CREATE UNDO TABLESPACE statement See Also: ALTER TABLESPACE: Example This example adds an Oracle-managed autoextensible datafile to the tbs_1 tablespace. The datafile has an initial size of 100 MB and a maximum size of 800 MB. The following parameter setting is included in the initialization parameter file: DB_CREATE_FILE_DEST = '/u01/oradata' The following statement is entered at the SQL prompt: SQL> ALTER TABLESPACE tbs_1 ADD DATAFILE AUTOEXTEND ON MAXSIZE 800M; Oracle Database SQL Reference for a description of the ALTER TABLESPACE statement See Also: Creating Tempfiles for Temporary Tablespaces Using Oracle-Managed Files The following statements that create tempfiles are relevant to the discussion in this section: ■ CREATE TEMPORARY TABLESPACE Using Oracle-Managed Files 11-13 Creating Oracle-Managed Files ■ ALTER TABLESPACE ... 
ADD TEMPFILE When creating a temporary tablespace the TEMPFILE clause is optional. If you include the TEMPFILE clause, then the filename is optional. If the TEMPFILE clause or filename is not provided, then the following rules apply: ■ ■ If the DB_CREATE_FILE_DEST initialization parameter is specified, then an Oracle-managed tempfile is created in the location specified by the parameter. If the DB_CREATE_FILE_DEST initialization parameter is not specified, then the statement creating the tempfile fails. When you add a tempfile to a tablespace with the ALTER TABLESPACE...ADD TEMPFILE statement the filename is optional. If the filename is not specified, then the same rules apply as discussed in the previous paragraph. When overriding the default attributes of an Oracle-managed file, if a SIZE value is specified but no AUTOEXTEND clause is specified, then the datafile is not autoextensible. See Also: "Specifying the Default Temporary Tablespace Tempfile at Database Creation" on page 11-10 CREATE TEMPORARY TABLESPACE: Example The following example sets the default location for datafile creations to /u01/oradata and then creates a tablespace named temptbs_1 with a tempfile in that location. The tempfile is 100 MB and is autoextensible with an unlimited maximum size. SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata'; SQL> CREATE TEMPORARY TABLESPACE temptbs_1; Oracle Database SQL Reference for a description of the CREATE TABLESPACE statement See Also: ALTER TABLESPACE... ADD TEMPFILE: Example The following example sets the default location for datafile creations to /u03/oradata and then adds a tempfile in the default location to a tablespace named temptbs_1. The tempfile initial size is 100 MB. It is autoextensible with an unlimited maximum size. SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u03/oradata'; SQL> ALTER TABLESPACE TBS_1 ADD TEMPFILE; Oracle Database SQL Reference for a description of the ALTER TABLESPACE statement See Also: Creating Control Files Using Oracle-Managed Files When you issue the CREATE CONTROLFILE statement, a control file is created (or reused, if REUSE is specified) in the files specified by the CONTROL_FILES initialization parameter. If the CONTROL_FILES parameter is not set, then the control file is created in the default control file destinations. The default destination is determined according to the precedence documented in "Specifying Control Files at Database Creation" on page 11-7. If Oracle Database creates an Oracle-managed control file, and there is a server parameter file, then the database creates a CONTROL_FILES initialization parameter for the server parameter file. If there is no server parameter file, then you must create a 11-14 Oracle Database Administrator’s Guide Creating Oracle-Managed Files CONTROL_FILES initialization parameter manually and include it in the initialization parameter file. If the datafiles in the database are Oracle-managed files, then the database-generated filenames for the files must be supplied in the DATAFILE clause of the statement. If the redo log files are Oracle-managed files, then the NORESETLOGS or RESETLOGS keyword determines what can be supplied in the LOGFILE clause: ■ ■ If the NORESETLOGS keyword is used, then the database-generated filenames for the Oracle-managed redo log files must be supplied in the LOGFILE clause. If the RESETLOGS keyword is used, then the redo log file names can be supplied as with the CREATE DATABASE statement. See "Specifying Redo Log Files at Database Creation" on page 11-8. 
The sections that follow contain examples of using the CREATE CONTROLFILE statement with Oracle-managed files.

See Also:
■ Oracle Database SQL Reference for a description of the CREATE CONTROLFILE statement
■ "Specifying Control Files at Database Creation" on page 11-7

CREATE CONTROLFILE Using NORESETLOGS Keyword: Example
The following CREATE CONTROLFILE statement is generated by an ALTER DATABASE BACKUP CONTROLFILE TO TRACE statement for a database with Oracle-managed datafiles and redo log files:

CREATE CONTROLFILE
   DATABASE sample
   LOGFILE
      GROUP 1 ('/u01/oradata/SAMPLE/onlinelog/o1_mf_1_o220rtt9_.log',
               '/u02/oradata/SAMPLE/onlinelog/o1_mf_1_v2o0b2i3_.log') SIZE 100M,
      GROUP 2 ('/u01/oradata/SAMPLE/onlinelog/o1_mf_2_p22056iw_.log',
               '/u02/oradata/SAMPLE/onlinelog/o1_mf_2_p02rcyg3_.log') SIZE 100M
   NORESETLOGS
   DATAFILE '/u01/oradata/SAMPLE/datafile/o1_mf_system_xu34ybm2_.dbf' SIZE 100M,
            '/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_aawbmz51_.dbf' SIZE 100M,
            '/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_apqbmz51_.dbf' SIZE 100M
   MAXLOGFILES 5
   MAXLOGHISTORY 100
   MAXDATAFILES 10
   MAXINSTANCES 2
   ARCHIVELOG;

CREATE CONTROLFILE Using RESETLOGS Keyword: Example
The following is an example of a CREATE CONTROLFILE statement with the RESETLOGS option. Some combination of the DB_CREATE_FILE_DEST, DB_RECOVERY_FILE_DEST, and DB_CREATE_ONLINE_LOG_DEST_n initialization parameters must be set.

CREATE CONTROLFILE
   DATABASE sample
   RESETLOGS
   DATAFILE '/u01/oradata/SAMPLE/datafile/o1_mf_system_aawbmz51_.dbf',
            '/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_axybmz51_.dbf',
            '/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_azzbmz51_.dbf' SIZE 100M
   MAXLOGFILES 5
   MAXLOGHISTORY 100
   MAXDATAFILES 10
   MAXINSTANCES 2
   ARCHIVELOG;

Later, you must issue the ALTER DATABASE OPEN RESETLOGS statement to re-create the redo log files. This is discussed in "Using the ALTER DATABASE OPEN RESETLOGS Statement" on page 11-17. If the previous log files are Oracle-managed files, then they are not deleted.

Creating Redo Log Files Using Oracle-Managed Files
Redo log files are created at database creation time. They can also be created when you issue either of the following statements:
■ ALTER DATABASE ADD LOGFILE
■ ALTER DATABASE OPEN RESETLOGS

See Also: Oracle Database SQL Reference for a description of the ALTER DATABASE statement

Using the ALTER DATABASE ADD LOGFILE Statement
The ALTER DATABASE ADD LOGFILE statement lets you later add a new group to your current redo log. The filename in the ADD LOGFILE clause is optional if you are using Oracle-managed files. If a filename is not provided, then a redo log file is created in the default log file destination. The default destination is determined according to the precedence documented in "Specifying Redo Log Files at Database Creation" on page 11-8.

If a filename is not provided and you have not provided one of the initialization parameters required for creating Oracle-managed files, then the statement returns an error. The default size for an Oracle-managed log file is 100 MB.

You continue to add and drop redo log file members by specifying complete filenames.

See Also:
■ "Specifying Redo Log Files at Database Creation" on page 11-8
■ "Creating Control Files Using Oracle-Managed Files" on page 11-14

Adding New Redo Log Files: Example
The following example creates a log group with a member in /u01/oradata and another member in /u02/oradata. The size of each log file is 100 MB.
The following parameter settings are included in the initialization parameter file:

DB_CREATE_ONLINE_LOG_DEST_1 = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u02/oradata'

The following statement is issued at the SQL prompt:

SQL> ALTER DATABASE ADD LOGFILE;

Using the ALTER DATABASE OPEN RESETLOGS Statement
If you previously created a control file specifying RESETLOGS and either did not specify filenames or specified nonexistent filenames, then the database creates redo log files for you when you issue the ALTER DATABASE OPEN RESETLOGS statement. The rules for determining the directories in which to store redo log files, when none are specified in the control file, are the same as those discussed in "Specifying Redo Log Files at Database Creation" on page 11-8.

Creating Archived Logs Using Oracle-Managed Files
Archived logs are created in the DB_RECOVERY_FILE_DEST location when:
■ The ARCn or LGWR background process archives an online redo log, or
■ An ALTER SYSTEM ARCHIVE LOG CURRENT statement is issued.

For example, assume that the following parameter settings are included in the initialization parameter file:

DB_RECOVERY_FILE_DEST_SIZE = 20G
DB_RECOVERY_FILE_DEST = '/u01/oradata'
LOG_ARCHIVE_DEST_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'

With these settings, the archived logs are created as Oracle-managed files in the /u01/oradata flash recovery area.

Behavior of Oracle-Managed Files
The filenames of Oracle-managed files are accepted in SQL statements wherever a filename is used to identify an existing file. These filenames, like other filenames, are stored in the control file and, if using Recovery Manager (RMAN) for backup and recovery, in the RMAN catalog. They are visible in all of the usual fixed and dynamic performance views that are available for monitoring datafiles and tempfiles (for example, V$DATAFILE or DBA_DATA_FILES).

The following are some examples of statements using database-generated filenames:

SQL> ALTER DATABASE
2> RENAME FILE '/u01/oradata/mydb/datafile/o1_mf_tbs01_ziw3bopb_.dbf'
3> TO '/u01/oradata/mydb/tbs0101.dbf';

SQL> ALTER DATABASE
2> DROP LOGFILE '/u01/oradata/mydb/onlinelog/o1_mf_1_wo94n2xi_.log';

SQL> ALTER TABLE emp
2> ALLOCATE EXTENT
3> (DATAFILE '/u01/oradata/mydb/datafile/o1_mf_tbs1_2ixfh90q_.dbf');

You can back up and restore Oracle-managed datafiles, tempfiles, and control files as you would the corresponding non-Oracle-managed files. Using database-generated filenames does not impact the use of logical backup files such as export files. This is particularly important for tablespace point-in-time recovery (TSPITR) and transportable tablespace export files.

There are some cases where Oracle-managed files behave differently. These are discussed in the sections that follow.

Dropping Datafiles and Tempfiles
Unlike files that are not managed by the database, when an Oracle-managed datafile or tempfile is dropped, the filename is removed from the control file and the file is automatically deleted from the file system. The statements that delete Oracle-managed files when they are dropped are:
■ DROP TABLESPACE
■ ALTER DATABASE TEMPFILE ... DROP

You can also use these statements, which always delete files, Oracle-managed or not:
■ ALTER TABLESPACE ... DROP DATAFILE
■ ALTER TABLESPACE ... DROP TEMPFILE

Dropping Redo Log Files
When an Oracle-managed redo log file is dropped, its Oracle-managed files are deleted. You specify the group or members to be dropped.
The following statements drop and delete redo log files: ■ ALTER DATABASE DROP LOGFILE ■ ALTER DATABASE DROP LOGFILE MEMBER Renaming Files The following statements are used to rename files: ■ ALTER DATABASE RENAME FILE ■ ALTER TABLESPACE ... RENAME DATAFILE These statements do not actually rename the files on the operating system, but rather, the names in the control file are changed. If the old file is an Oracle-managed file and it exists, then it is deleted. You must specify each filename using the conventions for filenames on your operating system when you issue this statement. Managing Standby Databases The datafiles, control files, and redo log files in a standby database can be managed by the database. This is independent of whether Oracle-managed files are used on the primary database. When recovery of a standby database encounters redo for the creation of a datafile, if the datafile is an Oracle-managed file, then the recovery process creates an empty file in the local default file system location. This allows the redo for the new file to be applied immediately without any human intervention. When recovery of a standby database encounters redo for the deletion of a tablespace, it deletes any Oracle-managed datafiles in the local file system. Note that this is independent of the INCLUDING DATAFILES option issued at the primary database. Scenarios for Using Oracle-Managed Files This section further demonstrates the use of Oracle-managed files by presenting scenarios of their use. 11-18 Oracle Database Administrator’s Guide Scenarios for Using Oracle-Managed Files Scenario 1: Create and Manage a Database with Multiplexed Redo Logs In this scenario, a DBA creates a database where the datafiles and redo log files are created in separate directories. The redo log files and control files are multiplexed. The database uses an undo tablespace, and has a default temporary tablespace. The following are tasks involved with creating and maintaining this database. 1. Setting the initialization parameters The DBA includes three generic file creation defaults in the initialization parameter file before creating the database. Automatic undo management mode is also specified. DB_CREATE_FILE_DEST = '/u01/oradata' DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata' DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata' UNDO_MANAGEMENT = AUTO The DB_CREATE_FILE_DEST parameter sets the default file system directory for the datafiles and tempfiles. The DB_CREATE_ONLINE_LOG_DEST_1 and DB_CREATE_ONLINE_LOG_DEST_2 parameters set the default file system directories for redo log file and control file creation. Each redo log file and control file is multiplexed across the two directories. 2. Creating a database Once the initialization parameters are set, the database can be created by using this statement: SQL> CREATE DATABASE sample 2> DEFAULT TEMPORARY TABLESPACE dflttmp; Because a DATAFILE clause is not present and the DB_CREATE_FILE_DEST initialization parameter is set, the SYSTEM tablespace datafile is created in the default file system (/u01/oradata in this scenario). The filename is uniquely generated by the database. The file is autoextensible with an initial size of 100 MB and an unlimited maximum size. The file is an Oracle-managed file. A similar datafile is created for the SYSAUX tablespace. Because a LOGFILE clause is not present, two redo log groups are created. 
Each log group has two members, with one member in the DB_CREATE_ONLINE_LOG_DEST_1 location and the other member in the DB_CREATE_ONLINE_LOG_DEST_2 location. The filenames are uniquely generated by the database. The log files are created with a size of 100 MB. The log file members are Oracle-managed files. Similarly, because the CONTROL_FILES initialization parameter is not present, and two DB_CREATE_ONLINE_LOG_DEST_n initialization parameters are specified, two control files are created. The control file located in the DB_CREATE_ONLINE_LOG_DEST_1 location is the primary control file; the control file located in the DB_CREATE_ONLINE_LOG_DEST_2 location is a multiplexed copy. The filenames are uniquely generated by the database. They are Oracle-managed files. Assuming there is a server parameter file, a CONTROL_FILES initialization parameter is generated. Automatic undo management mode is specified, but because an undo tablespace is not specified and the DB_CREATE_FILE_DEST initialization parameter is set, a default undo tablespace named SYS_UNDOTBS is created in the directory Using Oracle-Managed Files 11-19 Scenarios for Using Oracle-Managed Files specified by DB_CREATE_FILE_DEST. The datafile is a 10 MB datafile that is autoextensible. It is an Oracle-managed file. Lastly, a default temporary tablespace named dflttmp is specified. Because DB_CREATE_FILE_DEST is included in the parameter file, the tempfile for dflttmp is created in the directory specified by that parameter. The tempfile is 100 MB and is autoextensible with an unlimited maximum size. It is an Oracle-managed file. The resultant file tree, with generated filenames, is as follows: /u01 /oradata /SAMPLE /datafile /o1_mf_system_cmr7t30p_.dbf /o1_mf_sysaux_cmr7t88p_.dbf /o1_mf_sys_undotbs_2ixfh90q_.dbf /o1_mf_dflttmp_157se6ff_.tmp /u02 /oradata /SAMPLE /onlinelog /o1_mf_1_0orrm31z_.log /o1_mf_2_2xyz16am_.log /controlfile /o1_mf_cmr7t30p_.ctl /u03 /oradata /SAMPLE /onlinelog /o1_mf_1_ixfvm8w9_.log /o1_mf_2_q89tmp28_.log /controlfile /o1_mf_x1sr8t36_.ctl The internally generated filenames can be seen when selecting from the usual views. For example: SQL> SELECT NAME FROM V$DATAFILE; NAME ---------------------------------------------------/u01/oradata/SAMPLE/datafile/o1_mf_system_cmr7t30p_.dbf /u01/oradata/SAMPLE/datafile/o1_mf_sysaux_cmr7t88p_.dbf /u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_2ixfh90q_.dbf 3 rows selected The name is also printed to the alert log when the file is created. 3. Managing control files The control file was created when generating the database, and a CONTROL_FILES initialization parameter was added to the parameter file. If needed, then the DBA can re-create the control file or build a new one for the database using the CREATE CONTROLFILE statement. The correct Oracle-managed filenames must be used in the DATAFILE and LOGFILE clauses. The ALTER DATABASE BACKUP CONTROLFILE TO TRACE statement generates a script with the correct filenames. Alternatively, the filenames 11-20 Oracle Database Administrator’s Guide Scenarios for Using Oracle-Managed Files can be found by selecting from the V$DATAFILE, V$TEMPFILE, and V$LOGFILE views. 
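For instance, a quick way to gather the generated names before rebuilding the control file is to query these views directly. The following is only a minimal sketch; the COLUMN commands are purely for readability, and the output naturally varies from database to database:

SQL> COLUMN name FORMAT A65
SQL> COLUMN member FORMAT A65
SQL> SELECT name FROM V$DATAFILE;
SQL> SELECT name FROM V$TEMPFILE;
SQL> SELECT member FROM V$LOGFILE;

The names returned by these queries can then be copied into the DATAFILE and LOGFILE clauses of the CREATE CONTROLFILE statement.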
The following example re-creates the control file for the sample database: CREATE CONTROLFILE REUSE DATABASE sample LOGFILE GROUP 1('/u02/oradata/SAMPLE/onlinelog/o1_mf_1_0orrm31z_.log', '/u03/oradata/SAMPLE/onlinelog/o1_mf_1_ixfvm8w9_.log'), GROUP 2('/u02/oradata/SAMPLE/onlinelog/o1_mf_2_2xyz16am_.log', '/u03/oradata/SAMPLE/onlinelog/o1_mf_2_q89tmp28_.log') NORESETLOGS DATAFILE '/u01/oradata/SAMPLE/datafile/o1_mf_system_cmr7t30p_.dbf', '/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_cmr7t88p_.dbf', '/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_2ixfh90q_.dbf', '/u01/oradata/SAMPLE/datafile/o1_mf_dflttmp_157se6ff_.tmp' MAXLOGFILES 5 MAXLOGHISTORY 100 MAXDATAFILES 10 MAXINSTANCES 2 ARCHIVELOG; The control file created by this statement is located as specified by the CONTROL_FILES initialization parameter that was generated when the database was created. The REUSE clause causes any existing files to be overwritten. 4. Managing the redo log To create a new group of redo log files, the DBA can use the ALTER DATABASE ADD LOGFILE statement. The following statement adds a log file with a member in the DB_CREATE_ONLINE_LOG_DEST_1 location and a member in the DB_CREATE_ONLINE_LOG_DEST_2 location. These files are Oracle-managed files. SQL> ALTER DATABASE ADD LOGFILE; Log file members continue to be added and dropped by specifying complete filenames. The GROUP clause can be used to drop a log group. In the following example the operating system file associated with each Oracle-managed log file member is automatically deleted. SQL> ALTER DATABASE DROP LOGFILE GROUP 3; 5. Managing tablespaces The default storage for all datafiles for future tablespace creations in the sample database is the location specified by the DB_CREATE_FILE_DEST initialization parameter (/u01/oradata in this scenario). Any datafiles for which no filename is specified, are created in the file system specified by the initialization parameter DB_CREATE_FILE_DEST. For example: SQL> CREATE TABLESPACE tbs_1; The preceding statement creates a tablespace whose storage is in /u01/oradata. A datafile is created with an initial of 100 MB and it is autoextensible with an unlimited maximum size. The datafile is an Oracle-managed file. When the tablespace is dropped, the Oracle-managed files for the tablespace are automatically removed. The following statement drops the tablespace and all the Oracle-managed files used for its storage: Using Oracle-Managed Files 11-21 Scenarios for Using Oracle-Managed Files SQL> DROP TABLESPACE tbs_1; Once the first datafile is full, the database does not automatically create a new datafile. More space can be added to the tablespace by adding another Oracle-managed datafile. The following statement adds another datafile in the location specified by DB_CREATE_FILE_DEST: SQL> ALTER TABLESPACE tbs_1 ADD DATAFILE; The default file system can be changed by changing the initialization parameter. This does not change any existing datafiles. It only affects future creations. This can be done dynamically using the following statement: SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u04/oradata'; 6. Archiving redo information Archiving of redo log files is no different for Oracle-managed files, than it is for unmanaged files. A file system location for the archived log files can be specified using the LOG_ARCHIVE_DEST_n initialization parameters. The filenames are formed based on the LOG_ARCHIVE_FORMAT parameter or its default. The archived logs are not Oracle-managed files 7. 
Backup, restore, and recover Since an Oracle-managed file is compatible with standard operating system files, you can use operating system utilities to backup or restore Oracle-managed files. All existing methods for backing up, restoring, and recovering the database work for Oracle-managed files. Scenario 2: Create and Manage a Database with Database and Flash Recovery Areas In this scenario, a DBA creates a database where the control files and redo log files are multiplexed. Archived logs and RMAN backups are created in the flash recovery area. The following tasks are involved in creating and maintaining this database: 1. Setting the initialization parameters The DBA includes the following generic file creation defaults: DB_CREATE_FILE_DEST = '/u01/oradata' DB_RECOVERY_FILE_DEST_SIZE = 10G DB_RECOVERY_FILE_DEST = '/u02/oradata' LOG_ARCHIVE_DEST_1 = 'LOCATION = USE_DB_RECOVERY_FILE_DEST' The DB_CREATE_FILE_DEST parameter sets the default file system directory for datafiles, tempfiles, control files, and redo logs. The DB_RECOVERY_FILE_DEST parameter sets the default file system directory for control files, redo logs, and RMAN backups. The LOG_ARCHIVE_DEST_1 configuration 'LOCATION=USE_DB_RECOVERY_FILE_DEST' redirects archived logs to the DB_RECOVERY_FILE_DEST location. The DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters set the default directory for log file and control file creation. Each redo log and control file is multiplexed across the two directories. 2. Creating a database 3. Managing control files 11-22 Oracle Database Administrator’s Guide Scenarios for Using Oracle-Managed Files 4. Managing the redo log 5. Managing tablespaces Tasks 2, 3, 4, and 5 are the same as in Scenario 1, except that the control files and redo logs are multiplexed across the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST locations. 6. Archiving redo log information Archiving online logs is no different for Oracle-managed files than it is for unmanaged files. The archived logs are created in DB_RECOVERY_FILE_DEST and are Oracle-managed files. 7. Backup, restore, and recover An Oracle-managed file is compatible with standard operating system files, so you can use operating system utilities to backup or restore Oracle-managed files. All existing methods for backing up, restoring, and recovering the database work for Oracle-managed files. When no format option is specified, all disk backups by RMAN are created in the DB_RECOVERY_FILE_DEST location. The backups are Oracle-managed files. Scenario 3: Adding Oracle-Managed Files to an Existing Database Assume in this case that an existing database does not have any Oracle-managed files, but the DBA would like to create new tablespaces with Oracle-managed files and locate them in directory /u03/oradata. 1. Setting the initialization parameters To allow automatic datafile creation, set the DB_CREATE_FILE_DEST initialization parameter to the file system directory in which to create the datafiles. This can be done dynamically as follows: SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u03/oradata'; 2. Creating tablespaces Once DB_CREATE_FILE_DEST is set, the DATAFILE clause can be omitted from a CREATE TABLESPACE statement. The datafile is created in the location specified by DB_CREATE_FILE_DEST by default. For example: SQL> CREATE TABLESPACE tbs_2; When the tbs_2 tablespace is dropped, its datafiles are automatically deleted. 
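To see where the Oracle-managed datafile for the new tablespace was placed, and to confirm the automatic cleanup, a short check along the following lines can be used. This is only a sketch; the generated filename shown in the comment is illustrative, since the actual name is unique to each database:

SQL> SELECT tablespace_name, file_name
2> FROM DBA_DATA_FILES
3> WHERE tablespace_name = 'TBS_2';
-- Expect a system-generated name under /u03/oradata, for example
-- /u03/oradata/MYDB/datafile/o1_mf_tbs_2_xxxxxxxx_.dbf (illustrative only)

SQL> DROP TABLESPACE tbs_2;
-- The Oracle-managed datafile is deleted from the file system automatically.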
Using Oracle-Managed Files 11-23 Scenarios for Using Oracle-Managed Files 11-24 Oracle Database Administrator’s Guide 12 Using Automatic Storage Management This chapter discusses some of the concepts behind Automatic Storage Management and describes how to use it. It contains the following topics: ■ What Is Automatic Storage Management? ■ Administering an Automatic Storage Management Instance ■ Administering Automatic Storage Management Disk Groups ■ Using Automatic Storage Management in the Database ■ Migrating a Database to Automatic Storage Management ■ Accessing Automatic Storage Management Files with the XML DB Virtual Folder ■ Viewing Information About Automatic Storage Management What Is Automatic Storage Management? Automatic Storage Management (ASM) is an integrated file system and volume manager expressly built for Oracle database files. ASM provides the performance of raw I/O with the easy management of a file system. It simplifies database administration by eliminating the need for you to directly manage potentially thousands of Oracle database files. It does this by enabling you to divide all available storage into disk groups. You manage a small set of disk groups and ASM automates the placement of the database files within those disk groups. In the SQL statements that you use for creating database structures such as tablespaces, control files, and redo and archive log files, you specify file location in terms of disk groups. ASM then creates and manages the associated underlying files for you. Mirroring and Striping ASM extends the power of Oracle-managed files. With Oracle-managed files, files are created and managed automatically for you, but with ASM you get the additional benefits of features such as mirroring and striping. ASM divides files into 1 MB extents and spreads each file's extents evenly across all disks in the disk group. This optimizes performance and disk utilization, making manual I/O performance tuning unnecessary. ASM mirroring is more flexible than operating system mirrored disks because ASM mirroring enables the redundancy level to be specified on a per-file basis. Thus two files can share the same disk group with one file being mirrored while the other is not. Mirroring takes place at the extent level. If a file is mirrored, depending upon redundancy level set for the file, each extent has one or more mirrored copies, and Using Automatic Storage Management 12-1 What Is Automatic Storage Management? mirrored copies are always kept on different disks in the disk group. Table 12–1 describes the three mirroring options that ASM supports on a per-file basis. Table 12–1 ASM Mirroring Options Mirroring Option Description 2-way mirroring Each extent has 1 mirrored copy. 3-way mirroring Each extent has 2 mirrored copies. Unprotected ASM provides no mirroring. Used when mirroring is provided by the disk subsystem itself. Dynamic Storage Configuration ASM enables you to change the storage configuration without having to take the database offline. It automatically rebalances—redistributes file data evenly across all the disks of the disk group—after you add disks to or drop disks from a disk group. Should a disk failure occur, ASM automatically rebalances to restore full redundancy for files that had extents on the failed disk. When you replace the failed disk with a new disk, ASM rebalances the disk group to spread data evenly across all disks, including the replacement disk. 
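As a concrete illustration of this behavior, replacing a failed disk usually comes down to a single ALTER DISKGROUP command issued on the ASM instance, after which ASM performs one rebalance that covers both the addition and the drop. The command below is only a sketch: the disk group name dgroup1, the device path of the replacement disk, and the name of the failed disk are placeholders, not values taken from this guide:

SQL> ALTER DISKGROUP dgroup1
2> ADD DISK '/devices/diskA9'
3> DROP DISK diskA2
4> REBALANCE POWER 5;

Combining the add and the drop in one statement avoids a second rebalance pass; the optional REBALANCE POWER clause simply raises the speed of that single operation.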
Interoperability with Existing Databases
ASM does not eliminate any existing database functionality. Existing databases using file systems or with storage on raw devices can operate as they always have. New files can be created as ASM files while existing ones are administered as before. Databases can have a mixture of ASM files and non-ASM files.

ASM Instance
ASM is implemented as a special kind of Oracle instance, with its own System Global Area (SGA) and background processes. The ASM instance typically has a much smaller SGA than a database instance.

Single Instance and Clustered Environments
Each database server that has database files managed by ASM needs to be running an ASM instance. A single ASM instance can service one or more single-instance databases on a stand-alone server. Each ASM disk group can be shared among all the databases on the server.

In a clustered environment, each node runs an ASM instance, and the ASM instances communicate with each other on a peer-to-peer basis. This is true for both Real Application Clusters (RAC) environments, and non-RAC clustered environments where multiple single-instance databases across multiple nodes share a clustered pool of storage that is managed by ASM.

If a node is part of a Real Application Clusters (RAC) system, the peer-to-peer communications service is already installed on that server. If the node is part of a cluster where RAC is not installed, the Oracle Clusterware, Cluster Ready Services (CRS), must be installed on that node. An ASM instance on a node in a cluster can manage storage simultaneously for one or more RAC database instances and one or more single-instance databases.

See Also: Oracle Database Concepts for an overview of Automatic Storage Management

Overview of the Components of Automatic Storage Management
The components of Automatic Storage Management (ASM) are disk groups, disks, failure groups, files, and templates.

Disk Groups
The primary component of ASM is the disk group. A disk group consists of a grouping of disks that are managed together as a unit. You configure ASM by creating disk groups to store database files. Oracle provides SQL statements that create and manage disk groups, their contents, and their metadata.

The disk group type determines the levels of mirroring that files in the disk group can be created with. You specify disk group type when you create the disk group. Table 12–2 lists ASM disk group types, their supported mirroring levels, and their default mirroring levels. The default mirroring level indicates the mirroring level that a file is created with unless you designate a different mirroring level. (See Templates, later in this section.)

Table 12–2 Mirroring Options for Each Disk Group Type

Disk Group Type        Supported Mirroring Levels          Default Mirroring Level
Normal redundancy      2-way, 3-way, Unprotected (none)    2-way
High redundancy        3-way                               3-way
External redundancy    Unprotected (none)                  Unprotected

If you do not specify a disk group type (redundancy level) when you create a disk group, the disk group defaults to normal redundancy. As Table 12–2 indicates, files in a high redundancy disk group are always 3-way mirrored, files in an external redundancy disk group have no ASM mirroring, and files in a normal redundancy disk group can be 2-way or 3-way mirrored or unprotected, with 2-way mirroring as the default.
Mirroring level for each file is set by templates, which are described later in this section.

Disks
The disks in a disk group are referred to as ASM disks. On Windows operating systems, an ASM disk is always a partition. On all other platforms, an ASM disk can be:
■ A partition of a logical unit number (LUN)
■ A network-attached file

Note: Although you can also present a volume (a logical collection of disks) for management by ASM, it is not recommended to run ASM on top of another host-based volume manager.

When an ASM instance starts, it automatically discovers all available ASM disks. Discovery is the process of determining every disk device to which the ASM instance has been given I/O permissions (by some operating system mechanism), and of examining the contents of the first block of such disks to see if they are recognized as belonging to a disk group. ASM discovers disks in the paths that are listed in an initialization parameter, or if the parameter is NULL, in an operating system–dependent default path.

Failure Groups
Failure groups define ASM disks that share a common potential failure mechanism. An example of a failure group is a set of SCSI disks sharing the same SCSI controller. Failure groups are used to determine which ASM disks to use for storing redundant copies of data. For example, if two-way mirroring is specified for a file, ASM automatically stores redundant copies of file extents in separate failure groups. Failure groups apply only to normal and high redundancy disk groups. You define the failure groups in a disk group when you create or alter the disk group.

[Figure: a disk group containing two failure groups. Failure Group 1 holds Disks 1 through 4 behind Disk Controller 1, and Failure Group 2 holds Disks 5 through 8 behind Disk Controller 2.]

Files
Files written on ASM disks are ASM files, whose names are automatically generated by ASM. You can specify user-friendly alias names (or just aliases) for ASM files, and you can create a hierarchical directory structure for these aliases. Each ASM file is completely contained within a single disk group, and is evenly spaced over all of the ASM disks in the disk group.

Templates
Templates are collections of file attribute values, and are used to set mirroring and striping attributes of each type of database file (datafile, control file, redo log file, and so on) created in ASM disk groups. Each disk group has a default template associated with each file type. See "Managing Disk Group Templates" on page 12-29 for more information.

You can also create your own templates to meet unique requirements. You can then include a template name when creating a file, thereby assigning desired attributes on an individual file basis rather than on the basis of file type. See "About ASM Filenames" on page 12-34 for more information.

Administering an Automatic Storage Management Instance
Administering an Automatic Storage Management (ASM) instance is similar to managing a database instance, but with fewer tasks. An ASM instance does not require a database instance to be running for you to administer it. You can perform all ASM administration tasks with SQL*Plus.

Note: To administer ASM with SQL*Plus, you must set the ORACLE_SID environment variable to the ASM SID before you start SQL*Plus.
If ASM and the database have different Oracle homes, you must also set the ORACLE_HOME environment variable to point to the ASM Oracle home. Depending on platform, you may have to change other environment variables as well. See "Selecting an Instance with Environment Variables" on page 1-7 for more information. The default ASM SID for a single instance database is +ASM, and the default SID for ASM on Real Application Clusters nodes is +ASMnode#. You can also use Oracle Enterprise Manager (EM) or the Database Configuration Assistant (DBCA) for configuring and altering disk groups. DBCA eases the configuration and creation of your database while EM provides an integrated graphical approach for managing both your ASM instance and database instance. See Appendix A of Oracle Database 2 Day DBA for instructions on administering an ASM instance with EM. This section contains the following topics: ■ Installing ASM ■ Authentication for Accessing an ASM Instance ■ Setting Initialization Parameters for an ASM Instance ■ Starting Up an ASM Instance ■ Shutting Down an ASM Instance Oracle Database 2 Day DBA for information on administering ASM with Enterprise Manager. See Also: Installing ASM Because ASM is integrated into the database server, you use the Oracle Universal Installer (OUI) and the Database Configuration Assistant (DBCA) to install and initially configure it. OUI has options to either install and configure a database that uses ASM for storage management, or to install and configure an ASM instance by itself, without creating a database instance. Refer to the Oracle Database Installation Guide for your operating system for details on installing ASM. ASM Installation Tips Keep the following in mind when installing ASM: ■ When running more than one database instance on a single server or node, it is recommended that you install ASM in its own Oracle home on that server or node. This is advisable even if you are running only one database instance but plan to add one or more database instances to the server or node in the future. With separate Oracle homes, you can upgrade and patch ASM and databases independently, and you can deinstall database software without impacting the ASM instance. If an ASM instance does not already exist and you select the OUI option to install and configure ASM only, OUI installs ASM in its own Oracle home. Using Automatic Storage Management 12-5 Administering an Automatic Storage Management Instance ■ If you are running a single database instance on a server or node and have no plans to add one or more database instances to this server or node, ASM and the database can share a single Oracle home If an ASM instance does not already exist and you select the OUI option to create a database that uses ASM for storage, OUI creates this single-home configuration. ■ ■ ■ When you install ASM in a single-instance configuration, DBCA creates a separate server parameter file (SPFILE) and password file for the ASM instance. When installing ASM in a clustered environment where the ASM Oracle home is shared among all nodes, DBCA creates an SPFILE for ASM. In a clustered environment without a shared ASM Oracle home, DBCA installs a text initialization parameter file (PFILE) for ASM on each node. Before installing ASM, you may want to install the ASM support library, ASMLib. ASMLib is an application program interface (API) developed by Oracle to simplify the operating system–to-database interface, and to exploit the capabilities and strengths of vendors' storage arrays. 
The purpose of ASMLib, which is an optional add-on to ASM, is to provide an alternative interface for the ASM-enabled kernel to discover and access block devices. It provides storage and operating system vendors the opportunity to supply extended storage-related features. These features provide benefits such as improved performance and greater data integrity. See the ASM page of the Oracle Technology Network web site at http://www.oracle.com/technology/products/database/asm for more information on ASMLib. To download ASMLib for Linux, go to http://www.oracle.com/technology/tech/linux/asmlib. Authentication for Accessing an ASM Instance ASM security considerations derive from the fact that a particular ASM instance is tightly bound to one or more database instances operating on the same server. In effect, the ASM instance is a logical extension of these database instances. Both the ASM instance and the database instances must have equivalent operating system access rights (read/write) to the disk group member disks. For UNIX this is typically provided through shared UNIX group membership. See the Oracle Database Installation Guide for your operating system for information on how to ensure that the ASM and database instances have the proper access to member disks. ASM instances do not have a data dictionary, so the only way to connect to one is as an administrator. This means that you use operating system authentication and connect as SYSDBA, or when connecting remotely through Oracle Net Services, you use a password file. Operating System Authentication for ASM Using operating system authentication, the authorization to connect locally with the SYSDBA privilege is granted through the use of a special operating system user group, generically referred to as OSDBA. (On UNIX, OSDBA is typically the dba group.) See "Using Operating System Authentication" on page 1-14 for more information about OSDBA. By default, members of the OSDBA group are authorized to connect with the SYSDBA privilege on all instances on the node, including the ASM instance. Users who connect to the ASM instance with SYSDBA privilege have complete administrative access to all disk groups that are managed by that ASM instance. 12-6 Oracle Database Administrator’s Guide Administering an Automatic Storage Management Instance The user that is the software owner for the database Oracle home (typically, the user named oracle) must be a member of the OSDBA group defined for the ASM Oracle home. This is automatically the case when ASM and a single instance of Oracle Database share the same Oracle home. If you install the ASM and database instances in separate Oracle homes, you must ensure that you configure the proper group memberships, otherwise the database instance will not be able to connect to the ASM instance. Note: Password File Authentication for ASM To enable remote administration of ASM (through Oracle Net Services), a password file must be created for ASM. A password file is also required for Enterprise Manager to connect to ASM. The Oracle Database Configuration Assistant (DBCA) creates a password file for ASM when it initially configures ASM disk groups. Like a database password file, the only user added to the password file upon creation is SYS. If you want to add other users to the password file, you must share the password file with a database instance and add the users with the database. If you configure an ASM instance without using DBCA, you must create a password file yourself. 
See "Creating and Maintaining a Password File" on page 1-16 for more information. Setting Initialization Parameters for an ASM Instance The ASM instance has its own initialization parameter file, which, like that of the database instance, can be a server parameter file (SPFILE) or a text file. For ASM installations in clustered environments, server parameter files (SPFILEs) are not used unless there is a shared ASM Oracle home. Without a shared ASM Oracle home, each ASM instance gets its own text initialization parameter file (PFILE). Note: The ASM parameter file name is distinguished from the database file name by the SID embedded in the name. (The SID for ASM defaults to +ASM for a single-instance database and +ASMnode# for Real Application Clusters nodes.) The same rules for file name, default location, and search order that apply to the database initialization parameter file apply to the ASM initialization parameter file. Thus, on single-instance Unix platforms, the server parameter file for ASM would have the following path: $ORACLE_HOME/dbs/spfile+ASM.ora For more information about initialization parameter files, see "Understanding Initialization Parameters" on page 2-19. Some initialization parameters are specifically relevant to an ASM instance. Of those initialization parameters intended for a database instance, only a few are relevant to an ASM instance. You can set those parameters at database creation time using Database Configuration Assistant or later using Enterprise Manager. The remainder of this section describes setting the parameters manually by editing the initialization parameter file. Using Automatic Storage Management 12-7 Administering an Automatic Storage Management Instance Initialization Parameters for ASM Instances The following initialization parameters relate to an ASM instance. Parameters that start with ASM_ cannot be set in database instances. Name Description INSTANCE_TYPE Must be set to ASM Note: This is the only required parameter. All other parameters take suitable defaults for most environments. ASM_POWER_LIMIT The default power for disk rebalancing. Default: 1, Range: 0 – 11 See Also: "Tuning Rebalance Operations" on page 12-9 ASM_DISKSTRING A comma-separated list of strings that limits the set of disks that ASM discovers. May include wildcard characters. Only disks that match one of the strings are discovered. String format depends on the ASM library in use and on the operating system. The standard system library for ASM supports glob pattern matching. For example, on a Solaris server that does not use ASMLib, to limit discovery to disks that are in the /dev/rdsk/ directory, ASM_DISKSTRING would be set to: /dev/rdsk/* Note that the asterisk cannot be omitted. To limit discovery to disks in this directory that have a name that ends in s3 or s4, ASM_DISKSTRING would be set to: /dev/rdsk/*s3,/dev/rdsk/*s4 This could be simplified to: /dev/rdsk/*s[34] The ? character, when used as the first character of a path, expands to the Oracle home directory. Depending on operating system, when the ? character is used elsewhere in the path, it is a wildcard for a single character. Default: NULL. A NULL value causes ASM to search a default path for all disks in the system to which the ASM instance has read/write access. The default search path is platform-specific. See the Oracle Database Administrator's Reference for UNIX Systems on OTN for a list of default search paths for Unix platforms. For the Windows platform, the default search path is \\.\ORCLDISK*. 
See the Oracle Database Installation Guide for Windows or the Real Application Clusters Quick Installation Guide for Oracle Database Standard Edition for Windows for more information. See Also: "Improving Disk Discovery Time" on page 12-9 12-8 Oracle Database Administrator’s Guide Administering an Automatic Storage Management Instance Name Description ASM_DISKGROUPS A list of the names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT statement is used. Default: NULL (If this parameter is not specified, then no disk groups are mounted.) This parameter is dynamic, and if you are using a server parameter file (SPFILE), you should not need to manually alter this value. ASM automatically adds a disk group to this parameter when the disk group is successfully created or mounted, and automatically removes a disk group from this parameter when the disk group is dropped or dismounted. However, when using a text initialization parameter file (PFILE), you must edit the initialization parameter file to add the name of any disk group that you want automatically mounted at instance startup, and remove the name of any disk group that you no longer want automatically mounted. Note: Issuing the ALTER DISKGROUP...ALL MOUNT or ALTER DISKGROUP...ALL DISMOUNT command does not affect the value of this parameter. Tuning Rebalance Operations If the POWER clause is not specified in an ALTER DISKGROUP command, or when rebalance is implicitly invoked by adding or dropping a disk, the rebalance power defaults to the value of the ASM_POWER_LIMIT initialization parameter. You can adjust this parameter dynamically. The higher the limit, the faster a rebalance operation may complete. Lower values cause rebalancing to take longer, but consume fewer processing and I/O resources. This leaves these resources available for other applications, such as the database. The default value of 1 minimizes disruption to other applications. The appropriate value is dependent upon your hardware configuration as well as performance and availability requirements. If a rebalance is in progress because a disk is manually or automatically dropped, increasing the power of the rebalance shortens the window during which redundant copies of that data on the dropped disk are reconstructed on other disks. The V$ASM_OPERATION view provides information that can be used for adjusting ASM_POWER_LIMIT and the resulting power of rebalance operations. The V$ASM_OPERATION view also gives an estimate in the EST_MINUTES column of the amount of time remaining for the rebalance operation to complete. You can see the effect of changing the rebalance power by observing the change in the time estimate. See Also: "Manually Rebalancing a Disk Group" on page 12-24 for more information. Improving Disk Discovery Time The value for the ASM_DISKSTRING initialization parameter is an operating system–dependent value used by ASM to limit the set of paths that the discovery process uses to search for disks. When a new disk is added to a disk group, each ASM instance that has the disk group mounted must be able to discover the new disk using its ASM_DISKSTRING. In many cases, the default value (NULL) is sufficient. Using a more restrictive value may reduce the time required for ASM to perform discovery, and thus improve disk group mount time or the time for adding a disk to a disk group. 
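As a sketch of what such a restriction might look like, the parameter can be adjusted in the ASM instance with ALTER SYSTEM. The discovery path shown here is purely illustrative, and SCOPE=BOTH assumes the ASM instance uses a server parameter file; note also that ASM is expected to reject a new value that would hide a disk already belonging to a mounted disk group:

SQL> ALTER SYSTEM SET ASM_DISKSTRING = '/dev/rdsk/asm*' SCOPE=BOTH;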
It may be necessary to dynamically change the ASM_DISKSTRING before adding a disk so that the new disk will be discovered through this parameter. Using Automatic Storage Management 12-9 Administering an Automatic Storage Management Instance Note that the default value of ASM_DISKSTRING may not find all disks in all situations. If your site is using a third-party vendor ASMLib, that vendor may have discovery string conventions you must use for ASM_DISKSTRING. In addition, if your installation uses multipathing software, the software may place pseudo-devices in a path that is different from the operating system default. Consult the multipathing vendor documentation for details. Behavior of Database Initialization Parameters in an ASM Instance If you specify a database instance initialization parameter in an ASM initialization parameter file, it can have one of these effects: ■ ■ If the parameter is not valid in the ASM instance, it produces an ORA-15021 error. If the database parameter is valid in an ASM instance, for example parameters relating to dump destinations and some buffer cache parameters, ASM accepts the parameter. In general, ASM selects appropriate defaults for any database parameters that are relevant to the ASM instance. Behavior of ASM Initialization Parameters in a Database Instance If you specify any of the ASM-specific parameters (names starting with ASM_) in a database instance parameter file, you receive an ORA-15021 error. Starting Up an ASM Instance ASM instances are started similarly to Oracle database instances with some minor differences: ■ ■ To connect to the ASM instance with SQL*Plus, you must set the ORACLE_SID environment variable to the ASM SID. (The default ASM SID for a single instance database is +ASM, and the default SID for ASM on Real Application Clusters is +ASMnode#.) Depending on your operating system and whether or not you installed ASM in its own Oracle home, you may have to change other environment variables. For more information, see "Selecting an Instance with Environment Variables" on page 1-7. The initialization parameter file, which can be a server parameter file, must contain: INSTANCE_TYPE = ASM This parameter signals the Oracle executable that an ASM instance is starting and not a database instance. ■ The STARTUP command, rather than trying to mount and open a database, tries to mount the disk groups specified by the initialization parameter ASM_DISKGROUPS. If ASM_DISKGROUPS is blank, the ASM instance starts and warns that no disk groups were mounted. You can then mount disk groups with the ALTER DISKGROUP...MOUNT command. The SQL*Plus STARTUP command parameters are interpreted by ASM as follows: STARTUP Parameter Description FORCE Issues a SHUTDOWN ABORT to the ASM instance before restarting it MOUNT, OPEN Mounts the disk groups specified in the ASM_DISKGROUPS initialization parameter. This is the default if no command parameter is specified. 12-10 Oracle Database Administrator’s Guide Administering an Automatic Storage Management Instance STARTUP Parameter Description NOMOUNT Starts up the ASM instance without mounting any disk groups The following is a sample SQL*Plus session in which an ASM instance is started. % sqlplus /nolog SQL> CONNECT / AS sysdba Connected to an idle instance. 
SQL> STARTUP
ASM instance started

Total System Global Area   71303168 bytes
Fixed Size                  1069292 bytes
Variable Size              45068052 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

ASM Instance Memory Requirements
ASM instances are smaller than database instances. A 64 MB SGA should be sufficient for all but the largest ASM installations. Total memory footprint for a typical ASM instance is approximately 100 MB.

CSS Requirement
The Cluster Synchronization Services (CSS) daemon is required to enable synchronization between ASM and its client database instances. The CSS daemon is normally started (and configured to start upon reboot) when you use Database Configuration Assistant (DBCA) to create your database. If you did not use DBCA to create the database, you must ensure that the CSS daemon is running before you start the ASM instance.

CSS Daemon on the Linux and Unix Platforms
To determine if the CSS daemon is running, issue the command crsctl check cssd. If you receive the message CSS appears healthy, the CSS daemon is running.

To start the CSS daemon and configure the host to always start the daemon upon reboot, do the following:
1. Log in to the host as root.
2. Ensure that $ORACLE_HOME/bin is in your PATH environment variable.
3. Enter the following command: localconfig add

CSS Daemon on the Windows Platform
You can also use the crsctl and localconfig commands on the Windows platform to check the status of the CSS daemon or to start it. If you prefer to use Windows GUI tools, you can do the following:

To determine if the CSS daemon is properly configured and running, double-click the Services icon in the Windows Control Panel, and look for the OracleCSService service. Its status should be Started and its startup type should be Automatic. Refer to the Windows documentation for information on how to start a Windows service and how to configure it for Automatic startup.

Disk Discovery
When an ASM instance initializes, ASM discovers and examines the contents of all of the disks that are in the paths designated in the ASM_DISKSTRING initialization parameter. For purposes of this discussion, a "disk" is an ASM disk as defined in "Overview of the Components of Automatic Storage Management" on page 12-3.

Disk discovery also takes place when you:
■ Run the ALTER DISKGROUP...ADD DISK and ALTER DISKGROUP...RESIZE DISK commands.
■ Query the V$ASM_DISKGROUP and V$ASM_DISK views.

After a disk is successfully discovered, it appears in the V$ASM_DISK view. Disks that belong to a disk group—that is, that have a disk group name in the disk header—have a header status of MEMBER. Disks that were discovered but that have not yet been assigned to a disk group have a header status of either CANDIDATE or PROVISIONED.

Note: The PROVISIONED header status implies that an additional platform-specific action has been taken by an administrator to make the disk available for ASM. For example, on Windows, the administrator used asmtool or asmtoolg to stamp the disk with a header, or on Linux, the administrator used ASMLib to prepare the disk for ASM.

The following query shows six MEMBER disks and one CANDIDATE disk.
SQL> select name, header_status, path from v$asm_disk; NAME HEADER_STATUS PATH ------------ ------------- ------------------------CANDIDATE /dev/rdsk/disk07 DISK06 MEMBER /dev/rdsk/disk06 DISK05 MEMBER /dev/rdsk/disk05 DISK04 MEMBER /dev/rdsk/disk04 DISK03 MEMBER /dev/rdsk/disk03 DISK02 MEMBER /dev/rdsk/disk02 DISK01 MEMBER /dev/rdsk/disk01 7 rows selected. Discovery Rules Rules for discovery are as follows: ■ ■ ■ ASM discovers no more than 10,000 disks. That is, if more than 10,000 disks match the ASM_DISKSTRING initialization parameter, only the first 10,000 are discovered. ASM does not discover a disk that contains an operating system partition table, even if the disk is in an ASM_DISKSTRING search path and ASM has read/write permission on the disk. If ASM recognizes a disk header as that of an Oracle object, such as the header of an Oracle datafile, the disk is discovered, but can only be added to a disk group with the FORCE keyword. Such a disk appears in V$ASM_DISK with a header status of FOREIGN. In addition, ASM identifies the following configuration errors during discovery: ■ Multiple paths to the same disk 12-12 Oracle Database Administrator’s Guide Administering an Automatic Storage Management Instance In this case, if the disk is part of a disk group, disk group mount fails. If the disk is being added to a disk group with the ADD DISK or CREATE DISKGROUP command, the command fails. To correct the error, restrict ASM_DISKSTRING so that it does not include multiple paths to the same disk, or if you are using multipathing software, ensure that you include only the pseudo-device in ASM_DISKSTRING. ■ Multiple ASM disks with the same disk header This can be caused by a bit copy of one disk onto another. In this case, disk group mount fails. Disk Group Recovery Like any other file system or volume manager, if an ASM instance fails, then all Oracle database instances on the same node as the ASM instance and that use a disk group managed by the ASM instance also fail. In a single ASM instance configuration, if the ASM instance fails while ASM metadata is open for update, then after the ASM instance reinitializes, it reads the disk group log and recovers all transient changes. With multiple ASM instances sharing disk groups, if one ASM instance should fail, another ASM instance automatically recovers transient ASM metadata changes caused by the failed instance. The failure of an Oracle database instance is not significant here because only ASM instances update ASM metadata. Shutting Down an ASM Instance Automatic Storage Management shutdown is initiated by issuing the SHUTDOWN command in SQL*Plus. You must first ensure that the ORACLE_SID environment variable is set to the ASM SID to connect to the ASM instance. Depending on your operating system and whether or not you installed ASM in its own Oracle home, you may have to change other environment variables before starting SQL*Plus. For more information, see "Selecting an Instance with Environment Variables" on page 1-7. % sqlplus /nolog SQL> CONNECT / AS sysdba Connected. SQL> SHUTDOWN NORMAL The table that follows lists the SHUTDOWN modes and describes the behavior of the ASM instance in each mode. SHUTDOWN Mode Action Taken By Automatic Storage Management NORMAL, IMMEDIATE, or TRANSACTIONAL ASM waits for any in-progress SQL to complete before doing an orderly dismount of all disk groups and shutting down the ASM instance. 
If any database instances are connected to the ASM instance, the SHUTDOWN command returns an error and leaves the ASM instance running. ABORT The ASM instance immediately shuts down without the orderly dismount of disk groups. This causes recovery upon the next startup of ASM. If any database instance is connected to the ASM instance, the database instance aborts. It is strongly recommended that you shut down all database instances that use the ASM instance before shutting down the ASM instance. Using Automatic Storage Management 12-13 Administering Automatic Storage Management Disk Groups Administering Automatic Storage Management Disk Groups This section explains how to create and manage your Automatic Storage Management (ASM) disk groups. If you have one or more database instances that use ASM, you can keep them open and running while you administer disk groups. You can administer ASM disk groups with Database Configuration Assistant (DBCA), Enterprise Manager (EM), or SQL*Plus. In addition, the ASM command-line utility, ASMCMD, enables you to easily view and manipulate files and directories within disk groups. This section provides details on administration with SQL*Plus, but includes background information that applies to all administration methods. The SQL statements introduced in this section are only available in an ASM instance. All sample SQL*Plus sessions in this section assume that the ORACLE_SID environment variable is changed to the ASM SID before starting SQL*Plus. Depending on your operating system and whether or not you installed ASM in its own Oracle home, other environment variables may have to be changed as well. For more information, see "Selecting an Instance with Environment Variables" on page 1-7. Note: The following topics are contained in this section: ■ Considerations and Guidelines for Configuring Disk Groups ■ Creating a Disk Group ■ Altering the Disk Membership of a Disk Group ■ Mounting and Dismounting Disk Groups ■ Checking Internal Consistency of Disk Group Metadata ■ Dropping Disk Groups ■ Managing Disk Group Directories ■ Managing Alias Names for ASM Filenames ■ Dropping Files and Associated Aliases from a Disk Group ■ Managing Disk Group Templates See Also: ■ ■ Appendix A of Oracle Database 2 Day DBA for instructions on administering ASM disk groups with Enterprise Manager. Oracle Database Utilities for details on the ASMCMD utility. Considerations and Guidelines for Configuring Disk Groups The following are some considerations and guidelines to be aware of as you configure disk groups. Determining the Number of Disk Groups The following criteria can help you determine the number of disk groups that you create: 12-14 Oracle Database Administrator’s Guide Administering Automatic Storage Management Disk Groups ■ ■ Disks in a given disk group should have similar size and performance characteristics. If you have several different types of disks in terms of size and performance, then it would be better to form several disk groups accordingly. For recovery reasons, you might feel more comfortable having separate disk groups for your database files and flash recovery area files. Using this approach, even with the loss of one disk group, the database would still be intact. Performance Characteristics when Grouping Disks ASM eliminates the need for manual I/O tuning. However, to ensure consistent performance, you should avoid placing dissimilar disks in the same disk group. 
For example, the newest and fastest disks might reside in a disk group reserved for the database work area, and slower drives could reside in a disk group reserved for the flash recovery area. ASM load balances file activity by uniformly distributing file extents across all disks in a disk group. For this technique to be effective it is important that the disks used in a disk group be of similar performance characteristics. There may be situations where it is acceptable to temporarily have disks of different sizes and performance co-existing in a disk group. This would be the case when migrating from an old set of disks to a new set of disks. The new disks would be added and the old disks dropped. As the old disks are dropped, their storage is migrated to the new disks while the disk group is online. Effects of Adding and Dropping Disks from a Disk Group ASM automatically rebalances whenever disks are added or dropped. For a normal drop operation (without the FORCE option), a disk is not released from a disk group until data is moved off of the disk through rebalancing. Likewise, a newly added disk cannot support its share of the I/O workload until rebalancing completes. It is more efficient to add or drop multiple disks at the same time so that they are rebalanced as a single operation. This avoids unnecessary movement of data. For a drop operation, when rebalance is complete, ASM takes the disk offline momentarily, and then drops it, setting disk header status to FORMER. You can add or drop disks without shutting down the database. However, a performance impact on I/O activity may result. How ASM Handles Disk Failures Depending on the redundancy level of a disk group and how failure groups are defined, the failure of one or more disks could result in either of the following: ■ The disks are first taken offline and then automatically dropped. The disk group remains mounted and serviceable, and, thanks to mirroring, all disk group data remains accessible. Following the disk drop, ASM performs a rebalance to restore full redundancy for the data on the failed disks. ■ The entire disk group is automatically dismounted, which means loss of data accessibility. Disk failure in this context means individual spindle failure or failure of another disk subsystem component, such as power supply, a controller, or host bus adapter. Here are the rules for how ASM handles disk failures: ■ A failure group is considered to have failed if at least one disk in the failure group fails. Using Automatic Storage Management 12-15 Administering Automatic Storage Management Disk Groups ■ ■ ■ A normal redundancy disk group can tolerate the failure of one failure group. If only one failure group fails, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks (including the surviving disks in the failed failure group) to restore redundancy for the data in the failed disks. If more than one failure group fails, ASM dismounts the disk group. A high redundancy disk group can tolerate the failure of two failure groups. If one or two failure groups fail, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks to restore redundancy for the data in the failed disks. If more than two failure groups fail, ASM dismounts the disk group. An external redundancy disk group cannot tolerate the failure of any disks in the disk group. Any kind of disk failure causes ASM to dismount the disk group. 
When considering these rules, remember that if a disk is not explicitly assigned to a failure group with the CREATE DISKGROUP command, ASM puts the disk in its own failure group. Also, failure of one disk in a failure group does not affect the other disks in a failure group. For example, a failure group could consist of 6 disks connected to the same disk controller. If one of the 6 disks fails, the other 5 disks can continue to operate. The failed disk is dropped from the disk group and the other 5 remain in the disk group. Depending on the rules stated previously, the disk group may then remain mounted, or it may be dismounted. When ASM drops a disk, the disk is not automatically added back to the disk group when it is repaired or replaced. You must issue an ALTER DISKGROUP...ADD DISK command to return the disk to the disk group. Similarly, when ASM automatically dismounts a disk group, you must issue an ALTER DISKGROUP...MOUNT command to remount the disk group. Failure Groups and Mirroring Mirroring of metadata and user data is achieved through failure groups. System reliability can be hampered if an insufficient number of failure groups are provided. Consequently, failure group configuration is very important to creating a highly reliable system. Here are some rules and guidelines for failure groups: ■ ■ ■ ■ ■ ■ Each disk in a disk group belongs to exactly one failure group. After a disk has been assigned to a failure group, it cannot be reassigned to another failure group. If it needs to be in another failure group, it can be dropped from the disk group and then added back. Because the choice of failure group depends on hardware configuration, a disk does not need to be reassigned unless it is physically moved. It is best if all failure groups are the same size. Failure groups of different sizes can lead to wasted disk space. ASM requires at least two failure groups to create a normal redundancy disk groups and at least three failure groups to create a high redundancy disk group. This implies that if you do not explicitly define failure groups, a normal redundancy disk group requires at least two disks, and a high redundancy disk group requires at least 3 disks. Most systems do not need to explicitly define failure groups. The default behavior of putting every disk in its own failure group works well for most installations. Failure groups are only needed for large, complex systems that need to protect against failures other than individual spindle failures. Choice of failure groups depends on the kinds of failures that need to be tolerated without loss of data availability. For small numbers of disks (fewer than 20) it is 12-16 Oracle Database Administrator’s Guide Administering Automatic Storage Management Disk Groups usually best to use default failure groups, where every disk is in its own failure group. This is true even for large numbers of disks when the main concern is spindle failure. If there is a need to protect against the simultaneous loss of multiple disk drives due to a single component failure, you can specify failure groups. For example, a disk group may be constructed from several small modular disk arrays. If the system needs to continue operation when an entire modular array fails, a failure group should consist of all the disks in one module. If one module fails, a rebalance occurs on the surviving modules to restore redundancy to the data that was on the failed module. 
You must place disks in the same failure group if they depend on a common piece of hardware whose failure needs to be tolerated with no loss of availability. ■ Having a small number of large failure groups may actually reduce availability in some cases. For example, half the disks in a disk group could be on one power supply, while the other half are on a different power supply. If this is used to divide the disk group into two failure groups, tripping the breaker on one power supply could drop half the disks in the disk group. Restoring redundancy for data on the dropped disks would require copying all the data from the surviving disks. This can be done online, but consumes a lot of I/O and leaves the disk group unprotected against a spindle failure during the copy. However, if each disk were in its own failure group, the disk group would be dismounted when the breaker tripped (assuming that this caused more failure groups to fail than the disk group can tolerate). Resetting the breaker would allow the disk group to be manually remounted and no data copying would be needed. Managing Capacity in Disk Groups You must have sufficient spare capacity in each disk group to handle the largest failure that you are willing to tolerate. After one or more disks fail, the process of restoring redundancy for all data requires space from the surviving disks in the disk group. If not enough space remains, some files may end up with reduced redundancy. Reduced redundancy means that one or more extents in the file are not mirrored at the expected level. For example, a reduced redundancy file in a high redundancy disk group has at least one file extent with two or fewer total copies of the extent instead of three. In the case of unprotected files, data extents could be missing altogether. Other causes of reduced redundancy files are disks running out of space or an insufficient number of failure groups. The V$ASM_FILE column REDUNDANCY_LOWERED indicates a file with reduced redundancy. The following guidelines help ensure that you have sufficient space to restore full redundancy for all disk group data after the failure of one or more disks. ■ ■ In a normal redundancy disk group, it is best to have enough free space in your disk group to tolerate the loss of all disks in one failure group. The amount of free space should be equivalent to the size of the largest failure group. In a high redundancy disk group, it is best to have enough free space to cope with the loss of all disks in two failure groups. The amount of free space should be equivalent to the sum of the sizes of the two largest failure groups. The V$ASM_DISKGROUP view contains columns that help you manage capacity: ■ ■ REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be available in the disk group to restore full redundancy after the worst failure that can be tolerated by the disk group. USABLE_FILE_MB indicates the amount of free space, adjusted for mirroring, that is available for new files. Using Automatic Storage Management 12-17 Administering Automatic Storage Management Disk Groups USABLE_FILE_MB is computed by subtracting REQUIRED_MIRROR_FREE_MB from total free space in the disk group and then adjusting for mirroring. For example, in a normal redundancy disk group, where by default mirrored files take up disk space equal to twice their size, if 4 GB of actual usable file space remains, USABLE_FILE_MB equals roughly 2 GB. You can then add up to a 2 GB file. 
The following query shows capacity metrics for a normal redundancy disk group that consists of six 1 GB (1024 MB) disks, each in its own failure group:

select name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
from v$asm_diskgroup;

NAME         TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------------ ------ ---------- ---------- ----------------------- --------------
DISKGROUP1   NORMAL       6144       3768                    1024           1372

The REQUIRED_MIRROR_FREE_MB column shows that 1 GB of extra capacity must be available to restore full redundancy after one or more disks fail. Note that the first three numeric columns in the query results are raw numbers. That is, they do not take redundancy into account. Only the last column is adjusted for normal redundancy. Notice that:

FREE_MB - REQUIRED_MIRROR_FREE_MB = 2 * USABLE_FILE_MB

or

3768 - 1024 = 2 * 1372 = 2744

Negative Values of USABLE_FILE_MB

Due to the relationship between FREE_MB, REQUIRED_MIRROR_FREE_MB, and USABLE_FILE_MB, USABLE_FILE_MB can go negative. Although this is not necessarily a critical situation, it does mean that:
■ Depending on the value of FREE_MB, you may not be able to create new files.
■ The next failure may result in files with reduced redundancy.

If USABLE_FILE_MB becomes negative, it is strongly recommended that you add more space to the disk group as soon as possible.

Scalability

ASM imposes the following limits:
■ 63 disk groups in a storage system
■ 10,000 ASM disks in a storage system
■ 4 petabyte maximum storage for each ASM disk
■ 40 exabyte maximum storage for each storage system
■ 1 million files for each disk group
■ Maximum file sizes as shown in the following table:

Disk Group Type        Maximum File Size
External redundancy    35 TB
Normal redundancy      5.8 TB
High redundancy        3.9 TB

Creating a Disk Group

You use the CREATE DISKGROUP statement to create disk groups. This statement enables you to assign a name to the disk group and to specify the disks that are to be formatted as ASM disks belonging to the disk group. You specify the disks as one or more operating system dependent search strings that ASM then uses to find the disks. You can specify the disks as belonging to specific failure groups, and you can specify the redundancy level for the disk group.

If you want ASM to mirror files, you specify the redundancy level as NORMAL REDUNDANCY (2-way mirroring by default for most file types) or HIGH REDUNDANCY (3-way mirroring for all files). You specify EXTERNAL REDUNDANCY if you want no mirroring by ASM. For example, you might choose EXTERNAL REDUNDANCY if you want to use storage array protection features. See the Oracle Database SQL Reference for more information on redundancy levels. See "Overview of the Components of Automatic Storage Management" on page 12-3 and "Failure Groups and Mirroring" on page 12-16 for information on failure groups.

ASM programmatically determines the size of each disk. If for some reason this is not possible, or if you want to restrict the amount of space used on a disk, you are able to specify a SIZE clause for each disk.

ASM creates operating system–independent names for the disks in a disk group that you can use to reference the disks in other SQL statements. Optionally, you can provide your own name for a disk using the NAME clause. Disk names are available in the V$ASM_DISK view.
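For example, after a disk group is mounted you can list the disk names, failure groups, and discovery paths of its member disks with a query along these lines (a minimal sketch; disk group number 1 is a hypothetical value):

select name, failgroup, path
from v$asm_disk
where group_number = 1;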
The ASM instance ensures that any disk being included in the newly created disk group is addressable and is not already a member of another disk group. This requires reading the first block of the disk to determine if it already belongs to a disk group. If not, a header is written. It is not possible for a disk to be a member of multiple disk groups.

If a disk has a header indicating that it is part of another disk group, you can force it to become a member of the disk group you are creating by specifying the FORCE clause. For example, a disk with an ASM header might have failed temporarily, so that its header could not be cleared when it was dropped from its disk group. After the disk is repaired, it is no longer part of any disk group, but it still has an ASM header. The FORCE flag is required to use the disk in a new disk group. The original disk group must not be mounted, and the disk must have a disk group header, otherwise the operation fails. Note that if you do this, you may cause another disk group to become unusable. If you specify NOFORCE, which is the default, you receive an error if you attempt to include a disk that already belongs to another disk group.

The CREATE DISKGROUP statement mounts the disk group for the first time, and adds the disk group name to the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used and you want the disk group to be automatically mounted at instance startup, then you must remember to add the disk group name to the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance.

Creating a Disk Group: Example

The following examples assume that the ASM_DISKSTRING initialization parameter is set to '/devices/*'. Assume the following:
■ ASM disk discovery identifies the following disks in directory /devices.

/devices/diska1
/devices/diska2
/devices/diska3
/devices/diska4
/devices/diskb1
/devices/diskb2
/devices/diskb3
/devices/diskb4

■ The disks diska1 - diska4 are on a separate SCSI controller from disks diskb1 - diskb4.

The following SQL*Plus session illustrates starting an ASM instance and creating a disk group named dgroup1.

% SQLPLUS /NOLOG
SQL> CONNECT / AS SYSDBA
Connected to an idle instance.
SQL> STARTUP NOMOUNT
SQL> CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  2  FAILGROUP controller1 DISK
  3  '/devices/diska1',
  4  '/devices/diska2',
  5  '/devices/diska3',
  6  '/devices/diska4'
  7  FAILGROUP controller2 DISK
  8  '/devices/diskb1',
  9  '/devices/diskb2',
 10  '/devices/diskb3',
 11  '/devices/diskb4';

In this example, dgroup1 is composed of eight disks that are defined as belonging to either failure group controller1 or controller2. Because NORMAL REDUNDANCY level is specified for the disk group, ASM provides mirroring for each type of database file according to the mirroring settings in the system default templates. For example, in the system default templates shown in Table 12–5 on page 12-30, default redundancy for the online redo log files (ONLINELOG template) for a normal redundancy disk group is MIRROR. This means that when one copy of a redo log file extent is written to a disk in failure group controller1, a mirrored copy of the file extent is written to a disk in failure group controller2. You can see that to support the default mirroring of a normal redundancy disk group, at least two failure groups must be defined.
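To confirm which failure group each disk of the new disk group was assigned to, you could join V$ASM_DISK to V$ASM_DISKGROUP, for example (a hedged sketch that assumes the dgroup1 example above and that the disk group is mounted; ASM stores disk group names in uppercase):

select d.name, d.failgroup, d.path
from v$asm_disk d, v$asm_diskgroup g
where d.group_number = g.group_number
and g.name = 'DGROUP1';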
If you do not specify failure groups, each disk is automatically placed in its own failure group. Note: Because no NAME clauses are provided for any of the disks being included in the disk group, the disks are assigned the names of dgroup1_0001, dgroup1_0002, ..., dgroup1_0008. Note: If you do not provide a NAME clause and you assigned a label to a disk through ASMLib, the label is used as the disk name. Altering the Disk Membership of a Disk Group At a later time after the creation of a disk group, you can change its composition by adding more disks, resizing disks, or dropping disks. You use clauses of the ALTER 12-20 Oracle Database Administrator’s Guide Administering Automatic Storage Management Disk Groups DISKGROUP statement to perform these actions. You can perform multiple operations with one ALTER DISKGROUP statement. ASM automatically rebalances when the composition of a disk group changes. Because rebalancing can be a long running operation, the ALTER DISKGROUP statement by default does not wait until the operation is complete before returning. To monitor progress of these long running operations, query the V$ASM_OPERATION dynamic performance view. If you want the ALTER DISKGROUP statement to wait until the rebalance operation is complete before returning, you can add the REBALANCE WAIT clause. This is especially useful in scripts. The statement also accepts a REBALANCE NOWAIT clause, which invokes the default behavior of conducting the rebalance operation asynchronously in the background. You can interrupt a rebalance running in wait mode by typing CTRL-C on most platforms. This causes the command to return immediately with the message ORA-01013: user requested cancel of current operation, and to continue the operation asynchronously. Typing CTRL-C does not cancel the rebalance operation or any disk add, drop, or resize operations. To control the speed and resource consumption of the rebalance operation, you can include the REBALANCE POWER clause in statements that add, drop, or resize disks. See "Manually Rebalancing a Disk Group" on page 12-24 for more information on this clause. Adding Disks to a Disk Group You use the ADD clause of the ALTER DISKGROUP statement to add disks to a disk group, or to add a failure group to the disk group. The ALTER DISKGROUP clauses that you can use when adding disks to a disk group are similar to those that can be used when specifying the disks to be included when initially creating a disk group. This is discussed in "Creating a Disk Group" on page 12-19. The new disks will gradually start to carry their share of the workload as rebalancing progresses. ASM behavior when adding disks to a disk group is best illustrated through examples. Adding Disks to a Disk Group: Example 1 The following statement adds disks to dgroup1: ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska5' NAME diska5, '/devices/diska6' NAME diska6; Because no FAILGROUP clauses are included in the ALTER DISKGROUP statement, each disk is assigned to its own failure group. The NAME clauses assign names to the disks, otherwise they would have been assigned system-generated names. Note: If you do not provide a NAME clause and you assigned a label to a disk through ASMLib, the label is used as the disk name. Adding Disks to a Disk Group: Example 2 The statements presented in this example demonstrate the interactions of disk discovery with the ADD DISK operation. 
Assume that disk discovery now identifies the following disks in directory /devices:

/devices/diska1 -- member of dgroup1
/devices/diska2 -- member of dgroup1
/devices/diska3 -- member of dgroup1
/devices/diska4 -- member of dgroup1
/devices/diska5 -- candidate disk
/devices/diska6 -- candidate disk
/devices/diska7 -- candidate disk
/devices/diska8 -- candidate disk
/devices/diskb1 -- member of dgroup1
/devices/diskb2 -- member of dgroup1
/devices/diskb3 -- member of dgroup1
/devices/diskb4 -- member of dgroup2
/devices/diskc1 -- member of dgroup2
/devices/diskc2 -- member of dgroup2
/devices/diskc3 -- member of dgroup3
/devices/diskc4 -- candidate disk
/devices/diskd1 -- candidate disk
/devices/diskd2 -- candidate disk
/devices/diskd3 -- candidate disk
/devices/diskd4 -- candidate disk
/devices/diskd5 -- candidate disk
/devices/diskd6 -- candidate disk
/devices/diskd7 -- candidate disk
/devices/diskd8 -- candidate disk

On a server that runs Unix, Solaris, or Linux and that does not have ASMLib installed, issuing the following statement would successfully add disks /devices/diska5 through /devices/diska8 to dgroup1.

ALTER DISKGROUP dgroup1 ADD DISK
   '/devices/diska[5678]';

The following statement would successfully add disks /devices/diska5 and /devices/diskd5 to dgroup1.

ALTER DISKGROUP dgroup1 ADD DISK
   '/devices/disk*5';

The following statement would fail because /devices/diska1 - /devices/diska4 already belong to dgroup1.

ALTER DISKGROUP dgroup1 ADD DISK
   '/devices/diska*';

The following statement would fail because the search string matches disks that are contained in other disk groups. Specifically, /devices/diska4 belongs to disk group dgroup1 and /devices/diskb4 belongs to disk group dgroup2.

ALTER DISKGROUP dgroup1 ADD DISK
   '/devices/disk*4';

The following statement would successfully add /devices/diska5 and /devices/diskd1 through /devices/diskd8 to disk group dgroup1. It does not matter that /devices/diskd5 is included in both search strings. This statement runs with a rebalance power of 5, and does not return until the rebalance operation is complete.

ALTER DISKGROUP dgroup1 ADD DISK
   '/devices/disk*5',
   '/devices/diskd*'
   REBALANCE POWER 5 WAIT;

The following use of the FORCE clause enables /devices/diskc3 to be added to dgroup2, even though it is a current member of dgroup3.

ALTER DISKGROUP dgroup2 ADD DISK
   '/devices/diskc3' FORCE;

For this statement to succeed, dgroup3 cannot be mounted.

Dropping Disks from Disk Groups

To drop disks from a disk group, use the DROP DISK clause of the ALTER DISKGROUP statement. You can also drop all of the disks in specified failure groups using the DROP DISKS IN FAILGROUP clause.

When a disk is dropped, the disk group is rebalanced by moving all of the file extents from the dropped disk to other disks in the disk group. A drop disk operation may fail if not enough space is available on the other disks. If you intend to add some disks and drop others, it is prudent to add disks first to ensure that enough space is available for the drop operation. The best approach is to perform both the add and drop with the same ALTER DISKGROUP statement. This can reduce total time spent rebalancing.

WARNING: The ALTER DISKGROUP...DROP DISK statement returns before the drop and rebalance operations are complete.
Do not reuse, remove, or disconnect the dropped disk until the HEADER_STATUS column for this disk in the V$ASM_DISK view changes to FORMER. You can query the V$ASM_OPERATION view to determine the amount of time remaining for the drop/rebalance operation to complete. For more information, see Oracle Database SQL Reference and Oracle Database Reference. If you specify the FORCE clause for the drop operation, the disk is dropped even if Automatic Storage Management cannot read or write to the disk. You cannot use the FORCE flag when dropping a disk from an external redundancy disk group. Caution: A DROP FORCE operation leaves data at reduced redundancy for as long as it takes for the subsequent rebalance operation to complete. This increases your exposure to data loss if there is a subsequent disk failure during rebalancing. DROP FORCE should be used only with great care. Dropping Disks from Disk Groups: Example 1 This example drops diska5 (the operating system independent name assigned to /devices/diska5 in "Adding Disks to a Disk Group: Example 1" on page 12-21) from disk group dgroup1. ALTER DISKGROUP dgroup1 DROP DISK diska5; Dropping Disks from Disk Groups: Example 2 This example drops diska5 from disk group dgroup1, and also illustrates how multiple actions are possible with one ALTER DISKGROUP statement. ALTER DISKGROUP dgroup1 DROP DISK diska5 ADD FAILGROUP failgrp1 DISK '/devices/diska9' NAME diska9; Using Automatic Storage Management 12-23 Administering Automatic Storage Management Disk Groups Resizing Disks in Disk Groups The RESIZE clause of ALTER DISKGROUP enables you to perform the following operations: ■ Resize all disks in the disk group ■ Resize specific disks ■ Resize all of the disks in a specified failure group If you do not specify a new size in the SIZE clause then ASM uses the size of the disk as returned by the operating system. This could be a means of recovering disk space when you had previously restricted the size of the disk by specifying a size smaller than disk capacity. The new size is written to the ASM disk header record and if the size of the disk is increasing, then the new space is immediately available for allocation. If the size is decreasing, rebalancing must relocate file extents beyond the new size limit to available space below the limit. If the rebalance operation can successfully relocate all extents, then the new size is made permanent, otherwise the rebalance fails. Resizing Disks in Disk Groups: Example The following example resizes all of the disks in failure group failgrp1 of disk group dgroup1. If the new size is greater than disk capacity, the statement will fail. ALTER DISKGROUP dgroup1 RESIZE DISKS IN FAILGROUP failgrp1 SIZE 100G; Undropping Disks in Disk Groups The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel all pending drops of disks within disk groups. If a drop disk operation has already completed, then this statement cannot be used to restore it. This statement cannot be used to restore disks that are being dropped as the result of a DROP DISKGROUP statement, or for disks that are being dropped using the FORCE clause. Undropping Disks in Disk Groups: Example The following example cancels the dropping of disks from disk group dgroup1: ALTER DISKGROUP dgroup1 UNDROP DISKS; Manually Rebalancing a Disk Group You can manually rebalance the files in a disk group using the REBALANCE clause of the ALTER DISKGROUP statement. 
This would normally not be required, because ASM automatically rebalances disk groups when their composition changes. You might want to do a manual rebalance operation if you want to control the speed of what would otherwise be an automatic rebalance operation. The POWER clause of the ALTER DISKGROUP...REBALANCE statement specifies the degree of parallelization, and thus the speed of the rebalance operation. It can be set to a value from 0 to 11. A value of 0 halts a rebalancing operation until the statement is either implicitly or explicitly reinvoked. The default rebalance power is set by the ASM_POWER_LIMIT initialization parameter. See "Tuning Rebalance Operations" on page 12-9 for more information. The power level of an ongoing rebalance operation can be changed by entering the rebalance statement with a new level. The ALTER DISKGROUP...REBALANCE command by default returns immediately so that you can issue other commands while the rebalance operation takes place 12-24 Oracle Database Administrator’s Guide Administering Automatic Storage Management Disk Groups asynchronously in the background. You can query the V$ASM_OPERATION view for the status of the rebalance operation. If you want the ALTER DISKGROUP...REBALANCE command to wait until the rebalance operation is complete before returning, you can add the WAIT keyword to the REBALANCE clause. This is especially useful in scripts. The command also accepts a NOWAIT keyword, which invokes the default behavior of conducting the rebalance operation asynchronously. You can interrupt a rebalance running in wait mode by typing CTRL-C on most platforms. This causes the command to return immediately with the message ORA-01013: user requested cancel of current operation, and to continue the rebalance operation asynchronously. Additional rules for the rebalance operation include the following: ■ ■ ■ ■ The ALTER DISKGROUP...REBALANCE statement uses the resources of the single node upon which it is started. ASM can perform one rebalance at a time on a given instance. Rebalancing continues across a failure of the ASM instance performing the rebalance. The REBALANCE clause (with its associated POWER and WAIT/NOWAIT keywords) can also be used in ALTER DISKGROUP commands that add, drop, or resize disks. Manually Rebalancing a Disk Group: Example The following example manually rebalances the disk group dgroup2. The command does not return until the rebalance operation is complete. ALTER DISKGROUP dgroup2 REBALANCE POWER 5 WAIT; See Also: "Tuning Rebalance Operations" on page 12-9 Mounting and Dismounting Disk Groups Disk groups that are specified in the ASM_DISKGROUPS initialization parameter are mounted automatically at ASM instance startup. This makes them available to all database instances running on the same node as ASM. The disk groups are dismounted at ASM instance shutdown. ASM also automatically mounts a disk group when you initially create it, and dismounts a disk group if you drop it. There may be times that you want to mount or dismount disk groups manually. For these actions use the ALTER DISKGROUP...MOUNT or ALTER DISKGROUP...DISMOUNT statement. You can mount or dismount disk groups by name, or specify ALL. If you try to dismount a disk group that contains open files, the statement will fail, unless you also specify the FORCE clause. 
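For example, a statement along the following lines dismounts a disk group even though files in it are open (the disk group name is hypothetical, and the exact form of the FORCE clause should be verified in Oracle Database SQL Reference); use it with care, because connected database instances lose access to the disk group:

ALTER DISKGROUP dgroup1 DISMOUNT FORCE;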
Dismounting Disk Groups: Example The following statement dismounts all disk groups that are currently mounted to the ASM instance: ALTER DISKGROUP ALL DISMOUNT; Mounting Disk Groups: Example The following statement mounts disk group dgroup1: ALTER DISKGROUP dgroup1 MOUNT; Using Automatic Storage Management 12-25 Administering Automatic Storage Management Disk Groups Checking Internal Consistency of Disk Group Metadata You can check the internal consistency of disk group metadata using the ALTER DISKGROUP...CHECK statement. Checking can be specified for specific files in a disk group, specific disks or all disks in a disk group, or specific failure groups within a disk group. The disk group must be mounted in order to perform these checks. If any errors are detected, an error message is displayed and details of the errors are written to the alert log. Automatic Storage Management attempts to correct any errors, unless you specify the NOREPAIR clause in your ALTER DISKGROUP...CHECK statement. The following statement checks for consistency in the metadata for all disks in the dgroup1 disk group: ALTER DISKGROUP dgroup1 CHECK ALL; See Oracle Database SQL Reference for additional CHECK clause syntax. Dropping Disk Groups The DROP DISKGROUP statement enables you to delete an ASM disk group and optionally, all of its files. You can specify the INCLUDING CONTENTS clause if you want any files that may still be contained in the disk group also to be deleted. The default is EXCLUDING CONTENTS, which provides syntactic consistency and prevents you from dropping the disk group if it has any contents The ASM instance must be started and the disk group must be mounted with none of the disk group files open, in order for the DROP DISKGROUP statement to succeed. The statement does not return until the disk group has been dropped. When you drop a disk group, ASM dismounts the disk group and removes the disk group name from the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used, and the disk group is mentioned in the ASM_DISKGROUPS initialization parameter, then you must remember to remove the disk group name from the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance. The following statement deletes dgroup1: DROP DISKGROUP dgroup1; After ensuring that none of the files contained in dgroup1 are open, ASM rewrites the header of each disk in the disk group to remove ASM formatting information. The statement does not specify INCLUDING CONTENTS, so the drop operation will fail if the disk group contains any files. Managing Disk Group Directories ASM disk groups contain a system-generated hierarchical directory structure for storing ASM files. The system-generated filename that ASM assigns to each file represents a path in this directory hierarchy. The following is an example of a system-generated filename: +dgroup2/sample/controlfile/Current.256.541956473 The plus sign represents the root of the ASM file system. The dgroup2 directory is the parent directory for all files in the dgroup2 disk group. The sample directory is the parent directory for all files in the sample database, and the controlfile directory contains all control files for the sample database. 12-26 Oracle Database Administrator’s Guide Administering Automatic Storage Management Disk Groups You can create your own directories within this hierarchy to store aliases that you create. 
Thus, in addition to having user-friendly alias names for ASM files, you can have user-friendly paths to those names. This section describes how to use the ALTER DISKGROUP statement to create a directory structure for aliases. It also describes how you can rename a directory or drop a directory. See Also: ■ ■ The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases and directories in ASM disk groups. "About ASM Filenames" on page 12-34 for a discussion of ASM filenames and how they are formed Creating a New Directory Use the ADD DIRECTORY clause of the ALTER DISKGROUP statement to create a hierarchical directory structure for alias names for ASM files. Use the slash character (/) to separate components of the directory path. The directory path must start with the disk group name, preceded by a plus sign (+), followed by any subdirectory names of your choice. The parent directory must exist before attempting to create a subdirectory or alias in that directory. The following statement creates a hierarchical directory for disk group dgroup1, which can contain, for example, the alias name +dgroup1/mydir/control_file1: Creating a New Directory: Example 1 ALTER DISKGROUP dgroup1 ADD DIRECTORY '+dgroup1/mydir'; Assuming no subdirectory exists under the directory +dgoup1/mydir, the following statement fails: Creating a New Directory: Example 2 ALTER DISKGROUP dgroup1 ADD DIRECTORY '+dgroup1/mydir/first_dir/second_dir'; Renaming a Directory The RENAME DIRECTORY clause of the ALTER DISKGROUP statement enables you to rename a directory. System created directories (those containing system-generated names) cannot be renamed. Renaming a Directory: Example The following statement renames a directory: ALTER DISKGROUP dgroup1 RENAME DIRECTORY '+dgroup1/mydir' TO '+dgroup1/yourdir'; Dropping a Directory You can delete a directory using the DROP DIRECTORY clause of the ALTER DISKGROUP statement. You cannot drop a system created directory. You cannot drop a directory containing alias names unless you also specify the FORCE clause. Dropping a Directory: Example This statement deletes a directory along with its contents: ALTER DISKGROUP dgroup1 DROP DIRECTORY '+dgroup1/yourdir' FORCE; Using Automatic Storage Management 12-27 Administering Automatic Storage Management Disk Groups Managing Alias Names for ASM Filenames Alias names (or just "aliases") are intended to provide a more user-friendly means of referring to ASM files, rather than using the system-generated filenames. You can create an alias for a file when you create it in the database, or you can add an alias to an existing file using the ADD ALIAS clause of the ALTER DISKGROUP statement. You can create an alias in any system-generated or user-created ASM directory. You cannot create an alias at the root level (+), however. The V$ASM_ALIAS view contains a row for every alias name known to the ASM instance. This view also contains ASM directories. V$ASM_ALIAS also contains a row for every system-generated filename. These rows are indicated by the value 'Y' in the column SYSTEM_CREATED. Note: The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases. See Also: Adding an Alias Name for an ASM Filename Use the ADD ALIAS clause of the ALTER DISKGROUP statement to create an alias name for an ASM filename. The alias name must consist of the full directory path and the alias itself. 
Adding an Alias Name for an ASM Filename: Example 1 The following statement adds a new alias name for a system-generated file name: ALTER DISKGROUP dgroup1 ADD ALIAS '+dgroup1/mydir/second.dbf' FOR '+dgroup1/sample/datafile/mytable.342.3'; Adding an Alias Name for as ASM Filename: Example 2 This statement illustrates another means of specifying the ASM filename for which the alias is to be created. It uses the numeric form of the ASM filename, which is an abbreviated and derived form of the system-generated filename. ALTER DISKGROUP dgroup1 ADD ALIAS '+dgroup1/mydir/second.dbf' FOR '+dgroup1.342.3'; Renaming an Alias Name for an ASM Filename Use the RENAME ALIAS clause of the ALTER DISKGROUP statement to rename an alias for an ASM filename. The old and the new alias names must consist of the full directory paths of the alias names. Renaming an Alias Name for an ASM Filename: Example The following statement renames an alias: ALTER DISKGROUP dgroup1 RENAME ALIAS '+dgroup1/mydir/datafile.dbf' TO '+dgroup1/payroll/compensation.dbf'; Dropping an Alias Name for an ASM Filename Use the DROP ALIAS clause of the ALTER DISKGROUP statement to drop an alias for an ASM filename. The alias name must consist of the full directory path and the alias itself. The underlying file to which the alias refers is unchanged. 12-28 Oracle Database Administrator’s Guide Administering Automatic Storage Management Disk Groups Dropping an Alias Name for an ASM Filename: Example 1 The following statement drops an alias: ALTER DISKGROUP dgroup1 DROP ALIAS '+dgroup1/payroll/compensation.dbf'; Dropping an Alias Name for an ASM Filename: Example 2 The following statement will fail because it attempts to drop a system-generated filename. This is not allowed: ALTER DISKGROUP dgroup1 DROP ALIAS '+dgroup1/sample/datafile/mytable.342.3'; Dropping Files and Associated Aliases from a Disk Group You can delete ASM files and their associated alias names from a disk group using the DROP FILE clause of the ALTER DISKGROUP statement. You must use a fully qualified filename, a numeric filename, or an alias name when specifying the file that you want to delete. Some reasons why you might need to delete files include: ■ ■ Files created using aliases are not Oracle-managed files. Consequently, they are not automatically deleted. A point in time recovery of a database might restore the database to a time before a tablespace was created. The restore does not delete the tablespace, but there is no reference to the tablespace (or its datafile) in the restored database. You can manually delete the datafile. Dropping an alias does not drop the underlying file on the file system. Dropping Files and Associated Aliases from a Disk Group: Example 1 The following statement uses the alias name for the file to delete both the file and the alias: ALTER DISKGROUP dgroup1 DROP FILE '+dgroup1/payroll/compensation.dbf'; Dropping Files and Associated Aliases from a Disk Group: Example 2 In this example the system-generated filename is used to drop the file and any associated alias: ALTER DISKGROUP dgroup1 DROP FILE '+dgroup1/sample/datafile/mytable.342.372642'; Managing Disk Group Templates Templates are used to set redundancy (mirroring) and striping attributes of files created in an ASM disk group. When a file is created, redundancy and striping attributes are set for that file based on an explicitly named template or the system template that is the default template for the file type. 
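You can list the templates that a mounted disk group currently has, together with their attribute settings, by querying the V$ASM_TEMPLATE view, for example (a minimal sketch; disk group number 1 is a hypothetical value):

select * from v$asm_template where group_number = 1;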
When a disk group is created, ASM creates a set of default templates for that disk group. The set consists of one template for each file type (data file, control file, redo log file, and so on) supported by ASM. For example, a template named ONLINELOG provides the default file redundancy and striping attributes for all redo log files written to ASM disks. Default template settings depend on the disk group type. For example, the default template for datafiles for a normal redundancy disk group sets 2-way mirroring, while the corresponding default template in a high redundancy disk group sets 3-way mirroring. You can modify these default templates.

Table 12–5 lists the default templates and the attributes that they apply to matching files. As the table shows, the initial redundancy value of each default template depends on the type of disk group that the template belongs to.

Note: The striping attribute of templates applies to all disk group types (normal redundancy, high redundancy, and external redundancy). However, the mirroring attribute of templates applies only to normal redundancy disk groups, and is ignored for high-redundancy disk groups (where every file is always 3-way mirrored) and external redundancy disk groups (where no files are mirrored by ASM). Nevertheless, each type of disk group gets a full set of templates, and the redundancy value in each template is always set to the proper default for the disk group type.

Using clauses of the ALTER DISKGROUP statement, you can add new templates to a disk group, modify existing ones, or drop templates. The reason to add templates is to create the right combination of attributes to meet unique requirements. You can then reference a template name when creating a file, thereby assigning desired attributes on an individual file basis rather than on the basis of file type. The V$ASM_TEMPLATE view lists all of the templates known to the ASM instance.

Template Attributes

Table 12–3 shows the permitted striping attribute values, and Table 12–4 shows the permitted redundancy values for ASM templates. These values correspond to the STRIPE and REDUND columns of V$ASM_TEMPLATE.

Table 12–3 Permitted Values for ASM Template Striping Attribute

Striping Attribute Value    Description
FINE                        Striping in 128KB chunks.
COARSE                      Striping in 1MB chunks.

Table 12–4 Permitted Values for ASM Template Redundancy Attribute

Redundancy        Resulting Mirroring in          Resulting Mirroring in        Resulting Mirroring in
Attribute Value   Normal Redundancy Disk Group    High Redundancy Disk Group    External Redundancy Disk Group
MIRROR            2-way mirroring                 3-way mirroring               (Not allowed)
HIGH              3-way mirroring                 3-way mirroring               (Not allowed)
UNPROTECTED       No mirroring                    (Not allowed)                 No mirroring

Table 12–5 ASM System Default Templates Attribute Settings

Template Name     Striping   Mirroring, Normal          Mirroring, High            Mirroring, External
                             Redundancy Disk Group      Redundancy Disk Group      Redundancy Disk Group
CONTROLFILE       FINE       HIGH                       HIGH                       UNPROTECTED
DATAFILE          COARSE     MIRROR                     HIGH                       UNPROTECTED
ONLINELOG         FINE       MIRROR                     HIGH                       UNPROTECTED
ARCHIVELOG        COARSE     MIRROR                     HIGH                       UNPROTECTED
TEMPFILE          COARSE     MIRROR                     HIGH                       UNPROTECTED
BACKUPSET         COARSE     MIRROR                     HIGH                       UNPROTECTED
PARAMETERFILE     COARSE     MIRROR                     HIGH                       UNPROTECTED
DATAGUARDCONFIG   COARSE     MIRROR                     HIGH                       UNPROTECTED
FLASHBACK         FINE       MIRROR                     HIGH                       UNPROTECTED
CHANGETRACKING    COARSE     MIRROR                     HIGH                       UNPROTECTED
DUMPSET           COARSE     MIRROR                     HIGH                       UNPROTECTED
XTRANSPORT        COARSE     MIRROR                     HIGH                       UNPROTECTED
AUTOBACKUP        COARSE     MIRROR                     HIGH                       UNPROTECTED

Adding Templates to a Disk Group

To add a new template for a disk group, you use the ADD TEMPLATE clause of the ALTER DISKGROUP statement. You specify the name of the template, its redundancy attribute, and its striping attribute.

Note: If the name of your new template is not one of the names listed in Table 12–5, it is not used as a default template for database file types. To use it, you must reference its name when creating a file. See "About ASM Filenames" on page 12-34 for more information.

The syntax of the ALTER DISKGROUP command for adding a template is as follows:

ALTER DISKGROUP disk_group_name ADD TEMPLATE template_name
   ATTRIBUTES ([{MIRROR|HIGH|UNPROTECTED}] [{FINE|COARSE}]);

Both types of attribute are optional. If no redundancy attribute is specified, the value defaults to MIRROR for a normal redundancy disk group, HIGH for a high redundancy disk group, and UNPROTECTED for an external redundancy disk group. If no striping attribute is specified, the value defaults to COARSE.

Adding Templates to a Disk Group: Example 1

The following statement creates a new template named reliable for the normal redundancy disk group dgroup2:

ALTER DISKGROUP dgroup2 ADD TEMPLATE reliable ATTRIBUTES (HIGH FINE);

Adding Templates to a Disk Group: Example 2

This statement creates a new template named unreliable that specifies files are to be unprotected (no mirroring). (Oracle discourages the use of unprotected files unless hardware mirroring is in place; this example is presented only to further illustrate how the attributes for templates are set.)

ALTER DISKGROUP dgroup2 ADD TEMPLATE unreliable ATTRIBUTES (UNPROTECTED);

See Also: Oracle Database SQL Reference for more information on the ALTER DISKGROUP...ADD TEMPLATE command.

Modifying a Disk Group Template

The ALTER TEMPLATE clause of the ALTER DISKGROUP statement enables you to modify the attribute specifications of an existing system default or user-defined disk group template. Only specified template properties are changed. Unspecified properties retain their current value.

When you modify an existing template, only new files created by the template will reflect the attribute changes. Existing files maintain their attributes.

Modifying a Disk Group Template: Example

The following example changes the striping attribute specification of the reliable template for disk group dgroup2.

ALTER DISKGROUP dgroup2 ALTER TEMPLATE reliable ATTRIBUTES (COARSE);

Dropping Templates from a Disk Group

Use the DROP TEMPLATE clause of the ALTER DISKGROUP statement to drop one or more templates from a disk group. You can only drop templates that are user-defined; you cannot drop system default templates.
Dropping Templates from a Disk Group: Example

This example drops the previously created template unreliable from dgroup2:

ALTER DISKGROUP dgroup2 DROP TEMPLATE unreliable;

Using Automatic Storage Management in the Database

This section discusses how you use Automatic Storage Management (ASM) to manage database files.

Note: This section does not address Real Application Clusters environments. For this information, see Oracle Real Application Clusters Installation and Configuration Guide and Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.

When you use ASM, Oracle database files are stored in ASM disk groups. Your Oracle Database then benefits from the improved performance, improved resource utilization, and higher availability provided by ASM. Most ASM files are Oracle-managed files. See Table 12–7 and Chapter 11, "Using Oracle-Managed Files" for more information.

Note: ASM supports good forward and backward compatibility between 10.x versions of Oracle Database and 10.x versions of ASM. That is, any combination of versions 10.1.x.y and 10.2.x.y for either the ASM instance or the database instance works correctly, with one caveat: For a 10.1.x.y database instance to connect to a 10.2.x.y ASM instance, the database must be version 10.1.0.3 or later.

When mixing software versions, ASM functionality reverts to the functionality of the earliest version in use. For example, a 10.1.0.3 database working with a 10.2.0.0 ASM instance does not exploit new features in ASM 10.2.0.0. Conversely, a 10.2.0.0 database working with a 10.1.0.2 ASM instance behaves as a 10.1.0.2 database instance when interacting with ASM. Two columns in the view V$ASM_CLIENT, SOFTWARE_VERSION and COMPATIBLE_VERSION, display version information. These columns are described in Oracle Database Reference.

Oracle database files that are stored in disk groups are not visible to the operating system or its utilities, but are visible to RMAN, ASMCMD, and to the XML DB repository.

The following topics are contained in this section:
■ What Types of Files Does ASM Support?
■ About ASM Filenames
■ Starting the ASM and Database Instances
■ Creating and Referencing ASM Files in the Database
■ Creating a Database in ASM
■ Creating Tablespaces in ASM
■ Creating Redo Logs in ASM
■ Creating a Control File in ASM
■ Creating Archive Log Files in ASM
■ Recovery Manager (RMAN) and ASM

What Types of Files Does ASM Support?

ASM supports most file types required by the database. However, most administrative files cannot be stored in an ASM disk group. These include trace files, audit files, alert logs, backup files, export files, tar files, and core files.

Table 12–6 lists file types, indicates if they are supported, and lists the system default template that provides the attributes for file creation. Some of the file types shown in the table are related to specific products or features, and are not discussed in this book.

Table 12–6 File Types Supported by Automatic Storage Management

File Type                                            Supported   Default Templates
Control files                                        yes         CONTROLFILE
Datafiles                                            yes         DATAFILE
Redo log files                                       yes         ONLINELOG
Archive log files                                    yes         ARCHIVELOG
Trace files                                          no          n/a
Temporary files                                      yes         TEMPFILE
Datafile backup pieces                               yes         BACKUPSET
Datafile incremental backup pieces                   yes         BACKUPSET
Archive log backup piece                             yes         BACKUPSET
Datafile copy                                        yes         DATAFILE
Persistent initialization parameter file (SPFILE)    yes         PARAMETERFILE
Disaster recovery configurations                     yes         DATAGUARDCONFIG
Flashback logs                                       yes         FLASHBACK
Change tracking file                                 yes         CHANGETRACKING
Data Pump dumpset                                    yes         DUMPSET
Automatically generated control file backup         yes         AUTOBACKUP
Cross-platform transportable datafiles               yes         XTRANSPORT
Operating system files                               no          n/a

See Also: "Managing Disk Group Templates" on page 12-29 for a description of the system default templates

About ASM Filenames

Every file created in ASM gets a system-generated filename, otherwise known as a fully qualified filename. The fully qualified filename represents a complete path name in the ASM file system. An example of a fully qualified filename is:

+dgroup2/sample/controlfile/Current.256.541956473

You can use the fully qualified filename to reference (read or retrieve) an ASM file. You can also use other abbreviated filename formats, described later in this section, to reference an ASM file.

ASM generates a fully qualified filename upon any request to create a file. A creation request does not—in fact, cannot—specify a fully qualified filename. Instead, it uses a simpler syntax to specify a file, such as an alias or just a disk group name. ASM then creates the file, placing it in the correct ASM "path" according to file type, and then assigns an appropriate fully qualified filename. If you specify an alias in the creation request, ASM also creates the alias so that it references the fully qualified filename.

ASM file creation requests are either single file creation requests or multiple file creation requests.

Single File Creation Request

This is a request to create a single file, such as a datafile or a control file. The form of the ASM filename in this type of request is either an alias (such as +dgroup2/control/ctl.f) or a disk group name preceded by a plus sign. You use the alias or disk group name where a filename is called for in a statement, such as CREATE TABLESPACE or CREATE CONTROLFILE.

Note: '/' and '\' are interchangeable in filenames. Filenames are case insensitive, but case retentive.

Multiple File Creation Request

This is a request that can occur multiple times to create an ASM file. For example, if you assign a value to the initialization parameter DB_CREATE_FILE_DEST, you can issue a CREATE TABLESPACE statement (without a filename specification) multiple times. Each time, ASM creates a different unique datafile name.

One form of the ASM filename to use in this type of request is an incomplete filename, which is just a disk group name preceded by a plus sign. In this case, you set DB_CREATE_FILE_DEST to an incomplete filename (for example, +dgroup2), and whenever a command is executed that must create a database file in DB_CREATE_FILE_DEST, the file is created in the designated disk group and assigned a unique fully qualified name. You can use an incomplete filename in other *_DEST initialization parameters.

Every time ASM creates a fully qualified name, it writes an alert log message containing the ASM-generated name.
You can also find the generated name in database views displaying Oracle file names, such as V$DATAFILE and V$LOGFILE. You can use this name, or an abbreviated form of it, if you later need to reference an ASM file in a SQL statement.

Note: Like other Oracle database filenames, ASM filenames are kept in the control file and the RMAN catalog.

The sections that follow provide details on each of the six possible forms of an ASM filename:
■ Fully Qualified ASM Filename
■ Numeric ASM Filename
■ Alias ASM Filenames
■ Alias ASM Filename with Template
■ Incomplete ASM Filename
■ Incomplete ASM Filename with Template

The following table specifies the valid contexts for each filename form, and if the form is used for file creation, whether the created file is an Oracle-managed file (OMF).

Table 12–7 Valid Contexts for the ASM Filename Forms

Filename Form                      Reference   Single-File Creation   Multiple File Creation   Created as Oracle-managed?
Fully qualified filename           Yes         No                     No                       No
Numeric filename                   Yes         No                     No                       No
Alias filename                     Yes         Yes                    No                       No
Alias with template filename       No          Yes                    No                       No
Incomplete filename                No          Yes                    Yes                      Yes
Incomplete filename with template  No          Yes                    Yes                      Yes

Note: Fully qualified and numeric filenames can be used in single-file create if you specify the REUSE keyword, as described in "Using ASM Filenames in SQL Statements" on page 12-41.

Fully Qualified ASM Filename

This form of ASM filename can be used for referencing existing ASM files. It is the filename that ASM always automatically generates when an ASM file is created. A fully qualified filename has the following form:

+group/dbname/file_type/file_type_tag.file.incarnation

where:
■ +group is the disk group name preceded by a plus sign. You can think of the plus sign (+) as the root directory of the ASM file system, like the slash (/) in Unix operating systems.
■ dbname is the DB_UNIQUE_NAME of the database to which the file belongs.
■ file_type is the Oracle file type and can be one of the file types shown in Table 12–8.
■ file_type_tag is type specific information about the file and can be one of the tags shown in Table 12–8.
■ file.incarnation is the file/incarnation pair, used to ensure uniqueness.

An example of a fully qualified ASM filename is:

+dgroup2/sample/controlfile/Current.256.541956473

Table 12–8 Oracle File Types and Automatic Storage Management File Type Tags

file_type        Description                              file_type_tag                  Comments
CONTROLFILE      Control files and backup control files   Current, Backup                --
DATAFILE         Datafiles and datafile copies            tsname                         Tablespace into which the file is added
ONLINELOG        Online logs                              group_group#                   --
ARCHIVELOG       Archive logs                             thread_thread#_seq_sequence#   --
TEMPFILE         Tempfiles                                tsname                         Tablespace into which the file is added
BACKUPSET        Datafile and archive log backup          hasspfile_timestamp            hasspfile can take one of two values: s indicates that the
                 pieces; datafile incremental backup                                      backup set includes the spfile; n indicates that the backup
                 pieces                                                                   set does not include the spfile.
PARAMETERFILE    Persistent parameter files               spfile                         --
Oracle File Types and Automatic Storage Management File Type Tags (columns: file_type, description, file_type_tag, comments):

DATAGUARDCONFIG: Data Guard configuration file. Tag: db_unique_name. Data Guard tries to use the service provider name if it is set; otherwise the tag defaults to DRCname.

FLASHBACK: Flashback logs. Tag: log_log#.

CHANGETRACKING: Block change tracking data. Tag: ctf. Used during incremental backups.

DUMPSET: Data Pump dumpset. Tag: user_obj#_file#. Dump set files encode the user name, the job number that created the dump set, and the file number as part of the tag.

XTRANSPORT: Datafile convert. Tag: tsname.

AUTOBACKUP: Automatic backup files. Tag: hasspfile_timestamp. hasspfile can take one of two values: s indicates that the backup set includes the spfile; n indicates that the backup set does not include the spfile.

Numeric ASM Filename

The numeric ASM filename can be used for referencing existing files. It is derived from the fully qualified ASM filename and takes the form:

+group.file.incarnation

Numeric ASM filenames can be used in any interface that requires an existing file name. An example of a numeric ASM filename is:

+dgroup2.257.541956473

Alias ASM Filenames

Alias ASM filenames, otherwise known as aliases, can be used both for referencing existing ASM files and for creating new ASM files. Alias names start with the disk group name preceded by a plus sign, after which you specify a name string of your choosing. Alias filenames are implemented using a hierarchical directory structure, with the slash (/) or backslash (\) character separating name components. You can create an alias in any system-generated or user-created ASM directory. You cannot create an alias at the root level (+), however.

When you create an ASM file with an alias filename, the file is created with a fully qualified name, and the alias filename is additionally created. You can then access the file with either name.

Alias ASM filenames are distinguished from fully qualified filenames or numeric filenames because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers. Examples of ASM alias filenames are:

+dgroup1/myfiles/control_file1
+dgroup2/mydir/second.dbf

Oracle Database references database files by their alias filenames, but only if you create the database files with aliases. If you create database files without aliases and then add aliases later, the database references the files by their fully qualified filenames. The following are examples of how the database uses alias filenames:

■ Alias filenames appear in V$ views. For example, if you create a tablespace and use an alias filename for the datafile, the V$DATAFILE view shows the alias filename.
■ When a control file points to datafiles and online redo log files, it can use alias filenames.
■ The CONTROL_FILES initialization parameter can use the alias filenames of the control files. The Database Configuration Assistant (DBCA) creates control files with alias filenames.

Note: Files created using an alias filename are not considered Oracle-managed files and may require manual deletion in the future if they are no longer needed.

See Also:
■ "Managing Alias Names for ASM Filenames" on page 12-28
■ The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases and directories in ASM disk groups.
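As a brief SQL illustration of working with aliases, the following sketch adds a user-created directory and an alias for an existing ASM file; the statements are issued while connected to the ASM instance, and the disk group, directory, and file names are hypothetical:

ALTER DISKGROUP dgroup1 ADD DIRECTORY '+dgroup1/myfiles';
ALTER DISKGROUP dgroup1 ADD ALIAS '+dgroup1/myfiles/datafile01.dbf'
  FOR '+dgroup1/sample/datafile/users.259.555341963';

After these statements, the file can be referenced by either the alias or its fully qualified filename.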
Alias ASM Filename with Template

An alias ASM filename with template is used only for ASM file creation operations. It has the following format:

+dgroup(template_name)/alias

Alias filenames with template behave identically to alias filenames. The only difference is that a file created with an alias filename with template receives the mirroring and striping attributes specified by the named template. The template must belong to the disk group in which the file is being created. The creation and maintenance of ASM templates is discussed in "Managing Disk Group Templates" on page 12-29.

An example of an alias ASM filename with template is:

+dgroup1(my_template)/config1

Explicitly specifying a template name, as in this example, overrides the system default template for the type of file being created.

Note: Files created using an alias filename with template are not considered Oracle-managed files and may require manual deletion in the future if they are no longer needed.

Incomplete ASM Filename

Incomplete ASM filenames are used only for file creation operations and are used for both single and multiple file creation. They consist only of the disk group name. ASM uses a system default template to determine the ASM file mirroring and striping attributes. The system template that is used is determined by the file type that is being created. For example, if you are creating a datafile for a tablespace, the datafile template is used.

An example of using an incomplete ASM filename is setting the DB_CREATE_FILE_DEST initialization parameter to:

+dgroup1

With this setting, every time you create a tablespace, a datafile is created in the disk group dgroup1, and each datafile is assigned a different fully qualified name. See "Creating ASM Files Using a Default Disk Group Specification" on page 12-40 for more information.

Incomplete ASM Filename with Template

Incomplete ASM filenames with templates are used only for file creation operations and are used for both single and multiple file creation. They consist of the disk group name followed by the template name in parentheses. When you explicitly specify a template in a file name, ASM uses the specified template instead of the default template for that file type to determine mirroring and striping attributes for the file.

An example of using an incomplete ASM filename with template is setting the DB_CREATE_FILE_DEST initialization parameter to:

+dgroup1(my_template)

Starting the ASM and Database Instances

Start the ASM and database instances in the following order:

1. Start the ASM instance. You start the ASM instance on the same node as the database before you start the database instance. Starting an ASM instance is discussed in "Starting Up an ASM Instance" on page 12-10.

2. Start the database instance. Consider the following before you start your database instance:

■ When using SQL*Plus to start the database, you must first set the ORACLE_SID environment variable to the database SID. If ASM and the database have different Oracle homes, you must also set the ORACLE_HOME environment variable. Depending on platform, you may have to change other environment variables as well.

■ You must have the INSTANCE_TYPE initialization parameter set as follows:

INSTANCE_TYPE = RDBMS

This is the default.
■ If you want ASM to be the default destination for creating database files, you must specify an incomplete ASM filename in one or more of the following initialization parameters (see "Creating ASM Files Using a Default Disk Group Specification" on page 12-40): – DB_CREATE_FILE_DEST – DB_CREATE_ONLINE_LOG_DEST_n – DB_RECOVERY_FILE_DEST Using Automatic Storage Management 12-39 Using Automatic Storage Management in the Database ■ – CONTROL_FILES – LOG_ARCHIVE_DEST_n – LOG_ARCHIVE_DEST – STANDBY_ARCHIVE_DEST Some additional initialization parameter considerations: – LOG_ARCHIVE_FORMAT is ignored if a disk group is specified for LOG_ARCHIVE_DEST (for example, LOG_ARCHIVE_DEST = +dgroup1). – DB_BLOCK_SIZE must be one of the standard block sizes (2K, 4K, 8K, 16K or 32K bytes). – LARGE_POOL_SIZE must be set to at least 1 MB. Your database instance is now able to create ASM files. You can keep your database instance open and running when you reconfigure disk groups. When you add or remove disks from a disk group, ASM automatically rebalances file data in the reconfigured disk group to ensure a balanced I/O load, even while the database is running. Creating and Referencing ASM Files in the Database ASM files are Oracle-managed files unless you created the file using an alias. Any Oracle-managed file is automatically deleted when it is no longer needed. An ASM file is deleted if the creation fails. Creating ASM Files Using a Default Disk Group Specification Using the Oracle-managed files feature for operating system files, you can specify a directory as the default location for the creation of datafiles, tempfiles, redo log files, and control files. Using the Oracle-managed files feature for ASM, you can specify a disk group, in the form of an incomplete ASM filename, as the default location for creation of these files, and additional types of files, including archived log files. As for operating system files, the name of the default disk group is stored in an initialization parameter and is used whenever a file specification (for example, DATAFILE clause) is not explicitly specified during file creation. The following initialization parameters accept the multiple file creation context form of ASM filenames as a destination: Initialization Parameter Description DB_CREATE_FILE_DEST Specifies the default disk group location in which to create: ■ Datafiles ■ Tempfiles If DB_CREATE_ONLINE_LOG_DEST_n is not specified, then also specifies the default disk group for: DB_CREATE_ONLINE_LOG_DEST_n 12-40 Oracle Database Administrator’s Guide ■ Redo log files ■ Control file Specifies the default disk group location in which to create: ■ Redo log files ■ Control files Using Automatic Storage Management in the Database Initialization Parameter Description DB_RECOVERY_FILE_DEST If this parameter is specified and DB_CREATE_ONLINE_LOG_DEST_n and CONTROL_FILES are not specified, then this parameter specifies a default disk group for a flash recovery area that contains a copy of: ■ Control file ■ Redo log files If no local archive destination is specified, then this parameter implicitly sets LOG_ARCHIVE_DEST_10 to the USE_DB_RECOVERY_FILE_DEST value. Specifies a disk group in which to create control files. 
CONTROL_FILES The following initialization parameters accept the multiple file creation context form of the ASM filenames and ASM directory names as a destination: Initialization Parameter Description LOG_ARCHIVE_DEST_n Specifies a default disk group or ASM directory as destination for archiving redo log files LOG_ARCHIVE_DEST Optional parameter to use to specify a default disk group or ASM directory as destination for archiving redo log files. Use when specifying only one destination. STANDBY_ARCHIVE_DEST Relevant only for a standby database in managed recovery mode. It specifies a default disk group or ASM directory that is the location of archive logs arriving from a primary database. Not discussed in this book. See Oracle Data Guard Concepts and Administration. The following example illustrates how an ASM file, in this case a datafile, might be created in a default disk group. Creating a Datafile Using a Default Disk Group: Example Assume the following initialization parameter setting: DB_CREATE_FILE_DEST = '+dgroup1' The following statement creates tablespace tspace1. CREATE TABLESPACE tspace1; ASM automatically creates and manages the datafile for tspace1 on ASM disks in the disk group dgroup1. File extents are stored using the attributes defined by the default template for a datafile. Using ASM Filenames in SQL Statements You can specify ASM filenames in the file specification clause of your SQL statements. If you are creating a file for the first time, use the creation form of an ASM filename. If the ASM file already exists, you must use the reference context form of the filename, and if you are trying to re-create the file, you must add the REUSE keyword. The space will be reused for the new file. This usage might occur when, for example, trying to re-create a control file, as shown in "Creating a Control File in ASM" on page 12-44. If a reference context form is used with the REUSE keyword and the file does not exist, an error results. Using Automatic Storage Management 12-41 Using Automatic Storage Management in the Database Partially created files resulting from system errors are automatically deleted. Using an ASM Filename in a SQL Statement: Example The following is an example of specifying an ASM filename in a SQL statement. In this case, it is used in the file creation context: CREATE TABLESPACE tspace2 DATAFILE '+dgroup2' SIZE 200M AUTOEXTEND ON; The tablespace tspace2 is created and is comprised of one datafile of size 200M contained in the disk group dgroup2. The datafile is set to auto-extensible with an unlimited maximum size. An AUTOEXTEND clause can be used to override this default. Creating a Database in ASM The recommended method of creating your database is to use the Database Configuration Assistant (DBCA). However, if you choose to create your database manually using the CREATE DATABASE statement, then ASM enables you to create a database and all of its underlying files with a minimum of input from you. The following is an example of using the CREATE DATABASE statement, where database files are created and managed automatically by ASM. Creating a Database in ASM: Example This example creates a database with the following ASM files: ■ ■ ■ ■ ■ A SYSTEM tablespace datafile in disk group dgroup1. A SYSAUX tablespace datafile in disk group dgroup1. The tablespace is locally managed with automatic segment-space management. A multiplexed online redo log is created with two online log groups, one member of each in dgroup1 and dgroup2 (flash recovery area). 
If automatic undo management mode is enabled, then an undo tablespace datafile in disk group dgroup1. If no CONTROL_FILES initialization parameter is specified, then two control files, one in dgroup1 and another in dgroup2 (flash recovery area). The control file in dgroup1 is the primary control file.

The following initialization parameter settings are included in the initialization parameter file:

DB_CREATE_FILE_DEST = '+dgroup1'
DB_RECOVERY_FILE_DEST = '+dgroup2'
DB_RECOVERY_FILE_DEST_SIZE = 10G

The following statement is issued at the SQL prompt:

SQL> CREATE DATABASE sample;

Creating Tablespaces in ASM

When ASM creates a datafile for a permanent tablespace (or a tempfile for a temporary tablespace), the datafile is set to auto-extensible with an unlimited maximum size and 100 MB default size. You can use the AUTOEXTEND clause to override this default extensibility and the SIZE clause to override the default size.

Automatic Storage Management applies attributes to the datafile, as specified in the system default template for a datafile as shown in the table in "Managing Disk Group Templates" on page 12-29. You can also create and specify your own template.

A tablespace can contain both ASM files and non-ASM files as a result of the tablespace history. RMAN commands enable non-ASM files to be relocated to an ASM disk group and enable ASM files to be relocated as non-ASM files.

The following are some examples of creating tablespaces using Automatic Storage Management. The examples assume that disk groups have already been configured.

See Also:
■ Oracle Database Backup and Recovery Basics
■ Oracle Database Backup and Recovery Advanced User's Guide

Creating a Tablespace in ASM: Example 1

This example illustrates "out of the box" usage of Automatic Storage Management. You let Automatic Storage Management create and manage the tablespace datafile for you, using Oracle-supplied defaults that are adequate for most situations.

Assume the following initialization parameter setting:

DB_CREATE_FILE_DEST = '+dgroup2'

The following statement creates the tablespace and its datafile:

CREATE TABLESPACE tspace2;

Creating a Tablespace in ASM: Example 2

The following statements create a tablespace that uses a user-defined template (assume it has been defined) to specify the redundancy and striping attributes of the datafile:

SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+dgroup1(my_template)';
SQL> CREATE TABLESPACE tspace3;

Creating a Tablespace in ASM: Example 3

The following statement creates an undo tablespace with a datafile that has an alias name, and with attributes that are set by the user-defined template my_undo_template. It assumes that a directory has been created in disk group dgroup3 to contain the alias name and that the user-defined template exists. Because an alias is used to create the datafile, the file is not an Oracle-managed file and will not be automatically deleted when the tablespace is dropped.

CREATE UNDO TABLESPACE myundo
  DATAFILE '+dgroup3(my_undo_template)/myfiles/my_undo_ts' SIZE 200M;

The following statement drops the file manually after the tablespace has been dropped:

ALTER DISKGROUP dgroup3 DROP FILE '+dgroup3/myfiles/my_undo_ts';

Creating Redo Logs in ASM

Online redo logs can be created in multiple disk groups, either implicitly in the initialization parameter file or explicitly in an ALTER DATABASE...ADD LOGFILE statement.
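As a sketch of the explicit approach, assuming disk groups dgroup1 and dgroup2 exist, a log file group with one member in each disk group might be added as follows:

ALTER DATABASE ADD LOGFILE ('+dgroup1', '+dgroup2') SIZE 100M;   -- ASM names each member automatically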
Each online log should have one log member in multiple disk groups. The filenames for log file members are automatically generated. All partially created redo log files, created as a result of a system error, are automatically deleted. Using Automatic Storage Management 12-43 Using Automatic Storage Management in the Database Adding New Redo Log Files: Example The following example creates a log file with a member in each of the disk groups dgroup1 and dgroup2. The following parameter settings are included in the initialization parameter file: DB_CREATE_ONLINE_LOG_DEST_1 = '+dgroup1' DB_CREATE_ONLINE_LOG_DEST_2 = '+dgroup2' The following statement is issued at the SQL prompt: ALTER DATABASE ADD LOGFILE; Creating a Control File in ASM Control files can be explicitly created in multiple disk groups. The filenames for control files are automatically generated. If an attempt to create a control file fails, partially created control files will be automatically be deleted. There may be times when you need to specify a control file by name. Alias filenames are provided to allow administrators to reference ASM files with human-understandable names. The use of an alias in the specification of the control file during its creation allows the DBA to later refer to the control file with a human-meaningful name. Furthermore, an alias can be specified as a control file name in the CONTROL_FILES initialization parameter. Control files created without aliases can be given alias names at a later time. The ALTER DISKGROUP...CREATE ALIAS statement is used for this purpose. When creating a control file, datafiles and log files stored in an ASM disk group should be given to the CREATE CONTROLFILE command using the file reference context form of their ASM filenames. However, the use of the RESETLOGS option requires the use of a file creation context form for the specification of the log files. Creating a Control File in ASM: Example 1 The following CREATE CONTROLFILE statement is generated by an ALTER DATABASE BACKUP CONTROLFILE TO TRACE command for a database with datafiles and log files created on disk groups dgroup1 and dgroup2: CREATE CONTROLFILE REUSE DATABASE "SAMPLE" NORESETLOGS ARCHIVELOG MAXLOGFILES 16 MAXLOGMEMBERS 2 MAXDATAFILES 30 MAXINSTANCES 1 MAXLOGHISTORY 226 LOGFILE GROUP 1 ( '+DGROUP1/db/onlinelog/group_1.258.541956457', '+DGROUP2/db/onlinelog/group_1.256.541956473' ) SIZE 100M, GROUP 2 ( '+DGROUP1/db/onlinelog/group_2.257.541956477', '+DGROUP2/db/onlinelog/group_2.258.541956487' ) SIZE 100M DATAFILE '+DGROUP1/db/datafile/system.260.541956497', '+DGROUP1/db/datafile/sysaux.259.541956511' CHARACTER SET US7ASCII ; 12-44 Oracle Database Administrator’s Guide Using Automatic Storage Management in the Database Creating a Control File in ASM: Example 2 This example is a CREATE CONTROLFILE statement for a database with datafiles, but uses a RESETLOGS clause, and thus uses the creation context form for log files: CREATE CONTROLFILE REUSE DATABASE "SAMPLE" RESETLOGS ARCHIVELOG MAXLOGFILES 16 MAXLOGMEMBERS 2 MAXDATAFILES 30 MAXINSTANCES 1 MAXLOGHISTORY 226 LOGFILE GROUP 1 ( '+DGROUP1', '+DGROUP2' ) SIZE 100M, GROUP 2 ( '+DGROUP1', '+DGROUP2' ) SIZE 100M DATAFILE '+DGROUP1/db/datafile/system.260.541956497', '+DGROUP1/db/datafile/sysaux.259.541956511' CHARACTER SET US7ASCII ; Creating Archive Log Files in ASM Disk groups can be specified as archive log destinations in the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_n initialization parameters. 
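For example, a hedged sketch of directing one archive destination to a disk group (dgroup2 is assumed to exist) is:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=+dgroup2' SCOPE=BOTH;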
When destinations are specified in this manner, the archive log filename will be unique, even if archived twice. All partially created archive files, created as a result of a system error, are automatically deleted. If LOG_ARCHIVE_DEST is set to a disk group name, LOG_ARCHIVE_FORMAT is ignored. Unique filenames for archived logs are automatically created by the Oracle database. If LOG_ARCHIVE_DEST is set to a directory in a disk group, LOG_ARCHIVE_FORMAT has its normal semantics. The following sample archive log names might be generated with DB_RECOVERY_FILE_DEST set to +dgroup2. SAMPLE is the value of the DB_UNIQUE_NAME parameter: +DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_1_seq_38.614.541956473 +DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_4_seq_35.609.541956477 +DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_2_seq_34.603.541956487 +DGROUP2/SAMPLE/ARCHIVELOG/2003_09_25/thread_3_seq_100.621.541956497 +DGROUP2/SAMPLE/ARCHIVELOG/2003_09_25/thread_1_seq_38.614.541956511 Recovery Manager (RMAN) and ASM RMAN is critical to ASM and is responsible for tracking the ASM filenames and for deleting obsolete ASM files. Because ASM files cannot be copied through normal operating system interfaces (other than with the XML DB repository through FTP or HTTP/WebDAV), RMAN is the preferred means of copying ASM files. RMAN is the only method for performing backups of a database containing ASM files. RMAN can also be used for moving databases or files into ASM storage. Using Automatic Storage Management 12-45 Migrating a Database to Automatic Storage Management See Also: ■ Oracle Database Backup and Recovery Basics ■ Oracle Database Backup and Recovery Advanced User's Guide Migrating a Database to Automatic Storage Management With a new installation of Oracle Database and Automatic Storage Management (ASM), you initially create your database in ASM. If you have an existing Oracle database that stores database files in the operating system file system or on raw devices, you can migrate some or all of these database files to ASM. There are two ways to migrate database files to ASM: ■ Manually, using RMAN For detailed instructions on migrating a database to ASM using RMAN, see Oracle Database Backup and Recovery Advanced User's Guide. You can also use RMAN to migrate a single tablespace or datafile to ASM. ■ With the Migrate Database To ASM wizard in Enterprise Manager You access the wizard from the Administration page, under the Change Database heading. Refer to the Enterprise Manager online help for instructions on using the wizard. Accessing Automatic Storage Management Files with the XML DB Virtual Folder Automatic Storage Management (ASM) files and directories can be accessed through a virtual folder in the XML DB repository. The repository path to the virtual folder is /sys/asm. The folder is virtual because its contents do not actually reside in the repository; they exist as normal ASM files and directories. /sys/asm provides a means to access and manipulate the ASM files and directories with programmatic APIs such as the DBMS_XDB package and with XML DB protocols such as FTP and HTTP/WebDAV. A typical use for this capability might be to view /sys/asm as a Web Folder in a graphical user interface (with the WebDAV protocol), and then copy a Data Pump dumpset from an ASM disk group to an operating system file system by dragging and dropping. You must log in as a user other than SYS and you must have been granted the DBA role to access /sys/asm with XML DB protocols. 
The FTP protocol is initially disabled for a new XML DB installation. To enable it, you must set the FTP port to a non-zero value. The easiest way to do this is with the catxdbdbca.sql script. This script takes two arguments. The first is the FTP port number, and the second is the HTTP/WebDAV port number. The following example configures the FTP port number to 7787, and the HTTP/WebDAV port number to 8080: Note: SQL> @?/rdbms/admin/catxdbdbca.sql 7787 8080 Another way to set these port numbers is with the XDB Configuration page in Enterprise Manager. 12-46 Oracle Database Administrator’s Guide Accessing Automatic Storage Management Files with the XML DB Virtual Folder See Also: Oracle XML DB Developer's Guide for information on Oracle XML DB, including additional ways to configure port numbers for the XML DB protocol servers, and Oracle Database PL/SQL Packages and Types Reference for information on the DBMS_XDB package. Inside /sys/asm The ASM virtual folder is created by default during XML DB installation. If the database is not configured to use ASM, the folder is empty and no operations are permitted on it. The ASM virtual folder contains folders and subfolders that follow the hierarchy defined by the structure of an ASM fully qualified file name. Thus, /sys/asm contains one subfolder for every mounted disk group, and each disk group folder contains one subfolder for each database that uses the disk group. (In addition, a disk group folder may contain files and folders corresponding to aliases created by the administrator.) Continuing the hierarchy, the database folders contain file type folders, which contain the ASM files. This hierarchy is shown in the following diagram, which for simplicity, excludes directories created for aliases. /sys/asm Disk Groups DATA RECOVERY Databases HR MFG HR MFG File Types DATAFILE TEMPFILE CONTROLFILE ONLINELOG CONTROLFILE ONLINELOG ARCHIVELOG Restrictions The following are usage restrictions on /sys/asm: ■ ■ You cannot create hard links to existing ASM files or directories with APIs such as DBMS_XDB.LINK. You cannot rename (move) an ASM file to another disk group or to a directory outside ASM. Using Automatic Storage Management 12-47 Viewing Information About Automatic Storage Management Sample FTP Session In the following sample FTP session, the disk groups are DATA and RECOVERY, the database name is MFG, and dbs is a directory that was created for aliases. All files in /sys/asm are binary. ftp> open myhost 7777 ftp> user system ftp> passwd dba ftp> cd /sys/asm ftp> ls DATA RECOVERY ftp> cd DATA ftp> ls dbs MFG ftp> cd dbs ftp> ls t_dbl.f t_axl.f ftp> binary ftp> get t_dbl.f t_axl.f ftp> put t_db2.f Viewing Information About Automatic Storage Management You can use these views to query information about Automatic Storage Management: View Description V$ASM_DISKGROUP In an ASM instance, describes a disk group (number, name, size related info, state, and redundancy type). In a DB instance, contains one row for every ASM disk group mounted by the local ASM instance. This view performs disk discovery every time it is queried. V$ASM_DISK In an ASM instance, contains one row for every disk discovered by the ASM instance, including disks that are not part of any disk group. In a DB instance, contains rows only for disks in the disk groups in use by that DB instance. This view performs disk discovery every time it is queried. V$ASM_DISKGROUP_STAT Has the same columns as V$ASM_DISKGROUP, but to reduce overhead, does not perform a discovery when it is queried. 
It therefore does not return information on any disks that are new to the storage system. For the most accurate data, use V$ASM_DISKGROUP instead. V$ASM_DISK_STAT Has the same columns as V$ASM_DISK, but to reduce overhead, does not perform a discovery when it is queried. It therefore does not return information on any disks that are new to the storage system. For the most accurate data, use V$ASM_DISK instead. V$ASM_FILE In an ASM instance, contains one row for every ASM file in every disk group mounted by the ASM instance. In a DB instance, contains no rows. 12-48 Oracle Database Administrator’s Guide Viewing Information About Automatic Storage Management View Description V$ASM_TEMPLATE In an ASM or DB instance, contains one row for every template present in every disk group mounted by the ASM instance. V$ASM_ALIAS In an ASM instance, contains one row for every alias present in every disk group mounted by the ASM instance. In a DB instance, contains no rows. V$ASM_OPERATION In an ASM instance, contains one row for every active ASM long running operation executing in the ASM instance. In a DB instance, contains no rows. V$ASM_CLIENT In an ASM instance, identifies databases using disk groups managed by the ASM instance. In a DB instance, contains one row for the ASM instance if the database has any open ASM files. Oracle Database Reference for details on all of these dynamic performance views See Also: Using Automatic Storage Management 12-49 Viewing Information About Automatic Storage Management 12-50 Oracle Database Administrator’s Guide Part IV Schema Objects Part IV describes the creation and maintenance of schema objects in the Oracle Database. It includes the following chapters: ■ Chapter 13, "Managing Schema Objects" ■ Chapter 14, "Managing Space for Schema Objects" ■ Chapter 15, "Managing Tables" ■ Chapter 16, "Managing Indexes" ■ Chapter 17, "Managing Partitioned Tables and Indexes" ■ Chapter 18, "Managing Clusters" ■ Chapter 19, "Managing Hash Clusters" ■ Chapter 20, "Managing Views, Sequences, and Synonyms" ■ Chapter 21, "Using DBMS_REPAIR to Repair Data Block Corruption" 13 Managing Schema Objects This chapter describes schema object management issues that are common across multiple types of schema objects. The following topics are presented: ■ Creating Multiple Tables and Views in a Single Operation ■ Analyzing Tables, Indexes, and Clusters ■ Truncating Tables and Clusters ■ Enabling and Disabling Triggers ■ Managing Integrity Constraints ■ Renaming Schema Objects ■ Managing Object Dependencies ■ Managing Object Name Resolution ■ Switching to a Different Schema ■ Displaying Information About Schema Objects Creating Multiple Tables and Views in a Single Operation You can create several tables and views and grant privileges in one operation using the CREATE SCHEMA statement. The CREATE SCHEMA statement is useful if you want to guarantee the creation of several tables, views, and grants in one operation. If an individual table, view or grant fails, the entire statement is rolled back. None of the objects are created, nor are the privileges granted. Specifically, the CREATE SCHEMA statement can include only CREATE TABLE, CREATE VIEW, and GRANT statements. You must have the privileges necessary to issue the included statements. You are not actually creating a schema, that is done when the user is created with a CREATE USER statement. Rather, you are populating the schema. 
The following statement creates two tables and a view that joins data from the two tables: CREATE SCHEMA AUTHORIZATION scott CREATE TABLE dept ( deptno NUMBER(3,0) PRIMARY KEY, dname VARCHAR2(15), loc VARCHAR2(25)) CREATE TABLE emp ( empno NUMBER(5,0) PRIMARY KEY, ename VARCHAR2(15) NOT NULL, job VARCHAR2(10), Managing Schema Objects 13-1 Analyzing Tables, Indexes, and Clusters mgr NUMBER(5,0), hiredate DATE DEFAULT (sysdate), sal NUMBER(7,2), comm NUMBER(7,2), deptno NUMBER(3,0) NOT NULL CONSTRAINT dept_fkey REFERENCES dept) CREATE VIEW sales_staff AS SELECT empno, ename, sal, comm FROM emp WHERE deptno = 30 WITH CHECK OPTION CONSTRAINT sales_staff_cnst GRANT SELECT ON sales_staff TO human_resources; The CREATE SCHEMA statement does not support Oracle Database extensions to the ANSI CREATE TABLE and CREATE VIEW statements, including the STORAGE clause. Oracle Database SQL Reference for syntax and other information about the CREATE SCHEMA statement See Also: Analyzing Tables, Indexes, and Clusters You analyze a schema object (table, index, or cluster) to: ■ Collect and manage statistics for it ■ Verify the validity of its storage format ■ Identify migrated and chained rows of a table or cluster Do not use the COMPUTE and ESTIMATE clauses of ANALYZE to collect optimizer statistics. These clauses are supported for backward compatibility. Instead, use the DBMS_STATS package, which lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. The cost-based optimizer, which depends upon statistics, will eventually use only statistics that have been collected by DBMS_STATS. See Oracle Database PL/SQL Packages and Types Reference for more information on the DBMS_STATS package. Note: You must use the ANALYZE statement (rather than DBMS_STATS) for statistics collection not related to the cost-based optimizer, such as: ■ To use the VALIDATE or LIST CHAINED ROWS clauses ■ To collect information on freelist blocks The following topics are discussed in this section: ■ Using DBMS_STATS to Collect Table and Index Statistics ■ Validating Tables, Indexes, Clusters, and Materialized Views ■ Listing Chained Rows of Tables and Clusters Using DBMS_STATS to Collect Table and Index Statistics You can use the DBMS_STATS package or the ANALYZE statement to gather statistics about the physical storage characteristics of a table, index, or cluster. These statistics 13-2 Oracle Database Administrator’s Guide Analyzing Tables, Indexes, and Clusters are stored in the data dictionary and can be used by the optimizer to choose the most efficient execution plan for SQL statements accessing analyzed objects. Oracle recommends using the more versatile DBMS_STATS package for gathering optimizer statistics, but you must use the ANALYZE statement to collect statistics unrelated to the optimizer, such as empty blocks, average space, and so forth. The DBMS_STATS package allows both the gathering of statistics, including utilizing parallel execution, and the external manipulation of statistics. Statistics can be stored in tables outside of the data dictionary, where they can be manipulated without affecting the optimizer. Statistics can be copied between databases or backup copies can be made. 
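As a brief illustration (the schema and table names are hypothetical), optimizer statistics for a single table and its indexes might be gathered with:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',
    tabname => 'EMP',
    cascade => TRUE);   -- also gather statistics for the table's indexes
END;
/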
The following DBMS_STATS procedures enable the gathering of optimizer statistics: ■ GATHER_INDEX_STATS ■ GATHER_TABLE_STATS ■ GATHER_SCHEMA_STATS ■ GATHER_DATABASE_STATS See Also: ■ ■ Oracle Database Performance Tuning Guide for information about using DBMS_STATS to gather statistics for the optimizer Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_STATS package Validating Tables, Indexes, Clusters, and Materialized Views To verify the integrity of the structure of a table, index, cluster, or materialized view, use the ANALYZE statement with the VALIDATE STRUCTURE option. If the structure is valid, no error is returned. However, if the structure is corrupt, you receive an error message. For example, in rare cases such as hardware or other system failures, an index can become corrupted and not perform correctly. When validating the index, you can confirm that every entry in the index points to the correct row of the associated table. If the index is corrupt, you can drop and re-create it. If a table, index, or cluster is corrupt, you should drop it and re-create it. If a materialized view is corrupt, perform a complete refresh and ensure that you have remedied the problem. If the problem is not corrected, drop and re-create the materialized view. The following statement analyzes the emp table: ANALYZE TABLE emp VALIDATE STRUCTURE; You can validate an object and all related objects (for example, indexes) by including the CASCADE option. The following statement validates the emp table and all associated indexes: ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE; You can specify that you want to perform structure validation online while DML is occurring against the object being validated. There can be a slight performance impact when validating with ongoing DML affecting the object, but this is offset by the flexibility of being able to perform ANALYZE online. The following statement validates the emp table and all associated indexes online: Managing Schema Objects 13-3 Analyzing Tables, Indexes, and Clusters ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE ONLINE; Listing Chained Rows of Tables and Clusters You can look at the chained and migrated rows of a table or cluster using the ANALYZE statement with the LIST CHAINED ROWS clause. The results of this statement are stored in a specified table created explicitly to accept the information returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room for updates to rows. Creating a CHAINED_ROWS Table To create the table to accept data returned by an ANALYZE...LIST CHAINED ROWS statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script. These scripts are provided by the database. They create a table named CHAINED_ROWS in the schema of the user submitting the script. Your choice of script to execute for creating the CHAINED_ROWS table is dependent upon the compatibility level of your database and the type of table you are analyzing. See the Oracle Database SQL Reference for more information. Note: After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement. 
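To create the table from SQL*Plus, the script can typically be run as in the following sketch; the exact file name and location vary by platform and release, so verify them under your Oracle home:

SQL> @?/rdbms/admin/utlchain.sql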
For example, the following statement inserts rows containing information about the chained rows in the emp_dept cluster into the CHAINED_ROWS table:

ANALYZE CLUSTER emp_dept LIST CHAINED ROWS INTO CHAINED_ROWS;

See Also:
■ Oracle Database Reference for a description of the CHAINED_ROWS table
■ "Using the Segment Advisor" on page 14-16 for information on how the Segment Advisor reports tables with excess row chaining.

Eliminating Migrated or Chained Rows in a Table

You can use the information in the CHAINED_ROWS table to reduce or eliminate migrated and chained rows in an existing table. Use the following procedure.

1. Use the ANALYZE statement to collect information about migrated and chained rows.

ANALYZE TABLE order_hist LIST CHAINED ROWS;

2. Query the output table:

SELECT * FROM CHAINED_ROWS
WHERE TABLE_NAME = 'ORDER_HIST';

OWNER_NAME TABLE_NAME CLUST... HEAD_ROWID         TIMESTAMP
---------- ---------- -------- ------------------ ---------
SCOTT      ORDER_HIST ...      AAAAluAAHAAAAA1AAA 04-MAR-96
SCOTT      ORDER_HIST ...      AAAAluAAHAAAAA1AAB 04-MAR-96
SCOTT      ORDER_HIST ...      AAAAluAAHAAAAA1AAC 04-MAR-96

The output lists all rows that are either migrated or chained.

3. If the output table shows that you have many migrated or chained rows, then you can eliminate migrated rows by continuing through the following steps:

4. Create an intermediate table with the same columns as the existing table to hold the migrated and chained rows:

CREATE TABLE int_order_hist
  AS SELECT *
     FROM order_hist
     WHERE ROWID IN
       (SELECT HEAD_ROWID
        FROM CHAINED_ROWS
        WHERE TABLE_NAME = 'ORDER_HIST');

5. Delete the migrated and chained rows from the existing table:

DELETE FROM order_hist
WHERE ROWID IN
  (SELECT HEAD_ROWID
   FROM CHAINED_ROWS
   WHERE TABLE_NAME = 'ORDER_HIST');

6. Insert the rows of the intermediate table into the existing table:

INSERT INTO order_hist
SELECT * FROM int_order_hist;

7. Drop the intermediate table:

DROP TABLE int_order_hist;

8. Delete the information collected in step 1 from the output table:

DELETE FROM CHAINED_ROWS
WHERE TABLE_NAME = 'ORDER_HIST';

9. Use the ANALYZE statement again, and query the output table. Any rows that appear in the output table are chained. You can eliminate chained rows only by increasing your data block size. It might not be possible to avoid chaining in all situations. Chaining is often unavoidable with tables that have a LONG column or large CHAR or VARCHAR2 columns.

Truncating Tables and Clusters

You can delete all rows of a table or all rows in a group of clustered tables so that the table (or cluster) still exists, but is completely empty. For example, consider a table that contains monthly data, and at the end of each month, you need to empty it (delete all rows) after archiving its data.

To delete all rows from a table, you have the following options:

■ Use the DELETE statement.
■ Use the DROP and CREATE statements.
■ Use the TRUNCATE statement.

These options are discussed in the following sections.
Also, as each row is deleted, triggers can be fired. The space previously allocated to the resulting empty table or cluster remains associated with that object. With DELETE you can choose which rows to delete, whereas TRUNCATE and DROP affect the entire object. Oracle Database SQL Reference for syntax and other information about the DELETE statement See Also: Using DROP and CREATE You can drop a table and then re-create the table. For example, the following statements drop and then re-create the emp table: DROP TABLE emp; CREATE TABLE emp ( ... ); When dropping and re-creating a table or cluster, all associated indexes, integrity constraints, and triggers are also dropped, and all objects that depend on the dropped table or clustered table are invalidated. Also, all grants for the dropped table or clustered table are dropped. Using TRUNCATE You can delete all rows of the table using the TRUNCATE statement. For example, the following statement truncates the emp table: TRUNCATE TABLE emp; Using the TRUNCATE statement provides a fast, efficient method for deleting all rows from a table or cluster. A TRUNCATE statement does not generate any undo information and it commits immediately. It is a DDL statement and cannot be rolled back. A TRUNCATE statement does not affect any structures associated with the table being truncated (constraints and triggers) or authorizations. A TRUNCATE statement also specifies whether space currently allocated for the table is returned to the containing tablespace after truncation. You can truncate any table or cluster in your own schema. Any user who has the DROP ANY TABLE system privilege can truncate a table or cluster in any schema. Before truncating a table or clustered table containing a parent key, all referencing foreign keys in different tables must be disabled. A self-referential constraint does not have to be disabled. As a TRUNCATE statement deletes rows from a table, triggers associated with the table are not fired. Also, a TRUNCATE statement does not generate any audit information corresponding to DELETE statements if auditing is enabled. Instead, a single audit record is generated for the TRUNCATE statement being issued. See the Oracle Database Security Guide for information about auditing. 13-6 Oracle Database Administrator’s Guide Enabling and Disabling Triggers A hash cluster cannot be truncated, nor can tables within a hash or index cluster be individually truncated. Truncation of an index cluster deletes all rows from all tables in the cluster. If all the rows must be deleted from an individual clustered table, use the DELETE statement or drop and re-create the table. The REUSE STORAGE or DROP STORAGE options of the TRUNCATE statement control whether space currently allocated for a table or cluster is returned to the containing tablespace after truncation. The default option, DROP STORAGE, reduces the number of extents allocated to the resulting table to the original setting for MINEXTENTS. Freed extents are then returned to the system and can be used by other objects. Alternatively, the REUSE STORAGE option specifies that all space currently allocated for the table or cluster remains allocated to it. For example, the following statement truncates the emp_dept cluster, leaving all extents previously allocated for the cluster available for subsequent inserts and deletes: TRUNCATE CLUSTER emp_dept REUSE STORAGE; The REUSE or DROP STORAGE option also applies to any associated indexes. 
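To illustrate the parent key requirement mentioned above, the following sketch assumes the dept and emp tables from the earlier CREATE SCHEMA example, where emp references dept through the dept_fkey constraint:

ALTER TABLE emp DISABLE CONSTRAINT dept_fkey;   -- disable the referencing foreign key
TRUNCATE TABLE dept;                            -- truncate the parent table
-- re-enable only after emp no longer contains rows that reference the truncated parent keys
ALTER TABLE emp ENABLE CONSTRAINT dept_fkey;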
When a table or cluster is truncated, all associated indexes are also truncated. The storage parameters for a truncated table, cluster, or associated indexes are not changed as a result of the truncation. Oracle Database SQL Reference for syntax and other information about the TRUNCATE TABLE and TRUNCATE CLUSTER statements See Also: Enabling and Disabling Triggers Database triggers are procedures that are stored in the database and activated ("fired") when specific conditions occur, such as adding a row to a table. You can use triggers to supplement the standard capabilities of the database to provide a highly customized database management system. For example, you can create a trigger to restrict DML operations against a table, allowing only statements issued during regular business hours. Database triggers can be associated with a table, schema, or database. They are implicitly fired when: ■ ■ ■ DML statements are executed (INSERT, UPDATE, DELETE) against an associated table Certain DDL statements are executed (for example: ALTER, CREATE, DROP) on objects within a database or schema A specified database event occurs (for example: STARTUP, SHUTDOWN, SERVERERROR) This is not a complete list. See the Oracle Database SQL Reference for a full list of statements and database events that cause triggers to fire Create triggers with the CREATE TRIGGER statement. They can be defined as firing BEFORE or AFTER the triggering event, or INSTEAD OF it. The following statement creates a trigger scott.emp_permit_changes on table scott.emp. The trigger fires before any of the specified statements are executed. CREATE TRIGGER scott.emp_permit_changes BEFORE DELETE OR INSERT OR UPDATE ON scott.emp . Managing Schema Objects 13-7 Enabling and Disabling Triggers . . pl/sql block . . . You can later remove a trigger from the database by issuing the DROP TRIGGER statement. A trigger can be in either of two distinct modes: ■ Enabled An enabled trigger executes its trigger body if a triggering statement is issued and the trigger restriction, if any, evaluates to true. By default, triggers are enabled when first created. ■ Disabled A disabled trigger does not execute its trigger body, even if a triggering statement is issued and the trigger restriction (if any) evaluates to true. To enable or disable triggers using the ALTER TABLE statement, you must own the table, have the ALTER object privilege for the table, or have the ALTER ANY TABLE system privilege. To enable or disable an individual trigger using the ALTER TRIGGER statement, you must own the trigger or have the ALTER ANY TRIGGER system privilege. See Also: ■ ■ ■ Oracle Database Concepts for a more detailed description of triggers Oracle Database SQL Reference for syntax of the CREATE TRIGGER statement Oracle Database Application Developer's Guide - Fundamentals for information about creating and using triggers Enabling Triggers You enable a disabled trigger using the ALTER TRIGGER statement with the ENABLE option. To enable the disabled trigger named reorder on the inventory table, enter the following statement: ALTER TRIGGER reorder ENABLE; To enable all triggers defined for a specific table, use the ALTER TABLE statement with the ENABLE ALL TRIGGERS option. 
To enable all triggers defined for the INVENTORY table, enter the following statement: ALTER TABLE inventory ENABLE ALL TRIGGERS; Oracle Database SQL Reference for syntax and other information about the ALTER TRIGGER statement See Also: Disabling Triggers Consider temporarily disabling a trigger if one of the following conditions is true: ■ An object that the trigger references is not available. 13-8 Oracle Database Administrator’s Guide Managing Integrity Constraints ■ ■ You must perform a large data load and want it to proceed quickly without firing triggers. You are loading data into the table to which the trigger applies. You disable a trigger using the ALTER TRIGGER statement with the DISABLE option. To disable the trigger reorder on the inventory table, enter the following statement: ALTER TRIGGER reorder DISABLE; You can disable all triggers associated with a table at the same time using the ALTER TABLE statement with the DISABLE ALL TRIGGERS option. For example, to disable all triggers defined for the inventory table, enter the following statement: ALTER TABLE inventory DISABLE ALL TRIGGERS; Managing Integrity Constraints Integrity constraints are rules that restrict the values for one or more columns in a table. Constraint clauses can appear in either CREATE TABLE or ALTER TABLE statements, and identify the column or columns affected by the constraint and identify the conditions of the constraint. This section discusses the concepts of constraints and identifies the SQL statements used to define and manage integrity constraints. The following topics are contained in this section: ■ Integrity Constraint States ■ Setting Integrity Constraints Upon Definition ■ Modifying, Renaming, or Dropping Existing Integrity Constraints ■ Deferring Constraint Checks ■ Reporting Constraint Exceptions ■ Viewing Constraint Information See Also: ■ ■ Oracle Database Concepts for a more thorough discussion of integrity constraints Oracle Database Application Developer's Guide - Fundamentals for detailed information and examples of using integrity constraints in applications Integrity Constraint States You can specify that a constraint is enabled (ENABLE) or disabled (DISABLE). If a constraint is enabled, data is checked as it is entered or updated in the database, and data that does not conform to the constraint is prevented from being entered. If a constraint is disabled, then data that does not conform can be allowed to enter the database. Additionally, you can specify that existing data in the table must conform to the constraint (VALIDATE). Conversely, if you specify NOVALIDATE, you are not ensured that existing data conforms. An integrity constraint defined on a table can be in one of the following states: Managing Schema Objects 13-9 Managing Integrity Constraints ■ ENABLE, VALIDATE ■ ENABLE, NOVALIDATE ■ DISABLE, VALIDATE ■ DISABLE, NOVALIDATE For details about the meaning of these states and an understanding of their consequences, see the Oracle Database SQL Reference. Some of these consequences are discussed here. Disabling Constraints To enforce the rules defined by integrity constraints, the constraints should always be enabled. 
However, consider temporarily disabling the integrity constraints of a table for the following performance reasons: ■ ■ ■ When loading large amounts of data into a table When performing batch operations that make massive changes to a table (for example, changing every employee's number by adding 1000 to the existing number) When importing or exporting one table at a time In all three cases, temporarily disabling integrity constraints can improve the performance of the operation, especially in data warehouse configurations. It is possible to enter data that violates a constraint while that constraint is disabled. Thus, you should always enable the constraint after completing any of the operations listed in the preceding bullet list. Enabling Constraints While a constraint is enabled, no row violating the constraint can be inserted into the table. However, while the constraint is disabled such a row can be inserted. This row is known as an exception to the constraint. If the constraint is in the enable novalidated state, violations resulting from data entered while the constraint was disabled remain. The rows that violate the constraint must be either updated or deleted in order for the constraint to be put in the validated state. You can identify exceptions to a specific integrity constraint while attempting to enable the constraint. See "Reporting Constraint Exceptions" on page 13-14. All rows violating constraints are noted in an EXCEPTIONS table, which you can examine. Enable Novalidate Constraint State When a constraint is in the enable novalidate state, all subsequent statements are checked for conformity to the constraint. However, any existing data in the table is not checked. A table with enable novalidated constraints can contain invalid data, but it is not possible to add new invalid data to it. Enabling constraints in the novalidated state is most useful in data warehouse configurations that are uploading valid OLTP data. Enabling a constraint does not require validation. Enabling a constraint novalidate is much faster than enabling and validating a constraint. Also, validating a constraint that is already enabled does not require any DML locks during validation (unlike validating a previously disabled constraint). Enforcement guarantees that no violations are introduced during the validation. Hence, enabling without validating enables you to reduce the downtime typically associated with enabling a constraint. 13-10 Oracle Database Administrator’s Guide Managing Integrity Constraints Efficient Use of Integrity Constraints: A Procedure Using integrity constraint states in the following order can ensure the best benefits: 1. Disable state. 2. Perform the operation (load, export, import). 3. Enable novalidate state. 4. Enable state. Some benefits of using constraints in this order are: ■ No locks are held. ■ All constraints can go to enable state concurrently. ■ Constraint enabling is done in parallel. ■ Concurrent activity on table is permitted. Setting Integrity Constraints Upon Definition When an integrity constraint is defined in a CREATE TABLE or ALTER TABLE statement, it can be enabled, disabled, or validated or not validated as determined by your specification of the ENABLE/DISABLE clause. If the ENABLE/DISABLE clause is not specified in a constraint definition, the database automatically enables and validates the constraint. 
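You can confirm the resulting state in the data dictionary. The following sketch assumes a dept table owned by the current user; a constraint created without an ENABLE/DISABLE clause is expected to show STATUS = ENABLED and VALIDATED = VALIDATED:

SELECT constraint_name, status, validated
FROM user_constraints
WHERE table_name = 'DEPT';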
Disabling Constraints Upon Definition

The following CREATE TABLE and ALTER TABLE statements both define and disable integrity constraints:

CREATE TABLE emp (
  empno NUMBER(5) PRIMARY KEY DISABLE, . . . ;

ALTER TABLE emp ADD PRIMARY KEY (empno) DISABLE;

An ALTER TABLE statement that defines and disables an integrity constraint never fails because of rows in the table that violate the integrity constraint. The definition of the constraint is allowed because its rule is not enforced.

Enabling Constraints Upon Definition

The following CREATE TABLE and ALTER TABLE statements both define and enable integrity constraints:

CREATE TABLE emp (
  empno NUMBER(5) CONSTRAINT emp_pk PRIMARY KEY, . . . ;

ALTER TABLE emp ADD CONSTRAINT emp_pk PRIMARY KEY (empno);

An ALTER TABLE statement that defines and attempts to enable an integrity constraint can fail because rows of the table violate the integrity constraint. In this case, the statement is rolled back and the constraint definition is neither stored nor enabled.

When you enable a UNIQUE or PRIMARY KEY constraint, an associated index is created.

Note: An efficient procedure for enabling a constraint that can make use of parallelism is described in "Efficient Use of Integrity Constraints: A Procedure" on page 13-11.

See Also: "Creating an Index Associated with a Constraint" on page 16-7

Modifying, Renaming, or Dropping Existing Integrity Constraints

You can use the ALTER TABLE statement to enable, disable, modify, or drop a constraint. When the database is using a UNIQUE or PRIMARY KEY index to enforce a constraint, and constraints associated with that index are dropped or disabled, the index is dropped, unless you specify otherwise.

While enabled foreign keys reference a PRIMARY or UNIQUE key, you cannot disable or drop the PRIMARY or UNIQUE key constraint or the index.

Disabling Enabled Constraints

The following statements disable integrity constraints. The second statement specifies that the associated indexes are to be kept.

ALTER TABLE dept
  DISABLE CONSTRAINT dname_ukey;

ALTER TABLE dept
  DISABLE PRIMARY KEY KEEP INDEX,
  DISABLE UNIQUE (dname, loc) KEEP INDEX;

The following statements enable novalidate disabled integrity constraints:

ALTER TABLE dept
  ENABLE NOVALIDATE CONSTRAINT dname_ukey;

ALTER TABLE dept
  ENABLE NOVALIDATE PRIMARY KEY,
  ENABLE NOVALIDATE UNIQUE (dname, loc);

The following statements enable or validate disabled integrity constraints:

ALTER TABLE dept
  MODIFY CONSTRAINT dname_ukey VALIDATE;

ALTER TABLE dept
  MODIFY PRIMARY KEY ENABLE NOVALIDATE;

The following statements enable disabled integrity constraints:

ALTER TABLE dept
  ENABLE CONSTRAINT dname_ukey;

ALTER TABLE dept
  ENABLE PRIMARY KEY,
  ENABLE UNIQUE (dname, loc);

To disable or drop a UNIQUE key or PRIMARY KEY constraint and all dependent FOREIGN KEY constraints in a single step, use the CASCADE option of the DISABLE or DROP clauses. For example, the following statement disables a PRIMARY KEY constraint and any FOREIGN KEY constraints that depend on it:
The following statement renames the dname_ukey constraint for table dept: ALTER TABLE dept RENAME CONSTRAINT dname_ukey TO dname_unikey; When you rename a constraint, all dependencies on the base table remain valid. The RENAME CONSTRAINT clause provides a means of renaming system generated constraint names. Dropping Constraints You can drop an integrity constraint if the rule that it enforces is no longer true, or if the constraint is no longer needed. You can drop the constraint using the ALTER TABLE statement with one of the following clauses: ■ DROP PRIMARY KEY ■ DROP UNIQUE ■ DROP CONSTRAINT The following two statements drop integrity constraints. The second statement keeps the index associated with the PRIMARY KEY constraint: ALTER TABLE dept DROP UNIQUE (dname, loc); ALTER TABLE emp DROP PRIMARY KEY KEEP INDEX, DROP CONSTRAINT dept_fkey; If FOREIGN KEYs reference a UNIQUE or PRIMARY KEY, you must include the CASCADE CONSTRAINTS clause in the DROP statement, or you cannot drop the constraint. Deferring Constraint Checks When the database checks a constraint, it signals an error if the constraint is not satisfied. You can defer checking the validity of constraints until the end of a transaction. When you issue the SET CONSTRAINTS statement, the SET CONSTRAINTS mode lasts for the duration of the transaction, or until another SET CONSTRAINTS statement resets the mode. Notes: ■ ■ You cannot issue a SET CONSTRAINT statement inside a trigger. Deferrable unique and primary keys must use nonunique indexes. Managing Schema Objects 13-13 Managing Integrity Constraints Set All Constraints Deferred Within the application being used to manipulate the data, you must set all constraints deferred before you actually begin processing any data. Use the following DML statement to set all deferrable constraints deferred: SET CONSTRAINTS ALL DEFERRED; Note: The SET CONSTRAINTS statement applies only to the current transaction. The defaults specified when you create a constraint remain as long as the constraint exists. The ALTER SESSION SET CONSTRAINTS statement applies for the current session only. Check the Commit (Optional) You can check for constraint violations before committing by issuing the SET CONSTRAINTS ALL IMMEDIATE statement just before issuing the COMMIT. If there are any problems with a constraint, this statement fails and the constraint causing the error is identified. If you commit while constraints are violated, the transaction is rolled back and you receive an error message. Reporting Constraint Exceptions If exceptions exist when a constraint is validated, an error is returned and the integrity constraint remains novalidated. When a statement is not successfully executed because integrity constraint exceptions exist, the statement is rolled back. If exceptions exist, you cannot validate the constraint until all exceptions to the constraint are either updated or deleted. To determine which rows violate the integrity constraint, issue the ALTER TABLE statement with the EXCEPTIONS option in the ENABLE clause. The EXCEPTIONS option places the rowid, table owner, table name, and constraint name of all exception rows into a specified table. You must create an appropriate exceptions report table to accept information from the EXCEPTIONS option of the ENABLE clause before enabling the constraint. You can create an exception table by executing the UTLEXCPT.SQL script or the UTLEXPT1.SQL script. 
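For example, a sketch of creating the default exceptions table from SQL*Plus (the path assumes the standard installation layout, where ? is SQL*Plus shorthand for the Oracle home directory):

SQL> @?/rdbms/admin/utlexcpt.sql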
Your choice of script to execute for creating the EXCEPTIONS table is dependent upon the compatibility level of your database and the type of table you are analyzing. See the Oracle Database SQL Reference for more information. Note: Both of these scripts create a table named EXCEPTIONS. You can create additional exceptions tables with different names by modifying and resubmitting the script. The following statement attempts to validate the PRIMARY KEY of the dept table, and if exceptions exist, information is inserted into a table named EXCEPTIONS: ALTER TABLE dept ENABLE PRIMARY KEY EXCEPTIONS INTO EXCEPTIONS; If duplicate primary key values exist in the dept table and the name of the PRIMARY KEY constraint on dept is sys_c00610, then the following query will display those exceptions: 13-14 Oracle Database Administrator’s Guide Managing Integrity Constraints SELECT * FROM EXCEPTIONS; The following exceptions are shown: fROWID -----------------AAAAZ9AABAAABvqAAB AAAAZ9AABAAABvqAAG OWNER --------SCOTT SCOTT TABLE_NAME -------------DEPT DEPT CONSTRAINT ----------SYS_C00610 SYS_C00610 A more informative query would be to join the rows in an exception report table and the master table to list the actual rows that violate a specific constraint, as shown in the following statement and results: SELECT deptno, dname, loc FROM dept, EXCEPTIONS WHERE EXCEPTIONS.constraint = 'SYS_C00610' AND dept.rowid = EXCEPTIONS.row_id; DEPTNO ---------10 10 DNAME -------------ACCOUNTING RESEARCH LOC ----------NEW YORK DALLAS All rows that violate a constraint must be either updated or deleted from the table containing the constraint. When updating exceptions, you must change the value violating the constraint to a value consistent with the constraint or to a null. After the row in the master table is updated or deleted, the corresponding rows for the exception in the exception report table should be deleted to avoid confusion with later exception reports. The statements that update the master table and the exception report table should be in the same transaction to ensure transaction consistency. To correct the exceptions in the previous examples, you might issue the following transaction: UPDATE dept SET deptno = 20 WHERE dname = 'RESEARCH'; DELETE FROM EXCEPTIONS WHERE constraint = 'SYS_C00610'; COMMIT; When managing exceptions, the goal is to eliminate all exceptions in your exception report table. While you are correcting current exceptions for a table with the constraint disabled, it is possible for other users to issue statements creating new exceptions. You can avoid this by marking the constraint ENABLE NOVALIDATE before you start eliminating exceptions. Note: Oracle Database Reference for a description of the EXCEPTIONS table See Also: Viewing Constraint Information Oracle Database provides the following views that enable you to see constraint definitions on tables and to identify columns that are specified in constraints: Managing Schema Objects 13-15 Renaming Schema Objects View Description DBA_CONSTRAINTS DBA view describes all constraint definitions in the database. ALL view describes constraint definitions accessible to current user. USER view describes constraint definitions owned by the current user. ALL_CONSTRAINTS USER_CONSTRAINTS DBA_CONS_COLUMNS ALL_CONS_COLUMNS USER_CONS_COLUMNS DBA view describes all columns in the database that are specified in constraints. ALL view describes only those columns accessible to current user that are specified in constraints. 
USER view describes only those columns owned by the current user that are specified in constraints. Oracle Database Reference contains descriptions of the columns in these views See Also: Renaming Schema Objects To rename an object, it must be in your schema. You can rename schema objects in either of the following ways: ■ Drop and re-create the object ■ Rename the object using the RENAME statement If you drop and re-create an object, all privileges granted for that object are lost. Privileges must be regranted when the object is re-created. Alternatively, a table, view, sequence, or a private synonym of a table, view, or sequence can be renamed using the RENAME statement. When using the RENAME statement, integrity constraints, indexes, and grants made for the object are carried forward for the new name. For example, the following statement renames the sales_staff view: RENAME sales_staff TO dept_30; You cannot use RENAME for a stored PL/SQL program unit, public synonym, index, or cluster. To rename such an object, you must drop and re-create it. Note: Before renaming a schema object, consider the following effects: ■ ■ All views and PL/SQL program units dependent on a renamed object become invalid, and must be recompiled before next use. All synonyms for a renamed object return an error when used. See Also: Oracle Database SQL Reference for syntax of the RENAME statement Managing Object Dependencies This section describes object dependencies, and contains the following topics: ■ About Dependencies Among Schema Objects ■ Manually Recompiling Views ■ Manually Recompiling Procedures and Functions 13-16 Oracle Database Administrator’s Guide Managing Object Dependencies ■ Manually Recompiling Packages About Dependencies Among Schema Objects Some types of schema objects can reference other objects as part of their definition. For example, a view is defined by a query that references tables or other views. A procedure’s body can include SQL statements that reference other schema objects. An object that references another object as part of its definition is called a dependent object, while the object being referenced is a referenced object. If you alter the definition of a referenced object, dependent objects may become invalid. The next time that an invalid object is referenced—for example, used in a query or invoked by a procedure—the database automatically recompiles it before proceeding with the current operation. In many cases, this restores the validity of the object. However, some changes to a referenced object cause invalidations to dependent objects that cannot be remedied by just recompilation. For example, if you drop a table, any procedure that queries that table will generate errors upon recompilation. In this case, code changes are required before that procedure can be made valid again. Even if all currently invalid objects can be made valid again with just recompilation, it is obvious that a large number of invalid objects can have a negative impact on application performance. In addition, if a package were to become invalid and then run by an application, the automatic recompilation of that package could result in an error ORA-04068 (existing state of package pkg_name has been discarded) in one or more sessions. As a DBA, you must therefore be aware of object dependencies before altering the definitions of schema objects. Oracle Database automatically tracks dependencies among schema objects and tracks the current state (VALID or INVALID) of every schema object. 
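For example, a small hypothetical sequence that creates a dependency and then invalidates the dependent object (object names are illustrative):

CREATE TABLE parts (
    part_no    NUMBER PRIMARY KEY,
    part_name  VARCHAR2(30)
);

CREATE VIEW all_parts AS
    SELECT part_no, part_name FROM parts;

ALTER TABLE parts MODIFY (part_name VARCHAR2(60));
-- all_parts is now marked INVALID; it is recompiled automatically the next time it is referenced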
You can determine the state of all objects in a schema with the following query: select object_name, object_type, status from user_objects; You can force the database to recompile a schema object using the appropriate SQL statement with the COMPILE clause. The following are some generalized rules for the invalidation of schema objects: ■ ■ ■ If you change the definition of a schema object, dependent objects are cascade invalidated. This means that if B depends on A and C depends on B, if you change the definition of A, then B becomes invalid, which in turn invalidates C. If you revoke privileges from a schema object, dependent objects are cascade invalidated. There are a small number of situations where altering the definition of a schema object does not invalidate dependent objects. For example, if a procedure includes a query on a view, if you change only the view’s WHERE clause, the procedure remains valid. Table 13–1 shows how objects are affected by changes to other objects on which they depend. Managing Schema Objects 13-17 Managing Object Dependencies Table 13–1 Operations that Affect Object Status Operation ALTER TABLE (ADD, RENAME, MODIFY columns) Resulting Status of Dependent Objects INVALID RENAME [TABLE|SEQUENCE|SYNONYM|VIEW] CREATE OR REPLACE VIEW Either INVALID or No change, depending on dependent object type and what is changed in new version of view CREATE OR REPLACE SYNONYM No change if the synonym is repointed in a compatible1 way, otherwise INVALID DROP any object INVALID CREATE OR REPLACE PROCEDURE2 INVALID CREATE OR REPLACE PACKAGE INVALID CREATE OR REPLACE PACKAGE BODY No change REVOKE object privilege3 ON object TO|FROM user All objects of user that depend on object are INVALID3 REVOKE object privilege3 ON object TO|FROM PUBLIC All objects in the database that depend on object are INVALID3 Notes to Table 13–1: 1. Compatible means that the select list of the new target object is identical to that of the old target, and the new target and old target have identical privileges. An identical select list means that the names, data types, and order of columns are identical. 2. Standalone procedures and functions, packages, and triggers. 3. Only DML object privileges, including SELECT, INSERT, UPDATE, DELETE, and EXECUTE; revalidation does not require recompiling. Oracle Database Concepts for more information on dependencies among schema objects. See Also: Manually Recompiling Views To recompile a view manually, you must have the ALTER ANY TABLE system privilege or the view must be contained in your schema. Use the ALTER VIEW statement with the COMPILE clause to recompile a view. The following statement recompiles the view emp_dept contained in your schema: ALTER VIEW emp_dept COMPILE; 13-18 Oracle Database Administrator’s Guide Managing Object Name Resolution Oracle Database SQL Reference for syntax and other information about the ALTER VIEW statement See Also: Manually Recompiling Procedures and Functions To recompile a standalone procedure manually, you must have the ALTER ANY PROCEDURE system privilege or the procedure must be contained in your schema. Use the ALTER PROCEDURE/FUNCTION statement with the COMPILE clause to recompile a standalone procedure or function. 
The following statement recompiles the stored procedure update_salary contained in your schema: ALTER PROCEDURE update_salary COMPILE; Oracle Database SQL Reference for syntax and other information about the ALTER PROCEDURE statement See Also: Manually Recompiling Packages To recompile a package manually, you must have the ALTER ANY PROCEDURE system privilege or the package must be contained in your schema. Use the ALTER PACKAGE statement with the COMPILE clause to recompile either a package body or both a package specification and body. The following statement recompiles just the body of the package acct_mgmt: ALTER PACKAGE acct_mgmt COMPILE BODY; The next statement compiles both the body and specification of the package acct_mgmt: ALTER PACKAGE acct_mgmt COMPILE PACKAGE; Oracle Database SQL Reference for syntax and other information about the ALTER PACKAGE statement See Also: Managing Object Name Resolution Object names referenced in SQL statements can consist of several pieces, separated by periods. The following describes how the database resolves an object name. 1. Oracle Database attempts to qualify the first piece of the name referenced in the SQL statement. For example, in scott.emp, scott is the first piece. If there is only one piece, the one piece is considered the first piece. a. In the current schema, the database searches for an object whose name matches the first piece of the object name. If it does not find such an object, it continues with step b. b. The database searches for a public synonym that matches the first piece of the name. If it does not find one, it continues with step c. c. The database searches for a schema whose name matches the first piece of the object name. If it finds one, it returns to step b, now using the second piece of the name as the object to find in the qualified schema. If the second piece does not correspond to an object in the previously qualified schema or there is not a second piece, the database returns an error. If no schema is found in step c, the object cannot be qualified and the database returns an error. Managing Schema Objects 13-19 Managing Object Name Resolution 2. A schema object has been qualified. Any remaining pieces of the name must match a valid part of the found object. For example, if scott.emp.deptno is the name, scott is qualified as a schema, emp is qualified as a table, and deptno must correspond to a column (because emp is a table). If emp is qualified as a package, deptno must correspond to a public constant, variable, procedure, or function of that package. When global object names are used in a distributed database, either explicitly or indirectly within a synonym, the local database resolves the reference locally. For example, it resolves a synonym to global object name of a remote table. The partially resolved statement is shipped to the remote database, and the remote database completes the resolution of the object as described here. Because of how the database resolves references, it is possible for an object to depend on the nonexistence of other objects. This situation occurs when the dependent object uses a reference that would be interpreted differently were another object present. For example, assume the following: ■ ■ At the current point in time, the company schema contains a table named emp. A PUBLIC synonym named emp is created for company.emp and the SELECT privilege for company.emp is granted to the PUBLIC role. ■ The jward schema does not contain a table or private synonym named emp. 
■ The user jward creates a view in his schema with the following statement: CREATE VIEW dept_salaries AS SELECT deptno, MIN(sal), AVG(sal), MAX(sal) FROM emp GROUP BY deptno ORDER BY deptno; When jward creates the dept_salaries view, the reference to emp is resolved by first looking for jward.emp as a table, view, or private synonym, none of which is found, and then as a public synonym named emp, which is found. As a result, the database notes that jward.dept_salaries depends on the nonexistence of jward.emp and on the existence of public.emp. Now assume that jward decides to create a new view named emp in his schema using the following statement: CREATE VIEW emp AS SELECT empno, ename, mgr, deptno FROM company.emp; Notice that jward.emp does not have the same structure as company.emp. As it attempts to resolve references in object definitions, the database internally makes note of dependencies that the new dependent object has on "nonexistent" objects--schema objects that, if they existed, would change the interpretation of the object's definition. Such dependencies must be noted in case a nonexistent object is later created. If a nonexistent object is created, all dependent objects must be invalidated so that dependent objects can be recompiled and verified and all dependent function-based indexes must be marked unusable. Therefore, in the previous example, as jward.emp is created, jward.dept_salaries is invalidated because it depends on jward.emp. Then when jward.dept_salaries is used, the database attempts to recompile the view. As the database resolves the reference to emp, it finds jward.emp (public.emp is no longer the referenced object). Because jward.emp does not have a sal column, the database finds errors when replacing the view, leaving it invalid. 13-20 Oracle Database Administrator’s Guide Displaying Information About Schema Objects In summary, you must manage dependencies on nonexistent objects checked during object resolution in case the nonexistent object is later created. See Also: "Schema Objects and Database Links" on page 29-14 for information about name resolution in a distributed database Switching to a Different Schema The following statement sets the schema of the current session to the schema name specified in the statement. ALTER SESSION SET CURRENT_SCHEMA = In subsequent SQL statements, Oracle Database uses this schema name as the schema qualifier when the qualifier is omitted. In addition, the database uses the temporary tablespace of the specified schema for sorts, joins, and storage of temporary database objects. The session retains its original privileges and does not acquire any extra privileges by the preceding ALTER SESSION statement. For example: CONNECT scott/tiger ALTER SESSION SET CURRENT_SCHEMA = joe; SELECT * FROM emp; Because emp is not schema-qualified, the table name is resolved under schema joe. But if scott does not have select privilege on table joe.emp, then scott cannot execute the SELECT statement. Displaying Information About Schema Objects Oracle Database provides a PL/SQL package that enables you to determine the DDL that created an object and data dictionary views that you can use to display information about schema objects. Packages and views that are unique to specific types of schema objects are described in the associated chapters. This section describes views and packages that are generic in nature and apply to multiple schema objects. 
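For instance, after the jward.emp scenario just described, a DBA could use one of the generic views covered in this section to see which of that user's objects were invalidated and still need attention (a sketch; it assumes access to the DBA views):

SELECT object_name, object_type, status
FROM   dba_objects
WHERE  owner = 'JWARD'
AND    status = 'INVALID';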
Using a PL/SQL Package to Display Information About Schema Objects The Oracle-supplied PL/SQL package DBMS_METADATA.GET_DDL lets you obtain metadata (in the form of DDL used to create the object) about a schema object. See Also: Oracle Database PL/SQL Packages and Types Reference for a description of PL/SQL packages Example: Using the DBMS_METADATA Package The DBMS_METADATA package is a powerful tool for obtaining the complete definition of a schema object. It enables you to obtain all of the attributes of an object in one pass. The object is described as DDL that can be used to (re)create it. In the following statements the GET_DDL function is used to fetch the DDL for all tables in the current schema, filtering out nested tables and overflow segments. The SET_TRANSFORM_PARAM (with the handle value equal to DBMS_METADATA.SESSION_TRANSFORM meaning "for the current session") is used to specify that storage clauses are not to be returned in the SQL DDL. Afterwards, the session-level transform parameters are reset to their defaults. Once set, transform parameter values remain in effect until specifically reset to their defaults. Managing Schema Objects 13-21 Displaying Information About Schema Objects EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM( DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false); SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name) FROM USER_ALL_TABLES u WHERE u.nested='NO' AND (u.iot_type is null or u.iot_type='IOT'); EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM( DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT'); The output from DBMS_METADATA.GET_DDL is a LONG datatype. When using SQL*Plus, your output may be truncated by default. Issue the following SQL*Plus command before issuing the DBMS_METADATA.GET_DDL statement to ensure that your output is not truncated: SQL> SET LONG 9999 See Also: Oracle XML Developer's Kit Programmer's Guide for detailed information and further examples relating to the use of the DBMS_METADATA package Using Views to Display Information About Schema Objects These views display general information about schema objects: View Description DBA_OBJECTS DBA view describes all schema objects in the database. ALL view describes objects accessible to current user. USER view describes objects owned by the current user. ALL_OBJECTS USER_OBJECTS DBA_CATALOG ALL_CATALOG List the name, type, and owner (USER view does not display owner) for all tables, views, synonyms, and sequences in the database. USER_CATALOG DBA_DEPENDENCIES ALL_DEPENDENCIES Describe all dependencies between procedures, packages, functions, package bodies, and triggers, including dependencies on views without any database links. USER_DEPENDENCIES The following sections contain examples of using some of these views. Oracle Database Reference for a complete description of data dictionary views See Also: Example 1: Displaying Schema Objects By Type The following query lists all of the objects owned by the user issuing the query: SELECT OBJECT_NAME, OBJECT_TYPE FROM USER_OBJECTS; The following is the query output: OBJECT_NAME ------------------------EMP_DEPT EMP DEPT EMP_DEPT_INDEX PUBLIC_EMP 13-22 Oracle Database Administrator’s Guide OBJECT_TYPE ------------------CLUSTER TABLE TABLE INDEX SYNONYM Displaying Information About Schema Objects EMP_MGR VIEW Example 2: Displaying Dependencies of Views and Synonyms When you create a view or a synonym, the view or synonym is based on its underlying base object. 
The ALL_DEPENDENCIES, USER_DEPENDENCIES, and DBA_DEPENDENCIES data dictionary views can be used to reveal the dependencies for a view. The ALL_SYNONYMS, USER_SYNONYMS, and DBA_SYNONYMS data dictionary views can be used to list the base object of a synonym. For example, the following query lists the base objects for the synonyms created by user jward: SELECT TABLE_OWNER, TABLE_NAME, SYNONYM_NAME FROM DBA_SYNONYMS WHERE OWNER = 'JWARD'; The following is the query output: TABLE_OWNER ---------------------SCOTT SCOTT TABLE_NAME ----------DEPT EMP SYNONYM_NAME ----------------DEPT EMP Managing Schema Objects 13-23 Displaying Information About Schema Objects 13-24 Oracle Database Administrator’s Guide 14 Managing Space for Schema Objects This chapter offers guidelines for managing space for schema objects. You should familiarize yourself with the concepts in this chapter before attempting to manage specific schema objects as described in later chapters. This chapter contains the following topics: ■ Managing Tablespace Alerts ■ Managing Space in Data Blocks ■ Managing Storage Parameters ■ Managing Resumable Space Allocation ■ Reclaiming Wasted Space ■ Understanding Space Usage of Datatypes ■ Displaying Information About Space Usage for Schema Objects ■ Capacity Planning for Database Objects Managing Tablespace Alerts Oracle Database provides proactive help in managing disk space for tablespaces by alerting you when available space is running low. Two alert thresholds are defined by default: warning and critical. The warning threshold is the limit at which space is beginning to run low. The critical threshold is a serious limit that warrants your immediate attention. The database issues alerts at both thresholds. There are two ways to specify alert thresholds for both locally managed and dictionary managed tablespaces: ■ By percent full For both warning and critical thresholds, when space used becomes greater than or equal to a percent of total space, an alert is issued. ■ By free space remaining (in kilobytes (KB)) For both warning and critical thresholds, when remaining space falls below an amount in KB, an alert is issued. Free-space-remaining thresholds are more useful for very large tablespaces. Alerts for locally managed tablespaces are server-generated. For dictionary managed tablespaces, Enterprise Manager provides this functionality. See "Server-Generated Alerts" on page 4-18 for more information. New tablespaces are assigned alert thresholds as follows: Managing Space for Schema Objects 14-1 Managing Tablespace Alerts ■ ■ Locally managed tablespace—When you create a new locally managed tablespace, it is assigned the default threshold values defined for the database. A newly created database has a default of 85% full for the warning threshold and 97% full for the critical threshold. Defaults for free space remaining thresholds for a new database are both zero (disabled). You can change these database defaults, as described later in this section. Dictionary managed tablespace—When you create a new dictionary managed tablespace, it is assigned the threshold values that Enterprise Manager lists for "All others" in the metrics categories "Tablespace Free Space (MB) (dictionary managed)" and "Tablespace Space Used (%) (dictionary managed)." You change these values on the Metric and Policy Settings page. In a database that is upgraded from version 9.x or earlier to 10.x, database defaults for all locally managed tablespace alert thresholds are set to zero. 
This setting effectively disables the alert mechanism to avoid excessive alerts in a newly migrated database.

Setting Alert Thresholds

For each tablespace, you can set just percent-full thresholds, just free-space-remaining thresholds, or both types of thresholds simultaneously. Setting either type of threshold to zero disables it.

The ideal setting for the warning threshold is one that issues an alert early enough for you to resolve the problem before it becomes critical. The critical threshold should be one that issues an alert still early enough so that you can take immediate action to avoid loss of service.

To set alert threshold values:

■ For locally managed tablespaces, use Enterprise Manager (see Oracle Database 2 Day DBA for instructions) or the DBMS_SERVER_ALERT.SET_THRESHOLD package procedure (see Oracle Database PL/SQL Packages and Types Reference for usage details).

■ For dictionary managed tablespaces, use Enterprise Manager. See Oracle Database 2 Day DBA for instructions.

Example—Locally Managed Tablespace

The following example sets the free-space-remaining thresholds in the USERS tablespace to 10 MB (warning) and 2 MB (critical), and disables the percent-full thresholds.

BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_BYT_FREE,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_LE,
    warning_value           => '10240',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_LE,
    critical_value          => '2048',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');

  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GT,
    warning_value           => '0',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GT,
    critical_value          => '0',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/

Note: When setting non-zero values for percent-full thresholds, use the greater-than-or-equal-to operator, OPERATOR_GE.

Restoring a Tablespace to Database Default Thresholds

After explicitly setting values for locally managed tablespace alert thresholds, you can cause the values to revert to the database defaults by setting them to NULL with DBMS_SERVER_ALERT.SET_THRESHOLD.

Modifying Database Default Thresholds

To modify database default thresholds for locally managed tablespaces, invoke DBMS_SERVER_ALERT.SET_THRESHOLD as shown in the previous example, but set object_name to NULL. All tablespaces that use the database default are then switched to the new default.

Viewing Alerts

You view alerts by accessing the home page of Enterprise Manager Database Control. You can also view alerts for locally managed tablespaces with the DBA_OUTSTANDING_ALERTS view. See "Viewing Alert Data" on page 4-21 for more information.

Limitations

Threshold-based alerts have the following limitations:

■ Alerts are not issued for locally managed tablespaces that are offline or in read-only mode. However, the database reactivates the alert system for such tablespaces after they become read/write or available.

■ When you take a tablespace offline or put it in read-only mode, you should disable the alerts for the tablespace by setting the thresholds to zero.
You can then reenable the alerts by resetting the thresholds when the tablespace is once again online and in read/write mode. See Also: ■ ■ ■ ■ ■ "Server-Generated Alerts" on page 4-18 for additional information on server-generated alerts in general Oracle Database PL/SQL Packages and Types Reference for information on the procedures of the DBMS_SERVER_ALERT package and how to use them Oracle Database Performance Tuning Guide for information on using the Automatic Workload Repository to gather statistics on space usage "Reclaiming Wasted Space" on page 14-15 for various ways to reclaim space that is no longer being used in the tablespace "Purging Objects in the Recycle Bin" on page 15-40 for information on reclaiming recycle bin space Managing Space in Data Blocks The following topics are contained in this section: ■ Specifying the INITRANS Parameter See Also: ■ ■ Oracle Database Concepts for more information on data blocks Oracle Database SQL Reference for syntax and other details of the INITRANS physical attributes parameters Specifying the INITRANS Parameter INITRANS specifies the number of update transaction entries for which space is initially reserved in the data block header. Space is reserved in the headers of all data blocks in the associated segment. As multiple transactions concurrently access the rows of the same data block, space is allocated for each update transaction entry in the block. Once the space reserved by INITRANS is depleted, space for additional transaction entries is allocated out of the free space in a block, if available. Once allocated, this space effectively becomes a permanent part of the block header. In earlier releases of Oracle Database, the MAXTRANS parameter limited the number of transaction entries that could concurrently use data in a data block. This parameter has been deprecated. Oracle Database now automatically allows up to 255 concurrent update transactions for any data block, depending on the available space in the block. Note: The database ignores MAXTRANS when specified by users only for new objects created when the COMPATIBLE initialization parameter is set to 10.0 or greater. 14-4 Oracle Database Administrator’s Guide Managing Storage Parameters You should consider the following when setting the INITRANS parameter for a schema object: ■ ■ The space you would like to reserve for transaction entries compared to the space you would reserve for database data The number of concurrent transactions that are likely to touch the same data blocks at any given time For example, if a table is very large and only a small number of users simultaneously access the table, the chances of multiple concurrent transactions requiring access to the same data block is low. Therefore, INITRANS can be set low, especially if space is at a premium in the database. Alternatively, assume that a table is usually accessed by many users at the same time. In this case, you might consider preallocating transaction entry space by using a high INITRANS. This eliminates the overhead of having to allocate transaction entry space, as required when the object is in use. In general, Oracle recommends that you not change the value of INITRANS from its default. Managing Storage Parameters This section describes the storage parameters that you can specify for schema object segments to tell the database how to store the object in the database. 
Schema objects include tables, indexes, partitions, clusters, materialized views, and materialized view logs. The following topics are contained in this section:

■ Identifying the Storage Parameters
■ Specifying Storage Parameters at Object Creation
■ Setting Storage Parameters for Clusters
■ Setting Storage Parameters for Partitioned Tables
■ Setting Storage Parameters for Index Segments
■ Setting Storage Parameters for LOBs, Varrays, and Nested Tables
■ Changing Values of Storage Parameters
■ Understanding Precedence in Storage Parameters

Identifying the Storage Parameters

Storage parameters determine space allocation for objects when their segments are created in a tablespace. Not all storage parameters can be specified for every type of database object, and not all storage parameters can be specified in both the CREATE and ALTER statements.

Storage parameters for objects in locally managed tablespaces are supported mainly for backward compatibility. The Oracle Database server manages extents for locally managed tablespaces. If you specified the UNIFORM clause when the tablespace was created, then the database creates all extents of a uniform size that you specified (or a default size) for any objects created in the tablespace. If you specified the AUTOALLOCATE clause, then the database determines the extent sizing policy for the tablespace. So, for example, if you specify the INITIAL clause when you create an object in a locally managed tablespace, you are telling the database to preallocate at least that much space. The database then determines the appropriate number of extents needed to allocate that much space.

Table 14–1 contains a brief description of each storage parameter. For a complete description of these parameters, including their default, minimum, and maximum settings, see the Oracle Database SQL Reference.

Table 14–1 Object Storage Parameters

Parameter    Description

INITIAL      In a tablespace that is specified as EXTENT MANAGEMENT LOCAL, the database uses the value of INITIAL with the extent size for the tablespace to determine the initial amount of space to reserve for the object. For example, in a uniform locally managed tablespace with 5M extents, if you specify an INITIAL value of 1M, then the database must allocate one 5M extent. If the extent size of the tablespace is smaller than the value of INITIAL, then the initial amount of space allocated will in fact be more than one extent.

MINEXTENTS   In a tablespace that is specified as EXTENT MANAGEMENT LOCAL, MINEXTENTS is used to compute the initial amount of space that is allocated. The initial amount of space that is allocated is equal to INITIAL * MINEXTENTS. Thereafter it is set to 1 (as seen in the DBA_SEGMENTS view).

BUFFER POOL  Defines a default buffer pool (cache) for a schema object. For information on the use of this parameter, see Oracle Database Performance Tuning Guide.

Specifying Storage Parameters at Object Creation

At object creation, you can specify storage parameters for each individual schema object. These parameter settings override any default storage settings. Use the STORAGE clause of the CREATE or ALTER statement for specifying storage parameters for the individual object.

Setting Storage Parameters for Clusters

Use the STORAGE clause of the CREATE TABLE or ALTER TABLE statement to set the storage parameters for non-clustered tables.
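For example, a minimal sketch of specifying storage for a non-clustered table at creation time (table and tablespace names are hypothetical; per Table 14–1, INITIAL here asks the database to preallocate at least that much space):

CREATE TABLE sales_history (
    sale_id    NUMBER,
    sale_date  DATE,
    amount     NUMBER(10,2)
)
TABLESPACE users
STORAGE (INITIAL 5M);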
In contrast, set the storage parameters for the data segments of a cluster using the STORAGE clause of the CREATE CLUSTER or ALTER CLUSTER statement, rather than the individual CREATE or ALTER statements that put tables into the cluster. Storage parameters specified when creating or altering a clustered table are ignored. The storage parameters set for the cluster override the table storage parameters. Setting Storage Parameters for Partitioned Tables With partitioned tables, you can set default storage parameters at the table level. When creating a new partition of the table, the default storage parameters are inherited from the table level (unless you specify them for the individual partition). If no storage parameters are specified at the table level, then they are inherited from the tablespace. Setting Storage Parameters for Index Segments Storage parameters for an index segment created for a table index can be set using the STORAGE clause of the CREATE INDEX or ALTER INDEX statement. 14-6 Oracle Database Administrator’s Guide Managing Storage Parameters Storage parameters of an index segment created for the index used to enforce a primary key or unique key constraint can be set in either of the following ways: ■ ■ In the ENABLE ... USING INDEX clause of the CREATE TABLE or ALTER TABLE statement In the STORAGE clause of the ALTER INDEX statement Setting Storage Parameters for LOBs, Varrays, and Nested Tables A table or materialized view can contain LOB, varray, or nested table column types. These entities can be stored in their own segments. LOBs and varrays are stored in LOB segments, while a nested table is stored in a storage table. You can specify a STORAGE clause for these segments that will override storage parameters specified at the table level. See Also: ■ ■ Oracle Database Application Developer's Guide - Large Objects for more information about LOBs Oracle Database Application Developer's Guide - Object-Relational Features for more information about varrays and nested tables Changing Values of Storage Parameters You can alter default storage parameters for tablespaces and specific storage parameters for individual objects if you so choose. Default storage parameters can be reset for a tablespace; however, changes affect only new objects created in the tablespace or new extents allocated for a segment. As discussed previously, you cannot specify default storage parameters for locally managed tablespaces, so this discussion does not apply. The INITIAL and MINEXTENTS storage parameters cannot be altered for an existing table, cluster, index. If only NEXT is altered for a segment, the next incremental extent is the size of the new NEXT, and subsequent extents can grow by PCTINCREASE as usual. If both NEXT and PCTINCREASE are altered for a segment, the next extent is the new value of NEXT, and from that point forward, NEXT is calculated using PCTINCREASE as usual. Understanding Precedence in Storage Parameters Starting with default values, the storage parameters in effect for a database object at a given time are determined by the following, listed in order of precedence (where higher numbers take precedence over lower numbers): 1. Oracle Database default values 2. DEFAULT STORAGE clause of CREATE TABLESPACE statement 3. DEFAULT STORAGE clause of ALTER TABLESPACE statement 4. STORAGE clause of CREATE [TABLE | CLUSTER | MATERIALIZED VIEW | MATERIALIZED VIEW LOG | INDEX] statement 5. 
STORAGE clause of ALTER [TABLE | CLUSTER | MATERIALIZED VIEW | MATERIALIZED VIEW LOG | INDEX] statement Any storage parameter specified at the object level overrides the corresponding option set at the tablespace level. When storage parameters are not explicitly set at the object Managing Space for Schema Objects 14-7 Managing Resumable Space Allocation level, they default to those at the tablespace level. When storage parameters are not set at the tablespace level, Oracle Database system defaults apply. If storage parameters are altered, the new options apply only to the extents not yet allocated. The storage parameters for temporary segments always use the default storage parameters set for the associated tablespace. Note: Managing Resumable Space Allocation Oracle Database provides a means for suspending, and later resuming, the execution of large database operations in the event of space allocation failures. This enables you to take corrective action instead of the Oracle Database server returning an error to the user. After the error condition is corrected, the suspended operation automatically resumes. This feature is called resumable space allocation. The statements that are affected are called resumable statements. This section contains the following topics: ■ Resumable Space Allocation Overview ■ Enabling and Disabling Resumable Space Allocation ■ Detecting Suspended Statements ■ Operation-Suspended Alert ■ Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger Resumable Space Allocation Overview This section provides an overview of resumable space allocation. It describes how resumable space allocation works, and specifically defines qualifying statements and error conditions. How Resumable Space Allocation Works The following is an overview of how resumable space allocation works. Details are contained in later sections. 1. 2. 3. A statement executes in a resumable mode only if its session has been enabled for resumable space allocation by one of the following actions: ■ The RESUMABLE_TIMEOUT initialization parameter is set to a nonzero value. ■ The ALTER SESSION ENABLE RESUMABLE statement is issued. A resumable statement is suspended when one of the following conditions occur (these conditions result in corresponding errors being signalled for non-resumable statements): ■ Out of space condition ■ Maximum extents reached condition ■ Space quota exceeded condition. When the execution of a resumable statement is suspended, there are mechanisms to perform user supplied operations, log errors, and to query the status of the statement execution. When a resumable statement is suspended the following actions are taken: ■ The error is reported in the alert log. 14-8 Oracle Database Administrator’s Guide Managing Resumable Space Allocation ■ ■ The system issues the Resumable Session Suspended alert. If the user registered a trigger on the AFTER SUSPEND system event, the user trigger is executed. A user supplied PL/SQL procedure can access the error message data using the DBMS_RESUMABLE package and the DBA_ or USER_RESUMABLE view. 4. Suspending a statement automatically results in suspending the transaction. Thus all transactional resources are held through a statement suspend and resume. 5. When the error condition is resolved (for example, as a result of user intervention or perhaps sort space released by other queries), the suspended statement automatically resumes execution and the Resumable Session Suspended alert is cleared. 6. 
A suspended statement can be forced to throw the exception using the DBMS_RESUMABLE.ABORT() procedure. This procedure can be called by a DBA, or by the user who issued the statement. 7. A suspension time out interval is associated with resumable statements. A resumable statement that is suspended for the timeout interval (the default is two hours) wakes up and returns the exception to the user. 8. A resumable statement can be suspended and resumed multiple times during execution. What Operations are Resumable? The following operations are resumable: ■ Queries SELECT statements that run out of temporary space (for sort areas) are candidates for resumable execution. When using OCI, the calls OCIStmtExecute() and OCIStmtFetch() are candidates. ■ DML INSERT, UPDATE, and DELETE statements are candidates. The interface used to execute them does not matter; it can be OCI, SQLJ, PL/SQL, or another interface. Also, INSERT INTO...SELECT from external tables can be resumable. ■ Import/Export As for SQL*Loader, a command line parameter controls whether statements are resumable after recoverable errors. ■ DDL The following statements are candidates for resumable execution: – CREATE TABLE ... AS SELECT – CREATE INDEX – ALTER INDEX ... REBUILD – ALTER TABLE ... MOVE PARTITION – ALTER TABLE ... SPLIT PARTITION – ALTER INDEX ... REBUILD PARTITION – ALTER INDEX ... SPLIT PARTITION – CREATE MATERIALIZED VIEW Managing Space for Schema Objects 14-9 Managing Resumable Space Allocation – CREATE MATERIALIZED VIEW LOG What Errors are Correctable? There are three classes of correctable errors: ■ Out of space condition The operation cannot acquire any more extents for a table/index/temporary segment/undo segment/cluster/LOB/table partition/index partition in a tablespace. For example, the following errors fall in this category: ORA-1653 unable to extend table ... in tablespace ... ORA-1654 unable to extend index ... in tablespace ... ■ Maximum extents reached condition The number of extents in a table/index/temporary segment/undo segment/cluster/LOB/table partition/index partition equals the maximum extents defined on the object. For example, the following errors fall in this category: ORA-1631 max # extents ... reached in table ... ORA-1654 max # extents ... reached in index ... ■ Space quota exceeded condition The user has exceeded his assigned space quota in the tablespace. Specifically, this is noted by the following error: ORA-1536 space quote exceeded for tablespace string Resumable Space Allocation and Distributed Operations In a distributed environment, if a user enables or disables resumable space allocation, or if you, as a DBA, alter the RESUMABLE_TIMEOUT initialization parameter, only the local instance is affected. In a distributed transaction, sessions or remote instances are suspended only if RESUMABLE has been enabled in the remote instance. Parallel Execution and Resumable Space Allocation In parallel execution, if one of the parallel execution server processes encounters a correctable error, that server process suspends its execution. Other parallel execution server processes will continue executing their respective tasks, until either they encounter an error or are blocked (directly or indirectly) by the suspended server process. When the correctable error is resolved, the suspended process resumes execution and the parallel operation continues execution. If the suspended operation is terminated, the parallel operation aborts, throwing the error to the user. 
Different parallel execution server processes may encounter one or more correctable errors. This may result in firing an AFTER SUSPEND trigger multiple times, in parallel. Also, if a parallel execution server process encounters a non-correctable error while another parallel execution server process is suspended, the suspended statement is immediately aborted. For parallel execution, every parallel execution coordinator and server process has its own entry in the DBA_ or USER_RESUMABLE view. Enabling and Disabling Resumable Space Allocation Resumable space allocation is only possible when statements are executed within a session that has resumable mode enabled. There are two means of enabling and 14-10 Oracle Database Administrator’s Guide Managing Resumable Space Allocation disabling resumable space allocation. You can control it at the system level with the RESUMABLE_TIMEOUT initialization parameter, or users can enable it at the session level using clauses of the ALTER SESSION statement. Because suspended statements can hold up some system resources, users must be granted the RESUMABLE system privilege before they are allowed to enable resumable space allocation and execute resumable statements. Note: Setting the RESUMABLE_TIMEOUT Initialization Parameter You can enable resumable space allocation system wide and specify a timeout interval by setting the RESUMABLE_TIMEOUT initialization parameter. For example, the following setting of the RESUMABLE_TIMEOUT parameter in the initialization parameter file causes all sessions to initially be enabled for resumable space allocation and sets the timeout period to 1 hour: RESUMABLE_TIMEOUT = 3600 If this parameter is set to 0, then resumable space allocation is disabled initially for all sessions. This is the default. You can use the ALTER SYSTEM SET statement to change the value of this parameter at the system level. For example, the following statement will disable resumable space allocation for all sessions: ALTER SYSTEM SET RESUMABLE_TIMEOUT=0; Within a session, a user can issue the ALTER SESSION SET statement to set the RESUMABLE_TIMEOUT initialization parameter and enable resumable space allocation, change a timeout value, or to disable resumable mode. Using ALTER SESSION to Enable and Disable Resumable Space Allocation A user can enable resumable mode for a session, using the following SQL statement: ALTER SESSION ENABLE RESUMABLE; To disable resumable mode, a user issues the following statement: ALTER SESSION DISABLE RESUMABLE; The default for a new session is resumable mode disabled, unless the RESUMABLE_TIMEOUT initialization parameter is set to a nonzero value. The user can also specify a timeout interval, and can provide a name used to identify a resumable statement. These are discussed separately in following sections. See Also: "Using a LOGON Trigger to Set Default Resumable Mode" on page 14-12 Specifying a Timeout Interval A timeout period, after which a suspended statement will error if no intervention has taken place, can be specified when resumable mode is enabled. The following statement specifies that resumable transactions will time out and error after 3600 seconds: ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600; The value of TIMEOUT remains in effect until it is changed by another ALTER SESSION ENABLE RESUMABLE statement, it is changed by another means, or the Managing Space for Schema Objects 14-11 Managing Resumable Space Allocation session ends. 
The default timeout interval when using the ENABLE RESUMABLE TIMEOUT clause to enable resumable mode is 7200 seconds. See Also: "Setting the RESUMABLE_TIMEOUT Initialization Parameter" on page 14-11 for other methods of changing the timeout interval for resumable space allocation Naming Resumable Statements Resumable statements can be identified by name. The following statement assigns a name to resumable statements: ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'insert into table'; The NAME value remains in effect until it is changed by another ALTER SESSION ENABLE RESUMABLE statement, or the session ends. The default value for NAME is 'User username(userid), Session sessionid, Instance instanceid'. The name of the statement is used to identify the resumable statement in the DBA_RESUMABLE and USER_RESUMABLE views. Using a LOGON Trigger to Set Default Resumable Mode Another method of setting default resumable mode, other than setting the RESUMABLE_TIMEOUT initialization parameter, is that you can register a database level LOGON trigger to alter a user's session to enable resumable and set a timeout interval. If there are multiple triggers registered that change default mode and timeout for resumable statements, the result will be unspecified because Oracle Database does not guarantee the order of trigger invocation. Note: Detecting Suspended Statements When a resumable statement is suspended, the error is not raised to the client. In order for corrective action to be taken, Oracle Database provides alternative methods for notifying users of the error and for providing information about the circumstances. Notifying Users: The AFTER SUSPEND System Event and Trigger When a resumable statement encounter a correctable error, the system internally generates the AFTER SUSPEND system event. Users can register triggers for this event at both the database and schema level. If a user registers a trigger to handle this system event, the trigger is executed after a SQL statement has been suspended. SQL statements executed within a AFTER SUSPEND trigger are always non-resumable and are always autonomous. Transactions started within the trigger use the SYSTEM rollback segment. These conditions are imposed to overcome deadlocks and reduce the chance of the trigger experiencing the same error condition as the statement. Users can use the USER_RESUMABLE or DBA_RESUMABLE views, or the DBMS_RESUMABLE.SPACE_ERROR_INFO function, within triggers to get information about the resumable statements. Triggers can also call the DBMS_RESUMABLE package to terminate suspended statements and modify resumable timeout values. In the following example, the default system timeout is changed by creating a system wide AFTER SUSPEND trigger that calls DBMS_RESUMABLE to set the timeout to 3 hours: 14-12 Oracle Database Administrator’s Guide Managing Resumable Space Allocation CREATE OR REPLACE TRIGGER resumable_default_timeout AFTER SUSPEND ON DATABASE BEGIN DBMS_RESUMABLE.SET_TIMEOUT(10800); END; See Also: Oracle Database Application Developer's Guide Fundamentals for information about system events, triggers, and attribute functions Using Views to Obtain Information About Suspended Statements The following views can be queried to obtain information about the status of resumable statements: View Description DBA_RESUMABLE These views contain rows for all currently executing or suspended resumable statements. 
They can be used by a DBA, AFTER SUSPEND trigger, or another session to monitor the progress of, or obtain specific information about, resumable statements. The same description applies to USER_RESUMABLE.

V$SESSION_WAIT   When a statement is suspended, the session invoking the statement is put into a wait state. A row is inserted into this view for the session with the EVENT column containing "statement suspended, wait error to be cleared".

See Also: Oracle Database Reference for specific information about the columns contained in these views

Using the DBMS_RESUMABLE Package

The DBMS_RESUMABLE package helps control resumable space allocation. The following procedures can be invoked:

ABORT(sessionID)
This procedure aborts a suspended resumable statement. The parameter sessionID is the session ID in which the statement is executing. For parallel DML/DDL, sessionID is any session ID which participates in the parallel DML/DDL. Oracle Database guarantees that the ABORT operation always succeeds. It may be called either inside or outside of the AFTER SUSPEND trigger. The caller of ABORT must be the owner of the session with sessionID, have ALTER SYSTEM privilege, or have DBA privileges.

GET_SESSION_TIMEOUT(sessionID)
This function returns the current timeout value of resumable space allocation for the session with sessionID. The returned timeout is in seconds. If the session does not exist, this function returns -1.

SET_SESSION_TIMEOUT(sessionID, timeout)
This procedure sets the timeout interval of resumable space allocation for the session with sessionID. The parameter timeout is in seconds. The new timeout setting applies to the session immediately. If the session does not exist, no action is taken.

GET_TIMEOUT()
This function returns the current timeout value of resumable space allocation for the current session. The returned value is in seconds.

SET_TIMEOUT(timeout)
This procedure sets a timeout value for resumable space allocation for the current session. The parameter timeout is in seconds. The new timeout setting applies to the session immediately.

See Also: Oracle Database PL/SQL Packages and Types Reference

Operation-Suspended Alert

When a resumable session is suspended, an operation-suspended alert is issued on the object that needs allocation of resource for the operation to complete. Once the resource is allocated and the operation completes, the operation-suspended alert is cleared. See "Managing Tablespace Alerts" on page 14-1 for more information on system-generated alerts.

Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger

In the following example, a system wide AFTER SUSPEND trigger is created and registered as user SYS at the database level. Whenever a resumable statement is suspended in any session, this trigger can have either of two effects:

■ If an undo segment has reached its space limit, then a message is sent to the DBA and the statement is aborted.

■ If any other recoverable error has occurred, the timeout interval is reset to 8 hours.
Operation-Suspended Alert
When a resumable session is suspended, an operation-suspended alert is issued on the object that needs an allocation of resources for the operation to complete. Once the resource is allocated and the operation completes, the operation-suspended alert is cleared. See "Managing Tablespace Alerts" on page 14-1 for more information on system-generated alerts.

Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger
In the following example, a system-wide AFTER SUSPEND trigger is created and registered as user SYS at the database level. Whenever a resumable statement is suspended in any session, this trigger can have either of two effects:

■ If an undo segment has reached its space limit, then a message is sent to the DBA and the statement is aborted.
■ If any other recoverable error has occurred, the timeout interval is reset to 8 hours.

Here are the statements for this example:

CREATE OR REPLACE TRIGGER resumable_default
AFTER SUSPEND
ON DATABASE
DECLARE
   /* declare transaction in this trigger is autonomous */
   /* this is not required because transactions within a trigger
      are always autonomous */
   PRAGMA AUTONOMOUS_TRANSACTION;
   cur_sid           NUMBER;
   cur_inst          NUMBER;
   errno             NUMBER;
   err_type          VARCHAR2(64);
   object_owner      VARCHAR2(64);
   object_type       VARCHAR2(64);
   table_space_name  VARCHAR2(64);
   object_name       VARCHAR2(64);
   sub_object_name   VARCHAR2(64);
   error_txt         VARCHAR2(4000);
   msg_body          VARCHAR2(4000);
   ret_value         BOOLEAN;
   mail_conn         UTL_SMTP.CONNECTION;
BEGIN
   -- Get session ID
   SELECT DISTINCT(SID) INTO cur_sid FROM V$MYSTAT;

   -- Get instance number
   cur_inst := userenv('instance');

   -- Get space error information
   ret_value := DBMS_RESUMABLE.SPACE_ERROR_INFO(err_type, object_type, object_owner,
                    table_space_name, object_name, sub_object_name);

   /*
   -- If the error is related to undo segments, log error, send email
   -- to DBA, and abort the statement. Otherwise, set timeout to 8 hours.
   --
   -- sys.rbs_error is a table which is to be
   -- created by a DBA manually and defined as
   -- (sql_text VARCHAR2(1000), error_msg VARCHAR2(4000),
   --  suspend_time DATE)
   */
   IF object_type = 'UNDO SEGMENT' THEN
      /* LOG ERROR */
      INSERT INTO sys.rbs_error (
         SELECT SQL_TEXT, ERROR_MSG, SUSPEND_TIME
         FROM DBA_RESUMABLE
         WHERE SESSION_ID = cur_sid AND INSTANCE_ID = cur_inst
      );
      SELECT ERROR_MSG INTO error_txt FROM DBA_RESUMABLE
         WHERE SESSION_ID = cur_sid AND INSTANCE_ID = cur_inst;

      -- Send email to recipient through the UTL_SMTP package
      msg_body := 'Subject: Space Error Occurred
Space limit reached for undo segment ' || object_name || ' on ' ||
         TO_CHAR(SYSDATE, 'Month dd, YYYY, HH:MIam') ||
         '. Error message was ' || error_txt;
      mail_conn := UTL_SMTP.OPEN_CONNECTION('localhost', 25);
      UTL_SMTP.HELO(mail_conn, 'localhost');
      UTL_SMTP.MAIL(mail_conn, 'sender@localhost');
      UTL_SMTP.RCPT(mail_conn, 'recipient@localhost');
      UTL_SMTP.DATA(mail_conn, msg_body);
      UTL_SMTP.QUIT(mail_conn);

      -- Abort the statement
      DBMS_RESUMABLE.ABORT(cur_sid);
   ELSE
      -- Set timeout to 8 hours
      DBMS_RESUMABLE.SET_TIMEOUT(28800);
   END IF;

   /* commit autonomous transaction */
   COMMIT;
END;
/

Reclaiming Wasted Space
This section explains how to reclaim wasted space, and also introduces the Segment Advisor, which is the Oracle Database component that identifies segments that have space available for reclamation.

In This Section
■ Understanding Reclaimable Unused Space
■ Using the Segment Advisor
■ Shrinking Database Segments Online
■ Deallocating Unused Space

Understanding Reclaimable Unused Space
Over time, updates and deletes on objects within a tablespace can create pockets of empty space that individually are not large enough to be reused for new data. This type of empty space is referred to as fragmented free space.

Objects with fragmented free space can result in much wasted space, and can impact database performance. The preferred way to defragment and reclaim this space is to perform an online segment shrink. This process consolidates fragmented free space below the high water mark and compacts the segment. After compaction, the high water mark is moved, resulting in new free space above the high water mark. That space above the high water mark is then deallocated. The segment remains available for queries and DML during most of the operation, and no extra disk space need be allocated.
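For reference, online segment shrink itself is performed with ALTER TABLE ... SHRINK SPACE. The following is a minimal sketch for a hypothetical table hr.employees_hist; see "Shrinking Database Segments Online" on page 14-29 for the full syntax and restrictions.

-- Row movement must be enabled before a table segment can be shrunk
ALTER TABLE hr.employees_hist ENABLE ROW MOVEMENT;

-- Compact the segment, move the high water mark, and release the
-- freed space above it back to the tablespace
ALTER TABLE hr.employees_hist SHRINK SPACE;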
You use the Segment Advisor to identify segments that would benefit from online segment shrink. Only segments in locally managed tablespaces with automatic segment space management (ASSM) are eligible. Other restrictions on segment type exist. For more information, see "Shrinking Database Segments Online" on page 14-29.

If a table with reclaimable space is not eligible for online segment shrink, or if you want to make changes to logical or physical attributes of the table while reclaiming space, then you can use online table redefinition as an alternative to segment shrink. Online redefinition is also referred to as reorganization. Unlike online segment shrink, it requires extra disk space to be allocated. See "Redefining Tables Online" on page 15-21 for more information.

Using the Segment Advisor
The Segment Advisor identifies segments that have space available for reclamation. It performs its analysis by examining usage and growth statistics in the Automatic Workload Repository (AWR), and by sampling the data in the segment. It is configured to run automatically at regular intervals, and you can also run it on demand (manually). The regularly scheduled Segment Advisor run is known as the Automatic Segment Advisor.

The Segment Advisor generates the following types of advice:

■ If the Segment Advisor determines that an object has a significant amount of free space, it recommends online segment shrink. If the object is a table that is not eligible for shrinking, as in the case of a table in a tablespace without automatic segment space management, the Segment Advisor recommends online table redefinition.
■ If the Segment Advisor encounters a table with row chaining above a certain threshold, it records the fact that the table has an excess of chained rows.

Note: The Segment Advisor flags only the type of row chaining that results from updates that increase row length.

If you receive a space management alert, or if you decide that you want to reclaim space, you should start with the Segment Advisor.

To use the Segment Advisor:
1. Check the results of the Automatic Segment Advisor. To understand the Automatic Segment Advisor, see "Automatic Segment Advisor", later in this section. For details on how to view results, see "Viewing Segment Advisor Results" on page 14-21.
2. (Optional) Obtain updated results on individual segments by rerunning the Segment Advisor manually. See "Running the Segment Advisor Manually", later in this section.

Automatic Segment Advisor
The Automatic Segment Advisor is started by a Scheduler job that is configured to run during the default maintenance window. The default maintenance window is specified in the Scheduler, and is initially defined as follows:

■ Weeknights, Monday through Friday, from 10:00 p.m. to 6:00 a.m. (8 hours each night)
■ Weekends, from Saturday morning at 12:00 a.m. to Monday morning at 12:00 a.m. (for a total of 48 hours)

The Automatic Segment Advisor does not analyze every database object. Instead, it examines database statistics, samples segment data, and then selects the following objects to analyze:

■ Tablespaces that have exceeded a critical or warning space threshold
■ Segments that have the most activity
■ Segments that have the highest growth rate

If an object is selected for analysis but the maintenance window expires before the Segment Advisor can process the object, the object is included in the next Automatic Segment Advisor run.

You cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor job, change the times during which the Automatic Segment Advisor is scheduled to run, or adjust Automatic Segment Advisor system resource utilization. See "Configuring the Automatic Segment Advisor Job" on page 14-27 for more information.
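For illustration, enabling or disabling the job can be done through the Scheduler, as sketched below. The job name AUTO_SPACE_ADVISOR_JOB owned by SYS is an assumption; confirm the actual name in the DBA_SCHEDULER_JOBS view before issuing similar calls.

-- Confirm the job name and whether it is currently enabled
-- (the LIKE pattern is only a convenience for locating the job)
SELECT JOB_NAME, ENABLED
  FROM DBA_SCHEDULER_JOBS
 WHERE JOB_NAME LIKE '%SPACE_ADVISOR%';

-- Temporarily disable the Automatic Segment Advisor job
BEGIN
   DBMS_SCHEDULER.DISABLE('SYS.AUTO_SPACE_ADVISOR_JOB');
END;
/

-- Re-enable it later
BEGIN
   DBMS_SCHEDULER.ENABLE('SYS.AUTO_SPACE_ADVISOR_JOB');
END;
/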
See Also:
■ "Viewing Segment Advisor Results" on page 14-21
■ Chapter 27, "Using the Scheduler"
■ Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window"

Running the Segment Advisor Manually
You can manually run the Segment Advisor at any time with Enterprise Manager or with PL/SQL package procedure calls. Reasons to manually run the Segment Advisor include the following:

■ You want to analyze a tablespace or segment that was not selected by the Automatic Segment Advisor.
■ You want to repeat the analysis of an individual tabl